Friday, December 14, 2018

A prospective development and validation study


Methods and results

We developed a mobile web portal through which video raters assessed 30 behavioral features (e.g., eye contact, social smile) used by 8 independent machine learning models for identifying ASD, each model having >94% accuracy in cross-validation testing and subsequent independent validation in prior work. We then collected 116 short home videos of children with autism (mean age = 4 years 10 months, SD = 2 years 3 months) and 46 videos of typically developing children (mean age = 2 years 11 months, SD = 1 year 2 months). Three raters blind to diagnosis independently measured each of the 30 features for the 8 models, with a median delay of 4 minutes per video. Although several models (consisting of alternating decision trees, support vector machines [SVM] with radial and linear kernels, and logistic regression [LR]) performed well, a 5-feature LR classifier (LR5) yielded the highest accuracy (area under the curve [AUC]: 92% [95% CI 88%–97%]) across all ages tested. We used an independent validation set of 66 videos (33 ASD and 33 non-ASD), each with 3 independent rater measurements collected prospectively, to validate the outcome, achieving lower but comparable accuracy (AUC: 89% [95% CI 81%–95%]). Finally, we applied LR to the 162-video feature matrix to construct an 8-feature model, which achieved an AUC of 0.93 (95% CI 0.90–0.97) on the held-out test set and 0.86 on the validation set of 66 videos. Validation on children with an existing diagnosis limited the ability to generalize performance to undiagnosed populations.

Introduction


Neuropsychiatric disorders are the leading cause of disability due to noncommunicable diseases worldwide, accounting for 14% of the global burden of disease [1]. Autism spectrum disorder (ASD), whose incidence has increased by about 700% since 1996 [2,3], has contributed substantially to this burden and now affects 1 in 59 children in the United States [4,5]. ASD is arguably one of the largest health problems in pediatrics, as supporting a person with the condition costs up to $2.4 million over that person's lifetime in the United States [6] and more than $5 billion per year in US health care costs [6].

Like most mental health conditions, autism presents a complex range of symptoms [7] that are diagnosed through behavioral exams. The standard of care (SOC) for autism diagnosis uses behavioral instruments such as the Autism Diagnostic Observation Schedule (ADOS) [8] and the Autism Diagnostic Interview–Revised (ADI-R) [9]. These standard exams resemble others in developmental pediatrics [10] in that they require direct clinician observation of the child and take hours to administer [11–14]. The sharp increase in the incidence of autism, combined with the cumbersome nature of the SOC, has placed strain on the health system. Wait times for a diagnostic evaluation can reach or exceed 12 months in the United States [15], the average age of diagnosis in the United States remains close to 5 years [2,13], and the average age at ASD diagnosis in underserved populations can be as high as 8 years [16–18]. Wide variability in the availability of diagnostic and therapeutic services is common to most psychiatric and mental health conditions in the United States, with a severe shortage of mental health services in 77% of US counties [19]. Behavioral interventions for ASD have the greatest impact when administered at or before the age of 5 years [12,20–23]; however, the diagnostic bottleneck that families face severely limits the impact of therapeutic interventions. Scalable measures are needed to eliminate bottlenecks, reduce waiting times for access to treatment, and reach underserved populations.

To enable rapid and accurate access to ASD care, we have used supervised machine learning approaches to identify minimal sets of behaviors consistent with a clinical diagnosis of ASD [24–30]. We collected and analyzed item-level results from ADOS and ADI-R administrations to train and test the accuracy of a range of classifiers. For ADOS, we focused our analysis on the ordinal outcome data in Modules 1, 2, and 3, which assess children with limited or no vocabulary, phrased speech, and fluent speech, respectively. Each of the 3 ADOS modules uses about 10 activities for clinical observation of the at-risk child and 28 to 30 additional behavioral measures used to score the child following the observation. Our machine learning analyses focused on archived records of the categorical and ordinal data generated by the scoring component of these ADOS exams. Similarly, the ADI-R includes 93 multiple-choice questions asked by a clinician of the child's primary caregiver during a clinical interview; as with ADOS, we focused our classification task on the ordinal outcome data resulting from test administration.

These preliminary studies focused on building models optimized for accuracy, feature sparsity, and interpretability that differentiate autism from non-autism while managing class imbalance. We chose models with a small number of features, performance equal to or better than that of the standard instruments, and interpretable outputs, such as scores generated by an optimized decision tree or a logistic regression (LR) approach. In total, these studies used the scores of 11,298 individuals with autism (spanning low, medium, and high severity) and 1,356 controls (including some children for whom autism may have been suspected but was ruled out), and identified the following 8 classifiers: a 7-feature alternating decision tree (ADTree7) [29], an 8-feature alternating decision tree (ADTree8) [30], a 12-feature support vector machine (SVM12) [26], a 9-feature LR classifier (LR9) [26], a 5-feature support vector machine (SVM5) [27], a 5-feature LR classifier (LR5) [27], a 10-feature LR classifier (LR10) [27], and a 10-feature support vector machine (SVM10) [27].

Two of these 8 classifiers have been independently tested in 4 separate analyses. In a prospective comparison between clinical outcome and ADTree7 (measured before clinical evaluation and official diagnosis) on 222 children (nASD = 69; ncontrols = 153; median age = 5.8 years), performance, measured as unweighted average recall (UAR [31]; the mean of sensitivity and specificity), was 84.8% [24]. Separately, Bone and colleagues [32] tested ADTree7 on a "balanced independent dataset" (BID) consisting of ADI-R outcome data from 680 participants, 462 with ASD (mean age = 9.2 years, SD = 3.1 years) and 218 non-ASD (mean age = 9.4 years, SD = 2.9 years), and found that performance was similarly high, at 80%. Duda and colleagues [25] tested ADTree8 on 2,333 individuals with autism (mean age = 5.8 years) and 283 "non-autistic" controls (mean age = 6.4 years) and found a performance of 90.2%. Bone and colleagues [32] also tested this ADTree8 model on 1,033 BID participants: 858 with autism (mean age = 5.2 years, SD = 3.6 years), 73 on the broader autism spectrum (mean age = 3.9 years, SD = 2.4 years), and 102 non-ASD (mean age = 3.4 years, SD = 2.0 years), and found that performance was slightly higher (94%). These independent validation studies place the classifiers' performance within the accuracy range of the published tests and support the hypothesis that models using a minimal number of features are reliable and accurate for the detection of autism.

Others have conducted similar training and testing experiments to identify the top-ranked features from standard instrument data, including Bone and colleagues [33] and Bussu and colleagues [34]. These approaches have reached similar conclusions, namely that machine learning is an effective way to construct objective quantitative models with few features that distinguish low-, medium-, and high-severity autism from children outside the autism spectrum, including those with other developmental disorders. However, translating such models into clinical practice requires additional steps that have not yet been adequately addressed. Although some of our earlier work has shown that untrained video annotators can measure autism behaviors in home videos with high interrater reliability and accuracy [35], the question of how to move from minimal behavioral models to clinical practice remains.

This study builds on this earlier work to address this question and the hypothesis that the features represented in our minimal viable classifiers can be tagged quickly, accurately, and reliably from short home videos by video raters with no formal training in autism diagnosis or child development. We deployed crowdsourcing and real-time video feature tagging to run and evaluate the accuracy of the 8 machine learning models trained for autism detection on 2 independent video repositories. This procedure allowed us to test the reduction to practice of rapid mobile video analysis as a viable method for identifying and screening for the symptoms of autism. In addition, because mobile tagging of videos automatically generates a feature-rich matrix, it offers the opportunity to train a new machine learning model that can generalize to automatic detection of autism in short video clips. We tested this hypothesis by constructing a new video feature classifier and comparing its results to those of the other models on a held-out subset of the original video feature matrix and on an independent external validation set. The results of this work support the hypothesis that autism detection can be performed with mobile devices outside of clinical settings with high efficiency and accuracy.

Methods

Source classifiers for reduction-to-practice testing

We assembled 8 published machine learning classifiers to test their viability for use in rapid mobile detection of autism in short home videos. For all eight models, the training and validation data came from medical records generated by the administration of one of the two reference instruments for autism diagnosis, ADOS or ADI-R. The ADOS has several modules, each containing about 30 features, corresponding to the developmental level of the individual being evaluated. Module 1 is used for individuals with limited or no vocabulary. Module 2 is used for individuals who use phrased speech but do not speak fluently. Module 3 is used for individuals who speak fluently. The ADI-R is a parent-directed interview that includes more than 90 multiple-choice items asked of the parent. Each model was trained on item-level ADOS or ADI-R outcomes and optimized for accuracy, feature sparsity, and interpretability.

For brevity without omitting detail, we created an abbreviation for each model using a basic naming convention of the form "model type"–"number of features". For example, we used ADTree8 to designate the 8-feature alternating decision tree (ADTree) developed from medical record data from administration of ADOS Module 1, LR5 to refer to the 5-feature LR classifier developed from analysis of ADOS Module 2 medical record data, and so on.

Recruitment and video collection

Under an IRB-approved protocol from Stanford University, we developed a mobile portal to facilitate the collection of videos of children with ASD, through which participants electronically consented to participate and uploaded their videos. Participants were recruited through crowdsourcing methods [38–41] targeted at social media platforms and mailing list servers for families of children with autism. Interested participants were directed to a secure, encrypted video portal website to provide consent. We required participants to be at least 18 years old and the primary caregiver of a child with autism aged 12 months to 17 years. Participants provided videos either by direct upload to the portal or by reference to a video already uploaded to YouTube, together with the child's age, diagnosis, and other essential characteristics. We considered videos eligible if they (1) were between 1 and 5 minutes long, (2) showed the child's face and hands, (3) showed clear opportunities for or instances of direct social engagement, and (4) involved opportunities to use an object such as a utensil, crayon, or toy.

We relied on self-reported information provided by parents concerning the official diagnosis of autism, the child's age at the time of video submission, and additional demographic information for videos submitted directly to the web portal. For videos provided via YouTube URLs, we used YouTube metatags to confirm the age and diagnosis of the child in the video. If a video did not include a metatag for the child's age, that age was assigned following full agreement among the estimates of 3 clinical practitioners in pediatrics. To assess the accuracy of parental self-report and to avoid bias, we asked a board-certified pediatric specialist certified to administer the ADOS to review a random selection of 20 videos. We also asked a developmental pediatrician to review a random, non-overlapping selection of 10 additional videos. These clinical experts classified each video as either "ASD" or "non-ASD."

Video feature tagging for machine learning model execution

We employed a total of 9 video raters who were either students (high school, undergraduate, or graduate) or working professionals. None had training or accreditation in autism detection or diagnosis. All received instructions on how to tag the 30 questions and were asked to score 10 example videos before proceeding to independent tagging of new videos. After training, we provided the raters with unique usernames and passwords allowing them to access the secure online portal, view videos, and answer the 30 questions per video required by the feature vectors to run the 8 machine learning classifiers (Table 1). Features were presented to the video raters as multiple-choice questions written at approximately a seventh-grade reading level. The raters, who remained blind to diagnosis throughout the study, were instructed to choose, for each feature, the label that best described the child's behavior in the video. Each feature answer was then mapped to a score between 0 and 3, with higher scores indicating more severe autism features in the measured behavior, or 8 indicating that the feature could not be scored. The behavioral features and the overlap between models are provided in Fig 1.

To test the viability of feature tagging of videos for rapid detection and diagnosis of autism by machine learning, we empirically identified a minimal number of video raters needed to evaluate parent-provided home videos. We selected a random subset of videos from the complete set collected through our crowdsourced portal and ran the ADTree8 [30] model on the feature vectors tagged by all 9 raters. We chose to use only ADTree8 for efficiency and because this model had previously been validated in 2 independent studies [25,32]. We used a sample permutation procedure with replacement to measure accuracy as a function of the agreement of the majority of raters with the true diagnostic classification. We incrementally increased the number of video raters per trial by 1, starting with 1 and ending with 9, drawing with replacement 1,000 times per trial. When only 2 raters were retained, we required perfect class agreement between them. With an odd number of raters, we required a strict majority consensus. When an even number of raters disagreed on the classification, we used the score of a randomly chosen independent rater to break the tie.
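The rater-subsampling procedure above can be sketched as follows. This is a minimal illustration on synthetic rater outputs, not the study's code; the tie rule is simplified to random tie-breaking for all even rater counts, and all data below are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def majority_accuracy(pred, truth, n_raters, n_trials=1000):
    """Accuracy of a majority-vote consensus over raters drawn with replacement.

    pred  : (n_pool, n_videos) array of per-rater binary classifications
    truth : (n_videos,) array of true diagnostic labels (1 = ASD)
    """
    n_pool, n_videos = pred.shape
    hits = []
    for _ in range(n_trials):
        idx = rng.integers(0, n_pool, size=n_raters)  # sample raters with replacement
        votes = pred[idx].sum(axis=0)
        # majority vote; -1 marks a tie (possible only with an even rater count)
        consensus = np.where(votes * 2 > n_raters, 1,
                             np.where(votes * 2 < n_raters, 0, -1))
        ties = consensus == -1
        if ties.any():
            # break ties with an independent, randomly chosen rater
            # (a simplification of the study's tie-breaking rule)
            breaker = rng.integers(0, n_pool, size=int(ties.sum()))
            consensus[ties] = pred[breaker, np.nonzero(ties)[0]]
        hits.append((consensus == truth).mean())
    return float(np.mean(hits))

# toy data: 9 raters who each match the true label ~85% of the time
truth = rng.integers(0, 2, size=50)
pred = np.where(rng.random((9, 50)) < 0.85, truth, 1 - truth)
for k in (1, 3, 5, 9):
    print(k, round(majority_accuracy(pred, truth, k), 3))
```

Sweeping the number of raters from 1 to 9 reproduces the shape of the analysis used to pick the minimum viable rater count.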

After determining the minimum number of video raters, we used that minimum to generate the complete set of 30-feature vectors for all videos. Seven of the models were written in Python 3 using the scikit-learn package, and one in R. We ran these 8 models on our feature matrices following feature tagging of the videos. We measured model accuracy by comparing the raters' majority classification result with the true diagnosis. We evaluated model performance further by age categories: ≤2 years, >2 to ≤4 years, >4 to ≤6 years, and >6 years. For each category, we calculated accuracy, sensitivity, and specificity.
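The per-age-band metrics described above reduce to confusion-matrix arithmetic. The sketch below is illustrative only: the ages, labels, and consensus calls are hypothetical, not the study's data.

```python
import numpy as np

def diagnostic_metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall on ASD = 1), specificity (recall on non-ASD = 0)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn) if tp + fn else float("nan"),
        "specificity": tn / (tn + fp) if tn + fp else float("nan"),
    }

# the study's age bands: ≤2, >2–≤4, >4–≤6, >6 years
bands = [(0, 2), (2, 4), (4, 6), (6, float("inf"))]
ages = np.array([1.5, 3.0, 5.2, 7.1, 2.5, 4.8])   # hypothetical ages (years)
y_true = np.array([1, 0, 1, 1, 0, 1])             # hypothetical diagnoses
y_pred = np.array([1, 0, 1, 0, 1, 1])             # hypothetical consensus calls
for lo, hi in bands:
    in_band = (ages > lo) & (ages <= hi)
    if in_band.any():
        print(f">{lo} to <={hi} years:", diagnostic_metrics(y_true[in_band], y_pred[in_band]))
```

The guard clauses return NaN for a band with no positives or no negatives, which matters here because some age bands in the study contained very few non-ASD examples.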

We collected timed data from each rater for each video, which began when a video rater pressed “play” on the video and concluded when a video rater finished scoring by clicking “submit” on the video portal. We used these time stamps to calculate the time spent annotating each video. We approximated the time taken to answer the questions by excluding the length of the video from the total time spent to score a video.

Building a video feature classifier

The process of video feature tagging provides an opportunity to generate a crowdsourced collection of independent feature measurements that are specific to the video of the child as well as independent rater impressions of that child’s behaviors. This in turn has the ability to generate a valuable feature matrix to develop models that include video-specific features rather than features identified through analysis on archived data generated through administration of the SOC (as is the case for all classifiers contained in Table 1). To this end, and following the completion of the annotation on all videos by the minimum number of raters, we performed machine learning on our video feature set. We used LR with an elastic net penalty [42] (LR-EN-VF) to predict the autism class from the non-autism class. We randomly split the dataset into training and testing, reserving 20% for the latter while using cross-validation on the training set to tune for hyperparameters. We used cross-validation for model hyperparameter tuning by performing a grid search with different values of alpha (varying penalty weights) and L1 ratio (the mixing parameter determining how much weight to apply to L1 versus L2 penalties). Based on the resulting area under the curve (AUC) and accuracy from each combination, we selected the top-performing pair of hyperparameters. Using this pair, we trained the model using LR and balanced class weights to adjust weights inversely proportional to class frequencies in the input data. After determining the top-ranked features based on the trained model and the resulting coefficients, we validated the model on the reserved test set.
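A sketch of this elastic-net LR training procedure with grid search is below, using synthetic stand-in data rather than the study's feature matrix. Note that scikit-learn parameterizes penalty strength as `C`, the inverse of a penalty weight such as the alpha reported here, so the grid values are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

# synthetic stand-in for the 30-feature video matrix (not the study's data)
X, y = make_classification(n_samples=300, n_features=30, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)

# grid search over penalty strength and the L1/L2 mixing ratio;
# balanced class weights adjust inversely to class frequencies
grid = GridSearchCV(
    LogisticRegression(penalty="elasticnet", solver="saga",
                       class_weight="balanced", max_iter=5000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0],
                "l1_ratio": [0.2, 0.4, 0.6, 0.8]},
    scoring="roc_auc", cv=5,
)
grid.fit(X_tr, y_tr)

best = grid.best_estimator_
# rank features by absolute coefficient; the L1 part drives some to zero
ranked = np.argsort(-np.abs(best.coef_[0]))
print("held-out accuracy:", best.score(X_te, y_te))
print("top-8 feature indices:", ranked[:8])
```

Ranking features by the magnitude of their fitted coefficients is how a sparse model of this kind yields a small interpretable feature set, such as the 8-feature model reported later in this paper.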


Results

All classifiers used for testing the time and accuracy of mobile video rating had accuracies above 90% (Table 1). The union of features across these 8 classifiers (Table 1) comprised 23 features (Fig 1). These features, plus an additional 7 chosen for clinical validity testing, were loaded into a mobile video rating portal to enable remote feature tagging by nonclinical video raters.

We collected a total of 193 videos (Table 2) with an average video length of 2 minutes 13 seconds (SD = 1 minute 40 seconds). Of the 119 ASD videos, 72 were direct submissions made by the primary caregiver of the child, and 47 were links to an existing video on YouTube. Of the 74 non-ASD videos, 46 were links to existing YouTube videos, and 28 were direct submissions from the primary caregiver. We excluded 31 videos because of insufficient evidence for the diagnosis (n = 25) or inadequate video quality (n = 6), leaving 162 videos (116 ASD and 46 non-ASD) that were loaded into our mobile video rating portal for the primary analysis. To validate self-reporting of the presence or absence of an ASD diagnosis, 2 clinical staff trained and certified in autism diagnosis evaluated a random selection of 30 videos (15 ASD and 15 non-ASD) from the 162 videos. Their classifications corresponded perfectly with the diagnoses provided through self-report by the primary caregiver.

We randomly selected 50 videos (25 ASD and 25 non-ASD) from the 162 collected videos and had all 9 raters feature tag them, in order to evaluate the optimal number of raters, with optimal defined as balancing scalability against information content. The average video length of this random subset was 1 minute 54 seconds (SD = 46 seconds) for the ASD class and 2 minutes 36 seconds (SD = 1 minute 15 seconds) for the non-ASD class. We then ran the ADTree8 (Table 1) model on the feature vectors generated by the 9 raters. We found the difference in accuracy to be statistically insignificant between 3 raters—the minimum number to have a majority consensus on the classification with no ties—and 9 raters (Fig 2). We therefore elected to use a random selection of 3 raters from the 9 to feature tag all 162 crowdsourced home videos.

Model performance

Three raters performed video screening and feature tagging to generate vectors for each of the 8 machine learning models for comparative evaluation of performance (Fig 3). All classifiers had sensitivity >94.5%. However, only 3 of the 8 models exhibited specificity above 50%. The top-performing classifier was LR5, which showed an accuracy of 88.9%, sensitivity of 94.5%, and specificity of 77.4%. The next-best-performing models were SVM5 with 85.4% accuracy (54.9% specificity) and LR10 with 84.8% accuracy (51% specificity).

LR5 exhibited high accuracy across all age ranges, with the exception of children over 6 years old (although note that we had limited examples of the non-ASD [n = 1] class in this range). This model performed best on children between the ages of 4 and 6 years, with sensitivity and specificity both above 90% (Fig 4, Table 3). SVM5 and LR10 showed an increase in performance on children ages 2–4 years, both with 100% sensitivity, and with specificities of 66.7% and 58.8%, respectively. The 3 raters agreed unanimously on 116 of the 162 videos (72%) when using the top-performing classifier, LR5. The interrater agreement (IRA) for this model was above 75% in all age ranges, with the exception of the youngest group, children under 2 years, for which disagreement was more frequent. The number of non-ASD examples was small for the older age ranges evaluated (Table 3).

The median time for the 3 raters to watch and score a video was 4 minutes (Table 4). Excluding the time spent watching the video, raters required a median of 2 minutes 16 seconds to tag all 30 features in the analyst portal. We found a significant difference (p = 0.0009) between the average time spent to score the videos of children with ASD and the average time spent to score the non-ASD videos (6 minutes 36 seconds compared with 5 minutes 8 seconds).
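A comparison of average scoring times like the one above could be run, for example, as Welch's two-sample t-test; the study does not specify which test it used, and the times below are simulated to match the reported group means.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
# simulated per-video scoring times (seconds), centered on the reported
# means: 6 min 36 s (ASD) vs 5 min 8 s (non-ASD); the SD of 90 s is assumed
asd_times = rng.normal(396, 90, size=116)
non_asd_times = rng.normal(308, 90, size=46)

# Welch's t-test does not assume equal variances between groups
t_stat, p_value = ttest_ind(asd_times, non_asd_times, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.2e}")
```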

Independent validation

To validate the feasibility and accuracy of rapid feature tagging and machine learning on short home videos, we launched a second crowdsourcing effort for videos of children with and without autism to generate an independent replication dataset. We collected 66 videos, 33 of children with autism and 33 non-ASD. This set of videos was comparable to the initial set of 162 videos in terms of gender, age, and video length. The average age for children with ASD was 4 years 5 months (SD = 1 year 9 months), and the average age for non-ASD children was 3 years 11 months (SD = 1 year 7 months). Forty-two percent (n = 14) of the children with ASD were male, and 45% (n = 15) of the non-ASD children were male. The average video length was 3 minutes 24 seconds, with an SD of 45 seconds. For this independent replication, we used 3 different raters, each with no official training or experience in developmental pediatrics. The raters required a median time of 6 minutes 48 seconds for complete feature tagging. LR5 again yielded the highest accuracy, with a sensitivity of 87.8% and a specificity of 72.7%. A total of 13 of the 66 videos were misclassified, with 4 false negatives.

Given the higher average time for video evaluation, we hypothesized that the videos contained challenging displays of autism symptoms. We therefore examined the probabilities generated by the LR5 model for the 13 misclassified videos. Two of the 4 false negatives and 4 of the 9 false positives had borderline probability scores between 0.4 and 0.6. We elected to define a probability threshold between 0.4 and 0.6 to flag videos as inconclusive cases. Twenty-six of the 66 videos fell within this inconclusive group when applying this threshold. When we excluded these 26 from our accuracy analysis, sensitivity and specificity increased to 91.3% and 88.2%, respectively.
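The inconclusive-band rule can be expressed as a small filter over the model's probability outputs; the sketch below uses made-up probabilities, and the helper name is hypothetical rather than part of the study's code.

```python
import numpy as np

def screen_with_inconclusive_band(probs, low=0.4, high=0.6):
    """Split model probabilities into confident calls and inconclusive flags."""
    probs = np.asarray(probs)
    inconclusive = (probs >= low) & (probs <= high)   # flag for clinical follow-up
    calls = probs > high                              # ASD call when confident
    return calls[~inconclusive], inconclusive

# hypothetical LR5 output probabilities for 6 videos
probs = [0.95, 0.55, 0.10, 0.45, 0.80, 0.30]
calls, flagged = screen_with_inconclusive_band(probs)
print("confident calls:", calls)
print("inconclusive:", flagged)
```

Metrics such as sensitivity and specificity would then be computed only over the confident calls, mirroring the exclusion of the 26 inconclusive videos above.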

Training a video feature–specific classifier

To build a video feature–specific classifier, we trained an LR-EN-VF model on 528 (3 raters × 176 videos) novel measures of the 30 video features used to distinguish the autism class from the neurotypical cohort. Out of these 176 videos (ASD = 121, non-ASD = 58), 162 (ASD = 116, non-ASD = 46) were from the analysis set, and 14 videos (ASD = 5, non-ASD = 12) were from the set of 66 validation videos. Model hyperparameters (alpha and L1 ratio) identified through 10-fold cross-validation were 0.01 and 0.6, respectively. We used a high L1 ratio to enforce sparsity and to decrease model complexity and the number of features. We had similar proportions (0.60) for non-ASD and ASD measures in the training set and held-out test set, which allowed us to create a model that generalizes well without a significant change in sensitivity or specificity on novel data. The model had an area under the receiver operating characteristic curve (AUC-ROC) of 93.3% and accuracy of 87.7% on the held-out test set. A comparison of LR-EN-VF with LR L2 penalty (no feature reduction) revealed similar results (AUC-ROC: 93.8%, test accuracy: 90.7%) (Fig 5). The top-8 features selected by the model consisted of the following, in order of highest to lowest rank: speech patterns, communicative engagement, understands language, emotion expression, sensory seeking, responsive social smile, stereotyped speech. One of these 8 features—sensory seeking—was not part of the full sets of items on the standard instrument data used in the development and testing of the 8 models depicted in Table 1. We then validated this classifier on the remaining 52 videos (ASD = 28, non-ASD = 21) from the validation set, and the results showed an accuracy of 75.5% and an AUC-ROC of 86.0%.


Discussion

Previous work [26–29] has shown that machine learning models built on records from standard autism diagnostic instruments can achieve high classification accuracy with a small number of features. Although promising in terms of their minimal feature requirements and ability to generate an accurate risk score, their potential for improving autism diagnosis in practice has remained an open question. The present study tested the ability to reduce these models to the practice of home video evaluation by nonexperts using mobile platforms (e.g., tablets, smartphones). Independent tagging of 30 features by 3 raters blind to diagnosis enabled majority-rules machine learning classification of 162 two-minute (average) home videos in a median of 4 minutes at 90% AUC on children ages 20 months to 6 years. This performance was maintained at 89% AUC (95% CI 81%–95%) in a prospectively collected and independent external set of 66 videos, each with 3 independent rater measurement vectors. Taking advantage of the probability scores generated by the best-performing model (L1-regularized LR model with 5 features) to flag low-confidence cases, we were able to achieve a 91% AUC, suggesting that the approach could benefit from using the scores on a more quantitative scale rather than only as a binary classification outcome.

By using a mobile format that can be accessed online, we showed that it is possible to get multiple independent feature vectors for classification. This has the potential to elevate confidence in classification outcome at the time of diagnosis (i.e., when 3 or more agree on class) while fostering the growth of a novel matrix of features from short home videos. In the second part of our study, we tested the ability for this video feature matrix to enable development of a new model that can generalize to the task of video-based classification of autism. We found that an 8-feature LR model could achieve an AUC of 0.93 on the held-out subset and 0.86 on the prospective independent validation set. One of the features used by this model, sensory seeking, was not used by the instruments on which the original models were trained, suggesting the possibility that alternative features may provide added power for video classification.

These results support the hypothesis that the detection of autism can be done effectively at scale through mobile video analysis and machine learning classification to produce a quantified indicator of autism risk quickly. Such a process could streamline autism diagnosis to enable earlier detection and earlier access to therapy that has the highest impact during earlier windows of social development. Further, this approach could help to reduce the geographic and financial burdens associated with access to diagnostic resources and provide more equal opportunity to underserved populations, including those in developing countries. Further testing and refinement should be conducted to identify the most viable method(s) of crowdsourcing video acquisition and feature tagging. In addition, prospective trials in undiagnosed and in larger, more-balanced cohorts including examples of children with non-autism developmental delays will be needed to better understand the approach’s potential for use in autism diagnosis.


References

1. Prince M, Patel V, Saxena S, Maj M, Maselko J, Phillips MR, et al. Global mental health 1 – No health without mental health. Lancet. 2007;370(9590):859–77. pmid:17804063.
2. Baio J, Wiggins L, Christensen DL, Maenner MJ, Daniels J, Warren Z, et al. Prevalence of Autism Spectrum Disorder Among Children Aged 8 Years—Autism and Developmental Disabilities Monitoring Network, 11 Sites, United States, 2014. MMWR Surveill Summ. 2018;67(6):1. pmid:29701730. PMCID: PMC5919599.
3. Hertz-Picciotto I, Delwiche L. The Rise in Autism and the Role of Age at Diagnosis. Epidemiology. 2009;20(1):84–90. pmid:19234401. PMCID: PMC4113600.
4. Christensen DL, Baio J, Van Naarden Braun K, Bilder D, Charles J, Constantino JN, et al. Prevalence and Characteristics of Autism Spectrum Disorder Among Children Aged 8 Years–Autism and Developmental Disabilities Monitoring Network, 11 Sites, United States, 2012. MMWR Surveill Summ. 2016;65(3):1–23. pmid:27031587.
5. Christensen DL, Bilder DA, Zahorodny W, Pettygrove S, Durkin MS, Fitzgerald RT, et al. Prevalence and characteristics of autism spectrum disorder among 4-year-old children in the autism and developmental disabilities monitoring network. J Dev Behav Pediatr. 2016;37(1):1–8. pmid:26651088.
6. Buescher AV, Cidav Z, Knapp M, Mandell DS. Costs of autism spectrum disorders in the United Kingdom and the United States. JAMA Pediatr. 2014;168(8):721–8. pmid:24911948.
7. McPartland JC, Reichow B, Volkmar FR. Sensitivity and specificity of proposed DSM-5 diagnostic criteria for autism spectrum disorder. J Am Acad Child Adolesc Psychiatry. 2012;51(4):368–83. pmid:22449643. PMCID: PMC3424065.
8. Lord C, Rutter M, Goode S, Heemsbergen J, Jordan H, Mawhood L, et al. Autism diagnostic observation schedule: A standardized observation of communicative and social behavior. J Autism Dev Disord. 1989;19(2):185–212. pmid:2745388.
9. Lord C, Rutter M, Le Couteur A. Autism Diagnostic Interview-Revised: a revised version of a diagnostic interview for caregivers of individuals with possible pervasive developmental disorders. J Autism Dev Disord. 1994;24(5):659–85. pmid:7814313.
10. American Psychiatric Association. Diagnostic and statistical manual of mental disorders (DSM-5®). Arlington, VA: American Psychiatric Publishing; 2013.
11. Bernier R, Mao A, Yen J. Psychopathology, families, and culture: autism. Child Adolesc Psychiatr Clin N Am. 2010;19(4):855–67. pmid:21056350.
12. Dawson G. Early behavioral intervention, brain plasticity, and the prevention of autism spectrum disorder. Dev Psychopathol. 2008;20(3):775–803. pmid:18606031.
13. Mazurek MO, Handen BL, Wodka EL, Nowinski L, Butter E, Engelhardt CR. Age at first autism spectrum disorder diagnosis: the role of birth cohort, demographic factors, and clinical features. J Dev Behav Pediatr. 2014;35(9):561–9. pmid:25211371.
14. Wiggins LD, Baio J, Rice C. Examination of the time between first evaluation and first autism spectrum diagnosis in a population-based sample. J Dev Behav Pediatr. 2006;27(2):S79–S87. pmid:16685189.
15. Gordon-Lipkin E, Foster J, Peacock G. Whittling Down the Wait Time: Exploring Models to Minimize the Delay from Initial Concern to Diagnosis and Treatment of Autism Spectrum Disorder. Pediatr Clin North Am. 2016;63(5):851–9. pmid:27565363. PMCID: PMC5583718.
16. Howlin P, Moore A. Diagnosis in autism: A survey of over 1200 patients in the UK. Autism. 1997;1(2):135–62.
17. Kogan MD, Strickland BB, Blumberg SJ, Singh GK, Perrin JM, van Dyck PC. A National Profile of the Health Care Experiences and Family Impact of Autism Spectrum Disorder Among Children in the United States, 2005–2006. Pediatrics. 2008;122(6):E1149–E58. pmid:19047216.
18. Siklos S, Kerns KA. Assessing the diagnostic experiences of a small sample of parents of children with autism spectrum disorders. Res Dev Disabil. 2007;28(1):9–22. pmid:16442261.
19. Thomas KC, Ellis AR, Konrad TR, Holzer CE, Morrissey JP. County-level estimates of mental health professional shortage in the United States. Psychiatr Serv. 2009;60(10):1323–8. pmid:19797371.
20. Dawson G, Jones EJH, Merkle K, Venema K, Lowy R, Faja S, et al. Early Behavioral Intervention Is Associated With Normalized Brain Activity in Young Children With Autism. J Am Acad Child Adolesc Psychiatry. 2012;51(11):1150–9. pmid:23101741. PMCID: PMC3607427.
21. Dawson G, Rogers S, Munson J, Smith M, Winter J, Greenson J, et al. Randomized, controlled trial of an intervention for toddlers with autism: the Early Start Denver Model. Pediatrics. 2010;125(1):e17–23. pmid:19948568. PMCID: PMC4951085.
22. Landa RJ. Efficacy of early interventions for infants and young children with, and at risk for, autism spectrum disorders. Int Rev Psychiatry. 2018;30(1):25–39. pmid:29537331. PMCID: PMC6034700.
23. Phillips DA, Shonkoff JP. From neurons to neighborhoods: The science of early childhood development. Washington, D.C.: National Academies Press; 2000. pmid:25077268.
24. Duda M, Daniels J, Wall DP. Clinical Evaluation of a Novel and Mobile Autism Risk Assessment. J Autism Dev Disord. 2016;46(6):1953–61. pmid:26873142. PMCID: PMC4860199.
25. Duda M, Kosmicki JA, Wall DP. Testing the accuracy of an observation-based classifier for rapid detection of autism risk. Transl Psychiatry. 2014;4(8):e424. pmid:25116834.
26. Kosmicki JA, Sochat V, Duda M, Wall DP. Searching for a minimal set of behaviors for autism detection through feature selection-based machine learning. Transl Psychiatry. 2015;5(2):e514. pmid:25710120. PMCID: PMC4445756.
27. Levy S, Duda M, Haber N, Wall DP. Sparsifying machine learning models identify stable subsets of predictive features for behavioral detection of autism. Mol Autism. 2017;8(1):65. pmid:29270283. PMCID: PMC5735531.
28. Wall DP, Kosmicki J, DeLuca TF, Harstad E, Fusaro VA. Use of machine learning to shorten observation-based screening and diagnosis of autism. Transl Psychiatry. 2012;2(4):e100. pmid:22832900. PMCID: PMC3337074.
29. Wall DP, Dally R, Luyster R, Jung JY, Deluca TF. Use of artificial intelligence to shorten the behavioral diagnosis of autism. PLoS One. 2012;7(8):e43855. pmid:22952789.
30. Wall DP, Kosmicki J, DeLuca TF, Harstad E, Fusaro VA. Use of machine learning to shorten observation-based screening and diagnosis of autism. Transl Psychiatry. 2012;2:e100. pmid:22832900. PMCID: PMC3337074.
31. Schuller B, Vlasenko B, Eyben F, Wollmer M, Stuhlsatz A, Wendemuth A, et al. Cross-Corpus Acoustic Emotion Recognition: Variances and Strategies. IEEE Transactions on Affective Computing. 2010;1(2):119–31.
32. Bone D, Goodwin MS, Black MP, Lee CC, Audhkhasi K, Narayanan S. Applying machine learning to facilitate autism diagnostics: pitfalls and promises. J Autism Dev Disord. 2015;45(5):1121–36. pmid:25294649. PMCID: PMC4390409.
33. Bone D, Bishop SL, Black MP, Goodwin MS, Lord C, Narayanan SS. Use of machine learning to improve autism screening and diagnostic instruments: effectiveness, efficiency, and multi-instrument fusion. J Child Psychol Psychiatry. 2016;57(8):927–37. pmid:27090613. PMCID: PMC4958551.
34. Bussu G, Jones EJH, Charman T, Johnson MH, Buitelaar JK, BASIS Team. Prediction of Autism at 3 Years from Behavioural and Developmental Measures in High-Risk Infants: A Longitudinal Cross-Domain Classifier Analysis. J Autism Dev Disord. 2018;48(7):2418–33. pmid:29453709. PMCID: PMC5996007.
35. Fusaro VA, Daniels J, Duda M, DeLuca TF, D'Angelo O, Tamburello J, et al. The Potential of Accelerating Early Detection of Autism through Content Analysis of YouTube Videos. PLoS One. 2014;9(4):e93533. pmid:24740236. PMCID: PMC3989176.
36. Freund Y, Schapire RE. Experiments with a new boosting algorithm. In: Proceedings of the International Conference on Machine Learning (ICML); 1996 Jul 3; Bari, Italy. San Francisco, CA: Morgan Kaufmann Publishers Inc.; 1996.
37. Freund Y, Mason L. The alternating decision tree learning algorithm. In: Proceedings of the International Conference on Machine Learning (ICML); 1999 Jun 27; Bled, Slovenia. San Francisco, CA: Morgan Kaufmann Publishers Inc.; 1999.
38. Behrend TS, Sharek DJ, Meade AW, Wiebe EN. The viability of crowdsourcing for survey research. Behav Res Methods. 2011;43(3):800–13. pmid:21437749.
39. David MM, Babineau BA, Wall DP. Can we accelerate autism discoveries through crowdsourcing? Research in Autism Spectrum Disorders. 2016;32:80–3.
40. Ogunseye S, Parsons J. What Makes a Good Crowd? Rethinking the Relationship between Recruitment Strategies and Data Quality in Crowdsourcing. In: Proceedings of the 16th AIS SIGSAND Symposium; 2017 May 19–20; Cincinnati, OH.
41. Swan M. Crowdsourced health research studies: an important emerging complement to clinical trials in the public health research ecosystem. J Med Internet Res. 2012;14(2):e46. pmid:22397809. PMCID: PMC3376509.
42. Zou H, Hastie T. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology). 2005;67(2):301–20.
