Objectives/Aims To the best of our knowledge, there are no studies using intelligent sensing to diagnose ADHD. The aim of this interdisciplinary (medical and engineering) research is to apply intelligent-sensing-based multimodal (audio, video, touch, and text) data capture and trustworthy, reproducible AI to the diagnosis of ADHD.
Methods Unmedicated subjects with ADHD were interviewed at the Intelligent Sensing Lab at Newcastle University, UK, and multimodal data were captured (figure 1). Participants completed CANTAB tasks and watched stimulating and neutral videos to elicit hyperfocus and distraction; additional distraction cues, such as noise and images on a monitor, were also introduced. Healthy volunteers served as controls. The data were analysed using speech analysis, action analysis, and the Facial Action Coding System, applying existing multimodal signal and information processing algorithms developed at Newcastle University.
Results The accuracy of the audio-based ADHD diagnosis reached over 80%, the accuracy of the action-based diagnosis system reached over 90%, and the accuracy of the Facial Action Coding System reached 94%.
Conclusions Even with a small number of subjects and controls, we were able to develop a proof-of-concept system that generates high-accuracy results. We are now carrying out multimodal fusion and preparing a much larger study.
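The abstract does not specify how the modalities will be fused. A minimal sketch of one common approach, decision-level (late) fusion by weighted averaging of per-modality classifier probabilities, is shown below; the function name, the weights, and the example scores are all hypothetical illustrations, not values reported by the study.

```python
from typing import Dict

def fuse_probabilities(modality_probs: Dict[str, float],
                       weights: Dict[str, float]) -> float:
    """Combine per-modality ADHD-positive probabilities by weighted average.

    This is an illustrative late-fusion scheme, not the study's actual method.
    """
    total_weight = sum(weights[m] for m in modality_probs)
    return sum(modality_probs[m] * weights[m]
               for m in modality_probs) / total_weight

# Hypothetical per-modality scores for one subject (not reported data).
probs = {"audio": 0.82, "action": 0.91, "facial": 0.94}
weights = {"audio": 1.0, "action": 1.0, "facial": 1.0}  # equal weighting
print(round(fuse_probabilities(probs, weights), 2))  # equal weights -> simple mean, 0.89
```

In practice the weights could be tuned on a validation set so that more reliable modalities contribute more to the fused decision.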