Digital interventions for neurodivergent children: what the research shows

Why this matters

Over the past decade, digital tools for neurodivergent children have moved from novelty to a serious area of clinical research. Meta-analyses now cover hundreds of studies. Some tools have received FDA authorisation. Others have quietly disappeared after failing to show results.

At DeepSpectrum Lab, we build on this research. So we think it's worth laying out what the evidence actually says – clearly and honestly, including the parts that are inconvenient.

This article reviews the published literature across four areas that are central to our work: social stories, Theory of Mind training, emotion recognition, and executive function support. We also look at what makes digital interventions effective – and where they fall short.

The big picture: do digital interventions work?

The short answer: yes, but with important caveats.

In 2014, Grynszpan and colleagues published a landmark meta-analysis in Autism, reviewing controlled studies of technology-based interventions for autistic individuals. The overall effect size was d = 0.47 – a medium effect – across outcomes including social problem-solving, emotion processing, and facial recognition (Grynszpan et al., 2014). That number has held up well as more studies have been published.
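For readers unfamiliar with effect sizes: Cohen's d expresses the difference between two group means in units of their pooled standard deviation, with roughly 0.2 considered small, 0.5 medium, and 0.8 large. A minimal sketch of the calculation, using made-up post-test scores (the data below are illustrative only, not from any study cited here):

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    pooled_sd = (((n_a - 1) * stdev(group_a) ** 2 +
                  (n_b - 1) * stdev(group_b) ** 2) /
                 (n_a + n_b - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled_sd

# Illustrative (invented) post-test scores: intervention vs control
intervention = [14, 16, 15, 18, 17, 13, 16, 15]
control = [12, 14, 13, 15, 14, 11, 13, 12]
print(round(cohens_d(intervention, control), 2))
```

A meta-analytic effect size like d = 0.47 is a weighted average of many such per-study values, so it summarises a whole literature rather than one trial.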

A 2023 systematic review published in JMIR Mental Health looked specifically at technology-based interventions for school-age children with ADHD. Across 19 randomised controlled trials involving 1,843 participants, digital interventions showed improvements in attention and reduced hyperactivity symptoms (Wang et al., 2023). A separate meta-analysis of 31 studies (2,169 participants, ages 4–17) found significant improvements in inattention symptoms and reaction times (Zhang et al., 2023).

A large-scale systematic review screened over 46,000 articles and selected 254 studies for in-depth analysis. Technology-based interventions – particularly socially assistive robots – showed significant improvements in social interaction, emotional regulation, and communication for autistic children.

So: digital interventions can work. The question is which ones, for whom, and under what conditions.

Social Stories: from paper to screen

Social Stories™ were developed by Carol Gray in 1991 as short narratives that describe social situations from multiple perspectives. They've become one of the most widely used approaches in autism support.

The evidence for social stories is positive but mixed in terms of rigour. A systematic review found they are most effective when targeting specific socio-emotional goals rather than teaching general social skills. The challenge has always been consistency – in clinical settings, different practitioners deliver them differently.

That's where digital delivery becomes interesting. A 2020 pilot randomised controlled trial tested digitally-mediated social stories for autistic children. Results showed beneficial changes in behaviour outcomes that were sustained at a six-week follow-up. The digital format allowed for consistent delivery, controlled repetition, and personalisation – things that are hard to achieve with printed stories.

The ASSSIST-2 trial (Wright et al., 2025), a pragmatic randomised controlled trial in UK primary schools, is one of the largest studies of Social Stories in educational settings. It tested the intervention's impact on the social and emotional health of autistic children – providing further evidence on how structured narrative approaches perform in real-world settings.

More recently, researchers have started exploring VR-based social stories with visual novel elements – combining narrative interaction with user choices. A 2025 systematic literature review found promising early results for this approach, though the research is still in its early stages.

Theory of Mind: teaching perspective-taking

Theory of Mind (ToM) – the ability to understand that others have thoughts, feelings, and knowledge different from your own – develops on a different timeline for many autistic children. Training ToM is one of the most researched areas in autism intervention.

The most cited RCT in this area is Begeer et al. (2011), published in the Journal of Autism and Developmental Disorders. In a 16-week training programme with 40 autistic children (ages 8–13), the treatment group showed significant improvement in conceptual ToM skills compared to controls.

Here's the important nuance: while conceptual understanding improved, the study found no significant improvement in self-reported empathy or parent-reported social behaviour. In other words, children could understand ToM concepts better but didn't necessarily transfer that knowledge to everyday social situations.

This is a recurring pattern in the literature. A Cochrane review of ToM interventions for autism (Fletcher-Watson et al., 2014) found similar results: training improves task performance, but generalisation to real-world interactions remains the major challenge.

More recent work has explored robot-assisted ToM training (Scientific Reports, 2025), showing that humanoid robots can effectively support ToM skill development – possibly because the predictable, non-threatening nature of robot interaction lowers social anxiety during practice.

Emotion recognition: where technology shines

If there's one area where digital interventions show the most consistent results, it's emotion recognition training. Several named programmes have been tested in controlled studies:

  • FaceSay – Hopkins et al. (2011) tested this programme with 31 school-age autistic children in a randomised controlled trial. Participants who received the intervention improved in affect recognition, mentalising, and social skills.
  • The Transporters – A BBC-produced animation series designed by researchers at the Autism Research Centre, Cambridge. In clinical trials, autistic children who watched 15 minutes daily for four weeks caught up with typically developing peers on emotion recognition tasks.
  • Mind Reading – An RCT with 43 autistic children (ages 7–12) found significantly better emotion decoding and encoding skills after a 12-week programme.
  • Zirkus Empathico – Kirst et al. (2022), published in Behaviour Research and Therapy, tested this parent-assisted serious game in a multicentre RCT with 82 autistic children (ages 5–10). Training effects were observed for empathy (d = 0.71) and emotion recognition (d = 0.50) – moderate to large effects. However, gains were not maintained at follow-up.

The pattern across these studies: emotion recognition can be trained with digital tools, and the effects are often statistically significant. But maintaining those gains over time and generalising them to real social situations remains a challenge.

Executive function and ADHD: the Cogmed and EndeavorRx story

Two programmes have dominated research on digital executive function training for ADHD children:

Cogmed Working Memory Training

Klingberg et al. (2005) published the first major RCT: 53 children with ADHD (ages 7–12) completed a computerised working memory programme. The treatment group improved significantly on visuospatial working memory, response inhibition, verbal working memory, complex reasoning, and parent-reported ADHD symptoms. But a 2023 meta-analysis by the European ADHD Guidelines Group concluded that the clinical benefits of such training were “limited to small, setting-specific, short-term effects.”

EndeavorRx (AKL-T01)

In 2020, EndeavorRx became the first FDA-authorised digital therapeutic for ADHD. In the pivotal STARS-ADHD trial, 348 children with confirmed ADHD used the game-based intervention for 25 minutes daily, 5 days a week, for 4 weeks. After one month, roughly half of parents reported clinically meaningful improvement in daily functioning. A follow-up adolescent trial (n = 162) showed robust improvements in attention. No serious adverse events were reported.

The broader meta-analytic picture: a review of 25 cognitive training studies for ADHD children found a large training effect on working memory (g = 0.907) and a moderate effect on planning (g = 0.532). However, transfer to academic performance and everyday behaviour was negligible. Training improves what you train – but doesn't automatically improve everything else.

What makes digital interventions effective

Across the research, a consistent set of design principles emerges for interventions that actually help neurodivergent children:

  • Structured and predictable – clear expectations, consistent interfaces, no surprise changes
  • Repetition with variation – the same concept practised in different contexts, not just drilling the same task
  • Adaptive difficulty – content that adjusts to the child's level rather than following a fixed sequence
  • Non-judgmental feedback – no scores, rankings, or social comparison
  • Short, focused sessions – interventions that work tend to be 15–25 minutes per session, not marathon use
  • Parent or caregiver involvement – the Zirkus Empathico study showed particularly strong effects with a parent-assisted model

One unexpected finding from the Grynszpan meta-analysis: shorter interventions produced larger effect sizes than longer ones. This doesn't mean less is more in every case – but it does suggest that focused, well-designed short sessions can be more effective than prolonged training with diminishing engagement.
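The adaptive-difficulty principle above is often implemented as a staircase: raise the level after a run of successes, drop it after a failure, so the child stays near the edge of their ability without repeated failure. A hypothetical sketch (class name, levels, and thresholds are all illustrative, not from any cited study):

```python
class StaircaseDifficulty:
    """Simple 2-up / 1-down staircase controller for task difficulty."""

    def __init__(self, level=1, min_level=1, max_level=10, streak_to_advance=2):
        self.level = level
        self.min_level = min_level
        self.max_level = max_level
        self.streak_to_advance = streak_to_advance
        self._streak = 0  # consecutive successes at the current level

    def record(self, success: bool) -> int:
        """Update state after one trial and return the next difficulty level."""
        if success:
            self._streak += 1
            if self._streak >= self.streak_to_advance:
                self.level = min(self.level + 1, self.max_level)
                self._streak = 0
        else:
            # Any failure drops the level one step and resets the streak.
            self._streak = 0
            self.level = max(self.level - 1, self.min_level)
        return self.level

controller = StaircaseDifficulty()
for outcome in [True, True, True, False, True, True]:
    controller.record(outcome)
print(controller.level)
```

The asymmetry (two successes to go up, one failure to come down) keeps the success rate comfortably above 50%, which matters for the non-judgmental, low-frustration feedback the research favours.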

What doesn't work – and what we don't know yet

Being honest about limitations is just as important as celebrating results. Here's what the research consistently flags:

  • Generalisation is the biggest challenge. Children often improve on trained tasks but struggle to apply those skills in real social situations. This is true across social stories, ToM training, and emotion recognition.
  • Long-term maintenance is unclear. Most studies measure effects immediately after training or at short follow-ups. The Zirkus Empathico RCT is a good example: strong immediate effects (d = 0.71 for empathy) but gains were not sustained at follow-up.
  • Sample sizes are often small. Many RCTs include 30–80 participants. The EndeavorRx trial (n = 348) is the exception, not the rule.
  • Most research focuses on higher-functioning children. Children with intellectual disabilities or limited verbal skills are underrepresented in the literature.
  • AI personalisation lacks empirical validation. A 2025 systematic review of AI-driven assistive technologies (84 studies, 2018–2024) found that while AI shows promise, “applications for neurodivergent learners remain largely unmapped” in terms of robust clinical evidence.

Where DeepSpectrum Lab fits in

We build on this research. We don't ignore the limitations – we design around them.

The generalisation problem tells us that isolated task training isn't enough. So we embed skills practice in realistic, interactive scenarios rather than abstract exercises. The maintenance problem tells us that one-off training doesn't stick. So we design for ongoing, regular use – as a complement to therapy, not a replacement. The short-session finding tells us more isn't better. So we keep sessions focused and build in breaks.

We also take the AI personalisation question seriously. Rather than claiming AI will solve everything, we use it for what the evidence supports: adaptive difficulty, pace adjustment, and consistent, patient interaction. We don't use AI to make clinical decisions – that's for professionals and families.

The research is clear: digital tools can meaningfully support neurodivergent children. The challenge is building them well, testing them honestly, and being transparent about what they can and cannot do. That's what we're here for.

References

Begeer, S., Gevers, C., Clifford, P., et al. (2011). Theory of Mind training in children with autism: a randomized controlled trial. Journal of Autism and Developmental Disorders, 41, 997–1006.

Fletcher-Watson, S., McConnell, F., Manola, E., & McConachie, H. (2014). Interventions based on the Theory of Mind cognitive model for autism spectrum disorder (ASD). Cochrane Database of Systematic Reviews.

Gray, C. (1991). Social Stories. Jenison Public Schools, Michigan.

Grynszpan, O., Weiss, P. L., Perez-Diaz, F., & Gal, E. (2014). Innovative technology-based interventions for autism spectrum disorders: A meta-analysis. Autism, 18(4), 346–361.

Hopkins, I. M., Gower, M. W., Perez, T. A., et al. (2011). Avatar assistant: improving social skills in students with an ASD through a computer-based intervention. Journal of Autism and Developmental Disorders, 41(11), 1543–1555.

Kirst, S., Diehm, R., Bögl, K., et al. (2022). Fostering socio-emotional competencies in children on the autism spectrum using a parent-assisted serious game: A multicenter randomized controlled trial. Behaviour Research and Therapy, 152, 104068.

Klingberg, T., Fernell, E., Olesen, P. J., et al. (2005). Computerized training of working memory in children with ADHD – a randomized, controlled trial. Journal of the American Academy of Child & Adolescent Psychiatry, 44(2), 177–186.

Wang, S., et al. (2023). Effectiveness of technology-based interventions for school-age children with ADHD: Systematic review and meta-analysis. JMIR Mental Health, 10, e51459.

Wright, B., et al. (2025). ASSSIST-2: a pragmatic randomised controlled trial of the Social Stories™ intervention in UK primary schools. Child and Adolescent Mental Health.

Zhang, Y., et al. (2023). Meta-analysis of the efficacy of digital therapies in children with ADHD. Frontiers in Psychiatry, 14, 1054831.