Cognitive training is structured, repeated practice on tasks that target a specific mental ability: working memory, processing speed, attention, executive function. It is not the same as "playing brain games." The honest answer to whether it works is: yes, for specific outcomes, in specific protocols, and not for the broad cognitive boost the marketing usually promises. This guide separates what has evidence from what doesn't.

What "cognitive training" actually is

The term is loose in marketing and tight in science. Used carefully, cognitive training has three features:

  1. It targets a specific, named cognitive ability, such as working memory, processing speed, attention, or executive function.
  2. It adapts difficulty as you improve, so the task stays challenging.
  3. It involves structured, repeated practice on a schedule, not occasional play.

Anything missing those features is a puzzle, a game, or a hobby. They may be enjoyable. They may even be good for you. But they are not what the cognitive-training literature is about.

What the central debate is

The whole field hangs on one distinction: near transfer versus far transfer. Near transfer is improvement on tasks similar to the one you trained. Far transfer is improvement on dissimilar tasks and real-world outcomes: reasoning, job performance, everyday memory.

Near transfer is robust and uncontroversial. Far transfer is fragile, contested, and the scientific reason most brain-training claims overreach. As Daniel Simons and colleagues put it in their 2016 consensus review, "We find extensive evidence that brain-training interventions improve performance on the trained tasks, less evidence that such interventions improve performance on closely related tasks, and little evidence that training enhances performance on distantly related tasks or that training improves everyday cognitive performance."

If a brain-training app claims it makes you smarter, faster, better at your job, or less likely to develop dementia, it is making a far-transfer claim. The default scientific stance is to be skeptical of all of them.

What ACTIVE actually showed

The single most cited result in this field is the ACTIVE trial (Advanced Cognitive Training for Independent and Vital Elderly). Researchers randomized 2,802 older adults to one of three training conditions (memory, reasoning, or speed-of-processing) or a no-contact control, then followed them for ten years.

The ten-year follow-up (Edwards et al., 2017) found that the speed-of-processing training group had a 29% lower hazard of dementia at the decade mark compared with controls. Memory and reasoning training did not show the same effect.

Two things matter about this result. First, it is the strongest direct evidence we have that any kind of computerized cognitive training reduces dementia risk over a long horizon. Second, the effect was specific to one type of training, in older adults who were dementia-free at enrollment, delivered as a structured intervention of ten initial sessions (with booster sessions for some participants), not as a casual habit. Generalizing it to "any brain-training app reduces dementia risk" is exactly the overclaim the FTC went after Lumosity for.

Speed-of-processing training in ACTIVE produced a 29% lower hazard of dementia at 10 years. That finding does not generalize to "brain games prevent dementia." It generalizes to "this specific protocol, in this specific population, produced this specific result."

What FINGER added

If ACTIVE is the strongest evidence for cognitive training in isolation, the FINGER trial is the strongest evidence for cognitive training as part of a multimodal protocol.

FINGER (Ngandu et al., 2015, The Lancet) randomized 1,260 older Finnish adults at elevated dementia risk to either a control group or a two-year intervention combining diet, exercise, cognitive training, and cardiovascular risk monitoring. The intervention group improved 25% more than controls on an overall cognitive composite, with still larger relative differences in executive function (83% more improvement) and processing speed (150% more).

The cognitive-training component was not isolated, so we can't attribute the gain to training alone. But the pattern is consistent across follow-up trials (US POINTER, MIND-CHINA, MAPT): brain training is more effective when it sits inside a stack that also addresses sleep, exercise, vascular risk, and social engagement. For the broader stack, see our brain health guide.

What does not have evidence

The negative findings in this field are as important as the positive ones, and the honest version of "what works" requires naming what doesn't.

  1. Far transfer from any single training task to general intelligence. No consumer protocol has demonstrated it convincingly.
  2. Casual, unstructured "brain game" play as dementia prevention. The ACTIVE result came from one specific, structured protocol, not a habit.
  3. Working-memory training as a route to better reading, arithmetic, or IQ in healthy adults. The meta-analytic evidence runs against it.

How the FTC case shaped what apps are allowed to claim

In January 2016, the Federal Trade Commission settled with Lumos Labs, the maker of Lumosity, for $2 million over advertising that promised the games could prevent or delay age-related cognitive decline, dementia, and Alzheimer's. The FTC's complaint did not say the games were useless. It said the company's evidence did not support the specific health claims it was making. Both points matter.

The settlement set the practical line for the entire industry. Today, a brain-training app can legally claim:

  1. That it improves your performance on the trained games and tasks themselves.
  2. That it trains a specific cognitive ability, such as working memory or attention, as measured on closely related tasks.

It cannot legally claim:

  1. That it prevents or delays age-related cognitive decline, dementia, or Alzheimer's disease.
  2. That it improves your performance at work or school.
  3. That it reduces cognitive impairment from health conditions.

When evaluating any brain-training claim today, the first question to ask is whether the claim names a specific outcome, population, and dosage. If it doesn't, it's marketing.

How to evaluate any cognitive-training app

Use this five-question rubric. We use it for our roundup of the major apps, and you can use it on any new product that crosses your feed:

  1. What specific cognitive ability does it train? "Brain training" is not an answer. "Working memory under interference," "speed of visual search," "task-switching cost" are answers.
  2. Does it adapt difficulty progressively? Or is it the same level on day 30 as day 1?
  3. Are the sessions short and daily, or long and weekly? Brief and daily is what the consolidation literature supports. We dig into the neurochemistry behind this in the five-minutes article.
  4. What does the independent research actually show? Independent meaning not funded by the company. If they cite their own internal studies only, that's a flag.
  5. What outcome are they claiming, and what evidence supports it? "Trains memory" is fine. "Reverses cognitive decline" requires the kind of evidence almost no consumer product has.

Score five out of five and the product is at least honestly designed. Score two out of five and it's a game with a marketing budget. Most consumer apps score in the middle.

Where the working-memory training literature lives now

For ten years, working-memory training was the most actively studied corner of cognitive training, on the hypothesis that working memory underlies fluid intelligence and that training one would lift the other. The results have been broadly disappointing. Melby-Lervåg, Redick, and Hulme's 2016 meta-analysis, the most comprehensive available, concluded that working-memory training does not transfer to general intelligence, reading, or arithmetic.

That is not the same as saying it is useless. Working-memory training reliably improves working memory and shows modest near transfer to closely related tasks. It is plausibly useful for specific clinical populations (ADHD, post-stroke recovery) where the trained skill itself is the limiting factor. It is just not the IQ booster early enthusiasm suggested.

The honest framing for healthy adults is: targeted working-memory training is one piece of a cognitive-fitness routine, alongside aerobic exercise, sleep, and social engagement. Not a replacement for any of them.

How midlife training compares to late-life training

A 2024 systematic review and meta-analysis by Sala and Gobet asked specifically about training effects in midlife (ages 35-65). The pattern: training effects exist and are measurable, but they are smaller than the effects seen in older adults at higher baseline risk. This is consistent with a ceiling effect: people whose cognition is already operating well have less room to gain on tested measures.

This does not mean midlife training is wasted. The cognitive-reserve literature, covered separately in this guide, supports a different mechanism: midlife mental engagement appears to build buffer that pays off decades later, even if the gains are not visible on a memory test today.

What this means for your daily practice

If you are otherwise healthy, the practical version is short:

  1. If you train, pick a tool that names the ability it targets, adapts difficulty, and supports brief daily sessions.
  2. Expect near transfer: you will get better at the trained skill and its close neighbors, not smarter across the board.
  3. Treat training as one piece of a cognitive-fitness stack alongside aerobic exercise, sleep, vascular health, and social engagement.

What's still uncertain

Even with two decades of well-funded research, several questions are genuinely open:

  1. Whether the ACTIVE speed-of-processing result will replicate in an independent long-horizon trial.
  2. What dose and duration of training are optimal, and how long gains persist once training stops.
  3. How much midlife training contributes to cognitive reserve, given that gains are hard to measure in already-healthy adults.
  4. How much of the benefit in multimodal trials like FINGER comes from the training component versus diet, exercise, and vascular care.

The goal of an honest cognitive-training program in 2026 is to operate inside what the evidence supports while staying open about the rest. That's the line BrightYears tries to hold, and it's the line we'd ask any product in this space to hold.

A practical bottom line

Near transfer is real: structured, adaptive practice reliably improves the specific skill you train. Far transfer is rare: no consumer product has strong evidence that it raises general intelligence or, on its own, prevents dementia. The defensible play is to train abilities you actually care about, keep sessions brief and daily, and embed the practice in a broader stack of exercise, sleep, and social engagement.

The cluster posts below dive deeper into specific pieces of this guide.