If you've ever taught reading to more than one child at the same time, you know the core problem. Two kids sit next to each other — same age, same classroom, same lesson — and one of them masters short vowels in a week while the other still mixes up /e/ and /i/ three months later. The same lesson works beautifully for one and misses the other entirely.
The standard ways of handling this aren't great. Grouping students by level helps, but groups go stale quickly as kids progress at different rates. Giving every kid the same pacing guarantees boredom for some and confusion for others. Differentiating by hand is real work, and teachers already have a day job.
Adaptive practice is one attempt to solve this — software that adjusts what each student sees based on what they've shown they know. Done well, it's useful. Done badly, it can make things worse. The gap between good and bad adaptive software is wider than most people realize, and the marketing doesn't help you tell them apart.
What “adaptive” actually has to mean
A lot of software calls itself adaptive because it changes something based on student performance. That's a pretty low bar. The meaningful version of the claim has more to it.
It tracks mastery at the right grain size. For phonics, that means grapheme-phoneme correspondences — the individual sound-letter patterns — not lessons or units. A kid can be solid on /ch/ and shaky on /sh/ even though both got introduced the same week. A system that only knows “lesson 14 passed” misses that. A system that knows “this student has /ch/ at 92% accuracy, /sh/ at 60%” can do something about it.
This is probably the single biggest differentiator between adaptive systems. If the grain size is too coarse, the adaptation is cosmetic. If the grain size matches how phonics actually breaks down into skills, the adaptation can do real work.
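The grain-size difference is easy to make concrete. Here's a minimal sketch of per-pattern tracking in Python — all names are invented for illustration, not taken from any particular product:

```python
from collections import defaultdict

class MasteryTracker:
    """Tracks accuracy per grapheme-phoneme correspondence (GPC),
    not per lesson, so /ch/ and /sh/ can diverge independently."""

    def __init__(self):
        # pattern -> [correct_count, attempt_count]
        self.stats = defaultdict(lambda: [0, 0])

    def record(self, pattern: str, correct: bool) -> None:
        self.stats[pattern][1] += 1
        if correct:
            self.stats[pattern][0] += 1

    def accuracy(self, pattern: str) -> float:
        correct, attempts = self.stats[pattern]
        return correct / attempts if attempts else 0.0

    def weak_patterns(self, threshold: float = 0.8) -> list[str]:
        """Patterns below the accuracy threshold get extra practice."""
        return [p for p, (c, a) in self.stats.items()
                if a > 0 and c / a < threshold]
```

A "lesson 14 passed" system can't produce anything like `weak_patterns()` — the information was never recorded at a grain size where the question makes sense.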
It responds to the data. Tracking is only half the job. The system has to change what the student does next. A dashboard that shows detailed mastery data but hands out the same content to everyone isn't adaptive — it's monitored. The difference matters.
It handles mistakes gracefully. This is where a lot of adaptive systems fall down. Wrong answers should cause the system to back up, re-teach, and offer more practice — not trigger an infinite loop of retries or a punishing cooldown. A kid who misses three vowel-team items shouldn't get stuck on vowel-team items for the next hour. The system should widen out, come back later, and try again in a new context.
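The "widen out, come back later" behavior can be sketched as a small scheduling rule — again illustrative, with invented names and thresholds:

```python
def reschedule(queue: list, pattern: str, consecutive_misses: int,
               max_misses: int = 3, revisit_gap: int = 5) -> list:
    """After too many consecutive misses on a pattern, remove its
    immediately-upcoming repeats and re-insert one attempt a few
    slots later, so the student widens out to other content instead
    of looping on the same struggling skill."""
    if consecutive_misses < max_misses:
        return queue  # still within normal practice; leave the queue alone
    # drop queued repeats of the struggling pattern...
    remaining = [p for p in queue if p != pattern]
    # ...and bring it back once, after a gap of other items
    slot = min(revisit_gap, len(remaining))
    return remaining[:slot] + [pattern] + remaining[slot:]
```

The key property is that the missed pattern never disappears and never dominates: it always returns, but only after the student has done something else.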
It gives the student some agency. Not every decision should be made for the kid. Some of the better adaptive products let students choose the topic or setting of a story while the system controls the phonics content, or let them skip something they don't enjoy without counting it against them. The adaptation is in service of the student, not at their expense.
Why mastery-based progression beats time-based
One of the older debates in education technology is mastery versus time. Should a student move on when they've mastered a skill, or when they've spent a set amount of time on it?
The research favors mastery-based progression, for a pretty intuitive reason. A student who has mastered short vowels doesn't benefit from more short vowel practice. They benefit from moving on. A student who hasn't mastered them doesn't benefit from moving on just because the clock ran out. They benefit from staying, with the content re-presented in a new way.
Mastery-based progression is harder to build. You need a way to decide when mastery has actually been reached, which means reliable per-skill assessment, which means you're back to the grain-size question from a minute ago. But when it's done right, no kid sits through practice they don't need and no kid gets pushed past a skill they haven't actually learned.
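A mastery criterion doesn't have to be elaborate. A plausible minimal version — the specific numbers here are assumptions for illustration, not a standard — requires both enough evidence and high recent accuracy:

```python
def has_mastered(recent_results: list[bool], min_attempts: int = 8,
                 accuracy_bar: float = 0.9) -> bool:
    """Declare mastery only on sufficient evidence: a minimum number
    of attempts AND high accuracy over the recent window. Time spent
    plays no role -- the clock never promotes or holds a student."""
    if len(recent_results) < min_attempts:
        return False
    window = recent_results[-min_attempts:]
    return sum(window) / len(window) >= accuracy_bar
```

Note what's absent: no elapsed time, no lesson count. The only input is demonstrated performance on the skill itself.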
What makes this hard
Three things, in rough order of difficulty.
The cold-start problem. When a student opens an adaptive app for the first time, the system knows nothing about them. It has to figure out what they know and don't know without subjecting them to a boring placement test. Most systems use a short diagnostic, but a good one weaves the diagnostic into the first few practice items so the student doesn't notice it's happening.
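Part of why an embedded diagnostic can stay short: if skills are taught in a roughly fixed order and mastery is roughly monotone along that order (a simplifying assumption), placement is a binary search — a handful of probes instead of one item per skill. A sketch, with `knows()` standing in for a single diagnostic item woven into practice:

```python
def place_student(sequence: list[str], knows) -> int:
    """Binary-search placement over an ordered skill sequence.
    Needs only about log2(n) probes, which is why a diagnostic can
    hide inside the first few practice items. `knows(pattern)` is a
    stand-in for the result of one embedded diagnostic item."""
    lo, hi = 0, len(sequence)
    while lo < hi:
        mid = (lo + hi) // 2
        if knows(sequence[mid]):
            lo = mid + 1  # student knows this; look later in the sequence
        else:
            hi = mid      # student doesn't; look earlier
    return lo  # index of the first unmastered skill
```

Real systems refine the placement continuously afterward, since the monotonicity assumption is only approximately true.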
Distinguishing fluency from accuracy. A student who decodes a word correctly after fifteen seconds of sounding it out is in a different place than a student who reads it instantly. Both are “correct,” but only one is fluent. Good adaptive systems track both and don't promote a student to harder content on accuracy alone.
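Tracking both dimensions means a promotion check has two gates, not one. A hedged sketch (thresholds invented for illustration):

```python
def ready_to_advance(attempts: list[tuple[bool, float]],
                     accuracy_bar: float = 0.9,
                     fluency_bar_s: float = 3.0) -> bool:
    """Promote only when responses are both correct AND fast.
    Each attempt is (correct, response_time_seconds). A student who
    is accurate but slow keeps practicing for fluency -- they need
    reps, not re-teaching."""
    if not attempts:
        return False
    accuracy = sum(ok for ok, _ in attempts) / len(attempts)
    correct_times = [t for ok, t in attempts if ok]
    if not correct_times:
        return False
    # median response time on correct answers, robust to one slow outlier
    median = sorted(correct_times)[len(correct_times) // 2]
    return accuracy >= accuracy_bar and median <= fluency_bar_s
```

The student who sounds out the word in fifteen seconds passes the first gate and fails the second — which is exactly the distinction the system needs to act on.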
Teaching, not just testing. Adaptive practice is, by nature, assessment-heavy. Every item a student completes is both a practice opportunity and a data point. The risk is that the system drifts into quiz mode, where students feel like they're being constantly tested rather than learning. The best adaptive tools balance this by weaving explicit instruction into the practice stream — a quick visual reminder, a short example, a guided decode — so the system is teaching and not just measuring.
How PhonoLogic approaches it
PhonoLogic is an adaptive phonics tool, so briefly, for context: here's how we handle the questions above.
Our engine tracks mastery at the grapheme-phoneme level. When a student gets several items right in a row on a specific pattern, it drops out of active rotation. When they miss, the pattern stays in rotation and shows up in the next few stories and activities. Stories are the main practice surface — we generate decodable stories in real time, constrained to patterns the student has already encountered and weighted toward the patterns they're still working on.
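The constrain-and-weight idea can be sketched generically — this is an illustration of the concept, not our production engine, and every name in it is invented:

```python
def story_constraints(mastered: set[str], in_progress: dict[str, float],
                      k: int = 3) -> tuple[set[str], list[str]]:
    """Build generation constraints for the next story.
    `in_progress` maps each active pattern to its current accuracy.
    Allowed patterns: everything the student has encountered.
    Target patterns: the k shakiest active ones, weighted into the story."""
    allowed = mastered | set(in_progress)
    # lowest accuracy first, so the shakiest patterns get the most reps
    targets = sorted(in_progress, key=in_progress.get)[:k]
    return allowed, targets
```

The two outputs do different jobs: `allowed` keeps the story decodable (no pattern the student has never seen), while `targets` steers the practice toward where it's needed.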
That's the short version. The general principles in the rest of this post apply to adaptive reading tools broadly; what any individual product does with them is its own question.
The honest limits
One last note, on what adaptive practice can't do.
Adaptive tools are good at one specific thing: giving each student the right reps at the right time. They aren't a substitute for explicit teacher instruction, they don't teach reading on their own, and they can't notice when a kid is having a rough day or struggling with something the software can't see. A child using an adaptive phonics tool thirty minutes a day still needs classroom teaching, read-alouds, and an adult who pays attention to them.
The honest case for adaptive tools is narrower than the marketing usually suggests. It's not that they replace teachers or curriculum. It's that they can do the one thing — personalized practice at scale — better than a human teacher can do for 25 kids simultaneously. That frees up a teacher's time and attention for the things only a teacher can do.
If you're evaluating an adaptive phonics tool, the questions in this post are the ones to ask. What's the grain size? How does it handle wrong answers? Does it teach or just test? Can a kid have a bad day without being penalized? The marketing usually won't tell you. The product will.