Any good app will be consistently updated, if not necessarily often. Bugs and security flaws are patched, improvements are made, and so on. But DuoLingo recently made a fairly substantial change to its model.
Earlier, the visual cue for “mastery” of a lesson was the icon appearing in gold rather than full color.
This has been replaced with a “crowns” level in a given lesson.
Whether this is a better or worse model than the “golden” badges probably comes down to personal psychology. Some people will find it more motivating than the old model; others will find it less so. What I personally find annoying is that there seems to be no way to test out of the crown levels (the same way you can test out of the initial levels). Really, DuoLingo, I promise that I’ve mastered reading and writing Cyrillic and Hangul. I shouldn’t need to sit through redundant, tedious review just to prove to the algorithm that “no really, I got this.” This was also true in the old model; you periodically had to refresh your levels even in the very, very basics. But it’s more marked here, I think. Maybe if you get to level 5 in a lesson, DuoLingo considers it “mastered” and you never have to review it again? I haven’t had enough initiative to find out yet.
My big issue, though, is less with this change than with something I’ve noticed after years of using DuoLingo in a variety of languages: the SRS (spaced repetition system) underlying the app is surprisingly primitive. It’s static and top-down rather than genuinely responsive.
DuoLingo doesn’t atomize based on individual lexical units, but rather simply on its own lessons. While a given lesson will repeat a question you got wrong (and not let you complete the lesson until you get it right), the system as a whole seems to have no memory of what you’ve messed up over the long term, because it’s only keeping track of the last time you reviewed a particular lesson, not which words or phrases you consistently mess up.
Let’s say that I have a comfortable mastery of 60% of the words in a given lesson, struggle a bit with 30%, and then struggle a lot with the last 10%. A productive review session would focus on that 40% I struggle with and sprinkle the ones I’ve mastered throughout, both to maintain them and also for motivational purposes. That kind of data would be trivial to track: which words do I get right every time; which ones do I almost get, or forget somewhat frequently; which ones do I only get after repeated attempts or provide totally wrong answers for. It would, presumably, also be trivial to come up with an algorithm to prioritize future lessons based on that data. That’s exactly what Anki does when you choose “incorrect” or “hard” rather than “good” or “easy,” after all.
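To show just how little data this would take, here’s a rough sketch in Python. Everything in it is hypothetical (the class name, the 80% “mastery” cutoff, the session-building logic are all my own inventions, not anything DuoLingo or Anki actually does), but it captures the idea: track per-word accuracy, put the struggling words first, and pad the session with mastered ones for maintenance.

```python
from collections import defaultdict

class WordTracker:
    """Hypothetical per-word accuracy tracker; a sketch, not DuoLingo's actual system."""

    def __init__(self):
        self.attempts = defaultdict(int)
        self.correct = defaultdict(int)

    def record(self, word, was_correct):
        """Log one answer for a word."""
        self.attempts[word] += 1
        if was_correct:
            self.correct[word] += 1

    def accuracy(self, word):
        """Fraction of attempts answered correctly (0.0 if never seen)."""
        if self.attempts[word] == 0:
            return 0.0
        return self.correct[word] / self.attempts[word]

    def build_review(self, words, size):
        """Build a session: lowest-accuracy words first, mastered words as filler."""
        ranked = sorted(words, key=self.accuracy)
        # 0.8 is an arbitrary "mastered" threshold for this sketch.
        struggling = [w for w in ranked if self.accuracy(w) < 0.8]
        mastered = [w for w in ranked if self.accuracy(w) >= 0.8]
        session = struggling[:size]
        # Sprinkle in mastered words to fill remaining slots.
        session += mastered[: max(0, size - len(session))]
        return session
```

So a learner who nails *gato* every time but keeps missing *casa* would see *casa* at the front of the next session, with *gato* appearing only as filler. A real scheduler would also weight by time since last review, but even this crude version is more responsive than reviewing a whole lesson wholesale.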
But a DuoLingo review session will simply be 60% “needless” review and 40% productive review (depending exactly on how your own mastery of a lesson breaks down). It’s a wasted chance to review what actually needs reviewing, and it possibly borders on over-reviewing (which can actually be counterproductive!). The “weak words” that will be tested in the next review aren’t the ones you’ve gotten wrong in the past; they’re all of the material from whatever lesson in the unit has gone the longest without review. It doesn’t matter if half the words in that lesson are ones you actually know well.
The other problem is that simple review (that blue barbell in the corner) doesn’t seem to count towards any crown levels. The XP you earn at least counts towards your daily goal, so you can maintain your streak (a powerful motivator for many Anki users), but it seems silly to not connect those reviews to crown levels as well. But maybe this is simply a bug that will be addressed in a new update.