Kolja Sam Pluemer

Is everyone using SuperMemo2?

Comparison of the SR algorithms of all prominent flash card software

01.04.2023

I’ve ranted about the lack of diversity of Spaced Repetition algorithms before. Since that post, I’ve realized three things:

  • I should probably do some more market research on whether it is actually true that everyone under the sun uses SM-2
  • Spaced Repetition can be understood as a Queue Theory problem
  • There are some people that do build some wild SR algorithms out there

This post is dedicated to the first point. I asked ChatGPT for Spaced Repetition software until it ran out of ideas, and will now attempt to research what algos they are using. It’s likely not exhaustive. Anyways - if such an overview is of value to you, here you go:

| Name | Algorithm | Usable/open? |
| --- | --- | --- |
| Anki | custom SM-2 | yes |
| SuperMemo | SM-18 | not really |
| Quizlet | super naive pseudo-SR: each card is studied until you get it right | yes, because it’s dead simple |
| Memrise | adapted Leitner box: get a card right and it moves to a higher interval | yes, because it’s Leitner |
| Brainscape | custom “Confidence-Based Repetition” algo | I think not? They talk a lot about different algos but little about theirs |
| The Mnemosyne Project | similar to SM-2 | sort of |
| Synap | some kind of SR | no |
| RemNote | some kind of “exponential” SR | no |
| Skritter | custom algorithm multiplying the last interval by the grade of recall | the site’s info may be outdated, but yes |
| Pleco | unknown | no |
| Lingvist | custom ACT-R model working with the average forgetting curve of all users | the original paper goes into detail |
| Clozemaster | apparently some custom SR algorithm | no |
| Mochi | interval doubles when a card is correct, halves when incorrect | yes, two lines of code |
| Note Garden | probably modified SM-2 | sort of |
| SmartCards+ | unknown | no |
| StudySmarter | basic-sounding custom algo | no |
| Glossika | custom SR algo with some user choice | no |
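Mochi’s rule really is about as simple as a scheduler gets. Here is a minimal sketch of my reading of it; the one-day floor on failed cards is my own assumption, not something I verified in their docs:

```python
def next_interval(interval_days: float, correct: bool) -> float:
    """Mochi-style scheduling: double the interval on a correct answer,
    halve it on a miss. The 1-day floor is an assumption on my part."""
    if correct:
        return interval_days * 2
    return max(interval_days / 2, 1.0)
```

So a card at a 4-day interval jumps to 8 days if you get it right, and drops to 2 days if you miss it.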

While the research took longer than I expected, the list turned out to be surprisingly short. To be fair, at some point I started ignoring apps where nothing at all could be found regarding their algos.

Another serious limitation is that I lazily used ChatGPT to get the initial list, which at this point has limited knowledge of the post-2018 world, and God knows what other biases.

Anyways:

Key learnings

  • I’m surprised by how simple some algos are, such as Mochi’s. I think a well implemented SR simulator is my next project. I want to see if naive methods like that are actually worse than the outdated but revered SM-2.
  • There is definitely room for improvement in this industry, SR-algorithm-wise.
  • The good stuff is either to be found in papers, in very small or new tools that ChatGPT doesn’t know about, or it does not exist.
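For reference, the “outdated but revered” baseline any simulator would have to compare against is classic SM-2. A compact sketch of one review step, following Wozniak’s published description:

```python
def sm2(quality: int, reps: int, interval: int, ef: float):
    """One review step of the classic SM-2 algorithm.

    quality:  recall grade 0-5 (below 3 counts as a lapse)
    reps:     consecutive successful reviews so far
    interval: current interval in days
    ef:       easiness factor, floored at 1.3
    Returns the updated (reps, interval, ef).
    """
    if quality < 3:
        # failed recall: restart the repetition cycle, keep the easiness factor
        return 0, 1, ef
    if reps == 0:
        interval = 1
    elif reps == 1:
        interval = 6
    else:
        interval = round(interval * ef)
    # update the easiness factor per the SM-2 formula
    ef = max(1.3, ef + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return reps + 1, interval, ef
```

It is barely longer than Mochi’s rule, which is exactly what makes the “is SM-2 actually better than naive doubling?” question worth simulating.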

Thanks for reading. Please let me know if you find any errors or omissions. Until next time!