Avid Yak Tack-ers have doubtless noticed an update we made last week, whereby the app asks them to write a given word’s definition. The learner has to recall the word’s meaning from memory, phrasing it in their own words. This process triggers something called “the generative effect”: memory is stronger for information we generate than for information we merely read (source). We now intersperse these recall tests with our existing recognition tests throughout the spaced-repetition timeline.
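To make the idea concrete, here is a minimal sketch of how a scheduler might mix the two test types across a spaced-repetition timeline. This is purely illustrative: the interval values, the `pick_test_type` weighting, and all names are hypothetical, not Yak Tack’s actual implementation.

```python
import random

# Hypothetical sketch -- Yak Tack's real scheduler is not shown here.
# The idea: at each spaced-repetition step, serve either a recognition
# test (pick the definition from options) or a recall test (write the
# definition yourself), with recall weighted more heavily at later steps.

SPACING_DAYS = [1, 2, 3, 5, 8, 13]  # illustrative review intervals

def pick_test_type(step_index, total_steps, rng=random):
    """Later steps favor the harder, recall-style test."""
    recall_probability = (step_index + 1) / total_steps
    return "recall" if rng.random() < recall_probability else "recognition"

def build_schedule(word, rng=random):
    return [
        {"word": word, "day": day, "test": pick_test_type(i, len(SPACING_DAYS), rng)}
        for i, day in enumerate(SPACING_DAYS)
    ]

for entry in build_schedule("perspicacious"):
    print(entry)
```

By the final step the recall probability reaches 1.0, so the last review is always a written-definition test; early reviews lean on easier recognition prompts.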
Here’s the recall-based interaction in action:
Generative AI capabilities have exploded over the last two years. Yak Tack, being a language-based application, is well positioned to take advantage of AI capabilities that make recall testing feel natural.
So that’s what we’ve done. We sprinkled in a little more AI magic.
Writing a definition is harder than spotting it among three options (our heretofore exclusive method for testing retention). But that extra effort creates the sort of “productive struggle” that predicts long-term retention far better than a strict recognition-based approach (source).
In other words, more work yields better results.
Mnemonic devices
We’ve also started rolling out a new visual mnemonic device to help with recall. Here’s an example.
Anytime an Unlimited (paying) learner visits a word, we create an image-based mnemonic device to facilitate recall (note: paying learners effectively leave mnemonic breadcrumbs for everyone else to enjoy). It triggers a phenomenon called “dual encoding,” which is effectively the pairing of a verbal hook with a visual scene. This gives the word two memory-retrieval routes, and a mnemonic’s “bizarreness” forces deeper processing than a plain definition does (source).
I hope you dig these updates. If you have thoughts or suggestions about Yak Tack, I’d love to hear from you. Write to me at jeremy@yaktack.com.
JT