Eluency isn't built on guesses. Every core feature—teacher-authored mobile lessons, interactive practice modes, cumulative quizzing, and progress tracking—is grounded in peer-reviewed research from leading journals in language education.
Below, we present five rigorous studies from ReCALL, TESOL Quarterly, Language Learning & Technology, and the Journal of Computer Assisted Learning that validate the methods Eluency is built on.
Click each study to see the findings, key statistics, and exactly how Eluency applies the research. All journal links open in a new tab.
A large-scale meta-analysis in ReCALL found that mobile apps can meaningfully improve vocabulary learning—especially when learners use them consistently over longer periods (10+ weeks). This supports the idea that teacher-built content delivered through an app can drive real progress outside class, where most practice time happens.
Researchers sent teacher-made vocabulary mini-lessons directly to students' phones and compared results to students using the same materials on the web or on paper. Students who received the mobile "push" lessons learned significantly more, suggesting that delivering teacher content through a phone app can make studying happen more often—and lead to better learning.
In a real university course, students completed the same teacher-made vocabulary activities either on their phones or on computers, and the system logged what happened. The key takeaway for app design: mobile practice works—and it can be integrated into grading and progress tracking—but it must be mobile-friendly so it doesn't feel slower or harder than studying on a computer.
In a semester-long university course, switching from quizzes covering only the previous week's material to cumulative quizzes that mixed old and new words made vocabulary learning dramatically more effective on later tests. This matters for Eluency because it shows how regularly delivered teacher-made tests can double or even triple long-term learning by building spaced retrieval into the course.
A meta-analysis in Language Learning & Technology found that digital game-based language learning typically leads to small-to-medium improvements in language outcomes. For Eluency, this strengthens the case for motivating, game-like features, so long as they support the lesson and test goals instead of distracting from them.
A synthesized view of each study's design, sample size, key effect, and relevance to Eluency's model.
| Study | Venue | Design | Sample | Key Effect | Eluency Relevance |
|---|---|---|---|---|---|
| Zhou & Zhou (2026) | ReCALL | Bayesian meta-analysis | 65 studies (2010–2024) | d = 1.28 (long-term); d ≈ 0.74 (bias-adjusted) | Mobile app delivery supports vocabulary growth |
| Thornton & Houser (2005) | JCAL | Survey + quasi-experiments | N = 68 (Exp 2); N = 333 (poll) | d ≈ 0.76; t(66) = 3.04, p = 0.003 | Teacher "push" lessons outperform web/paper |
| Stockwell (2010) | LLT | Field study / analytics | N = 175 EFL learners | No consistent mobile disadvantage | Teacher-built content + tracking + grades |
| Nakata et al. (2021) | TESOL Quarterly | Cluster quasi-experiment | n = 33 vs n = 34 | 2.06–3.38× more effective on posttests | Cumulative quizzes = spaced retrieval learning |
| Dixon et al. (2022) | LLT | Meta-analysis | Multiple studies, multiple populations | d = 0.50 (between); d = 0.95 (within) | Game-based mechanics yield measurable gains |
None of these studies evaluated Eluency itself. We present them as the most rigorous, closely aligned evidence for the specific mechanisms Eluency relies on: teacher-authored content delivered through a mobile system, spaced and retrieval-focused practice, gamified engagement, and quizzes that teach as well as assess.
We label them transparently: direct analogues (Thornton & Houser; Stockwell; Nakata et al.) versus supporting umbrella evidence (Zhou & Zhou; Dixon et al.), reflecting what each study actually tested.