Neuroscience & language learning – what happens in the brain when we talk to a chatbot?

You’re in a Spanish class, practicing with a classmate how to check into a hotel. Tomorrow you try the same scenario with a chatbot on your phone. Maybe later you video-call a language partner to practice once more. Same task, same goal, but each situation feels different. Why? It turns out our brains engage differently in each, and there’s empirical evidence to prove it.

We know interaction is essential for language learning, and more recently, we’ve learned that interacting with a chatbot is also beneficial. The evidence comes mostly from experimental studies using language tests, and also from asking students how they feel. But asking students how they feel, though important, can only tell us so much. [To be honest, I’m not always sure of my own mental states… Am I angry? Or just hungry? Often, I can’t tell.] So what is actually happening in our brains during these interactions? A recent study set out to find exactly that, measuring English language learners’ brain activity depending on who, or what, they were talking to.

How did they study the effects of L2 interactions in the brain?

Thirty English learners from a university in Taiwan engaged in conversation scenarios in three contexts: face-to-face, with a chatbot, and in a virtual environment.

What happened in each of these contexts?

Students spent 10 minutes role-playing a hotel or restaurant scenario in each setting (face-to-face, chatbot, virtual), alternating between the customer and the staff member. Each student wore a NeuroSky MindWave EEG headset: a lightweight, portable device that measures brainwave activity in real time. The device can track five types of brainwaves (delta, theta, alpha, beta, and gamma) and two mental states (attention and meditation). In short, brainwaves are patterns of electrical activity in the brain, produced by neurons communicating with each other.
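As a rough illustration of what such a headset computes under the hood: each “brainwave” is simply the signal’s energy in a conventional frequency band (delta at the slow end, gamma at the fast end). The sketch below estimates relative band powers for a synthetic one-channel signal with an FFT; the band cutoffs and the test signal are illustrative conventions, not the study’s data or NeuroSky’s proprietary algorithm.

```python
import numpy as np

# Conventional EEG frequency bands in Hz (exact cutoffs vary slightly
# between devices and studies).
BANDS = {
    "delta": (0.5, 4),
    "theta": (4, 8),
    "alpha": (8, 12),
    "beta": (12, 30),
    "gamma": (30, 45),
}

def band_powers(signal, fs):
    """Relative power of each EEG band for a 1-D signal sampled at
    fs Hz, using a simple FFT-based periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    total = power[(freqs >= 0.5) & (freqs < 45)].sum()
    return {
        name: power[(freqs >= lo) & (freqs < hi)].sum() / total
        for name, (lo, hi) in BANDS.items()
    }

# Synthetic example: a strong 10 Hz (alpha) oscillation plus noise.
fs = 256  # samples per second
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(t.size)

powers = band_powers(signal, fs)
print(max(powers, key=powers.get))  # alpha
```

A real headset does this continuously over short sliding windows, which is why the “background music” metaphor below fits: the mix of bands shifts from moment to moment.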
Think of them as our brain’s background music, changing rhythm and tempo depending on whether we’re deeply focused, relaxed, or somewhere in between (here’s a deep dive into the science of brainwaves). Attention refers to one’s level of mental focus, and meditation indicates a state of calmness and relaxation; both are considered favorable conditions for learning.

So what happens in our brain?

It turns out that students’ levels of attention, meditation, and brainwave activity were significantly different across the three contexts. How so?

Attention was highest face-to-face, and so were alpha and beta waves, linked to alertness and active problem-solving. It makes sense: humans are wired for social interaction. The unpredictability of a live conversation, with its nonverbal cues, facial expressions, and back-and-forth, seems to keep the brain actively engaged.

In chatbot interactions, meditation (relaxation) was highest. Students felt noticeably calmer when their conversation partner was a bot, likely because the absence of judgment lowers anxiety. Delta and theta waves, also associated with relaxation and intuition, were dominant. Attention was high as well: lower than face-to-face, but higher than in the virtual environment.

Interestingly, the virtual environment produced the lowest scores on both attention and meditation. Gamma waves, linked to memory retrieval and complex information processing, peaked here, suggesting students had to work harder cognitively to keep the conversation going.

Why does it matter?

This study brings neuroscientific evidence to how we understand interaction in language learning. What we feel when talking to a person, or a bot, shows up in brain activity. The findings also matter because, for those of us building AI-powered language learning tools, they reinforce a core mission: to create a safe space where learners can practice, build confidence, and show up calm and focused.

Original article

Hsu, L. (2022).
To CALL or not to CALL: Empirical evidence from neuroscience. Computer Assisted Language Learning, 35(4), 792–815. https://doi.org/10.1080/09588221.2020.1750429

Images by Fauxels and Google DeepMind on Pexels
Beyond the Hype: Why Adaptive Dialogue Systems Enhance L2 Learning

You’ve decided to practice a new language. Imagine you could do it anytime, anywhere, without waiting for a nice friend or a 24/7 tutor. Want to rehearse talking about your job or hobbies for the hundredth time? With no worries, no pressure, and a safe space to make mistakes? This is where spoken dialogue systems come in.

But how can a bot really support language learning? Can an automated conversation offer the same benefits as talking to a person? Nothing replaces human connection. But access matters. Bots are always available, infinitely patient, and never tired of hearing the same sentence ten times in a row. They can adapt to each learner’s needs, making practice flexible, low-pressure, and personalized.

To see if these benefits translate into real learning, Professor Bibauw and colleagues analyzed 17 experimental studies involving 803 learners, synthesizing years of research on conversational agents. Their recent meta-analysis, published in Language Learning & Technology, provides the strongest evidence to date on how much dialogue systems actually help people learn a language.

So what does the evidence say? Do dialogue systems actually improve L2 proficiency? The short answer: yes, they work.

Significant, lasting learning

Students who practiced with a conversational agent showed significant improvement (overall effect size d = 0.59). From a cognitive-interactionist viewpoint, this makes sense: meaningful interactions with a bot create opportunities for input and output, noticing, negotiation, and feedback, all necessary ingredients for language acquisition (Gass & Mackey, 2015). Interestingly, beginner and low-intermediate students (A1–A2) benefit the most. As proficiency increases, effects decrease, suggesting that conversational agents are most powerful when students need repeated, low-anxiety, structured communicative practice. Just as important, dialogue practice leads to long-term learning.
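A quick aside for the statistically curious: Cohen’s d is the difference between two group means divided by their pooled standard deviation, so d = 0.59 means learners who practiced with a dialogue system scored roughly 0.6 standard deviations above the comparison group on average. A minimal sketch, using made-up scores (not the meta-analysis data):

```python
import math

def cohens_d(mean_treat, sd_treat, n_treat, mean_ctrl, sd_ctrl, n_ctrl):
    """Standardized mean difference between a treatment and a control
    group, using the pooled standard deviation."""
    pooled_var = (
        (n_treat - 1) * sd_treat**2 + (n_ctrl - 1) * sd_ctrl**2
    ) / (n_treat + n_ctrl - 2)
    return (mean_treat - mean_ctrl) / math.sqrt(pooled_var)

# Hypothetical post-test scores: a chatbot group averaging 74 vs. a
# control group averaging 68, both with SD 10 and 30 students each.
d = cohens_d(74, 10, 30, 68, 10, 30)
print(round(d, 2))  # 0.6 — a medium-sized effect, comparable to the
                    # overall d = 0.59 reported in the meta-analysis
```

In a meta-analysis, each study contributes one such standardized effect, which is what makes results from different tests and populations comparable.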
Students not only showed immediate improvements in their L2 proficiency after these conversations; the gains remained significant when tested later. The learning that happens during these dialogues sticks.

What makes some systems better than others?

Design choices matter; not all bots are created equal. The meta-analysis highlights several benefits tied to interactional design.

Guided, scripted dialogues led to the strongest gains, followed by goal-oriented dialogues. Why? Because structure encourages clearer communicative goals, predictable input, and better-targeted feedback. Free chat may feel more “natural,” but it often lacks the scaffolding students need to actually improve. Instructional design matters just as much as, if not more than, the latest technology.

Corrective feedback makes a difference. Systems that offer corrective feedback outperformed those without it. Both implicit (recasts) and explicit forms of feedback improved learning, with a slight advantage for explicit correction, in line with decades of second language acquisition research (e.g., Nassaji & Kartchava, 2021). The takeaway is clear: conversational practice alone isn’t enough. Pedagogical feedback matters.

Gamification boosts learning. Gamified systems showed a significantly stronger impact on L2 development than non-gamified ones, highlighting the importance of motivational design in dialogue systems. Rewards, challenges, and progress indicators increase motivation, sustain effort, and help students stay engaged while practicing the L2.

Why does this matter?

Dialogue systems are powerful tools for language learning, especially when they combine strong instructional design with advanced NLP. This meta-analysis provides empirical support for the kind of adaptive, task-based, and feedback-rich conversational experiences that Linguineo builds.
It reinforces several principles that match our philosophy: dialogue systems are not just technology; they are effective when paired with thoughtful learning design, a principle at the heart of Linguineo.

Original article

Bibauw, S., François, T., Van den Noortgate, W., & Desmet, P. (2022). Dialogue systems for language learning: A meta-analysis. Language Learning & Technology, 26(1), 1–24. https://doi.org/10.64152/10125/73488

Other work cited

Gass, S. M., & Mackey, A. (2015). Input, interaction, and output in second language acquisition. In B. VanPatten & J. Williams (Eds.), Theories in second language acquisition (pp. 194–220). Routledge.

Nassaji, H., & Kartchava, E. (Eds.). (2021). The Cambridge handbook of corrective feedback in second language learning and teaching. Cambridge University Press.

Post photo by Shantanu Kumar on Unsplash
New Research Behind Real Adaptivity in Language Games

Imagine you’re playing a game while learning a language. You have your cute little owl (don’t worry, ours won’t threaten your streak), and you’re gathering intel, talking to characters, making choices, trying to stay fully immersed in this adventure when, without realizing it, the game quietly steps in and gives you a hand. Maybe it whispers the first letters you need. Perhaps it speaks slower? Or gives you an easier quest. Almost as if it knew you were about to hit a (virtual) wall.

Too good to be true? (As good as sunshine on a Belgian winter day?) That’s the promise of adaptive learning. Not an empty one, though: a new empirical study shows we can deliver on that promise (the adaptivity one; the sunshine, not yet). Researchers at KU Leuven set out to investigate whether Language Hero, our narrative-based game, can automatically assess students’ language performance in real time and adapt tasks to their level accordingly. And can it do that? [Spoiler] Yes, it can.

Why is this research important?

From a theoretical standpoint, research in second language acquisition is clear: we learn to speak a language by using it in meaningful interaction, whether with another person or with spoken dialogue systems. But for such systems to truly support learning, they need to adapt to each student’s needs and proficiency in real time (see Bibauw et al., 2022, to learn all about dialogue systems). Everyone promises adaptive and personalized AI tools. But we need empirical evidence to show how such adaptation works and whether it improves language learning. This is precisely what KU Leuven researchers investigated in their recently published study. To examine how the built-in adaptivity in Language Hero could predict successful task completion, they analyzed students’ oral language using theoretically grounded measures (see Koizumi & In’nami, 2024, for a deep dive into these measures).
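To picture what a learner model built on such measures could look like, here is a minimal, hypothetical sketch: synthetic complexity, accuracy, and fluency (CAF) indicators, the kind of measures the cited Koizumi & In’nami work deals with, are used to fit a simple logistic regression predicting whether the next task will be completed. The data, feature definitions, and model are illustrative assumptions, not the study’s actual learner model.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 400

# Synthetic learning-process data (illustrative only): one row per
# completed task, with simple CAF indicators of the learner's speech.
complexity = rng.normal(0.5, 0.15, n)  # e.g. subordination ratio
accuracy = rng.uniform(0.3, 1.0, n)    # share of error-free utterances
fluency = rng.normal(2.0, 0.6, n)      # e.g. words per second

# Assumed ground truth for this sketch: success on the next task
# depends mostly on accuracy and fluency.
true_logit = -4.0 + 3.0 * accuracy + 1.2 * fluency + 0.5 * complexity
success = rng.random(n) < 1 / (1 + np.exp(-true_logit))

# Standardize features and fit a logistic-regression learner model
# with plain gradient descent (no external ML library needed).
raw = np.column_stack([complexity, accuracy, fluency])
X = np.column_stack([np.ones(n), (raw - raw.mean(0)) / raw.std(0)])
w = np.zeros(X.shape[1])
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - success) / n

# Estimated probability that each learner completes the next task; an
# adaptive system could trigger hints, simpler input, or an easier
# quest whenever this estimate drops below a threshold.
p_success = 1 / (1 + np.exp(-X @ w))
needs_help = p_success < 0.5
```

A production system would train on logged task outcomes and richer features, but the shape of the idea is the same: observable performance indicators in, a probability of success out, and an intervention when that probability drops.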
More specifically, they asked: do these measures predict whether a learner will succeed on the next task? They found that they do.

And why does this matter?

Adaptability can support all learners

Data-driven learner models like the one in Language Hero can improve micro-adaptivity and offer better individualized support. For instance, for lower-proficiency students, the system can provide more detailed hints, adjust linguistic complexity, or present alternative tasks, all based on real-time indicators of students’ performance.

Teachers get data they can use

These models expand pedagogical possibilities by providing interpretable linguistic data through dashboards and visualizations of students’ proficiency and progress. Teachers save time, choose what matters most, and decide where to focus their attention.

AI you can trust

This research offers a transparent, theory-informed learner model (as opposed to the opaque “black box” of off-the-shelf chatbots) that we hope can improve trust in AI-powered applications for language learning.

Spoken dialogue systems can do more than “talk back”

…or provide speaking practice. In a way, they can tell when a learner is about to struggle before they do. And that’s when adaptivity kicks in. This study is a milestone for us. It shows that our evidence-based adaptivity is moving in the right direction. And we intend to keep building it, validating it, and sharing it with all our students.

Original article

Cornillie, F., Gijpen, J., Metwaly, S., Luypaert, S., & Van den Noortgate, W. (2025). Towards adaptive spoken dialogue systems for language learning: Predicting task completion from learning process data. CALICO Journal, 42(3). https://utppublishing.com/doi/10.3138/calico-2025-0035

Other work cited

Bibauw, S., François, T., & Desmet, P. (2022). Dialogue systems for language learning: Chatbots and beyond. In N. Ziegler & M. González-Lloret (Eds.), The Routledge handbook of second language acquisition and technology (pp. 121–135).
Routledge. https://doi.org/10.4324/9781351117586

Koizumi, R., & In’nami, Y. (2024). Predicting functional adequacy from complexity, accuracy, and fluency of second-language picture-prompted speaking. System, 120, 103208. https://doi.org/10.1016/j.system.2023.103208
5 things we improved about Linguineo Pro that you need to know

Summer is coming to an end. *Cries in Belgian weather* While we’ve enjoyed the occasional sunny moment, we have been working *read: the whole year* behind the scenes to improve Linguineo Pro. We know, summer isn’t done yet, but we sure are done with the update. We have completed a major update of Linguineo Pro, incorporating the most important user feedback. Yay! We can’t wait to share it with you! So keep reading to find out what we improved.
OKAN pupils learn Dutch with voicebot POL

If you move here from another country, the best thing you can do is, of course, learn the language. Easier said than done? Not with POL, our personalized voicebot tailored to OKAN students.
OKAN stands for ‘Onthaalklas voor anderstalige nieuwkomers’ (reception class for non-Dutch-speaking newcomers): classes for newcomers between the ages of 6 and 18 who have not yet mastered Dutch. The goal is to support them and make their learning experience more enjoyable. To do that, we partnered with D-Teach and KU Leuven’s Centrum voor Taal en Onderwijs! Currently, we are in the testing phase of POL, and we would love to share a little bit more about him with you.
