You’re in a Spanish class, practicing with a classmate how to check into a hotel. Tomorrow you try the same scenario with a chatbot on your phone. Maybe later you videocall a language partner to practice once more.

Same task, same goal, but each situation feels different. Why?

It turns out our brains engage differently in each. And there’s empirical evidence to prove it.

We know interaction is essential for language learning, and more recently, we’ve learned that interacting with a chatbot is also beneficial. The evidence comes mostly from experimental studies using language tests, and from asking students how they feel.

But asking our students how they feel, though important, can only tell us so much. (To be honest, I’m not always sure of my own mental states… Am I angry? Or just hungry? Often, I can’t tell.)

So what is actually happening in our brains during these interactions? 

A recent study set out to find exactly that, measuring English language learners’ brain activity depending on who, or what, they were talking to.

How did they study the effects of L2 interactions in the brain? 

Thirty English learners from a university in Taiwan engaged in conversation scenarios in three contexts:

1. Face-to-face with a classmate
2. With a chatbot
3. In a virtual environment

What happened in each of these contexts?

Students spent 10 minutes role-playing a hotel or restaurant scenario in each setting (face-to-face, chatbot, and virtual environment), alternating between the customer and staff roles.

Each student wore a NeuroSky Mindwave EEG headset: a lightweight, portable device that measures brainwave activity in real time. The device can track five types of brainwaves (delta, theta, alpha, beta, and gamma) and two mental states (attention and meditation).

In short, brainwaves refer to patterns of electrical activity in the brain, produced by neurons communicating with each other. Think of them as our brain’s background music, changing its rhythm and tempo depending on whether we’re deeply focused, relaxed, or somewhere in between (Here’s a deep dive into the science of brainwaves).

Attention refers to one’s level of mental focus, and meditation indicates a state of calmness and relaxation; both are considered favorable conditions for learning.
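To make the data concrete, here is a minimal sketch of what one such reading might look like and how it collapses into the five bands plus the two mental-state scores. The JSON field names follow NeuroSky’s ThinkGear streaming format as commonly documented (paired low/high sub-bands for alpha, beta, and gamma), and the sample values are invented for illustration; none of this comes from the study itself.

```python
import json

# Hypothetical sample of a single reading, shaped like a ThinkGear-style
# JSON line (field names and values are assumptions for illustration).
sample = json.dumps({
    "eSense": {"attention": 62, "meditation": 48},
    "eegPower": {
        "delta": 120500, "theta": 89200,
        "lowAlpha": 30100, "highAlpha": 27800,
        "lowBeta": 19800, "highBeta": 22400,
        "lowGamma": 9100, "highGamma": 7600,
    },
})

def summarize_reading(line: str) -> dict:
    """Collapse one reading into the five canonical brainwave bands
    plus the two eSense mental-state scores (each on a 0-100 scale)."""
    data = json.loads(line)
    power = data.get("eegPower", {})
    bands = {
        "delta": power.get("delta", 0),
        "theta": power.get("theta", 0),
        # Alpha, beta, and gamma are reported as low/high sub-bands,
        # so we sum each pair into a single band value.
        "alpha": power.get("lowAlpha", 0) + power.get("highAlpha", 0),
        "beta": power.get("lowBeta", 0) + power.get("highBeta", 0),
        "gamma": power.get("lowGamma", 0) + power.get("highGamma", 0),
    }
    esense = data.get("eSense", {})
    return {
        "bands": bands,
        "attention": esense.get("attention"),
        "meditation": esense.get("meditation"),
    }

reading = summarize_reading(sample)
print(reading["attention"], reading["meditation"])  # 62 48
```

Averaging such readings per context is essentially what lets a study compare, say, attention levels in face-to-face versus chatbot conversations.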

So what happens in our brain?

It turns out that students’ levels of attention, meditation, and brainwaves were significantly different in the three contexts. 

How so?

Attention was highest in face-to-face, and so were alpha and beta waves, linked to alertness and active problem-solving. It makes sense: humans are wired for social interaction. The unpredictability of a live conversation, with nonverbal cues, facial expressions, and the back-and-forth, seems to keep the brain actively engaged.

In chatbot interactions, meditation (relaxation) was the highest. Students felt noticeably calmer when their conversation partner was a bot, likely because the absence of judgment lowers anxiety. Delta and theta waves, also associated with relaxation and intuition, were dominant. Attention was high as well: lower than in face-to-face, but higher than in the virtual environment.

Interestingly, the virtual environment produced the lowest scores on attention and meditation. Gamma waves, linked to memory retrieval and complex information processing, peaked here, suggesting students had to work harder cognitively to keep the conversation going.

Why does it matter?

This study brings neuroscientific evidence to how we understand interaction in language learning. What we feel when talking to a person, or a bot, shows up in brain activity.

The findings also matter in practice. For those of us building AI-powered language learning tools, this study reinforces a core mission: to create a safe space where learners can practice, build confidence, and show up calm and focused.


Original article

Hsu, L. (2022). To CALL or not to CALL: empirical evidence from neuroscience. Computer Assisted Language Learning, 35(4), 792–815. https://doi.org/10.1080/09588221.2020.1750429

Images by Fauxels and Google DeepMind on Pexels