ChatGPT, generative AI, and language learning

By Ron Darvin, Ph.D., Department of Language and Literacy Education, UBC

 

The arrival of ChatGPT and other artificial intelligence (AI) powered platforms, like Midjourney and DALL-E, has raised many questions about how AI will transform the way we learn and teach. To think about the pedagogical implications of these tools, their affordances and constraints, we need to start by understanding the particular type of AI these platforms are based on and how they work. 

 

AI is a broad, encompassing term that involves a wide range of applications and techniques, like machine learning, natural language processing, computer vision, and robotics. When we Google something, use Kahoot! during lessons, or recommend translation and grammar-check tools to students, we are using assistive technologies that involve AI. Chatbots like ChatGPT are generative AI tools trained for a specific task: creating new texts based on training data they’ve received and prompts written by users.

 

What is generative AI and how does it work?

Generative AI (or GenAI) is a subfield of AI that focuses on creating new content, whether it be text, images, videos, music, or code, in response to prompts written by users. Through natural language processing, GenAI is trained on data collected online (webpages, books, news articles, Wikipedia entries, social media conversations, etc.) to find patterns in language and learn which words can be strung together. By analyzing these patterns, GenAI tools can predict which words are likely to follow others and respond to prompts in ways that are meaningful, contextually appropriate, and grammatical.
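For readers curious about what "predicting which words follow others" looks like in practice, here is a deliberately simplified sketch. It uses a tiny made-up corpus and simple word-pair counts; real large language models learn far richer patterns with neural networks trained on billions of words, so this is an illustration of the idea only, not how ChatGPT is actually built.

```python
import random
from collections import defaultdict

# Toy training "corpus" (hypothetical; real models train on vast text collections).
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Record which words were observed to follow each word.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def predict_next(word):
    """Pick a next word based on the patterns seen in the corpus."""
    candidates = following.get(word)
    return random.choice(candidates) if candidates else None

# In the corpus, "the" was followed by "cat" and "mat",
# so the prediction is always one of those two words.
print(predict_next("the"))  # "cat" or "mat"
```

Chaining such predictions word after word is, in a very rough sense, how generated text grows from a prompt.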

 

GenAI can also go beyond written language to include visuals and images. Midjourney and DALL-E are text-to-image GenAI platforms that generate new digital images based on prompts like “draw a purple unicorn having coffee in outer space.” ChatGPT Plus, the paid version of the platform, now enables users to have an actual conversation with the tool and to use images as part of a prompt. For instance, you can upload a picture of what’s in your fridge and ask what kind of recipes you can prepare based on what’s available.

What are the affordances of GenAI in terms of language learning?

Apart from getting simplified explanations of complex ideas, students can benefit from using chatbots, like ChatGPT, for different language-learning purposes: using the tool as a conversation partner in a target language, getting feedback on their writing, receiving suggestions on how to present about a specific topic in class, or simulating conversations for different contexts, e.g., at a restaurant or asking for directions. ChatGPT is fluent in English, Spanish, French, German, Chinese, Japanese, Russian, Italian, Dutch, and Portuguese, but can also handle a number of other languages. It can understand when users switch from one language to another, even within a sentence. ChatGPT can create bedtime stories that involve characters imagined by kids and that parents can read aloud to them. Teachers can ask ChatGPT for creative ways to explain the elements of a novel to a Grade 5 student or for classroom activities to teach the elements of a persuasive essay.


What are the limitations and potential pitfalls of GenAI?

ChatGPT, like other large language models, is a predictive tool. It produces human-like text based on the linguistic patterns it has learned, so it tends to produce formulaic structures and tropes that limit students’ exposure to the nuances of human-created texts. It can’t recognize nuances of values or worldviews, and it can reproduce biases or stereotypes embedded in the data it was trained on. Chatbots have also been known to “hallucinate” and provide false information while sounding quite credible. Students need to be aware that chatbots can make mistakes and replicate biases, and, while useful for understanding some concepts, information generated by AI shouldn’t be taken at face value.

 

It’s also easy for students to cut and paste generated texts and pass them off as their own work. Because chatbots don’t acknowledge sources, submitting work generated by a chatbot without proper attribution further complicates this issue of academic integrity and originality. The other challenge is that, despite the rise of various AI-generated text detectors, these detectors cannot reliably determine whether a text was written by a human or a chatbot.

 

If students use chatbots to do assignments or answer questions for them without knowing how to evaluate the output, they may not develop strong critical thinking and problem-solving skills. Discovering ideas through hands-on experimentation and connecting ideas from different sources enables students to engage in more complex thinking processes, and a reliance on AI to simply feed them the answers can make them less motivated to explore and learn independently.

 

Interacting with chatbots like ChatGPT, which don’t have any age-verification mechanism, carries privacy risks. These interactions can involve sharing personal information and the collection of data, and parents and educators should be aware of what data is being collected, how it’s being used, and whether it’s shared with third parties.

 
Does GenAI have a place in the language classroom?

K–12 schools have responded in mixed ways to the role of these chatbots in educational institutions. Some schools have blocked student access to ChatGPT on school devices and Wi-Fi, while others have recognized that such a ban can be inequitable, because wealthier students with their own phones or laptops, and home connectivity, can easily access these tools at any time.

 

Other schools have accepted that generative AI is here to stay, and that rather than shielding students from it, we need to make sure they’re equipped with the digital literacies needed to use it strategically and responsibly, understanding both its benefits and limitations. This involves knowing how to craft prompts effectively, recognize the genre structures and conventions of generated texts, evaluate the accuracy of generated texts, and detect embedded biases.

 

How generative AI will transform learning as we know it is still unknown. It’s possible that this “AI turn” can steer us toward more “flipped classrooms,” where students learn content outside of class and focus on practical applications and problem-solving activities in the classroom. In terms of assessment, teachers may need to design more oral exams and presentations, while essay writing may need to involve more self-reflection, which integrates personal details that a chatbot can’t invent.

 

These are the things that we as educators will need to think about as these technologies become even more powerful and popular. In the workshops I’ve designed for K–12 teachers in BC, we discuss the different affordances and constraints of these tools so that we can engage in dialogue about whether these tools have a place in the classroom and to what extent they can be part of the learning process. One thing is certain: generative AI is here to stay, and it will continue to transform the way people consume and produce knowledge. How we as teachers respond to this change will shape the evolving literacy landscape that students occupy.
