The Intersection of AI and Pedagogy

In November 2022, the company OpenAI released a chatbot interface for its latest artificial intelligence language model, GPT-3.5. This tool, ChatGPT, makes it possible to interact with a powerful AI in a conversational manner. While a lot of attention has been paid to the release of ChatGPT and its implications across many sectors of society, it’s worth acknowledging that it isn’t an entirely “new” phenomenon. In fact, a year earlier, OpenAI made a “playground” site available that allowed users to experiment with an earlier GPT model. ChatGPT has generated a great deal of attention, however, because of its informal chat interface, which presumably makes the use of AI more intuitive and “user-friendly.”

Because ChatGPT, like any highly evolved AI, tests the limits of technology-mediated knowledge sharing, communication of information, and automated content generation, it’s not surprising that educators are trying to understand its implications in the classroom. As you grapple with this new and emerging technology, the PSU Open CoLab has put together this brief guide to answer common questions and suggest possible approaches to dealing with AI in your courses.

Who has access to these tools and how do they use them?

Currently, ChatGPT is in a “Free Research Period,” which means that, for now, essentially anyone can use it at no cost. However, OpenAI has indicated that this is a temporary offering; eventually, the company will provide access to the engine behind ChatGPT via an application programming interface (API), for a fee. At that point, we can expect companies to begin exploring how to integrate it into their own business models and services.
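
For a concrete sense of what that kind of programmatic access looks like, here is a minimal sketch using the completion endpoint OpenAI already offers for its earlier GPT models, via its Python library. To be clear, this is not a ChatGPT API (none exists yet), and the model name, prompt, and parameters below are our own illustrative choices:

```python
# A minimal sketch of programmatic access to a GPT model, using OpenAI's
# existing completion endpoint and Python library ("pip install openai").
# ChatGPT itself does not yet have a public API; the model name, prompt,
# and parameters here are illustrative only.
import openai

openai.api_key = "YOUR_API_KEY"  # issued from your OpenAI account

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3.5-era completion model
    prompt="Explain photosynthesis to a first-year biology student.",
    max_tokens=200,            # cap the length of the reply
    temperature=0.7,           # higher values produce more varied output
)

print(response.choices[0].text.strip())
```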

Using ChatGPT is as simple as going to the website, setting up an account with OpenAI (you can use an existing Google account for this), and starting to “talk” to the interface. We encourage anyone interested in (or concerned about) the implications of ChatGPT in the classroom to spend time using it and getting a better understanding of how it works.

What exactly does ChatGPT do?

ChatGPT provides a conversational “bot” interface to a type of artificial intelligence known as a “large language model.” Essentially, this is a technology that has been “trained” to respond to prompts through exposure to a huge collection of text and data. Some of this training is automated; some of it requires human feedback and correction. Whereas past chatbots had a limited ability to respond to queries (essentially, they could only respond if they had a clearly programmed answer to a specific question), the language model behind GPT is far more comprehensive and flexible.
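
GPT’s model is vastly larger and more sophisticated than anything we could reproduce here, but the core idea (learning statistical patterns from exposure to text, then using them to predict what comes next) can be illustrated with a toy sketch; the tiny “corpus” below is, of course, our own contrived example:

```python
# A toy illustration of "learning from text": a bigram model that, after
# "training" on a tiny corpus, predicts each next word from the previous
# one. GPT is vastly larger and more sophisticated, but the underlying
# idea of statistical patterns learned from text is similar in spirit.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# "Training": record which words follow which in the corpus.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

# "Generation": repeatedly sample a plausible next word.
word = "the"
output = [word]
for _ in range(8):
    word = random.choice(following.get(word, corpus))
    output.append(word)

print(" ".join(output))  # e.g., "the cat sat on the dog sat on the"
```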

When you type a question or prompt into ChatGPT, the technology assembles a response based on what it has learned about the subject of your question. You can even shape its response; for example, you can ask it to reply only in the form of a rhyming poem or in the style of a particular author.

While human intervention is part of the bot’s training, no human is involved in the process of responding to your question. The answers you receive are entirely machine-generated.

What can’t ChatGPT do?

It’s important to note that ChatGPT can only respond based on what it learned through its training. It can’t go out and “seek” answers to your questions in real time. As a result, it has limited knowledge of recent events: its training data ends in 2021.

It has also been trained not to perpetuate misinformation. If you ask it to tell you about something that is a blatant lie, it will correct you, although it will happily tell you a story about what might have happened if events in the past were altered.

There are also certain topics ChatGPT is not allowed to discuss; for example, it won’t tell you how to successfully commit a crime. And unlike previous AI chatbots, such as Microsoft’s Tay, ChatGPT cannot (at this time) learn bad behavior or misinformation from its users.

How should I handle all of this in my classes?

The most common concern we’ve heard from educators about this new technology is how students can use it to cheat. Theoretically, any question you ask (on a homework assignment, test, essay prompt, etc.) can be entered into ChatGPT, and the response may well be accurate and even clearly written. So, yes, it is possible for students to use this technology to cheat on classwork. That said, if you spend some time using ChatGPT, you may find that the answers, while technically accurate, are often formulaic and even, at times, a bit simplistic. ChatGPT is not a gifted writer; it has been trained to follow standardized formats for its output, and those formats are often recognizable.

On the one hand, this means you may be able to easily identify work that has been generated using ChatGPT; on the other hand, no one really wants to spend their time grading (and failing) assignments that appear to be artificially generated. To that end, here is our best advice about how to handle ChatGPT in your classes:

Talk to your students about ChatGPT

Some professors have expressed concern about talking openly to their students about ChatGPT. If students aren’t yet aware of its existence, perhaps drawing attention to it is a bad idea. While it is possible that some of your students haven’t yet heard about this new tool, we think it’s inevitable that they will all know about it soon. For that reason, we encourage you to be as transparent as possible with students about what it is, what your experience has been using it, and what concerns you have.

Think about approaching this as a conversation, not just a lecture. If possible, bring ChatGPT up on a screen in class and spend some time feeding it course-related prompts and talking about what it generates. You will be best positioned to explain to your students why the responses, which may look adequate at first glance, cannot stand in for real work produced by them. This is, of course, an opportunity to talk about academic integrity more generally and the importance of an authentic learning process.

You may wish to include a statement in your syllabus about this topic (although, again, we recommend you also talk to your students about this at greater length). PSU doesn’t have a specific statement about ChatGPT, but the academic integrity policy certainly addresses these kinds of concerns.

Retool your assignments and assessments

In his article “Freaking Out about ChatGPT,” John Warner points out that the assignments ChatGPT is best poised to answer well are prompts that can be satisfied with rote, formulaic responses. Can you tweak your assignments so they require more synthesis, personal exploration, and examination or discussion of the learning/writing/research process? ChatGPT cannot do this kind of work for your students, making it virtually impossible for them to rely on it for all the answers.

On the other hand, it’s worth acknowledging that the world our students will live in is increasingly going to be inhabited by technologically advanced tools and artificial intelligences. As a result, the skills they need are different from the ones we needed when we graduated from college. Can you find ways to authentically integrate tools like ChatGPT into the work of your course? Where can this tool save your students time, for example, by allowing them to focus on higher-order work or deeper exploration?

If you do decide to encourage some “legitimate” use of ChatGPT in your classes, we would be remiss if we didn’t point out that there are general ethical concerns about the tool: whose intellectual property is used to train it in the first place; the reliance on underpaid, overseas workers to conduct the human-led training (work that often involves handling traumatic input/output); and the potential collection and use/abuse of the data users generate when they interact with the bot. These concerns are not without merit, and we absolutely encourage you to educate yourself about the tool and talk to your students about these issues. Your decision about what role (if any) ChatGPT plays in your pedagogy should be driven by your own technical and ethical understanding of the tool.

Finally, related to all of this is how you grade and assess your students’ work. Can you rethink assessment so that it de-emphasizes product and instead focuses on process? If students are sharing their individual learning/writing/creative/research process with you, that’s not something any AI can make up.

Study AI with your students

It’s not enough to simply tell students they shouldn’t use ChatGPT. Help them understand the scope of what it can do and what its limitations are. Again, the landscape is changing, and our students will need to contend with how these technologies affect their future jobs and careers. Take some time to familiarize yourself with how ChatGPT handles your specific discipline and domains of knowledge, and then build discussions and assignments about this into the curriculum.

As mentioned above, one important aspect of ChatGPT that you may wish to spend some time discussing and studying with your students is the origin story of its data and training. Reports are emerging about the outsourcing of much of this work to poorly paid overseas workers; in addition, the text and data used to train ChatGPT come from all of us. This might be an opportunity to talk to students about the complexity of intellectual property rights and how AI further complicates the landscape.

Can’t I just tell students they’re not allowed to use ChatGPT and then use tools to identify when they have used it?

Your instinct may be to use policy and policing to address the use of artificial intelligence in your classes, and we understand where this impulse comes from. You have a lot on your plate; concerning yourself now with whether students are using robots to complete their homework may feel like one burden too many. That said, we do not believe it is possible to police (or policy) our way out of this conundrum.

First, anytime we rely too heavily on policing or heavy-handed policies, we run the risk of undermining trust in our classrooms, and research shows that building trust in the educational setting is an important component of student learning and success. Second, the reality is that any attempt to police this kind of activity is likely to turn into an arms race. Artificial intelligence tools aren’t going to go away, and they are only going to get better. While there are tools that can review a text and predict whether or not student work was artificially generated, they are not foolproof, and as they improve, AI will likely evolve to outwit them.
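
To make the “not foolproof” point concrete, here is a toy sketch of one signal that detection tools such as GPTZero reportedly use: “burstiness,” or the variation in sentence length across a text. Human prose tends to mix short and long sentences, while AI output is often more uniform. A measure this crude is trivially easy to fool, which is exactly the problem:

```python
# A crude sketch of one signal some AI-text detectors (e.g., GPTZero)
# reportedly use: "burstiness," the variation in sentence length.
# This toy measure is easily fooled, which is the point.
import re
import statistics

def burstiness(text: str) -> float:
    # Split on sentence-ending punctuation and count words per sentence.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    # Higher standard deviation = more "human-like" variation.
    return statistics.stdev(lengths)

print(burstiness("Short one. Then a much longer, winding sentence follows it."))
```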

Great! Now I know everything about ChatGPT, right?

Nope. Not by a long shot. This is a very basic primer on the topic, and there is lots more commentary and research out there worth exploring. To that end, you may wish to look at our slides from PSU’s 2023 Technology Jumpstart, which include material on this topic, or our Zotero collection of links. In the latter, you’ll find more readings and resources, including some that offer ideas for alternative assignments and may help you think about how ChatGPT intersects with your teaching.

As always, we encourage all PSU faculty to reach out to us for an appointment! We would love to meet and talk more about how this topic (or anything else) is impacting your pedagogy.
