Some Opinionated Takes on Chatbot Use by Students
What follows is a slightly more opinionated add-on to the required language on academic misconduct that goes into my course descriptions. To be clear, that “required language” is boilerplate we’re told to include, and it governs the course. The passages below offer additional context for students in light of the potentially negative, harsh, or critical feedback I provide. I’m sure I’ll update this periodically. I just need students to know why I feel the way I do about what I see them doing.
The most apt metaphor I’ve seen for the release of ChatGPT on 30 Nov. 2022 likens it to the Trinity test in New Mexico on 16 July 1945. The Trinity test marked the beginning of what we retroactively call “the atomic age,” and it had one major implication for the world: it contaminated metals manufactured after that date with radionuclides, jeopardizing sensitive medical and technical equipment like radiation detectors. To compensate, scientists working in technology and medicine needed materials (so-called “low-background” steel/lead) manufactured before that date, often salvaged from pre-war shipwrecks. Supposedly our steel is no longer contaminated in the same way, or to the same degree, but that is still a remarkable consequence of one event at one particular moment in time.
The article I read invoking that metaphor is more concerned with new knowledge generation, and what happens when synthetic output produced by AI models becomes part of future training sets. Those of us who know about the Habsburgs, or remember what happens when you make copies of copies of VHS cassettes, will know the implication for the future of innovation and research. My concern, however, is mostly for university education. Everything (where I work) is open-book and open-notes. Everything (where I work) is take-home with a deadline of seven days or so. I’m teaching students now who have had ChatGPT (and related services) for the entirety of their university education. I will soon start teaching students who have had it for the entirety of their secondary education. Like the scientists frantically scavenging scuttled ships for pre-war steel, I can no longer trust the quality of what I see. I know you’re using it, and often uncritically. I cannot trust that you are actually doing what you say you are doing. That’s true even if you are legitimately and earnestly doing the work, with or without assistance from a particular chatbot. I just can’t trust you the same way I could before Nov. 2022. Whether it’s topical courses at the intro level, theses at the BA level, or (definitely) quantitative methods instruction at any level, I’m confident that chatbots have so thoroughly contaminated what I’m asked to evaluate that I’m almost doing gratis product testing for one of these services.
This manifests in several ways, and will likely coincide with harsh remarks from me on student output. These remarks will read like accusations, though they aren’t. Our guidelines do not put us in a great position to prosecute these things even if the line separating “proper” from “improper” were clear.[1] They aren’t accusations as much as they are frustrations with what students do, or are asking me to do, with a straight face. I cannot control if or how students use one of these services to guide themselves through a university education, but I can offer these assorted comments to contextualize how I feel and why I might be frustrated with what a student is asking me to read.
Here’s a table of contents.
- “Godfellas” (i.e. Please Make it Less Obvious)
- Your Chatbot Isn’t Answering the Question I Asked
- You’re Not the Only One
- You Should Not Aspire to Sound Like a Chatbot
- False Positives and Actual Innocence
“Godfellas” (i.e. Please Make it Less Obvious)
This has happened twice, as of writing, on assignments where we as faculty work closely with students through several iterations of a draft. In both instances, students were submitting ChatGPT-generated output for me to evaluate prior to submitting the final assignment. It was obvious, especially when they couldn’t explain what they were doing in their own words. They were at least honest about it. I don’t feel evaluating ChatGPT output is a productive use of my time, but it turned out to be a required use of my time. The bulk of my interaction with them reduced to, in as many words, helping them optimize their use of ChatGPT to get what they wanted and pleading with them to make it less obvious (to me, at least) that this is what they were doing.
Notice that I can’t tell them “stop.” I couldn’t enforce “don’t.” I tried it once, got nowhere, and had no real recourse. Instead, I had to advise them to please make it less obvious that ChatGPT did this and they (i.e. those two students) didn’t. I’m an American and reference American TV shows all the time. Thus, I had to invoke a line from Futurama and one particular episode: “Godfellas.” In this episode, Bender is flung into space and happens to meet “God.” Their conversation focuses on the appropriateness of divine intervention, especially when divine intervention could be the difference between faith and reality. God’s hedge in this conversation is that doing too little leads believers to lose faith, while doing too much induces unwanted dependence. Speaking Bender’s particular language: you need a light touch to avoid detection if you want to be a successful pickpocket. If you’re going to burn down a business for the insurance money, you need to make it look like an electrical mishap. When you do things right, I shouldn’t be sure you’ve done anything at all.
I can’t make dishonest students honest, but if this is the route you choose, at least try to make your arson look a little more accidental. You’re doing this because you want the insurance payout, right? Cover your tracks better.
Your Chatbot Isn’t Answering the Question I Asked
I’ve harped on this before, but it’s one of my biggest educational mantras: “answer the question I asked.” Too often, students (even before this new atomic age) would answer some question adjacent to, or altogether different from, the one I actually asked. I would reply that I need the student to focus on the question in front of them. I find myself repeating that more since I moved here (incidentally, just before Nov. 2022). I’m not sure students fully appreciate that the chatbot often goes into la-la land for a particular prompt, generating non-answers to the specific questions I’m asking. I structure my questions in a particular way because I want you to know a detail well. The chatbot will often miss that as much as the pre-atomic-age student trying to know just enough, generally, to get by in the course. If you want to use it, I can’t stop you. Just make sure it’s working for you.
Related to this, I think students assume that I haven’t fed my own prompts through a chatbot to see how it responds to my exam questions. I have. I know how it behaves, and I have a good idea of its tells with respect to what I’m asking the student (or the chatbot) to do.
You’re Not the Only One
Some classes I teach are entirely online and involve students from across Europe. Most are in-person, though, with students who are often tight-knit and work on stuff together. It’s actually a welcome culture shock relative to the United States, where students in a class often don’t know other students exist and couldn’t pick them out of a lineup. For the in-person classes, I can readily discern “networks” and friend groups. In a class of 35-50 students, there might be two or three such networks of maybe 10-15 students each.
If, however, more than half the class is repeating particular statements and particular expressions, I’m left believing it’s coming from a common pool beyond any friend group. Again: you’re not the only one, so try to make it less obvious. A student sees just the one thing they think they can slide underneath the radar. I see the several dozen things that students try to slide underneath my radar.
You Should Not Aspire to Sound Like a Chatbot
This is a particularly spicy take, but I think most chatbots read in the most grating way possible.[2] ChatGPT, in particular, is just an obnoxious messenger. I don’t know this for a fact, but I’m inclined to believe its English training set is filled with stuff from Medium or, in particular, LinkedIn. Its dialect is a very heavy “Linkedinglish”.[3] I despise reading that particular dialect because I don’t think it’s professional in the slightest. If I ask academic questions pleading for some type of professional response, what I get back reads like a cringe-inducing sales pitch about what, say, the central limit theorem can teach us about B2B sales. Just, gross.
This manifests in two particular ways, the latter of which might seem particularly harsh (even if I mean no ill will). First, I think the chatbot is getting worse in this regard. ChatGPT, in particular, seems to have degenerated over time into an increasingly base-level Linkedinglish, almost at the level of a university intern taking the reins of a company’s social media accounts. If Linkedinglish were just one of several English dialects available to it, it seems to have learned that its users like this particular form of English. ChatGPT, if nothing else, is a people-pleaser.[4]
From my vantage point, it’s sometimes embarrassing to read its output because it’s clear the product is entering an “enshittification” arc. That’s what happens when a product expands so much that it is no longer incentivized to sell you something that works. It’s more inclined to keep you hooked on a product that won’t get better the way it once did. Like a generic company’s social media account, the chatbot is more interested in selling you itself (and not necessarily solutions).
Be mindful of the distinction here. Whether it’s working well for you is incidental; whether you’re using it at all is what matters to the tech company and its chatbot, first and foremost. The chatbot is kind of like a drug dealer. You come to it with problems, but it doesn’t quite have the solutions you need. It gives you assurances that it does, though, in hopes that you’ll get hooked. You come to it with questions and, as I noted above, it gives you answers to questions you did not ask. You come to it wanting to know how you should write, and you end up sounding like a post on how the 95% confidence interval helped someone generate $25M in sales across their business journey. You should not aspire to sound like that.
The second will seem harsh, but I only offer it because I’m on a similar journey learning Swedish (and Korean). The chatbot writes better English than university students, whether English is the student’s native language or not.[5] English is the lingua franca of the world, so of course its English capabilities are very good. It’s going to sound better, stripped of context, than students in that 19-23 age range. If you’re feeding prompts into it, try to rewrite what you get back in your own words and in a style more appropriate for the academic venue. You’ll learn more that way. Also, definitely do that where the output has 👉 emojis ✅ or unnecessary terms in bold face. I would need ✋two hands✋ to count how many times I’ve seen that leak into student submissions.
False Positives and Actual Innocence
Periodically, I’ll write “this sounds like a chatbot” on a student’s submission, and the student will respond quite vigorously that they did not do what I’m accusing them of. When this happens, I have a few initial reactions. First, sometimes I’m convinced (or know outright) that they’re not being truthful. Second, the student should know it’s not an accusation, because an accusation is something I’d have to formally charge. I only say these things in the context of grading assignments, and I could not grade an assignment if I were alleging misconduct. Third, I might genuinely be wrong. It happens. I think I have a good idea of how chatbots approach my material, and I know for a fact they’ve seen my material a lot. However, I’m not foolproof. I could genuinely be mistaken. That’s why I try my best not to couch it as an accusation, though I’m not very good at withholding my frustration.
Still, if I may, for the potential student of mine reading this: imagine what it’s like from my perspective when a student independently reproduces something a chatbot would do, if and when that genuinely happens. I wouldn’t say it if I were convinced you knew what you were doing and were answering the question I asked. I know chatbots routinely go into la-la land answering questions I did not ask. I know chatbots have a particular lingo and a particular set of tells with respect to my material, some of which are quite clumsy. I know they do all this in confident, sales-person English. I also know chatbots are something a potential employer could use, at considerable savings, to bullshit their way through the next quarterly report. If I’m an employer paying for a chatbot and evaluating it against a job-candidate student who produces identical output, I will go with the chatbot every time. The chatbot is always cheaper.
Even when I was young, I was aware that deeper learning is only incidental to what the university offers. It offers a necessary credential to gain a foothold in a competitive job market and a changing economy. I can see why cynical students are tempted to chatbot their way through a university education to get that piece of paper, but I don’t think they appreciate that this doesn’t put them in a great position in a job market forever interested in automation and in making a labor force disappear from the budget. Even the most honest students should wrestle with this. If you sound like a free service to me, you’ll sound like a free service to an employer. My heart sincerely aches for a younger generation that has to deal with this now. It makes me happy I’m as old as I am. My friends and I did not have potential employers doubting our ability or utility the way I’m sure a younger generation is being doubted and dismissed right now. I would be distraught if I had to fight upstream against accusations that I’m indistinguishable from Claude or ChatGPT.
This, especially amid the bigger political problems we all face in this day and age, is why I get supremely frustrated with what I see students doing, or failing to do. Slop is all around us, and I’m pleading with you, the student, to distinguish yourself from it. I want to see your humanity, warts and all, so I can help you get better. I want to know it’s you writing this stuff, but I just can’t know that these days the way I could before Nov. 2022. I can’t trust that students are 100% honest in this atomic age. For the dishonest students: I don’t want to see you atrophy your mind because the chatbot makes it so conveniently easy to do that. You’ll hemorrhage your capacity to think, and to think critically, and you’ll hemorrhage your future.
For the honest students: I don’t want to see you sound like the machine that’s siphoning energy away from the human labor force. I want to see you be better than the machine. I want you to maximize your potential in an era where employers are increasingly trying to automate blue- and white-collar jobs out of existence. Try to rise above the machine. Rage against it, even.
[1] The burden to prosecute academic misconduct is particularly high. One of my little American quirks here is thinking through what could be litigated and what I could “prove” as if it were a court of law. I obviously cannot summon the coercive arm of the state to seize a student’s computer to prove impropriety, and the punishment mechanisms are often lacking relative to what I remember at American universities.
[2] I understand students can use these services to catch typos and potentially tighten up a few passages in their written material. I respect that use, but I would not encourage students to treat it as a complete substitute for the editing they should be doing themselves.
[3] I’m aware of the irony of that article being on Medium.
[4] Jae DiBello Takeuchi is a friend of mine from a previous employer. From the Japanese speaker’s perspective, ChatGPT is that insufferable person no one wants to be stuck with at a party. Likewise, I would not want to be stuck having to entertain, or be entertained by, a LinkedIn lunatic at a social event.
[5] Likewise, it writes better Swedish than I do. I haven’t tested it with Korean yet, but I’m just starting that journey.