The previous post in this GenAI and Critical Thinking series reports on interviews I did with three of my former students. In the post, I identify parts of my composition curriculum that students say made them think critically. I then suggest that writing instructors can move toward incorporating some of those parts into their own GenAI-era curriculum. However, we know from reporting in many pieces, like the Hua Hsu article that inspired this series, that not all students will respond to the course setup like Amara, Haniya, and Lucy, who decided not to offload their critical thinking to AI. To better ascertain what motivates students, I also asked my interviewees what made them choose to do the work themselves rather than turning to GenAI.
Two of the three interviewees named intrinsic reasons for choosing to resist AI use: they possess a fundamental desire to learn new things and to protect their original voices. All three, however, also brought up extrinsic factors that supported their choice to really practice thinking critically. Specifically, they chose to think hard because (1) the course material was of deep, personal interest to them and (2) the built-in low-stakes assessments and scaffolding invited authentic investment and gave them room to fail without huge penalty.
Intrinsic Motivations for Critical Thinking
Many students already come to class seeking to be challenged, to do hard work, and to leave having created something authentic to themselves. This might be because they recognize they must put in effort in order to get maximum value out of their investment in higher education. Lucy, an honors student, said that she stayed away from GenAI in my class mostly because she is paying for her education and wants to get something for her money.
Somewhat differently, Amara notes that her desire to be authentic and unique guided her decision-making about GenAI:
“When you have your own idea, it’s yours. It’s not something else. Sometimes when I ask [GenAI] one question, it gives another person the same answer. But I crave ownership of my work. I don’t want to give the same answer that everyone else is giving.”
Lucy and Amara help us see that some students are simply less susceptible to the allure of shortchanging their learning by offloading cognitive burdens to GenAI. They have clear reasons for attending school or have personal codes that dissuade them from using the tool in ways that decrease or eliminate critical thinking.
Despite their intrinsic motivations for learning, however, all three interviewees cited specific features of the composition course that increased their motivation or made it easy to choose not to use GenAI.
Extrinsic Motivations: Removing Barriers that Prevent Students from Enjoying the Course Topic
Across the board, students agreed that because the course topic was interesting to them, they simply wanted to take the time to think about it. In the class, students research and write about how various social media platforms intervene in and affect issues such as personal relationships, activism, learning, the economy, social and political norms, and more. At the beginning of the course, students read, watched, discussed, and wrote about the same social media-related topics. For every assignment, we took time to create personal connections to the material and to link the content to current issues. This early time in the course gave students an opportunity to thoroughly explore possible avenues of research so that, later on, they felt ready to form groups and compose research questions around specific topics of interest.
It’s neither new nor revolutionary to suggest that instructors make their classes interesting. Most instructors work hard to open students’ eyes to how their course topics intersect with important issues in the world and to pitch students on why they should care. However, instructors can examine whether there are barriers in the way of students accessing the interesting parts of their course. Haniya shares how it wasn’t until she learned critical reading skills that she was really able to connect with the course topic:
“When I actually tried and I had to read [the assigned chapter], it got interesting. When the reading was starting to click, then your work wasn’t boring. I thought, ‘Ok, I’m getting interested in this.’”
Here Haniya reveals that she wasn’t able to enjoy the work until “the reading was starting to click.” In the last post, I share how Haniya explains that annotating course texts unlocked a new level of reading comprehension that allowed her to think more critically as she read. She first needed resources and strategies that allowed her to understand the course content. When we spent time in the class discussing and practicing text annotation, she didn’t simply begin to think more critically; she began to enjoy the topic more. If enjoyment is central to a student’s choice to think critically, then we should do our best to make sure students have a clear path to find enjoyment in our work.
Low-stakes Assignments, Inviting Authentic Expression, and Project Scaffolding
The interviewees also expressed that they were more likely to turn in personal, authentic, self-generated work because of the course’s low-stress workflow that invited creativity and gave room for improvement.
Lucy said she “felt motivated because the deadlines made sense.” In 102, students work on a major project over several weeks, turning in small pieces along the way that do not feel overwhelming but that lead to rich, complex, nuanced work. For her primary research activity, Lucy conducted several interviews and had a ton of data to analyze and synthesize. She was able to complete that work systematically because I asked for the basic coding of the interviews as one homework assignment, a proposal with a thesis draft as another, then a low-stakes half draft of the research paper, a full draft, and finally, after I gave feedback on all the pieces along the way, a polished final draft. In other words, none of these assignments alone put immense pressure on students to be perfect. Students can include creative, off-beat, unique responses that we can collectively shape into something usable for a final draft.
Echoing Lucy’s comment that the workflow motivated her to do her own work, Haniya says that “using ChatGPT wasn’t going to work for your class because of the way you set it up.” She explains:
“I didn’t like using ChatGPT because if we used it on an assignment, it would give you the basic answers. But for your class, you give us a lot of room to be creative with our answers. There’s no need for me to be putting in basic stuff when I know what you’re looking for. Our answers could not make sense but you would make them make sense, because you’re the type of teacher that likes the stuff off the head.”
This response tells me that the low-stakes scaffolding and the personal feedback made her feel safe to give me her authentic responses instead of trying to give me polished, rote, “basic” content that might come from GenAI. When she says that there is “a lot of room to be creative” and that I, as the instructor, “would make [her responses] make sense,” she seems to refer to the recursive nature of the writing process that the course setup mirrors. She would authentically respond to some assignment, perhaps drafting a working thesis statement. I would give feedback that advised her how to make that working thesis more audience-friendly or more closely aligned with her evidence. That new, more polished thesis statement would then allow her to build a stronger draft using her own responses.
Can instructors actually motivate students NOT to use GenAI?
Plenty of voices suggest that even the most motivated students will eventually decide to let AI generate writing for them. For example, in his recent New York Times editorial, NYU vice provost Clay Shirky suggests that instructors should never assign writing for homework and that all writing should happen in the classroom, largely in blue books, and perhaps even in classrooms that have no internet access. Shirky advocates for turning away from writing and moving toward oral exams as the gold standard for evaluating student knowledge.
I agree that our pedagogical arsenal should include some of these low- or no-tech assessments, and I, too, am exploring how extemporaneous speaking might play a role in assessing student learning. However, I find it facile to make a blanket decision to mistrust all of our students and suppose that they will always turn to GenAI if given the chance. Shirky cites as his main example a philosophy professor from NYU’s Abu Dhabi campus who claims that “even the good students” in his classes “were using A.I. to avoid work outside of class.” Yet we know nothing about the kind of work the professor was assigning, how much he was asking students to do, what support they had within and outside the classroom to help them accomplish it, and how much work he had done to make the course and the assignments meaningful to them.
Recent polling shows that there are still students who come to us with intrinsic motivation to learn and even some who come with skepticism about AI. We can be clear-eyed about how a simple admonishment not to use the tech is insufficient to protect students’ critical thinking. But we can also do real work to transform our pedagogical practices to meet the moment without throwing away writing altogether and sowing seeds of mistrust with our students.
