Student Perspective Part II: Craft Supportive, Intentional AI Pedagogy

By: Noa Guest, Junior Computer Science Major

In my last blog post, I wrote about my experience as a Computer Science student in a job and academic environment that is rapidly pushing GenAI as the “next big thing.” I discussed how professors should implement GenAI based on the level of the course. Yet as students, we don’t simply need a green or red light when it comes to using AI. We also need professors to strike a balance in their rhetoric about AI and the future of the job market. Professors and instructors should avoid doomsday pronouncements about how our jobs post-graduation will be “stolen by AI” and should instead focus on providing students with a skillset that will help them navigate the changing employment landscape.

These days, students cannot escape the “your job will belong to AI” rhetoric, whether in universities or in society at large. The more a student hears that they are fighting a losing battle against large language models, the more inclined they will feel to use GenAI “because it does not matter anyway.” Professors should recognize that they are teaching the first generation of students who will compete with computers for jobs. Therefore, they should design assignments that help students develop skills they can use to outperform AI.

Beyond assignments that help us develop skills for an AI-driven job market, we need to hear professors articulate meaningful reasons why we should or shouldn’t use the tool. The most helpful framing from my English classes was that AI use in the course should teach us to work with AI because it is not going anywhere. In one of my favorite English assignments, we were instructed to find flaws in GenAI output so that we could better understand when it does not work. What made this effective was that our professor explained why we were doing the task: partly, we were learning to spot inaccuracies in the output so we could fix them, and partly we were learning how to make a compelling argument to an employer for why we should avoid AI for some task. By comparison, it feels like my Computer Science professors have almost given up, suggesting we use AI on assignments where it is unnecessary or to prepare for internship interviews. Young people want advice on how to work with GenAI or how to actively move a workplace away from it. When professors give students the option to use AI, they should explain the advantages and disadvantages of using it for that purpose. For any task a student must complete with AI, professors should encourage students to outperform the tool or to notice the downsides of incorporating it into a professional workflow.

Many of my classmates have come up with inventive ways to study with AI or to complete menial parts of their assignments. But they do not usually have the skills to explain why it is ill-suited to a particular task, and many do not believe they can outperform it. The AI jobs argument rests on the idea that “simplistic” jobs will be outsourced to AI to save costs. Jobs often cited as on the line include junior developer, professional writing, journalism, graphic design, and more. Many of these are the jobs an average college student would take immediately after graduation. But AI cannot outperform skilled graduates if those graduates are given the skills to counter AI and the secure foundations needed to be great at what they do. These skills and this confidence must come from their instructors and mentors.

If there is one thing to take away from how quickly GenAI rose in the professional and academic worlds, it is that educators can play a big part in developing young people to be better than the machines trying to “replace them.” AI should appear in assignments that encourage personal growth rather than reliance on GenAI. Professors and educators are the younger generation’s biggest weapon against AI, and working together on this effort is imperative to the future of our workforce.

Student Perspective Part I: Keep GenAI OUT of Lower-Level Courses

By: Noa Guest, Junior Computer Science Major

In my junior-year high school Computer Science class, I wrote an assignment making fun of generative artificial intelligence (GenAI) for not being advanced enough to be helpful. A year later, I relied on it in my Java programming class, leaning on it as a crutch to navigate the turbulent parts of higher-level coding languages I had no experience with. As a result, when I got to college, I felt overwhelmed trying to relearn the foundations that I had previously allowed GenAI to handle for me. I am not the only one who has gone down this path. I know plenty of students who turn to AI to manage increasingly overwhelming course loads in both Computer Science and other fields. This is a problem, as the overuse of AI will almost certainly stunt students’ abilities to hone the foundational skills needed for their chosen careers. But I believe that professors and instructors can improve how they implement its usage in both lower- and higher-level courses to help protect student learning.

College students are feeling the temptations of AI, and it has become a common topic among my classmates. Some obviously use AI, submitting discussion posts that contain all the hallmarks of “AI” writing (overuse of em-dashes, “it’s not just this, it’s this,” an overly joyful tone). Others are staunchly anti-AI and would rather write a 900-page paper by hand than ever prompt a large language model to write even a sentence for them. In Computer Science, it’s not so cut-and-dried. Sometimes we talk about helpful uses, like study guides based on professor-provided mock tests covering niche material. Other times, I hear my classmates discussing their use of it on crucial homework or lab assignments. There are many ways GenAI can be useful, but arriving in college coding courses missing so many basic skills, because I had let AI develop those skills for me, served as a significant wake-up call.

Beyond student uses, professors have begun to incorporate GenAI into assignments. In some courses it appears as optional or “recommended”; in others, it is required. When I speak to my classmates, they often agree that professors and instructors can make significant changes to the way they incorporate AI into their courses. In lower-level courses, I believe professors should disallow its use entirely. In my college Java course, GenAI is banned outright. With this policy, it is entirely up to the students to control their success. Of course, banning it outright does not prevent students from using it (and it is hard to detect AI-generated code now, anyway), but the expectation is set. Because it is not allowed, students are forced to build up their coding skills, write their programs on their own, and use that knowledge on exams. The same approach can apply to a humanities class: where detecting AI in writing is possible, students must learn the material themselves and then be tested in some way, whether through an in-class essay or an exam.

When it comes to higher-level courses, where students have already had the opportunity to learn those foundations because AI was prohibited, GenAI can be used as a background tool to make menial tasks easier or to teach students how to live with AI and prepare for the workforce. Unlike my 200-level Java course, Computer Science courses at the 300 level and above require AI usage. By that point, students will have properly built their foundations without AI, but this only works if it is banned from lower-level courses.

With increased research into sustainable AI, the technology could revolutionize parts of academia and assist with major medical or research breakthroughs centered around computing. But when it comes to the foundations, AI takes away the cognitive skills needed to grow in programming and to learn the important skills needed for the workforce. Without GenAI, I am now a stronger coder than ever. That is not to say AI is all bad; I think many of my classmates (and many companies with tech departments) would agree that it has its uses in speeding up menial tasks and studying for high-level classes. But if college students can avoid AI in the foundational parts of our degree paths, we will, in turn, be able to use AI to our advantage far better than we would have if it had been required or suggested from the beginning.

Two-Part Student Perspective: How Professors Can Best Support Students in the GenAI-Age

By: Noa Guest, Junior Computer Science Major

As a college student, I find that generative artificial intelligence (GenAI) is unavoidable. Some of the biggest GenAI-haters I know have admitted to using it to cheat in their online classes. Many students have given up and have turned to AI to manage large course loads. Professors are even encouraging students to use it in assignments. Used by both professors and students, GenAI is as prevalent as ever in universities. But should it be? In this two-post series, I explore how professors can change the way they use AI in assignments and how academia can better encourage students to flourish in an AI-driven workforce.

In my first post, I describe my experience using AI as a Computer Science student. I explore how professors could optimize their AI implementations in assignments, excluding it from foundational courses and focusing its use in upper-level courses, where it can be useful for speeding up slower tasks, studying high-level material, and teaching students to outperform AI in terms of skill.

My second post expands on course implementation from the first, but I focus on the “AI will take students’ jobs” argument. In my experience, instructors and professors talk about AI in ways that make students feel unable to “beat the machine.” I discuss how professors could improve their rhetoric around AI, emphasizing that their intention is to arm students with the skills necessary to outperform AI or to negotiate when to use it, and when not to, in the workplace.

With these posts, it is my hope that professors and instructors at universities will be able to better incorporate AI in future courses, whether through complete omission or a tactful implementation designed to teach counter-AI skills. My classmates and peers at universities around the country do not feel adequately represented in the college-and-AI conversation. Professors and college students should be a team in building a workforce equipped to combat AI job outsourcing and other harmful uses of the technology, and I firmly believe this team will influence the future if professors give students the resources to do so.

Student Perspective: Five Ways Alabama Residents Can Help Offset GenAI’s Environmental Footprint

By: Joseph Axworthy, Junior Writing and Media Major

In my last blog, I talked about the need to look for ways that we can help preserve our planet to balance GenAI’s huge environmental footprint. However, looking for ways to get involved can feel laborious, so as an Alabama resident, I’ve done the work for you. Here are five ways that Alabamians can get involved in helping keep our planet healthy. 

  1. The Alabama River Alliance (ARA) is an organization that focuses on keeping the rivers of Alabama clean and ecologically healthy.

Why is this important? 

In order to sustain AI’s workload, data centers need water to cool their servers. A large data center can use as much as 5 million gallons of water a day, so preserving water sources can help offset this enormous demand. Alabama has few laws protecting the immense biodiversity the state holds in its freshwater sources. ARA fills the role of a governmental agency, advocating for clean water and riverways throughout the state.

What can I do? 

  2. Greater-Birmingham Alliance to Stop Pollution (GASP) is a non-profit that’s focused on keeping Alabama’s air clean through education.

Why is this important?
Data centers are huge consumers of energy. The US gets around 82% of its energy from fossil fuels, which produce large amounts of air pollutants. GASP not only advocates against bills that allow the release of pollutants into the air but also works at the community level, educating people on the topic.

What can I do? 

  3. The Fresh Water Land Trust is focused on two goals: conserving land and connecting the Red Rock Trail System through Jefferson County.

Why is this important?

With the increased demand for data centers, we have seen an uptick in deforestation. The Fresh Water Land Trust’s goal of conserving important land with native species grows more important with every center built. Its efforts can also help keep land that houses native species from becoming available for data center construction.

What can I do? 

  4. Keep Alabama Beautiful is an organization that believes everyone should have clean and healthy living environments.

Why is this important?
Keeping communities clean and healthy is a great way to make a difference without going far. Fighting pollution locally can help keep your city clean and green. 

What can I do? 

  • Keep up to date via their website on projects around your city
  • Apply to get free resources to clean your neighborhood 
  • Join their “Adopt a Mile” program to keep a mile stretch clean in your county 
  • Keep up with their calendar to join one of their Planting Weeks or Beautification Days
  5. The University of Alabama at Birmingham (UAB) Sustainability office offers ways for students and faculty to get involved in sustainability as well.

Why is this important? 

UAB, as one of the largest employers in Alabama, has a major impact on the environment. It is important that students and faculty have resources available to educate themselves and put to use. Students and workers often have busy schedules, so having an option on campus makes getting involved easy and accessible.

What can I do?

The Big Picture 

As data centers greedily drink our water and we clear another 100 acres of forest, preserving our environment has never been more important. I know that most of us live busy lives; we work, study, and maintain relationships. But taking just a few hours out of our schedules to volunteer or support a bill that preserves our future is worth it. So write it on a to-do list, plan to volunteer with friends, or set an alarm on your phone to remind you to call lawmakers and advocate for environmental protection. Now you know where to go and what you can do. All that’s left is to act.

Engaging topics, creative authenticity, and room to fail: What motivates students to resist lazy GenAI-use and think critically instead

The previous post in this GenAI and Critical Thinking series reports on interviews I did with three of my former students. In that post, I identify parts of my composition curriculum that students say made them think critically. I then suggest that writing instructors can incorporate some of those parts into their own GenAI-era curricula. However, we know from reporting in many pieces, like the Hua Hsu article that inspired this series, that not all students will respond to the course setup like Amara, Haniya, and Lucy, who decided not to offload their critical thinking to AI. To better ascertain what motivates students, I also asked my interviewees what made them choose to do the work themselves rather than turning to GenAI.

Two of the three interviewees named intrinsic reasons that they chose to resist AI-use: they possess a fundamental desire to learn new things and to protect their original voices. All three, however, also brought up extrinsic factors that supported their choice to really practice thinking critically. Specifically, they chose to think hard because (1) the course material was of deep, personal interest to them and (2) the built-in low-stakes assessments and scaffolding invited authentic investment and gave them room to fail without huge penalty. 

Intrinsic Motivations for Critical Thinking

Many students already come to class seeking to be challenged, to do hard work, and to leave having created something authentic to themselves. This might be because they recognize they must put in effort in order to get maximum value out of their investment in higher education. Lucy, an honors student, said that she stayed away from GenAI in my class mostly because she is paying for her education and wants to get something for her money. 

Somewhat differently, Amara notes that her desire to be authentic and unique guided her decision making about GenAI:

“When you have your own idea, it’s yours. It’s not something else. Sometimes when I ask [GenAI] one question, it gives another person the same answer. But I crave ownership of my work. I don’t want to give the same answer that everyone else is giving.”

Lucy and Amara help us see that some students are simply less susceptible to the allure of shortchanging their learning by offloading cognitive burdens to GenAI. They have clear reasons for attending school or have personal codes that dissuade them from using the tool in ways that decrease or eliminate critical thinking.

Despite their intrinsic motivations for learning, however, all three interviewees cited specific features of the composition course that increased their motivation or made it easy to choose not to use GenAI. 

Extrinsic Motivations: Removing Barriers that Prevent Students from Enjoying the Course Topic

Across the board, students agreed that because the course topic was interesting to them, they simply wanted to take the time to think about it. In the class, students research and write about how various social media platforms intervene in and affect issues such as personal relationships, activism, learning, the economy, social and political norms, and more. At the beginning of the course, students read, watched, discussed, and wrote about the same social media-related topics. For every assignment, we took time to create personal connections to the material and to link the content to current issues. This early time in the course gave students an opportunity to thoroughly explore possible avenues of research so that, later on, they felt ready to form groups and compose research questions around specific topics of interest.

It’s neither new nor revolutionary to suggest that instructors make their classes interesting. Most instructors work hard to open students’ eyes to how their course topics intersect with important issues in the world and to pitch students on why they should care. However, instructors can examine whether there are barriers in the way of students accessing the interesting parts of their course. Haniya shares how it wasn’t until she learned critical reading skills that she was really able to connect with the course topic:

“When I actually tried and I had to read [the assigned chapter], it got interesting. When the reading was starting to click, then your work wasn’t boring. I thought, ‘Ok, I’m getting interested in this.’”

Here Haniya reveals that she wasn’t able to enjoy the work until “the reading was starting to click.” In the last post, I share how Haniya explains that annotating course texts unlocked a new level of reading comprehension that allowed her to think more critically as she read. She first needed resources and strategies that allowed her to understand the course content. When we spent time in the class discussing and practicing text annotation, she didn’t simply begin to think more critically; she began to enjoy the topic more. If enjoyment is central to a student’s choice to think critically, then we should do our best to make sure students have a clear path to find enjoyment in our work.   

Low-stakes Assignments, Inviting Authentic Expression, and Project Scaffolding 

The interviewees also expressed that they were more likely to turn in personal, authentic, self-generated work because of the course’s low-stress workflow that invited creativity and gave room for improvement. 

Lucy said she “felt motivated because the deadlines made sense.” In 102, students work on a major project over several weeks, turning in small pieces along the way that do not feel overwhelming but that lead to rich, complex, nuanced work. For her primary research activity, Lucy conducted several interviews and had a ton of data to analyze and synthesize. She was able to complete that work systematically: I asked for basic coding of the interviews as one homework assignment, a proposal with a thesis draft as another, then a low-stakes half draft of the research paper, a full draft, and, after feedback on all the pieces along the way, a final polished draft. In other words, none of these assignments alone put immense pressure on students to be perfect. Students can include creative, off-beat, unique responses that we collectively shape into something usable for a final draft.

Echoing Lucy’s comment that the workflow motivated her to do her own work, Haniya says that “using ChatGPT wasn’t going to work for your class because of the way you set it up.” She explains that:

“I didn’t like using ChatGPT because if we used it on an assignment, it would give you the basic answers. But for your class, you give us a lot of room to be creative with our answers. There’s no need for me to be putting in basic stuff when I know what you’re looking for. Our answers could not make sense but you would make them make sense, because you’re the type of teacher that likes the stuff off the head.”

This response tells me that the low-stakes scaffolding and the personal feedback made her feel safe to give me her authentic responses instead of trying to give me polished, rote, “basic” content that might come from GenAI. When she says that there is “a lot of room to be creative” and that I, as the instructor, “would make [her responses] make sense,” she seems to refer to the recursive nature of the writing process that the course setup mirrors. She would authentically respond to some assignment, perhaps drafting a working thesis statement. I would give feedback advising her how to make that working thesis more audience-friendly or more closely aligned with her evidence. That new, more polished thesis statement would then allow her to build a stronger draft using her own responses.

Can instructors actually motivate students NOT to use GenAI?

Plenty of voices suggest that even the most motivated students will eventually decide to let AI generate writing for them. For example, in his recent New York Times editorial, NYU vice provost Clay Shirky suggests that instructors should never assign writing for homework; instead, all writing should happen in the classroom, largely in blue books, and perhaps even in classrooms that have no internet access. Shirky advocates for turning away from writing and toward oral exams as the gold standard for evaluating student knowledge.

I agree that our pedagogical arsenal should include some of these low- or no-tech assessments, and I, too, am exploring how extemporaneous speaking might play a role in assessing student learning. However, I find it facile to make a blanket decision to mistrust all of our students and suppose that they will always turn to GenAI if given the chance. Shirky cites as his main example a Philosophy professor from the Abu Dhabi NYU campus who claims that “even the good students” in his classes “were using A.I. to avoid work outside of class.” Yet we know nothing about the kind of work the professor was assigning, how much he was asking them to do, what support students had within and outside the classroom to help them accomplish it, and how much work he had done to make the course and the assignments meaningful to the students. 

Recent polling shows that there are students who still come to us with intrinsic motivation to learn, and even some who come with skepticism about AI. We can be clear-eyed about how a simple admonishment not to use the tech is insufficient to protect students’ critical thinking. But we can also do real work to transform our pedagogical practices to meet the moment without throwing away writing altogether and sowing seeds of mistrust with our students.

 

Critical Thinking Superchargers (according to students): Group work, Text Annotation, and Primary Research

 

Using GenAI to compose may lead to fewer opportunities for students to think critically during the writing process. One response for instructors is to identify the writing tasks that require the most critical thought and then emphasize those in the curriculum. I designed my own first-year composition curriculum based on this principle of emphasizing critical thinking both within the protected classroom hours and for out-of-class assignments. Yet how do I really know what parts of the process make students think the hardest? 

To discern which parts of my revised curriculum led to the most critical thinking, and to better understand what motivates students to think critically, I interviewed three students, whom I’ll call Amara, Haniya, and Lucy, from the spring 2025 section of my EH102 class. EH102 is the second course in a two-course sequence of required first-year writing curriculum, and it focuses on research writing. While the students I interviewed did use GenAI during class when I led them through experiments and activities with the tech, each of my interviewees reports that they abstained from using it during the research and drafting phases of our major essay.*

For the purposes of the interviews, I told each student that we would define critical thinking as an umbrella term encompassing cognitive activity such as actively questioning an idea or text, analyzing and/or evaluating information, and putting together existing insights to come to new, reasonable judgments. I emphasized that critical thinking takes time and an engaged mind. This small sample of students revealed that they were most engaged in critical thinking during group work, when annotating readings, and while crafting surveys for their primary research.

Group Work:
All three interviewees concluded that working in groups prompted them to engage more in critical thinking than when they worked alone. Haniya explained the most basic way that group work facilitated her critical thinking: it provided sounding boards for making sense of each assignment step-by-step. A sophomore in the pre-nursing track, Haniya had taken my EH101 class during her first semester as a freshman. Unfortunately, she was not able to complete the course but tried again with me later in her academic career and did really well. As a result, she decided to sign up for my 102 course. When I interviewed her about how and when she engaged in critical thinking in my class, she expressed that she has a tendency to “give up” when she feels she doesn’t understand something; however, having a group to work with gave her one way to push through confusion and think collectively about what to do. She admits that for some of the assignments in 102, “my group was confused. We were all confused.” However, she says:

“We would come to class, you would give us insights like ‘try looking here, go here and find this,’ and so we would be texting in the group chat and we would pull up the instructions, think critically, and take in everyone’s input.” 

This collective effort, she says, was what helped them go from confusion to activation. Group work, then, might be a basic support that gives students the resilience they need to push their minds into that more difficult place of critical thinking. 

The other interviewees emphasized that group work led them to elevate and enhance their own thinking. Lucy, a BLANK major, said that her groupmates would help her take a “half-born idea” and then “boost it to the level we wanted.” Whereas she may have stopped short of thinking through an interesting angle while working on her own, the combined efforts of her group allowed her to explore and refine ideas more thoroughly.

Amara, an international student from Nigeria studying psychology, noticed that she had to think more critically to explain her work to fellow groupmates. In one assignment, students had to answer complex questions about a long text and submit a single set of responses as a group. They had time in class to compare responses, discuss the strengths and weaknesses of each, and then finally merge their answers for the final submission. Amara noticed that “people kept asking questions about why [she answered] that way,” and she had to think deeply about how to justify her work. Ultimately, this back and forth conversation brought her to better understand the text.

Required Text Annotations for Homework Readings: Both Haniya and Lucy noticed how the act of annotating readings forced them to think more deeply as they read complex texts for the course. Based on an idea I originally got from a colleague, Amy Cates, I often require that students submit a PDF of their readings on which they have made a certain number or a certain kind of textual annotations. While there are some specific parameters for what counts as a valid annotation for each assignment, these annotations should represent what students are thinking as they work through the text: questions they have, summaries of what they think a passage means, connections between the passage and something they think or wonder, and more.

Haniya cites textual annotation as the strategy that has most revolutionized her ability to understand complex readings. She says:

“Usually, I can’t read books because I can’t read it and understand it in that moment. I’m seeing it, but it doesn’t register in my mind. But I like annotating because it was making me think about what was actually being said….I had to think on that.” 

In other words, for someone who might not have experience pushing through complex texts, annotating becomes a helpful strategy to scaffold that process, demonstrating that you do not have to understand a text the first time you read it and that it is normal to need to rephrase it, ask questions, and think more deeply about it. Annotation not only pushes students to read more actively and curiously; it also helps them become more resilient readers.

Lucy notes a different way that annotating contributes to critical thinking: the accumulation of annotations constitutes another body of data to consider when synthesizing all information to come to research conclusions. She said she engaged in the most critical thinking while reading academic studies for her research project, in part because she annotated the texts as she read and then used those annotations to make an additional set of notes in a separate document. Finally, she considered both the annotations and the other notes, making connections and drawing conclusions. Her textual annotations formed the basis of an ever-widening pool of information that she needed to incorporate into her larger thinking about the project.

Primary Research Protocols: Finally, two of the three interviewees highlighted that their critical thinking was especially supercharged when they were crafting surveys aimed at generating data for their project’s research question. Writing survey questions is not a linear process; it requires thinking about multiple steps within the context of one overarching goal. Students must consider not only how to phrase each question in a neutral way that solicits truthful responses from their audiences, but also how the question will actually generate information that helps answer the research question. I scaffold this process in class by having each group critique another group’s survey draft, noting where questions may be biased or misleading and commenting on the alignment between questions and overall research goals. I also give feedback, and the groups consider all input when revising their surveys before they launch.

Amara said that writing survey questions required a ton of critical thinking. In her words:

 “I was writing something completely different than what I thought I was writing. I really had to do research about what is a survey question, then I really had to think about where do you want the person answering the question to get to, and how do you lead them there? How do you get into the heads of your survey people…and lead them without suggesting the answer?”

Haniya agreed that the dual process of considering how to neutrally word survey questions while also working to elicit specific information from an audience forced her to engage in critical thinking. Both Haniya and Amara, however, said that it was specifically professor and peer feedback that amped up their critical thinking in the survey-writing process. Instructors, then, should designate time in class for students to practice applying feedback to work they may have been asked to complete outside of class. 

What’s Next?
For those who already weave group work, textual annotation, and primary research into your curriculum, look for strategic ways to bring that into protected classroom time as much as possible so that students can think with their brains instead of with GenAI. That might look like offering students in-class time to complete readings and annotations. Perhaps it means releasing your feedback on some assignment right when students get into the classroom and then asking them to meet in groups to address that feedback, requiring that they demonstrate what they’ve changed by the end of the class period. 

For those who are less familiar with this kind of work, you can take a look at current curricular resources on this site for supporting group work and incorporating primary research into the composition classroom. 

Look for Part II of the series, where you’ll hear from Amara, Haniya, and Lucy again. I’ll share their explanations of why they chose to take on the cognitive load and invest the time in critical thinking without GenAI for some of the work in my class. Furthermore, I’ll discuss when and for what classes they do choose to use GenAI.

Student Perspective: Data Centers, Deforestation, and Doing Our Part

A dirt road created by a power company cutting hundreds of native plants in the process at my family’s farm in Brazil.

By: Joseph Axworthy, Junior Writing and Media Major

I can still smell the smoke of machinery and hear the whirring of chainsaws, watching as a power company plowed through thousands of native trees in order to put up cables that I would never even see the benefit of. I was living on a farm in Brazil watching it all happen. It was the first time that I really witnessed deforestation. That event was formative in the way that I look at the world around me and how my actions affect the environment. However, a lot of people in Alabama haven’t had to experience a similar event, and that can make these issues seem far off and, in turn, easy to ignore. But with the rise of GenAI and large language models (LLMs), that’s all changing, and at a rapid pace. One example of this change is “Project Marvel,” a 4.5-million-square-foot data center planned for Bessemer, AL.

The data center is projected to be one of the largest in the United States. Along with the massive concrete structures themselves, the project would occupy 700 acres of land to accommodate the buildings. This would mean the deforestation of thousands of native trees and the destruction of animal habitats and ecosystems. Beyond the loss of flora and fauna, residents of the area have also voiced their concerns. Many are worried about losing their peaceful green space to a concrete jungle of servers and cooling systems.

This is a tragedy that’s happening not just in Bessemer but all over the United States. Texas data centers have already been reported to be worsening droughts due to their high water consumption. GenAI and LLMs are a big contributor to the increase in data center construction, including Project Marvel. In order to function, GenAI requires a massive amount of memory to run the complex calculations needed to deliver its responses. This also means that it contributes in a large way to energy consumption. Some estimates state that a ChatGPT request can use as much as 10 times the energy of a Google search. Since around 82% of United States energy comes from fossil fuels, this increase in energy use carries a serious environmental cost. This increased need for energy means plants like the James H. Miller Jr. Electric Generating Plant near Birmingham might be pumping more air pollutants into our city’s air.

But what can we do? Do we refuse to use AI and start filling out paper forms at the dentist again? By “going paperless” as a society and relying on cloud computing, we no longer depend on storing information on forms in filing cabinets. Instead, we rely on storing information in data centers, which requires a perpetual output of energy. From government websites to ordering food and medications, so much of our lives depends on digital, cloud-based data that needs to be stored somewhere. Not only would it be incredibly difficult to undo this digital infrastructure, but it’s unlikely that anyone would vote for it.

For that reason, the solution to AI’s environmental impact can’t be to simply stop using it or stop storing so much digital data, but to be more intentional about how we care for the environment. If we are destroying forests to build data centers, we need to increase green spaces in cities. If we are upping our energy consumption, we need to use cleaner sources of energy and find more efficient cooling methods. The solution lies in being more conscious about environmental measures that can help balance the damage we are causing. We should also try to be intentional about making the processes we have already established more sustainable. For example, we could use closed-loop water cooling to combat evaporation loss in data centers or switch to renewable energy like wind and solar to power these massive structures.

Trying to figure out what we as individuals can do is daunting, but there are options available to us. Oftentimes reading about pollutants and deforestation is hard, but life goes on after we put the article down. We worry about finals; we let our minds fall into a hum as we go through our nine-to-fives. But it’s harder to ignore when it’s happening right in our city. Alabama, and Birmingham specifically, have a variety of programs and organizations that focus on conservation, sustainability, and keeping our water sources clean. The Alabama Rivers Alliance is currently looking into these data centers, specifically Project Marvel. Even outside of volunteering, we should begin supporting campaigns that work toward clean energy and public transit, like the Green New Deal for Birmingham developed by the group GASP.

Ultimately, it’s unlikely that the continued expansion of data centers is going to stop. But as the trees in our state are chopped down to accommodate our ever-growing need for storage, we all have a responsibility to try to do our part in keeping our future green.

Look for more in my next post about the concrete ways you can get involved in promoting environmental protection in Birmingham.

 

Student Perspective: The GenAI Problems That Academics Ignore

By: Joseph Axworthy, Junior Writing and Media Major

The rise of GenAI in work and academic settings is controversial. When we are asked to choose between the exploitation of people and planet or streamlining that email to Becky who’s two cubicles down, the answer seems obvious. Then why do so many people still choose the latter? One answer is that people lack information about the negative effects of GenAI. Educators and businesses alike have raised concerns about whether GenAI diminishes people’s capacity to create original content. And while that is a valid concern, it pales in comparison to the many other ethical issues that arise when using it. The academic community doesn’t seem to bring up issues like GenAI’s energy consumption, carbon footprint, and exploitation of people across the globe, which is a problem.

While initially I had hoped to ignore GenAI completely, I quickly learned that was not going to be an option. My classes not only exposed me to it but also taught me to use GenAI in an ethical manner that maintained academic integrity and credibility. In the following semesters, I even had classes that required its use for some projects. By the end of the semester, I still felt conflicted but had a much better understanding of how GenAI could be used even in my own writing. Regardless of how I felt initially, it was evident that it could be a useful tool, especially with the growing expectations of efficiency in the corporate world.

Despite understanding how to use GenAI to create better work more efficiently, it still feels wrong to me. Academics’ pushback against it is heavily rooted in credibility and in whether the work produced by GenAI can be seen as original content. Universities set guidelines on its usage to address the issue of original content, but this does little to address the problems that GenAI poses through unsustainability and exploitation. Because academic guidelines on AI use don’t take sustainability issues into account, students are unlikely to think about the environmental and exploitative effects of using GenAI. Most people using it probably don’t even know that they are contributing to these problems at all.

Unfortunately, it’s here to stay, and it’s useful to the hyper-efficient culture of many workplaces. It would be a lie to claim that GenAI doesn’t show exceptional potential in streamlining work, especially in writing fields. The further I get into my degree at UAB, the more GenAI is pushed and encouraged, and I’m sure the future workforce will be no different. This semester, pushing back against GenAI feels like a losing battle. I can’t say no to GenAI like I might to a plastic bag at the store. It leaves me with a choice: limit myself by taking longer to complete tasks that could be streamlined with AI, or abandon some of my morals and use tools that I know are contributing to unethical causes.

While there doesn’t seem to be a remedy for this looming feeling of dread, I believe it’s important for everyone who uses GenAI to be at least aware of its effects. We should push to find alternative ways of streamlining tasks and healthier means of boosting our efficiency that are less problematic in the long run. In the meantime, the best we can do is educate ourselves on the topic as it becomes a part of our day-to-day lives.



 

Introducing a Multi-Part Series: Infusing Critical Thinking in the GenAI-Era Classroom

A new study from MIT, still under peer review, appears to confirm what many educators have intuited since GenAI’s inception. Among other conclusions, the study finds that students who rely mainly on large language models (LLMs) like ChatGPT to compose essays significantly lighten their cognitive load and engage in less critical thinking than those who rely on their brains alone for the bulk of the initial drafting.

I admit that I experienced some satisfaction, vindication, and a little relief when I began seeing titles for articles doing initial reporting on these findings. For example, the headline for Time’s article on this study reads, “ChatGPT May Be Eroding Critical Thinking Skills.” It frames the researchers’ findings in a way that is frankly satisfying for those of us who have more intense misgivings than others about GenAI and learning. Some voices spend most of their time heralding the exciting and boundless potential of GenAI to transform education. I recently reviewed a chapter of a student-facing AI and Writing textbook (for a publisher that will remain nameless) and felt uneasy when I found only enumerations of the many benefits GenAI brings to student learning and no cautionary tales about how to resist the temptation to give over one’s thinking to the machine. While we are wise to explore its potential, the smoking-gun-like headlines about the MIT study return us to the fundamental question: how will we get students to engage in critical thinking in an age of GenAI?

In response to this question, I’m launching a post series entitled “Infusing Critical Thinking in the GenAI-Era Classroom” that will examine ways writing instructors can alter their courses to motivate students to do deep thinking. The series is inspired by, along with the MIT study, Hua Hsu’s New Yorker piece “What Happens After A.I. Destroys College Writing?” In it, Hsu hears from a range of students who detail exactly how they use GenAI to avoid spending time on homework for classes that don’t resonate with them. One student, Alex, details how he successfully used Claude to write a paper that scored an A minus in an art history class, noting, “I’m trying to do the least work possible, because this is a class I’m not hella fucking with.” Hsu catalogues the variety of ways writing professors have responded to this reality, including the return of the Blue Book exam, an emphasis on process writing and relationship-building in the classroom, as well as dumbfounded resignation. The posts in my series will isolate and dig deeper into some of the responses Hsu and others articulate, looking for ways we can do our part to encourage students to engage critically with writing.

In the first posts of the series, I’ll bring in the voices of students from my previous classes to hear how group writing assignments did (and in some cases didn’t) lead them to think more critically about their research. In the next part of the series, I will examine the way teachers of writing, including myself, are turning to traditional methods of quizzes, exams, and timed essay writing to supplement writing projects that occur outside the classroom. Lastly, posts will discuss the way that oratory might once again play a role in contemporary rhetoric and composition curricula.

As this series might suggest, I do believe educators have an obligation to stay informed about the state of their own field and to experiment with even small changes to their pedagogy in order to remain relevant. However, Hsu includes an important reminder for me and for any instructor of writing in the GenAI era. He writes that “none of the potential fixes” for the problem of motivating students to think critically “can turn back the preconditions of American youth” which lead students to take advantage of tools like GenAI. Ultimately, he thinks “professors can reconceive of the classroom, but there is only so much we control.” Because I agree wholeheartedly with this conclusion, I revised the title of the series from “Safeguarding” to “Infusing” Critical Thinking in the GenAI-Era Classroom. There are no safeguards against the likelihood that writing will require less critical thought than it once did, but we can still do our best to make our classrooms centers of interactive, rich, deep thinking.

 

How do students FEEL about my new grading policy for the age of GenAI?

Last week, I had my First Year Composition (FYC) students complete an anonymous survey that asked them how they felt about a new assessment strategy I am piloting this semester. The policy, which I detail in a previous post, is designed to resist using plagiarism checkers while still holding students accountable for thinking critically to produce good writing. 

The survey results, which I discuss below, demonstrate a range of reactions. Many students do admit experiencing anxiety when I introduced the policy, but several indicate that their anxiety subsided throughout the process of completing the first project. Most report that, having gone through the process once, they feel confident about their ability to meet these expectations on future projects. Finally, they provided helpful feedback on how to improve the way I introduce and explain the policy.

The Policy

Shortly before completing the survey, students received their overall grade for Project 1, which combines individual scores on several deliverables: process documents, a first and final draft, and a short post-writing reflection. Rubrics for the first and final drafts included language outlining the policy, which states that any draft will lose significant points if it:

  1. includes content that is clearly false (describes events, concepts, ideas, characters, plots, processes incorrectly; attributes fake quotes to real people, etc.)
  2. demonstrates a level of understanding of some topic that far exceeds what we normally see in work from the average to advanced student
  3. uses an authorial voice that deviates from the student’s prior writing
  4. employs a rigidly formulaic structure akin to the tripartite formula that basic ChatGPT prompting yields
  5. has incredibly generic content/lacks specificity
  6. includes excessively flowery or unnecessary jargon

The Survey Results

The survey solicited feedback on how this policy, which we discussed several times at the beginning of their writing process for Project 1, affected their emotional state and their writing process.

Of 22 students in an honors section of English 101 (HON), over 50% reported that they did feel some anxiety after our initial conversation about the grading policy. See Table 1 below.

Table 1: HON Feelings

However, of the 19 students who responded from my next class, a section of English 101 (EH101), only 32% reported that they felt anxiety after I introduced the policy. See Table 2 below.

Table 2: EH101 Feelings

The HON students also appeared to spend more time actively thinking about the policy as they drafted. See Table 3 below.

Table 3: HON Consideration

For the 19 respondents from the EH101 class, no one reported thinking about the traits constantly, and only 11% indicated that they considered the policy often. Most, 58%, said that they thought about it some. See Table 4 below.

Table 4: EH101 Consideration

Despite early anxiety, the majority of both cohorts report feeling either somewhat or very confident about their ability to avoid the traits in future projects. However, more HON students marked feeling “very” confident, perhaps due to spending more time considering the traits during the writing process. See Tables 5 and 6 below.

Table 5: HON Confidence

Table 6: EH101 Confidence

Feedback on how to improve the policy

Finally, I asked “What feedback do you have for how to make the list of traits clearer or for how to make the grading fairer?” I noticed a few trends out of the 28 students who provided a written response:

The policy is clear, fair, and helpful

15 out of 28 students endorsed the current policy. Some had simple responses like “I think the list is pretty clear, so I think it’s fine the way it is.” One person noted that they “really like the list” even though “it does cause me a bit of anxiety in the back of my mind when I’m writing but I think it’s very useful because I know what to avoid when writing and gives me a different way to think about the way I write.” In other words, the anxiety was not debilitating but generative. Another student noted that they:

appreciate having the expectations of what will be flagged as Gen AI shared with us as students, so we know what to avoid while writing. It is VERY easy to meet these expectations by just doing the assignment yourself and there is no reason to worry about getting flagged UNLESS you actually went in and used Gen AI. There are some moments where I was curious if my work could maybe get falsely flagged because of my writing structure that I learned in school (introduction, body paragraphs 1, 2, 3, conclusion) but after writing the assignment I knew it wouldn’t.

The premise of the policy is fine, but students need examples and clarification for certain features

Of the 13 students who had critical, constructive feedback, 5 noted that it would be helpful to see concrete examples of each of the features. One said it would be helpful for me to “give us examples on sentences or paragraphs that may include these features. Try to let us actually learn and visualize the bad features in an active sentence rather than it being an instruction.”

I completely agree with this suggestion. It makes so much sense, and I’m eager to craft activities and assignments that help students identify these kinds of features so they’ll know what exactly to avoid.

Certain features seem unclear or unfair

7 students expressed confusion over specific features. 4 felt that feature #2 (demonstrates a level of understanding of some topic that far exceeds what we normally see in work from the average to advanced student) was not just unclear but potentially unfair. They noted that it feels unjust to deduct points for student writing that seemed advanced because “some students might actually be able to make these connections and have a deeper understanding of the text without an AI generated response.” However, 3 of these 4 thought that providing examples and discussing what exactly feature 2 aims to avoid would help clarify the rule for them.

2 students worried about the flowery language and jargon rule, noticing that they have a natural impulse to use “diverse vocabulary” and that they “don’t want to be afraid to make those additions in my pieces.” However, one did note that “seeing as I didn’t have points marked off of my project 1 first or final draft, I am confident that I know the difference between the two and am not worried anymore.”

Real worry

Of the students who provided written feedback, one did express a great deal of worry about the system in general:

I just get paranoid that somehow I am going to get flagged for using GenAI when I know I am not using it. I also feel like some of the points are hard to avoid. For example, the deep understanding point. What if it is something you have studied a lot in your free time, or something that intrigues you, so you go above and beyond. I also feel that sometimes my voice in writing varies. I get worried that my voice won’t be the same as last time and I will get points off for that. I also get anxious because we are expected to have strong structure, but what if the structure is too structured and then you get points off.

Moving Forward

Thanks to this feedback, I have new ideas for how to strengthen the way I introduce, explain, and demonstrate the policy. Look out for future posts detailing how this grading strategy evolves.