How do students FEEL about my new grading policy for the age of GenAI?

Last week, I had my First Year Composition (FYC) students complete an anonymous survey that asked them how they felt about a new assessment strategy I am piloting this semester. The policy, which I detail in a previous post, is designed to resist using plagiarism checkers while still holding students accountable for thinking critically to produce good writing. 

The survey results, which I discuss below, demonstrate a range of reactions. Many students admitted experiencing anxiety when I introduced the policy, but several indicated that their anxiety subsided as they completed the first project. Most report that, having gone through the process once, they feel confident about their ability to meet these expectations on future projects. Finally, they provided helpful feedback on how to improve the way I introduce and explain the policy.

The Policy

Shortly before completing the survey, students received their overall grade for Project 1, which combines individual scores on several deliverables: process documents, a first and final draft, and a short, post-writing reflection. Rubrics for the first and final drafts included language outlining the policy, which states that a draft will lose significant points if it:

  1. includes content that is clearly false (describes events, concepts, ideas, characters, plots, processes incorrectly; attributes fake quotes to real people, etc.)
  2. demonstrates a level of understanding of some topic that far exceeds what we normally see in work from the average to advanced student
  3. uses an authorial voice that deviates from the student’s prior writing
  4. employs a rigidly formulaic structure akin to the tripartite formula that basic ChatGPT prompting yields
  5. has incredibly generic content/lacks specificity
  6. includes excessively flowery or unnecessary jargon

The Survey Results

The survey solicited feedback on how this policy, which we discussed several times early in the writing process for Project 1, affected their emotional state and their writing process.

Of 22 students in an honors section of English 101 (HON), over 50% reported that they did feel some anxiety after our initial conversation about the grading policy. See Table 1 below.

Table 1: HON Feelings

However, of the 19 students who responded from my next class, a section of English 101 (EH101), only 32% reported that they felt anxiety after I introduced the policy. See Table 2 below.

Table 2: EH101 Feelings

The HON students also appeared to spend more time actively thinking about the policy as they drafted. See Table 3 below.

Table 3: HON Consideration

For the 19 respondents from the EH101 class, no one reported thinking about the traits constantly, and only 11% indicated that they considered the policy often. Most, 58%, said that they thought about it some. See Table 4 below.

Table 4: EH101 Consideration

Despite early anxiety, the majority of both cohorts reported feeling either somewhat or very confident about their ability to avoid the traits in future projects. However, more HON students marked feeling “very” confident, perhaps because they spent more time considering the traits during the writing process. See Tables 5 and 6 below.

Table 5: HON Confidence

Table 6: EH101 Confidence

Feedback on how to improve the policy

Finally, I asked, “What feedback do you have for how to make the list of traits clearer or for how to make the grading fairer?” I noticed a few trends across the 28 written responses:

The policy is clear, fair, and helpful

15 out of 28 students endorsed the current policy. Some had simple responses like “I think the list is pretty clear, so I think it’s fine the way it is.” One person noted that they “really like the list” even though “it does cause me a bit of anxiety in the back of my mind when I’m writing but I think it’s very useful because I know what to avoid when writing and gives me a different way to think about the way I write.” In other words, the anxiety was not debilitating but generative. Another student noted that they:

appreciate having the expectations of what will be flagged as Gen AI shared with us as students, so we know what to avoid while writing. It is VERY easy to meet these expectations by just doing the assignment yourself and there is no reason to worry about getting flagged UNLESS you actually went in and used Gen AI. There are some moments where I was curious if my work could maybe get falsely flagged because of my writing structure that I learned in school (introduction, body paragraphs 1, 2, 3, conclusion) but after writing the assignment I knew it wouldn’t.

The premise of the policy is fine, but students need examples and clarification for certain features

Of the 13 students who had critical, constructive feedback, 5 noted that it would be helpful to see concrete examples of each of the features. One said it would be helpful for me to “give us examples on sentences or paragraphs that may include these features. Try to let us actually learn and visualize the bad features in an active sentence rather than it being an instruction.”

I completely agree with this suggestion. It makes so much sense, and I’m eager to craft activities and assignments that help students identify these kinds of features so they’ll know what exactly to avoid.

Certain features seem unclear or unfair

7 students expressed confusion over specific features. 4 felt that feature #2 (demonstrates a level of understanding of some topic that far exceeds what we normally see in work from the average to advanced student) was not just unclear but potentially unfair. They noted that it feels unjust to deduct points for student writing that seemed advanced because “some students might actually be able to make these connections and have a deeper understanding of the text without an AI generated response.” However, 3 of these 4 thought that providing examples and discussing what exactly feature 2 aims to avoid would help clarify the rule for them.

2 students worried about the flowery language and jargon rule, noting that they have a natural impulse to use “diverse vocabulary” and that they “don’t want to be afraid to make those additions in my pieces.” However, one did note that “seeing as I didn’t have points marked off of my project 1 first or final draft, I am confident that I know the difference between the two and am not worried anymore.”

Real worry

Of the students who provided written feedback, one did express a great deal of worry about the system in general:

I just get paranoid that somehow I am going to get flagged for using GenAI when I know I am not using it. I also feel like some of the points are hard to avoid. For example, the deep understanding point. What if it is something you have studied a lot in your free time, or something that intrigues you, so you go above and beyond. I also feel that sometimes my voice in writing varies. I get worried that my voice won’t be the same as last time and I will get points off for that. I also get anxious because we are expected to have strong structure, but what if the structure is too structured and then you get points off.

Moving Forward

Thanks to this feedback, I have new ideas for how to strengthen the way I introduce, explain, and demonstrate the policy. Look out for future posts detailing how this grading strategy evolves.