AI-Proof Learning: Assessing Process, Not Product
Generative AI is already an everyday tool in many of the practices and industries our students will enter. These tools can generate prose, imagery, and code indistinguishable from human-made content, raising the question: how do we assess learning when we cannot tell who (or what) produced the work? The answer is that instructors must shift assessment away from a final paper, a JavaScript program, or an image and toward the process involved in creating those products. When learning outcomes prioritize process, students are more likely to develop the critical thinking skills needed to determine which generative AI outputs are appropriate for the assigned activity. Process-centric outcomes also establish that a final product, such as a paper, is less valuable than the thinking and skills required to create it. Taking this a step further, grading models that allow multiple revisions and micro-feedback from instructors, such as specifications grading, underscore the primacy of process in developing competency.
In this session, I’ll share examples of process-centric learning outcomes that diminish the value of raw generative AI output and emphasize a learner’s ability to apply the tool. I will demonstrate how to convert existing outcomes that prioritize products into ones that assess a student’s applied skills and thinking. I will also briefly introduce specifications grading, an alternative grading model I have used for three semesters that emphasizes iteration and improvement in response to feedback. Participants will be invited to discuss and develop strategies for facilitating human development and growth when generative AI is a standard tool in our fields.