Below is a mini-framework that I developed early in the genAI craze. Rather than getting caught up in the furor of the initial excitement, I completed a scoping review of prior research clusters (multimodality, multiliteracies, design and design pedagogies, writing transfer, metacognition, and others) to learn what prior work on emerging pedagogical technologies says about how students learn with these tools. The goal was to use existing knowledge to start wrapping my arms around the pedagogical and research possibilities of AI. Looking across disciplinary expertise allowed me to trace patterns in the literature from different perspectives. There is a much longer document detailing the results of this scoping review, but this "two-page synthesis" captures the essentials.
The framework has held up better than I anticipated as I have brought AI into my teaching and completed AI/writing research. Part 3 has not proven as useful as I initially thought. It can't be ignored; it's just not as prominent as anticipated. Otherwise, I still find this a useful way to think about pedagogical and assessment design for writing projects. It seems to align with emerging research on AI tools, though I'd have to dig deeper to verify that for sure. Regardless, it's comforting to know that the foundations established by prior research still hold up in AI-influenced contexts.
I include the framework synthesis here in case it helps others as they think through AI in the classroom.
------
Four Pillars
1. Understanding how writers think with the technology and adapt their textual design according to their writing-related knowledge.
2. Examining both moments where the technology enhances learners’ writing AND where writers show little improvement and/or metacognition.
3. Evaluating the whole ecosystem of texts (in all their modes, variations, and conceptions) and the ecosystem of modes used to design those texts.
4. Capturing moments in the design process, not only products. To properly understand how learners use a technology to support writing, it is necessary to see how they design.
A Framework in 3 Parts
Part 1: Metacognition
Metacognitive evaluation framework (adapted from Gorzelsky et al., 2017):
| Metacognitive Subcomponent | Definition |
| --- | --- |
| Person (knowledge of cognition) | Knowledge of oneself as a writer, including one’s (un)successful use of genres, conventions, and rhetorical and writing process strategies |
| Task (knowledge of cognition) | Understanding of the affordances and constraints posed by a project and its circumstances |
| Strategy (knowledge of cognition), including how learners evaluate their thinking around a project (regulation of cognition) | Knowledge of the range of approaches one might effectively use to complete a project, and the change in approaches and adaptations throughout the design and re-design process |
| Planning (regulation of cognition), including how choices are made (regulation of cognition) | Identifying a problem, analyzing it, and choosing a strategy to address it; describing how and why certain choices are made over others |
| Constructive metacognition | Developing a series of process documents that capture the design from the author’s perspective and that target specific evaluative components an instructor desires (aligned with course learning outcomes) |
· The first two, person and task, remain relatively unchanged from Gorzelsky et al. (2017). They also align well with work on social literacies from Kress & van Leeuwen (2001), Street et al. (2015), and others. Strategy and planning also remain closely aligned for evaluation purposes, but, because they subsume other subcomponents, it might be useful to evaluate them separately. Constructive metacognition now stands alone because of the shift to process documents and the different roles each of these documents can play.
· Developing process documents is key to capturing the design process and targeting specific course evaluation pieces.
Part 2: Transfer
To really understand how ideas transfer, it is important to capture moments in the process. Collecting documents, interview data, and similar artifacts helps show how the design happens. There is a tendency in writing (and in academia, for that matter) to evaluate products and outcomes; the system is set up to do so. But capturing moments within the design offers another dimension of information that can explain why certain choices are made and how, specifically, learners respond to digital tools.
For transfer purposes, it might also be useful to switch up modes in the process. AI-mediated writing is not just about the AI-generated text, like an essay or report. Transfer between drafts or transfer of skills between modes (e.g., how learners compare the modes, how they adapt one mode to fit into another) might also reveal intriguing moments of transfer. For example, having learners design an infographic, image, or table (or several) to fit into their paper might ask them to use a new digital mode. A process document that captures their work with that mode, how the new design fits into the main design, and why they made the decisions they did might offer tremendous insight into their ability to piece together multiple components of a design; it would capture how they adapt various pieces of their writing toolkits for the task. It would be intriguing to know whether a learner’s approach in one mode informs their approach in another.
Part 3: Getting Granular—Specific Language Competencies
Because metacognition and transfer are so closely related, it is important to combine the wider metacognitive markers (e.g., writer dispositions, prior experiences) with a look at specific features (e.g., genre, source integration). To understand how something transfers, it is important to understand what transfers, or what learners are expected to transfer.
In an AI context, it is important that learners recognize features that are critical to their final design but are not produced well by the digital tool. Getting granular looks different for each project and context, though I have a sample rubric that could be adapted.
Putting It All Together
The framework is meant to be seen as a series of interconnected parts. Transfer relies on metacognition and skill adaptation. Each of these factors into the iterative design process that AI-mediated writing projects require. The parts do not stand individually, and no one part is more important than another. Rather, they feed into each other, and projects evaluating metacognition in AI contexts might return to each part multiple times to inform the next.