AI policies are everywhere at the moment. They're also really fluid as instructors grapple with the changes and influence of AI in the classroom. I'm including an AI policy that I've developed for my AI text design course at UTM in case it is useful to folks. The course is called "Designing Text and Making Meaning with AI Tools."
The course actively encourages AI experimentation for text design, so I needed a flexible policy that could allow AI while also emphasizing the importance of learners' own intellectual development and the epistemological depth of their designs.
With this policy, I've opted for something more extensive to highlight connections to emerging AI research. That way, the material doesn't seem "made up." The policy models the intellectual work that I want learners to do in our course. The list of suggested AI uses that ends the policy is designed to give learners an idea of how AI might usefully apply to their work without letting the tool take over.
There were three dimensions that I prioritized in developing this policy:
Having something that could transcend the syllabus and be enacted in the classroom to support learning, not to penalize learners when convenient. The policy sets a class ethos that can be put into practice.
Connecting to emergent literature (literally, via hyperlinks) both to model the work learners should do and to ground the policy in scholarship that has been validated.
Offering pathways for learners to enact the policy and to use AI well in our classroom community.
The policy will inevitably evolve, especially as SoTL AI research advances. But, for now, this is what we're working with. I'm happy to chat more if folks have questions.
---
This course subscribes to Eaton’s (2023) postplagiarism framework [no relation, but great name!]. The idea is that we need to account for the reality that advanced technologies (like AI) cannot be decoupled from day-to-day life. This does not mean that integrity and ethics do not exist; they are incredibly important dimensions of how we work and design meaning in the modern world. They just don’t necessarily fit traditional academic integrity policies. One central tenet of Eaton’s framework is that humans can relinquish some control of, but not responsibility for, their work. This is incredibly important because there are things that cannot be verified or traced with AI. Bearman and Ajjawi (2023) describe this as the “black box,” where AI requires that we learn and work within situations that are opaque, partial, and ambiguous.
We’re going to engage with these ideas throughout the term and put them into practice. We’re going to, as Mollick (2024) would argue [please don’t purchase, it’s just a citation], bring AI to the table but keep the human in the loop. We’re going to use AI tools, test their outputs, and determine how they affect writing processes—ultimately putting Graham’s (2023) writing process re-evaluation to the test. We’re also going to hone our skills in evaluative judgment (Bearman et al., 2024) to determine the limits of our own competence, what we can verify, and what cannot be known with AI outputs.
But none of this means that we’re going to simply use AI language models and call it a day, relinquishing the important conceptual work to the bot. Folks who do this will obtain substandard results, and the grade will reflect those results. Being intentional about communication design and meaning making means doing considerable conceptual work. It means being deliberate about how we stitch together digital tools and writing skillsets to develop a fantastic final product. Folks who do this work will achieve the results that they seek and refine their AI skillsets.
Here are some ways that AI could be used responsibly in our course:
To establish structure for the report and the literature review. Do not offload the thinking: synthesis and suggestion-making depend on deep personal knowledge and the ability to apply that knowledge to specific contexts, audiences, and communicative demands.
To help develop the multimodal components. Just remember that the output must adhere to central principles of multimodal design.
To triangulate your knowledge of a concept encountered in a reading by connecting your own reading and understanding, your grasp of core course concepts and contexts, and the AI output. Do not simply generate a summary of a PDF and assume you have the content covered. Used well, AI should add time to your reading, not diminish it.
To test ideas and gauge other possible perspectives. Be aware of the point where your thinking and the AI output merge, and make sure that you remain in control of the thinking and of your process.
To generate feedback on a project. This feedback can be useful and instantaneous. Remember, though, that it’s only one avenue for receiving feedback. The instructor, peers, and the RGASC are all highly recommended and can offer another dimension to revising your paper.