This CIRTL Short Guide provides a brief overview of AI in the context of Higher Education and some specific suggestions for designing assessments that are less vulnerable to AI interference. It closes with examples of ways that staff at UCC have designed assessments which embrace and/or mitigate AI.
What is AI?
“AI” in the context of education usually refers to generative AI, such as chatbots (e.g. ChatGPT), which use machine learning to create text that could conceivably have been written by a human, to write code, or to solve equations. Users type a question or prompt into the chat and the AI generates a response which can then be further refined (in real time) through conversation with the user.
AI chatbots like ChatGPT are a type of large language model, ‘trained’ on vast quantities of text from the internet. Responses are generated by recognising patterns and then predicting the most likely response, similarly to how predictive text on a mobile device suggests the next word in a sentence.
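The predictive-text analogy can be made concrete with a toy sketch. The code below is not how ChatGPT actually works internally (real LLMs use neural networks with billions of parameters trained on enormous corpora), but it illustrates the same underlying idea: count which word most often follows each word in some training text, then generate new text by repeatedly predicting the most likely next word.

```python
from collections import Counter, defaultdict

# A tiny toy corpus; real models are trained on vast quantities of internet text.
corpus = (
    "the cat sat on the mat . the cat chased the mouse . "
    "the mouse ran under the mat ."
).split()

# Count how often each word follows each other word (a 'bigram' model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the next word seen most often after `word`, like predictive text."""
    return following[word].most_common(1)[0][0]

# Generate a short 'sentence' by repeatedly predicting the next word.
word, generated = "the", ["the"]
for _ in range(4):
    word = predict_next(word)
    generated.append(word)
print(" ".join(generated))
```

Even this crude model produces plausible-looking word sequences, which hints at why a vastly scaled-up version can produce fluent prose, and also why its output reflects whatever patterns (and biases) were in its training text.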
If you鈥檇 like to read more on generative AI in education, we recommend starting with either this or
Limits of AI
Watching coherent, grammatical prose on practically any topic appear on the screen as if by magic can make AI seem like an unstoppable force. But, like all technology, it has its limits. Most importantly, AI is only as good/accurate as its training materials (e.g. the free version of ChatGPT’s ‘knowledge’ stops in 2021 while paid versions have access to the internet and more up-to-date materials) and the prompts and clarifications entered by the user. It doesn’t ‘remember’ conversations as such, so entering the same prompt at different times can result in dramatically different responses.
Additionally, AI can be incredibly biased (depending on the material it has been trained on) and has been known to “hallucinate” and produce entirely erroneous material, including citations to texts that do not exist, so anything it produces should be taken with a grain of salt and, ideally, checked for accuracy and coherence, which, of course, requires a degree of expertise and knowledge of the topic.
Lastly, AI really struggles with less standard requests requiring creativity and forward thinking. As it relies on predictive modelling to construct its responses, anything with a very standardised format is much ‘easier’ for generative AI than something unique or unexpected.
AI and Assessment
Conveniently, we can use these limitations to help design assessments which are less vulnerable to AI-powered academic dishonesty. After all, as early efforts demonstrated, attempts to ban AI entirely will almost always fail. New tools are constantly appearing while old ones are evolving; it’s nearly impossible to accurately identify their outputs; and as generative AI is integrated into tools such as Microsoft Word or Google Docs, it’s becoming increasingly difficult to determine where the line between AI- and person-created content lies. Lastly, AI will almost certainly be part of students’ lives once they leave UCC, so it’s worth thinking seriously about how to give them the tools to best navigate the world they’ll find upon graduation.
So What Can We Do?
Some commentators describe three broad approaches to AI in assessment:
- Avoid (e.g. vivas or in-person exams)
- Outrun (e.g. AI detection software)
- Embrace (and adapt) (e.g. redesign assessments)
In most contexts, the third approach is the most practical and sustainable, so some suggestions on how to (re)design assessments are provided below.
Types of Assessments
One response to the increased availability and sophistication of AI tools is to create more assessments which explicitly demonstrate the student’s intellectual ownership of their work, whether through in-person assessments such as presentations, vivas, or invigilated exams, or through assessments which document process, such as those below. However, it is not always appropriate (or advisable) to assess student learning in these ways, so it’s worth considering alternatives.
When writing an early draft of this short guide, I asked ChatGPT to suggest some assessments which would “limit its usefulness” and found its response really interesting. Not only did it suggest many of the same assessments and approaches we’d recommend, it also included a brief explanation of why it would potentially struggle with each. Some of the most broadly applicable ones are below (you’ll notice that ChatGPT used US English for its responses), and the full list is available if you’d like to see it.
- Emphasize higher-order thinking: Focus on assessing higher-order thinking skills such as analysis, synthesis, evaluation, and application rather than simple recall or factual knowledge. Design questions that require students to demonstrate their ability to think critically, solve problems, and make connections between different concepts. These types of questions are less likely to have direct answers that can be easily generated by ChatGPT.
- Use scenario-based or case-based assessments: Present students with real-life scenarios or case studies related to the course material. Ask them to analyze the situation, identify relevant concepts, and apply their knowledge to propose solutions or make informed decisions. By using contextualized assessments, you challenge students to demonstrate their understanding in a practical context that may be less predictable for ChatGPT.
- Incorporate collaborative elements: Encourage collaboration and group work in your assessments. Assign tasks that require students to work together, discuss ideas, and synthesize information from multiple perspectives. This approach not only promotes active engagement but also reduces the reliance on ChatGPT as students are required to contribute their own insights and thoughts.
- Include authentic assessments: Authentic assessments mirror real-world applications of knowledge and skills. Assign projects, presentations, or research papers that require students to delve deep into the subject matter, apply critical thinking, and demonstrate their ability to synthesize information. Authentic assessments are inherently difficult for an AI model like ChatGPT to replicate as they often require creativity, originality, and context-specific understanding.
- Provide opportunities for reflection and self-assessment: Incorporate reflective elements in your assessments, such as asking students to evaluate their own work or explain their reasoning process. This encourages students to think metacognitively about their learning and helps them develop a deeper understanding of the subject matter beyond what ChatGPT can provide.
Beyond Assessment
While assessment is the focus of this short guide, it’s important to remember that assessment is just one part of Teaching and Learning. Therefore, it might be worth considering some of the ways that you could use AI in your teaching:
- quickly generate MCQs with answers and distractors
- develop a draft lesson plan
- produce a piece of writing for students to analyse
- play the role of an interviewee in role plays
- solve a mathematical problem
- write code
- provide an initial draft to be edited
Of course, if you do use ChatGPT or similar tools in your teaching, make sure to model best practices for your students by explicitly citing your use of AI and discussing how/why you used it.
AI and Academic Integrity
Whatever assessment decisions you make, please seriously consider having at least one conversation with your students about your goals for the assessment (ideally linking it back to the module and programme learning outcomes as well as discipline-specific skills and knowledge) and about where and how AI fits. If you decide to forbid the use of AI, explain why to your students; if you decide to allow limited use, or, indeed, design an assignment actively requiring students to engage with AI, explain your reasoning behind those decisions, too. And, of course, give students guidance on how to document and cite their use of AI (with specific examples of how to do so), as UCC’s updated plagiarism policy explicitly includes AI/ChatGPT. If you use ChatGPT in a professional capacity, model this use for your students, explaining why and how you use and acknowledge it.
The more transparency there is around the rationale behind assessment decisions and the lines around acceptable or forbidden AI use, the better students will be able to meet these expectations. Because, at the end of the day, AI is a tool and, as such, is neither inherently good nor inherently bad: what matters is how it is used. But to use it effectively, or to follow advice to avoid it completely, students need to understand what AI is, why it is or is not appropriate, and, perhaps most importantly, to see the ways their own experience and expertise shape its use and effectiveness.
Examples from UCC
UCC staff have designed, or redesigned, their assessments in the wake of these developments, with some embracing AI and others seeking to limit its applicability to their assignments. As noted above, there is no one-size-fits-all approach to designing assessment in the face of AI advances, but the examples below are a great reminder that it can be done, and done well!
Gillian Barrett (Management and Marketing) and Ciara Fitzgerald (Business Information Systems)
In our final-year undergraduate entrepreneurship module, we conducted an experiential assessment with 130 students. Students were asked to choose a ‘local’ small or medium-sized enterprise (SME), interview a manager to assess and analyse its current business model(s), and to propose a complementary business model for future growth opportunities.
Students were then required to use ChatGPT and to prompt it on the current and future business models of their chosen SME. The role of the student was threefold. First, to understand the importance of ‘asking the right question’ to improve their learning (Abdelghani et al., 2022). Second, to help the student to evaluate and to critically analyse the ChatGPT output (Mollick and Mollick, 2022). Finally, to encourage ‘responsible use’ of ChatGPT (Cacciamani, Collins and Inderbir, 2023).
The students were initially surprised at the inclusion of ChatGPT (given its novelty and disruptive nature); however, this assessment element helped students to leverage their learning and, overall, to affirm confidence in their knowledge.
Joel Walmsley (Philosophy)
In spring 2023, I redesigned my essay assignments so that students were required to use ChatGPT, and to document their process, as part of the work. Doing so still meets the learning objectives of the module, and still requires that students engage with the texts and ideas that we discussed in class. But it also provides an introduction to the technology itself, by enabling students to learn, and demonstrate, best practice in using it.
In my module on Philosophy of AI, I usually assign one essay question on the Turing Test, and another on Descartes’s contention (from 1637!) that it is “inconceivable” that a machine could use language in the way that humans do. But this year, instead of the more familiar process of analysing the literature and examining the objections, students were required either to conduct a Turing Test with ChatGPT, or else to get it to answer in the style of Descartes and develop a dialogue accordingly. In both cases, students had to document their process with screenshots and commentary, drawing on both the texts and ideas we’d discussed and their own ideas about the philosophical content and the prompts they chose.
The resulting essays were a delight to read. Students produced creative, insightful, and highly original essays that demonstrated exactly the kind of engagement and understanding I was hoping for. Furthermore, one student told me that the thought of “cheating” hadn’t even crossed their mind, because the project was so much fun. I was pleasantly reminded of a recent tweet from Andrew Ng (former head of Google Brain).
Shane Crowley (Food Science)
The below is extracted from a longer piece.
Students are asked to write because writing is considered useful. Muddled thoughts can be clarified once an attempt is made to write them down. Extracting key information on a topic and combining it into an overview is often far more effective than merely reading about the topic.
Although writing is a process, student work is often corrected as a static artefact. The final version is assessed and deemed a reflection of that underlying process. In an era of Large Language Models, there is an increasing probability that this final version is the only version, and that it was drafted by an undetectable AI.
Perhaps, then, the process of writing, which is what is often valued, needs to be re-emphasised. An analogue solution is simply to require students to perform their writing in person. However, this is not always a fair, accessible or practical approach. Serious essays, literature reviews and group projects often involve many hours of work, dead ends and revisions. It is not feasible to migrate such projects to the classroom, and doing so is also not aligned with an increasingly digitalised and distributed workplace.
To be implemented effectively, versioning has to be explained to students and to teachers. Students have to approach their work in a disciplined, considered manner. Teachers need to account for how the work was produced and not merely how it appears in its final form. The structure and transparency that this can introduce may create more clear expectations for student writing and improve how it is assessed.
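As a purely hypothetical illustration of what such versioning might look like in practice, drafts could be tracked with tools students already use (tracked changes in Word, version history in Google Docs) or, for the more technically inclined, a version-control system such as git; the file names and commit messages below are invented for the example:

```shell
# Hypothetical workflow: each meaningful revision of an essay is committed,
# so the writing process is visible alongside the final artefact.
mkdir essay && cd essay
git init --quiet
git config user.name "Student"
git config user.email "student@example.com"

echo "Rough outline and first thoughts..." > essay.md
git add essay.md
git commit --quiet -m "First draft: outline and initial argument"

echo "Restructured argument after peer feedback..." > essay.md
git add essay.md
git commit --quiet -m "Second draft: restructured after feedback"

# The history shows when and how the text evolved:
git log --oneline
```

Submitting the revision history alongside the final essay would let a marker see the trajectory of the work, not just its end state.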
Additional Resources
- (Jisc, 2023)
- (National Academic Integrity Network, NAIN)
- (an excellent short Canvas course put together by staff and students at the 深夜亚洲福利久久 of Sydney which explains how generative AI works, the different tools, ways to use it in an educational setting, and how to reference/credit AI)
- (Tertiary Education Quality and Standards Agency, Australia)
- (open-access eBook edited by Chrissi Nerantzi, Sandra Abegglen, Marianna Karatsiori and Antonio Martínez-Arboleda)
- (UCC Skills Centre)
- (Center for Teaching, Vanderbilt 深夜亚洲福利久久)
- (AI)2ed Project (UCC staff and students paired to evaluate assessment and experiment with ChatGPT - Toolkit for the Ethical Use of AI forthcoming in Sept./Oct. 2023)
- (a Zotero library curated by Lee Skallerup Bessette, Center for New Designs in Learning and Scholarship at Georgetown 深夜亚洲福利久久)
- (Montclair State 深夜亚洲福利久久)
- UCC's Academic Integrity for Examination and Assessment Policy
Resources for UCC Staff
(note: you may need to be logged into your UCC account to access some of these resources)
- UCC Plagiarism Policy
- Fostering Academic Integrity in Learning and Teaching (UCC Digital Badge with information/resources on artificial intelligence and academic integrity)
- Derek Bridge, UCC (Computer Science) contextualising ChatGPT in February 2023 (must be logged into Panopto with your UCC credentials to view)
- CIRTL seminar series on assessment design (recordings and resources from Spring, 2023 sessions)
Questions? Suggestions?
AI is a constantly-evolving area, so we'll be updating this Short Guide often and will also be creating more AI Short Guides as our expertise expands. If you have suggestions for future AI Short Guides or resources, please let us know! You can also use the same form to ask any general AI questions you may have and we'll integrate the answers into this (and future) Short Guides. (Please note that this is an anonymous survey so we can only respond to you directly if you include your contact details with your responses!)
Or email Dr Sarah Thelen (author of this Short Guide) directly.
Assessment in the Age of Generative AI (CIRTL Short Guide #9) © 2023 by Sarah Thelen is licensed under