Frequently Asked Questions regarding Assessment Design in the AI-Age

Please note: The use of AI within teaching and learning is a rapidly developing area. This information reflects our current thinking in March 2024, but please check back regularly as we will be updating the information on this page as things develop.

What is GenAI/ChatGPT – where has it come from?

Check out the introduction video on this page on the iPark and this primer document from Jisc.

ChatGPT is just one kind of Generative Artificial Intelligence, the name for applications which use models trained on large data sets to generate new content. There is a lot of information about the history of GenAI, what it is and what it can do on the Jisc website.

Which tools are there and how do I use them?

GenAI tools can do many different things, such as producing text, images and data sets, and writing computer code.

ChatGPT was the first widely available tool, and a version of that tool is available to all staff and students in Microsoft Bing. It is important to be aware of the University’s guidance around the limitations and ethical considerations when using AI tools in your everyday work and our Principles for embracing AI in teaching, learning and assessment.

Here are some suggested training resources and the QAA have produced some useful advice. 

Where can University staff find information about our approach?

Information about the University’s approach to AI can be found in the AI section of iPark.

What do I need to do in terms of my assessments?

Some kinds of assessment are vulnerable to students using AI to create their submissions, for example essays and online exams. It is a good idea to review your assessment tasks and assignment briefs as set out below: 

  • decide to either design the use of AI out by developing more authentic assessment activities or embrace the use of AI in some form in your assessment tasks. For help on how to do this, please see our iPark AI resources. If you have suggestions and ideas for new forms of assessment which take account of AI and which you would like to share with us, please drop us a line on stlt@hud.ac.uk 
  • think about adding scaffolded assessments to give students ways of submitting something early for feedback, so that you can assess evidence of the process rather than just the final product.
  • make it clear to students to what extent AI use is acceptable in each assessment by using the labels we have created for assignment briefs. Please make sure that ALL assessments carry the appropriate label in the format set out in the guidance document. It is really important that students can see where use of AI is permitted in assignments and where it is prohibited.

Also make sure you direct your students to complete the Academic Integrity module on Brightspace – they will be told to check the AI labelling, so do make sure you have included it in your assignment briefs. 

What support or advice have we offered to students about this?

  • Added this page on the student hub to support students – this links to a set of principles, the updated University regulations, the reference builder tools and guidance for users of assistive technology. 
  • Added a link in Brightspace modules in the assessment area to the student hub page and to the regulations. 
  • Added information to the Huddersfield Essentials induction module. 
  • Added information and extra quiz questions on the Academic Integrity module which all students take.  

Please make sure you direct your students to this as it is essential for all students to complete the Academic Integrity module. 

  • Created some flying start resources for staff to use for discussions with students on AI. 
  • Created the text descriptors for use in assignment briefs to make it really clear to students about the amount of AI use that is permitted in each assessment. 

The free version of ChatGPT has some limitations. Is the University going to pay for access?

We get access to the corporate/education version of Microsoft Bing Copilot through the University Microsoft account. This is based on GPT-4 and also provides DALL-E image generation tools.

Although this corporate version states that data is protected, this has not been verified by the University, and as such we continue to say that confidential or personal data should not be entered into Generative AI platforms.

How does the AI tool work in Turnitin and how do I interpret the results?

The AI tool from Turnitin is still in development, and as with the similarity tool, we should always apply academic judgement when deciding whether a student has breached our academic integrity expectations. Unlike the similarity report, the AI report does not offer concrete evidence, so it should not be used in isolation; it is a prompt for further investigation, and additional evidence will be required to progress to an Academic Misconduct case. Please see our detailed guidance on Turnitin and the AI tool.

What do I do if I get a high Turnitin AI score? 

Please see our guidance on how to interpret the Turnitin AI Score, and also information in the question below on how to address these concerns in an academic misconduct meeting. 

Why do I see — instead of an AI score on Turnitin?

The most common cause is that the student’s work exceeds the limit of 15,000 words, as counted by Turnitin and reported at the bottom of the Similarity report. The work also needs a minimum of 300 words in a long-form writing style (i.e. paragraphs rather than bullet points) for an AI report to be generated.

Sadly we don’t have any good options for work above this 15,000 word limit other than asking students to split larger pieces of work into smaller chunks before submitting. We recognise that this is not an ideal solution, and we continue to ask Turnitin to remove this restriction.

Can students see the Turnitin AI report?

No. Turnitin have chosen not to make the AI report available to students, either through Draft Coach or when you choose to allow students to see reports through Brightspace.

Turnitin haven’t shared the thinking behind this decision, but from our initial experience with the AI detection it seems likely that seeing the report would prompt a lot of questions, confusion and worry from students.

One argument is that the scenarios in which a student could inadvertently hand in something AI generated are far fewer than those for legitimate referencing mistakes, so students have less need for such a tool, though this may change as AI becomes more embedded in mainstream software.

What should I ask students in an academic integrity meeting about the possible use of AI tools in their work?

As in any normal Academic Misconduct meeting, you will need to ask questions about the content and the sources used (if applicable), and ask the student to provide previous drafts of their work to demonstrate that the work is their own. Take into account their usual writing style if you have previous assessments from them, and be mindful that Turnitin can produce false positives in the report, so some additional cause for concern will usually be needed before reaching this stage. The viva part of the meeting is just as important as the Turnitin report in ascertaining what may have happened and whether the work is authentic.

We strongly encourage students to save previous drafts of their work so that they can easily defend their position if you ask them to meet with you to settle concerns about the possible misuse of AI in completing their work. If you have allowed some use of AI tools in the assessment (this should be noted in the module descriptor), students will need to show that they have not gone beyond what was permitted, but be realistic: there is very little a student can offer as evidence that they have not used these tools.

We recognise that in the early part of their studies students may make mistakes but we expect them to learn from these mistakes and not to repeat them. It is in their best interest to be honest in the meeting so you can understand what happened and take into consideration all the circumstances surrounding the breach.  If the circumstances suggest that they intended to gain an unfair advantage, this should be taken more seriously and may require a more severe penalty. 

Have our University Regulations changed in light of the developments in GenAI?

The Academic Misconduct Regulations have been updated in section 10.1 to take account of developments in GenAI. In addition, the Human and Computer Proof Reading Policy has been updated to give parity between the support permitted from software or AI and a human proof reader.

How do I stay up to date on the University’s position on this as it is rapidly developing?

The staff information on iPark will be reviewed regularly and updated. 

The student support information page on the Student Hub will also be updated. 

Any important/urgent developments will be communicated via your Associate Dean of Teaching and Learning or your Director of Teaching and Learning, or directly from the Pro Vice-Chancellor for Teaching and Learning, Professor Jane Owen-Lynch.

If you have queries about developments in this space, please get in touch with the Strategic Teaching and Learning Team stlt@hud.ac.uk  

What is the advice to students on using tools like Grammarly?

Writing software, such as Grammarly, can help with spelling and grammar.  However, many of these tools have started to incorporate Artificial Intelligence (AI) features, which may not always be clearly labelled as AI.   
 
Writing software can be used by students to help present and format their own original work, but it should not be used to rewrite whole sections. It is not acceptable to use the software to generate content that is submitted for assessment, unless this is allowed in the assessment brief and correctly referenced. The work students submit must be their own, but it is acceptable to make some use of assistive technology to help prepare it.

The University Human and Computer proof-reading policy was updated in January 2024 to clarify the boundaries on using proofing tools, and to bring the limitations of the assistance which can be provided by software in line with the restrictions on human proof readers.

For more information and how this specifically applies to students with Personal Learning Support Plans, see the guidance for users of assistive technology.

What is Copilot all about?

Copilot is the branding that Microsoft are using for their Generative AI products. At the University, we have Bing Copilot, which is a chatbot built into the Bing search engine.

You will notice that when you log in with your University account, Bing claims that data you enter into it is “protected” and not going to be disclosed anywhere; this claim has not been verified by the University so we still suggest avoiding putting University or confidential data into the platform.

Copilot is also being built into other Microsoft products like the Office Suite, where it can provide assistance while you work. This is a paid-for addition to Office which the University does not intend to provide at the moment for cost reasons, but some students may have access on their personal devices.

What is Gemini?

Gemini is Google’s Generative AI platform. It was originally called Bard but was rebranded in early 2024.

If you have further questions about AI at the University of Huddersfield, please speak to your Associate Dean or Director of Teaching and Learning or email stlt@hud.ac.uk