Principles for using AI for University staff
Please note: The use of AI within teaching and learning is a rapidly developing area. This information reflects our current thinking in August 2024, but please check back regularly as we will be updating the information on this page as things develop.
These principles are about using AI as a tool to help with everyday working life and accompany a more specific set of principles for using AI in teaching and learning. There is a similar set of principles for students.
1. Just another tool
Generative artificial intelligence (AI) tools are an exciting development in large language model technology that help source information or produce content in a range of formats, such as text, images, computer code, video, music and more. The tools differ from traditional internet search engines in that, rather than producing a list of separate search results, they amalgamate results into content or give a direct response to a question. To do this, many tools operate conversationally: a user writes a text ‘prompt’ and receives an answer or multimedia output, which can then be further refined. AI tools have the potential to improve efficiency in many everyday tasks.
The responses are the result of the AI tools drawing on textual, visual or audio fragments of information from a huge internet-derived database of websites, open access publications, newspapers, blogs, images and other freely available sources. On the surface this appears to create new content, yet it is not original, only the product of this huge database of the collective work of others. AI tools cannot think, reason or critically evaluate; they function by statistical prediction, producing the next most likely word for a particular context or sequence. However, they can ‘learn’ from the input a user gives them, which can improve the accuracy and acceptability of their output. Such AI tools are developing rapidly and still in their infancy; they are, in effect, still being tested by their creators, by companies and by society as users.
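The idea of ‘providing the next most likely word’ can be illustrated with a deliberately tiny sketch. The corpus and function below are invented for illustration only; a real large language model learns from billions of documents and uses neural networks rather than simple word counts, but the underlying principle of predicting what usually comes next is the same:

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the vast training data a real model uses.
corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows each word -- a radically simplified stand-in
# for how a model learns patterns from its training data.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word -- no reasoning involved."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat": chosen purely because it is most frequent
```

The point of the sketch is that the prediction is driven entirely by frequency in the source material, which is also why biases in that material flow straight through into the output.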
2. Check for inaccuracy and bias
Text-based generative AI tools have a tendency to “hallucinate”, that is, to produce output which seems plausible and authoritative but is factually incorrect. This can occur both in the body of the work and in the references, which are frequently fabricated but feature established authors and journal titles.
Large Language Models are trained on a vast amount of published material, which is used to generate the output. The output is only as good as the training data, which means it can reflect stereotypes and biases (e.g. political, gender, cultural or racial) present in the training material.
The onus is on you to verify the output before you use it.
3. Be aware of data protection
Generative AI models generally don’t have safeguards to prevent you from breaching GDPR. The prompts you enter into generative AI tools can become part of the model, so it is dangerous to use them with sensitive or confidential information: that information could end up as part of an answer given to somebody else.
For example, using generative AI tools to process research data can be problematic, especially if the data are not anonymised or are otherwise sensitive, because the information could be released into the public domain. Similarly, using AI to process student or applicant data, or any other potentially sensitive information, could result in data breaches and should be avoided.
As of July 2024, the University recommends Bing CoPilot Chat Enterprise, which has safeguards to protect the data you provide in prompts.
4. Be aware of copyright and intellectual property
The copyright position with Generative AI tools is unclear. Some experts claim that all of the text produced is potentially infringing on the copyright of the owners of the works used to train the model – and by extension potentially plagiarised.
AI tools which generate images are particularly controversial because they combine elements of other images without crediting the creator or gaining their permission.
Remember that taught students own the intellectual property in their work, so whilst getting ChatGPT to do your marking might sound appealing, including students’ work in your prompts would be legally problematic.
5. Model good practice
If you use AI tools to produce material which you will share with students or third parties, please try to model the ethical practice we expect. In particular, please acknowledge and reference your use of the tools so that students get used to seeing this kind of citation and third parties are aware of your use of the tools.
If you have used AI tools to produce something for publication, please remember to check the individual publication’s policies before submitting.
6. Become AI literate
Artificial intelligence has a growing influence on society, business and the working world, and it will be important for us all to develop skills in using such technology effectively, efficiently and ethically. Alongside information and digital literacy, AI literacy will become an important graduate attribute for both the immediate and longer-term future, and could help you in your work.
We are curating resources from LinkedIn Learning and others on iPark which you can use to increase your knowledge and understanding of the opportunities and challenges which generative AI brings.