Generative AI for students

How to evaluate generative AI outputs

One of the key skills you will develop at university is the ability to critically evaluate everything you use as evidence in your assignments.

It is well documented that generative AI output can be inaccurate (such inaccuracies are sometimes called 'hallucinations'). You may get results that look accurate (and even include citations!), but how do you know they are credible? When reviewing content generated by AI, you should evaluate it critically, just as you would results from a Google or academic database search.

Here are two tools that you can employ to help you think critically about your outputs:

  • the CRAAP test is a useful framework for reviewing the output (results) of your prompts.
  • the EVERY test is a process tool for improving the prompt you put in, and therefore your output.

Click the letters below to explore how to approach the CRAAP test for a Generative AI tool's output.


Further reading:
Referencing Generative AI and why students should take the CRAAP test: Advice from the Library


Click the letters below to explore the EVERY framework for using Generative AI tools effectively.

Go further