One of the key skills you will develop at university is the ability to critically evaluate everything you use as evidence in your assignments.
It is well documented that generative AI output can be inaccurate (these errors are sometimes called 'hallucinations'). Results may look accurate, and may even include citations, but how do you know they are credible? When reviewing AI-generated content, evaluate it critically, just as you would results from a Google search or an academic database.
Here are two tools you can use to think critically about your outputs:
- the CRAAP test is a framework for reviewing the output (results) of your prompts.
- the EVERY test is a process tool for improving the prompt you put in, and therefore your output.