By Geoffrey Stanford, Headmaster of Newcastle Royal Grammar School
Much has been written on the subject of Artificial Intelligence and the effect it will have on so many areas of our lives. Some remain cautious of its growing presence, but I remember speaking to an Oxford professor who recalled that, when he sat his engineering finals in 1973, people were debating whether calculators should be allowed. By 1974, everybody had them.
The uptake of AI over recent months has shown that it is already far more pervasive, with the potential for substantially greater impact. For better or worse, AI is here to stay; we need to learn to live with it and seize the opportunities it offers.
There are many ways AI and data technologies can be used in school, including providing individualised instruction or feedback to students to support intervention from the teacher. More advanced systems can automatically adapt the level of challenge to keep a learner motivated and on task, while students can use AI programs to help generate and expand ideas. Indeed, AI will soon be a part of most productivity and creativity tools, blending with human output. At the RGS, we therefore aim to harness the power of AI to enhance education, supporting both students and teachers, and to create inclusive learning environments.
Our approach is to teach pupils to use AI with integrity in their research and completion of tasks. In doing so, we aim to help them use it effectively and make good decisions, embracing AI’s opportunities so that, working together, they become creators and problem solvers, not just content generators.
While recognising the educational benefits of AI, we also have to be mindful of its limitations, which derive both from the data AI draws upon and from the ethical overlay that humans have added. We need to use AI and data technologies in a manner that is fair and does not lead to discrimination. This includes ensuring that these technologies do not reinforce existing biases or create new ones; all users of AI, whether students, teachers or administrators, are expected to use the technology in a responsible and ethical manner. We will therefore need to stay continually abreast of any potential biases in these technologies, take steps to mitigate them and retain a critical lens.
Clearly, the submission of AI-generated answers could be a form of plagiarism, and the capacity of AI to ‘steal the struggle’ from students needs to be guarded against.
While it may be possible to identify the use of AI with ‘plagiarism detectors’ or even professional academic judgment, it seems highly likely that this concern will influence how we assess students in the future. At present, exam boards only allow AI tools under certain conditions, where the student is able to demonstrate that the final submission is the product of their own independent thought and effort. Handwritten exams are, for now, a simple solution, but it remains to be seen how long they will survive the march of time: for many students, typing is likely to become their typical way of working, and we will need to adapt to more varied forms of assessment.
Our job as educators is to help students understand that the material generated by AI programs may be inaccurate, incomplete, or otherwise problematic, so they should check their ideas and answers against other, reputable source materials. Large language models tend to fabricate facts and citations, code generation models can produce faulty output, and image generation models can produce biased or offensive images. Truly responsible use of AI should therefore encourage active engagement and independent thinking, thereby developing skills for life.