AI: The double-edged sword?

Should we put AI back in the box before it is too late?

18 months ago, I started Teacher Prompts, a weekly newsletter designed for teachers and leaders to grapple with the ever-changing advancements of AI, best practices, policy and, most importantly, where needed, a dose of healthy scepticism. It is that healthy scepticism that I want to focus on.

Here are three quotations about technology, education and learning that are pertinent to this discussion.

"A child who does not learn morse code will find himself at a serious disadvantage in communication."

"A typewriter will be an indispensable tool of the modern office - no young person should leave school untrained in its use."

"A short-wave radio will soon be in every home and every classroom, making it an essential part of a child's education."

These quotes were devised by Jared Cooney Horvath in a recent LinkedIn post, yet it is not beyond imagination that at some point a high-ranking official at the War Office espoused the importance of morse code, or an executive of a large business proclaimed that proficiency with the typewriter would be a skill of the 20th century that should be taught in school, or would fundamentally change schooling. These people would have campaigned for curriculum change, and, despite the deeply technological society we have become, they would have been proven wrong. Email and instant messaging removed the need for morse code. The personal computer, touch screens and text-to-speech improvements removed the need to learn how to navigate a typewriter, and the short-wave radio has been replaced by on-demand podcasts streamed over Wi-Fi. The advocates of each technological advancement of the day (with perhaps the exception of the printing press?) believed that being the tool of the moment would cement it as a timeless tool for educating, or as a purpose for becoming educated.

In 2014, when the coalition government’s national curriculum reforms took effect, ICT was changed to computing, and computer science was to be taught at primary school for the first time. This was a time when the software engineering industry was growing rapidly. Now, just ten years later, before any pupil could have progressed through the entire programme of study from Year 1 through to Year 11 or beyond and entered the world of work, we see the software industry shrinking rapidly: from 125,906 roles in January 2020 to 82,629 roles in January 2025. So much for the ‘jobs of the future’ argument… I can now, with no coding experience or understanding, ask Claude to create a playable game in seconds, like the classic Snake that took up so much of my teenage years.

The relationship between education and technology has always been fraught. Technologists have always claimed that the latest tech will fundamentally change schooling, yet the early adoption of 1:1 iPads failed, Sweden is going back to paper, and there is evidence that drawing or writing by hand synchronises the areas of the brain associated with memory and encoding new information, whereas typing leaves those areas desynchronised. Despite all this, Bill Gates still believes that AI will replace teachers within ten years.

None of this is to say that we should abandon technology altogether, but as we open up more of the education space to generative AI-led technology, we would do well to remember that AI does not fundamentally change human cognitive architecture, and its rise will not change the way that humans come to learn things: we still need to be motivated; we still need to pay attention to the right things; we still need to be aware of the limits of working memory; we still need to encode and organise disciplinary and procedural knowledge into our long-term memories; and we still need to actively recall this knowledge or risk ‘forgetting’ it.

So let’s not pretend that the rise of AI will cause a learning revolution any time soon. In fact, around 47% of university students who use Claude do so for a ‘direct’ purpose, asking it to solve something or create something. Such an approach involves little cognitive engagement, which reduces the chance of learning.

Whether people like it or not, AI is here to stay, and pupils and teachers are using it. The American author, educator, media theorist and cultural critic Neil Postman argued in 1998 that society should know the following five ideas when it comes to technological change. I have added my own thoughts on each, treating AI as the technological change in question and considering how its consequences may relate to education.

1)    All technology is a trade-off

Every technological enhancement is an exchange. However, the extent of the advantages and disadvantages of these exchanges is never clear until after the technology is part of our lives. Smartphones have given us more computing power in our pockets than was required to send man to the moon. The trade-off, according to Maryanne Wolf, has been an infinite set of distractions for children’s attention. Certainly not something that was ever marketed to the population.

AI may well be able to write lesson plans, but this robs teachers of thinking deeply through the lesson while considering the needs of the class, an active ingredient of strong teaching. No centralised, ‘plug and play’ resource will ever deliver great outcomes for students. A teacher using AI to fill in a pro forma certainly helps with the administrative workload, and if schools see planning as an administrative activity, I urge that teacher to continue using it to reclaim their Sundays. However, I remain sceptical that an AI will ever know a group of 30 pupils well enough to remove the need for a teacher to do the intellectual preparation of tailoring lessons to what their pupils need.

As with the university students who used Claude, pupils will be able to ask for direct outputs and circumvent the thinking process entirely. Technologists claiming that we can teach young children to use it responsibly are, quite honestly, laughable, and only demonstrate the chasm between what teachers know about pupils and what technologists merely think they know about them. Unleashing open generative AI (by which I mean tools with chatbot-like interfaces) as something we expect pupils to use potentially leads us down a path that will be much harder to come back from.

2)    These trade-offs are never distributed evenly among the population

Postman asks us to consider who benefits from this new technology. Which groups will be favoured and which will be harmed?

Let’s be clear: AI companies will seek to benefit financially from this new technology by selling to schools that are being asked to do more with a lot less. I have certainly seen marketing departments sprinkle ‘AI this and AI that’ across their product literature, and the busy leader will be none the wiser as to whether the product uses AI effectively or not. While not an example from education, here is such a case. This is why I welcome the new initiative by the Chartered College of Teaching to support this area with their Edtech Evidence Board.

A leader’s work is more likely to involve administrative tasks that AI can perform, and it is both unlikely and undesirable to foresee a system where our youngest pupils are taught by an AI tutor for most of the day. So will AI benefit the classroom teacher as much as it benefits a school leader? A common use case for teachers is writing end-of-year reports, but there are certainly ethical issues around this use that simply do not exist for a leader’s workload. In my experience, it is the role of SLT to proofread the reports before they are sent to parents. Are we going to end up in a situation where writing these reports with AI is discouraged, but proofreading them with AI is actively encouraged?

Much of the talk around AI as a tutor revolves around how it will close the attainment gap. No one is considering whether the reverse might actually be true. Will pupils in disadvantaged areas benefit as much as their more advantaged counterparts? Without equal access to the technology, a home culture of educational success and the space to work uninterrupted, will the attainment gap widen because of AI technology? How can government and institutions ensure that everyone benefits from the use of AI? We saw during the pandemic who really benefited from online lessons… and the impact of that decision is still being felt by schools today.

Recruitment is another area where the technology could be unevenly distributed. I can foresee a future where big MATs invest in their own local LLMs, plugged into their servers and containing all their data: a true teaching and administrative assistant for every member of staff. The local-authority, single-form-entry school cannot compete with that. Will the promise of such assistance incentivise teachers to apply in areas where these big investments can be made? What calibre of teacher will go there versus that single-form-entry primary?

3)    Every technology has a philosophy which is given expression in how it makes the user behave

Generative AI technologies focus on the product, not the process, but it is the process that ensures the hard thinking that enables learning is taking place. If pupils are exposed to the instantaneous, do we inadvertently promote the immediacy of a final product over accuracy, process and the joy of a job well done through hard work? Amongst all the AI hype, it is crucial that educators remember that AI is not changing the cognitive architecture of human learning. If people are not encoding information from working memory to long-term memory, then it does not matter if AI can produce work at graduate level; the human has learnt nothing. If those behind this technology were concerned by such things, it would have been programmed from its inception to support the learning process, not just to provide the outputs. Quite simply, the use of these tools actively encourages people to circumvent hard thinking. They were designed that way.

This also has implications for the arts. We now have the ability to generate images of anything in any artistic style we choose. Does the power to generate images of whatever we want, in the style of the Renaissance masters and beyond one’s own artistic capability, take away from the awe, wonder and appreciation of seeing Michelangelo’s work at the Sistine Chapel, or any other artist and their work? Art is such a bold expression of the human condition that to think the painstaking effort and skill of an artist might not be appreciated simply because ‘AI could do that’ is a concern.

Should we be concerned by the anthropomorphic language that pupils will use to describe their interactions with AI tools, language that suggests they genuinely believe they have a strong bond and friendship with a graphical user interface?

We are at a point where relative experts are using AI effectively because they have the requisite prior knowledge to make sense of the outputs, and so can catch the errors introduced by hallucinations. This is a good thing, but when the next generation can produce nearly anything they want, and are in the habit of doing so, where does that leave them if they never felt they had to learn that foundational knowledge? Ask any teacher and they will tell you how much harder it is to teach units of learning where pupils lack the requisite prior knowledge. Now imagine that on a far bigger scale than we see today!

4)    Technological change is not additive; it is ecological.

After the iPhone was introduced in 2007, the pre-iPhone world ceased to exist. The technological landscape had changed, and so had society’s relationship with technology, delivering a new world where handheld computers were expected. Getting back to the ‘before iPhone’ world is impossible, and it seems to me that we are already at that point with generative AI. Postman says it clearly: ‘The consequences of technological change are always vast, often unpredictable and largely irreversible.’ We see that with the smartphone now. In a recent interview with the BBC, Jony Ive, the former design chief at Apple, spoke of his conflicted feelings about the unintended long-term consequences of the technology he helped create. We know that more pupils have phones and tablets at a younger age, and those who work in schools know the issues this causes. Are we really prepared to take another quick leap of faith with pupil use of AI in schools? Once it is in, it will be exceptionally difficult, if not impossible, to get out.

5)    Technology tends to become mythic.

Mythic in this sense refers to the common tendency to think that technological creations are so ingrained in our everyday lives that it is as if they were part of nature, no more remarkable than a tree. A concrete example is the place value system coupled with Arabic numerals. We take these technologies for granted to the degree that we no longer discuss whether there could be something ‘better’. It just is. We can see this in young children when it comes to touch screens: they experience one once, and then every screen they meet is coated in fingerprints. That a non-touch screen can exist is madness to them. Likewise, how many adults really consider the invention of the alphabet to encode and decode the written word? Postman argues that when a technology reaches this mythic stage it becomes accepted ‘as is’ and is difficult to modify, both in terms of how users expect it to be used and in the underlying technology that controls it. We see this today when people get upset because the latest software update changes part of the user interface, and with it the habits they have ingrained. Naturally, AI technology is no different. If we and young people expect from the outset to be able to use AI in a certain way and to perform certain tasks, then any challenge to the status quo can become akin to questioning the very laws of nature, and history is full of examples of people who tried such things and paid the price.

Conclusion:

AI may well change the nature of the workforce. However, as Neil Postman reminds us, this does not mean that it will do so for the better, or for all of us. If anything, right now AI will hopefully make the working lives of teachers and leaders easier as it synthesises new documentation from the DfE and Ofsted, writes the impact report in preparation for a school’s latest visit from the RISE team or school improvement partner, or crunches some data for the assessment lead to present to their headteacher. This is a good thing, but I cannot shake the feeling that all this does is accentuate the state the profession finds itself in. The use of AI, I would hypothesise, is helping teachers and leaders realise that much of what is asked of them is, in the grand scheme of things, incredibly unimportant to running successful schools. If it is worth doing, then they would, I hope, want to do it themselves.

The above is certainly not a rant pretending that AI does not exist. As a profession and a society, we were too late to realise the consequences of the smartphone coupled with social media. All I hope the above will do is make people see past the glossy demo, think that little bit harder about AI, and have those conversations with stakeholders.

I have cautioned against the folly of predicting the future, but humour me for a moment. Ethan Mollick, a professor at the Wharton School, has said that even he is not clear what ‘AI skills’ are or what we should be teaching. Therefore, perhaps the greatest thing we can instil in the pupils who attend our schools is this: do not forget what it means to be human. As Jacob Bronowski wrote in The Ascent of Man:

Man is unique not because he does science, and is unique not because he does art, but because science and art equally are expressions of the marvellous plasticity of mind.
We would do well not to outsource our human nature so early in the infancy of children, and of this emerging technology, until we have had a serious conversation about the positives and negatives, and whether it is worth the trade-offs.