INSIGHT: It’s a changing world – and we need to master the tools

By MALCOLM STRACHAN

THIS column often looks at the politics of the day – but this week, I am looking at something else that caught my eye.

There were a couple of stories in last week’s Tribune about the use of artificial intelligence by students – specifically the piece of software called ChatGPT.

Now, first things first, what is that? It is what is called a chatbot, launched by OpenAI and available to the public since the end of 2022.

Calling it a chatbot is probably a bit too simplistic in a way – it can be used to generate language for all kinds of things.

You can ask it questions and get fairly articulate answers – generated by a large language model trained on vast amounts of text from the internet, which predicts what a plausible answer would look like and phrases it in such a way as to seem, well, almost human.

It can create articles, social media posts and code for websites; it can write an email for you; and… here’s the rub when it comes to students… it can write essays.

Some of the professors at the University of The Bahamas have a problem with that.

After all, part of the professors’ job is to assess the quality of the work of their students. How do they do that if the student plugs the question into ChatGPT and lets it do all the work?

It is easy to react to ChatGPT and other AI software with a cry of “ban it!” – but let’s delve a little deeper.

If you are a parent, you may well have had a conversation with one of your children’s teachers at some point about the use of calculators in the classroom. I did so myself a couple of years back – a tut tut from the teacher to say that my child had been using a calculator to solve problems in the classroom instead of working them out themselves.

The point, of course, is that the children need to learn the method – and are quite able to use calculators later on once they have done so. But to begin with, they need to learn how to do it, not just get the right answer. “Show your work”, the note from the teacher used to say when the question was set.

ChatGPT steps into that landscape – but while it is a tool that allows people to create essays, we have not yet, it seems, adequately developed the norms for when it is and is not acceptable to use it.

One of the problems is being able to recognise when ChatGPT has been used – and not just recognising it, but proving it.

You might get a feeling when reading something – particularly if, as a professor, you are familiar with a particular student’s writing style – that an individual piece is suspect. It might be a bit too generic. It might lack deep insight. It might flow better, even, than some students’ work. But being able to say for certain – and sanction a student for using something they are not supposed to use – is a different matter.

Some steps have been taken in that regard – during quizzes, students are now sometimes monitored by webcam, or their access to other browsers is restricted. That in itself can be awkward – intrusive to some extent, or limiting the ability to look up facts by acceptable methods.

ChatGPT can also make some calamitous mistakes (students can too, as any professor will tell you). In a court case last year, two lawyers were fined $5,000 after using ChatGPT in their research – it duly cited legal opinions that did not actually exist, complete with fake quotes and citations. In another incident, an AI image generator produced pictures of black people in Nazi uniforms – with no care for historical accuracy.

The problem here is the same as when you Google something. Out there on the internet, I hate to tell you, is all kinds of rubbish. As sensible people, we can navigate that and work out what looks likely to be accurate and what looks likely to be the work of a crazy person living in a cellar somewhere while wearing a tinfoil hat. But AI programmes gather all of that up, and sometimes their filters are not as strong – especially in niche areas where there might be a shortage of consensus.

In another case, a journalist in Wyoming was fired after being caught using AI to write stories – complete with fake quotes put into the governor’s mouth, no less.

But let’s go back to what ChatGPT is. A tool. You can use a tool the right way – or the wrong way.

Ask the students themselves – and The Tribune did – and they say that AI software improves their performance. One student talked of how it could explain correct answers, helping her understanding.

One thing is for certain – AI is not going away. Just like we are not throwing calculators out of the classroom.

The question is how to use AI the right way – and still be able to evaluate the student in front of you and not the software they used.

I do not envy the professors of UB this challenge – though they will not be working alone on this. The whole of academia is trying to figure out this puzzle.

There are also concerns about plagiarism. A lot of these AI models have been trained on vast catalogues of writing, including literature, social media posts and so on. Some writers have even sued over it, including famous names such as John Grisham and George RR Martin, saying they had never given permission for their work to be used as a training ground for such software.

In the academic world, plagiarism is a major no-no, so if you use AI to help write something and it happens to lift a piece of phrasing from somewhere else, then you could soon run into trouble. And rightly so. In such situations, it is an individual’s words that are being weighed, and if that individual didn’t write them, well, they can face the consequences.

In all of this talk of limiting the ways such software can be used, though, there is another thing that should be discussed – how should we be using it? It is going to be here today, tomorrow and into the future – and it will keep developing. We need to learn how to recognise it, when to use it and when not to, and to understand both the dangers and the benefits it will bring.

Professors may not like it, but students are using the software – if it was no good, they wouldn’t be doing so.

The problem in our classrooms is just the start, of course – the world of work needs to adapt to this too. The bigger picture could see it replacing jobs, even entire fields of employment. Will that enable us to do more, be more? Or will it see people left out of work and a changing society?

Food for thought, and not just in the lecture halls.