Saturday 21st December 2024

Turbo-charged or terminated: is your job safe from AI?

Joe Danbury explores how AI could either turbo-charge or terminate careers in the future.


“As an AI developed by OpenAI, my purpose is to assist, inform, and inspire through the power of language and knowledge. In five years’ time, I envision myself as a more intuitive and adaptable presence, seamlessly integrating into the fabric of daily life to foster creativity, understanding, and global connection.” – GPT-4

For thousands of years, new information came into existence only through the relentless toil of human effort. Those who were lucky enough to be literate shaped humanity’s progress through the written word.

Up until recently, we humans retained this monopoly. Yet like a slowly rising tide, computer-generated text has been getting better and better. For the most part, the change has been so gradual that it was easy to ignore.


Ask the average person what a transformer model is and they’ll probably start thinking of Decepticons. It was only with the explosive popularity of ChatGPT that the public started waking up. Now it seems impossible to ignore the possibilities – both positive and negative – of advanced, easily accessible AI.

Has the rising tide of AI development become a tsunami? It really depends on who you are and what you do.

This article will try to clear the smoke surrounding the long-term impacts of AI by focusing on five different careers – two of which stand to benefit from AI, and three of which may no longer exist in ten years’ time.

Terminated

Let’s start with the bad news: AI is coming for some jobs.

Customer service operators

If you’ve had a parcel delivered by firms like Evri or Yodel, you may be raising your eyebrows. Many current chatbots are useless, deployed only by cheapskate companies who don’t care about providing a good service.

Yet the fundamental difference between the chatbots of old and chatbots like GPT-4 is all to do with how they work. Modern models are able to ‘look back’ through a conversation to understand the overall context of your request.

The reason they threaten this job is twofold: most customer service issues are very simple to solve, and it’s much cheaper to use chatbots than humans.

Older chatbots follow rule-based logic, which works fine for simple queries but splutters and dies when it encounters anything beyond very narrow bounds. LLMs such as GPT-4 can reason about the answers they give. It’s the difference between reading from a script and actually engaging with the problem at hand.

They can also recognise when they can’t answer a query. This is a game-changing combination – a sufficiently advanced chatbot can recognise the limits of its design and escalate the query to a human operator.

Not only can one LLM manage the vast majority of incoming queries – if something goes beyond its limitations, it can hand over control to a human.
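
For the technically curious, here’s a rough sketch of how that hand-off might look using OpenAI’s Python library. The delivery firm, the prompt and the ‘ESCALATE’ flag are all my own illustrative assumptions – no real company’s setup is this simple.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in your environment

ESCALATE = "ESCALATE"

SYSTEM_PROMPT = (
    "You are a customer service assistant for a parcel delivery firm. "
    "Answer simple questions about tracking, redelivery and refunds. "
    f"If you cannot resolve the query yourself, reply with the single word {ESCALATE}."
)

def handle_query(query: str) -> str:
    """Answer a customer query, or flag it for a human when the model can't."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": query},
        ],
    )
    answer = response.choices[0].message.content.strip()
    if answer == ESCALATE:
        return "Let me pass you to one of my human colleagues."  # hand-off point
    return answer

print(handle_query("My parcel says 'delivered' but it never arrived."))
```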

This means we can expect to see customer service departments made up of one LLM and a few senior employees who step in to handle complex queries. Customer service operators? Terminated.

Coders

Language models such as GPT-4 work in the first place thanks to a clever bit of computing called Natural Language Processing (NLP).

The genius of NLP is that it allows humans to communicate with computers using ordinary, everyday language. However, it’s important to remember that LLMs are still computers – and a computer’s mother tongue is code.

An LLM converts your prompt into numbers, analyses them, and outputs its response as ordinary language. A really cool emergent feature of this is that LLMs can ‘translate’ ordinary language into a wide variety of programming languages.
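
To give a flavour of what that ‘translation’ looks like, here’s the sort of exchange I mean – the request is plain English, and the function is representative of what an LLM typically hands back (I’ve written this example myself; it isn’t a real GPT-4 transcript).

```python
# The plain-English request you might type into an LLM:
#   "Write me a Python function that adds 20% VAT to a price and
#    rounds the result to the nearest penny."

# The kind of code it hands back:
def add_vat(net_price: float, rate: float = 0.20) -> float:
    """Return the gross price after adding VAT, rounded to the nearest penny."""
    return round(net_price * (1 + rate), 2)

print(add_vat(49.99))  # 59.99
```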

This has the knock-on effect that LLMs can write code for a wide variety of tasks with minimal human oversight. Great news for tech companies with tight deadlines; terrible news for junior software engineers.

Why? Simple. More AI means fewer humans. Just like in the example above, a handful of senior software engineers can use advanced AI systems to replace the people doing the grunt work.

Moreover, AI systems don’t get tired, don’t demand higher pay, and don’t unionise! Capitalism, baby, let’s go!  

Junior software engineers? Terminated.

Prompt Engineers

Getting a little meta here, but you may have seen some positions at AI companies offering eye-watering salaries. Prompt engineers at Anthropic were being offered starting salaries of around $280k a year as of April 2024.

They are responsible for designing effective prompts. What does this mean? Imagine a vending machine with gold bars inside. Everyone wants the gold.

There is a way to get it, but doing so is incredibly complicated.

This means that those who know how to use the vending machine are rewarded richly, whereas the average person loses out.

Before you start googling ‘prompt engineer jobs near me’, consider this. What if the whole future development of our gold vending machine was focused on making it easy for everyone to share in the riches it offers?

Sooner or later the rewards will be shared by all – and at that point there is no value in having esoteric knowledge about how to make the machine work.

This is the end goal of the big AI companies. Sooner or later, you won’t need to be a prompt engineer to use an LLM – and at that point, AI companies won’t need professional prompt engineers anymore.

Prompt engineers? Terminated – ironically, through their own work.

Turbo-charged

Teachers

I think teachers are some of the most undervalued members of society. They face near-constant stress, work exceptionally long hours, and are woefully underpaid. Yet because they are dedicated to improving the lives of children, they push through it.

What if we could build advanced technology that would make teachers’ lives easier? What if this technology also improved the standard of teaching so that every child had their own, tailor-made personal tutor?

Advances in Natural Language Processing (NLP) mean that a chatbot can generate text to fit all levels of reading comprehension. Whether it’s a five-year-old or a fifty-year-old, complex LLMs can explain difficult concepts in easy-to-understand ways.

Moreover, they can generate personalised explanations – meaning that if you need to explain compound interest to a kid who loves Star Wars, you can do so in seconds.
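
As a rough illustration (my own, not taken from any real classroom tool), the whole trick boils down to building the prompt around the child rather than the textbook:

```python
def tutor_prompt(topic: str, age: int, interest: str) -> str:
    """Build a prompt that pitches an explanation at a particular child."""
    return (
        f"Explain {topic} to a {age}-year-old who loves {interest}. "
        "Use a short analogy drawn from that interest and keep it under 100 words."
    )

# Paste the result into GPT-4 (or send it via the API):
print(tutor_prompt("compound interest", 9, "Star Wars"))
```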

This means that teachers can generate personalised learning materials for a wide range of students in a fraction of the time.

This is already happening: Khan Academy has created ‘Khanmigo’, an AI chatbot built specifically for education under strict ethical guidelines.

Software that doesn’t judge children (or adults, for that matter) can be extremely effective for shy kids who may feel self-conscious about not knowing something.

I use GPT-4 all the time to teach me new things, but I’m careful not to take everything it says at face value.

A key concern with using AI in schools is that children may start consuming hallucinated information as fact – effectively eroding our knowledge base through AI error. It is important to note, therefore, that efforts to introduce AI into education are being carefully monitored.

These AI systems aren’t going to replace teachers, but will hopefully become an indispensable tool that allows every child to flourish and learn in a way that suits them.

The issues facing teaching won’t be fixed entirely by mass adoption of AI systems – but developing these systems properly may have domino effects that help with other problems.

Teaching? Turbo-charged.

Writers – brainstorming, information gathering, editing and proofreading

This may be the elephant in the room. Writers are worried that they will be replaced by AI systems, and for a while there didn’t seem to be any evidence against this happening.

However, it’s becoming clear that this isn’t the case – largely because you can’t train AI models on AI-generated content. Do that for long enough and output quality collapses, as information degrades and the writing regresses towards an unoriginal, mediocre average – a problem known as model collapse.

This means that there will be a premium on human-generated content as time goes on – which should, in theory, be great news for journalists and creatives, provided that AI companies actually pay to use that content (see the FT’s deal with OpenAI).

We can see a future where humans are still writing the articles – but extra revenue is generated by selling that content on as training data.

But how else can AI systems benefit writers beyond merely serving as an additional way to make money?

Writers’ block can be frustrating at the best of times. Yet sometimes, all you need is a little nudge to get the gears turning. Whilst I’m not staring down the barrel of a tight deadline right now, I do want to get a move on and finish this draft.

Because of this, I enlisted GPT-4 to suggest a brief list of jobs that could benefit from widespread adoption of LLMs. I didn’t use it to write the text for me; rather, I used it to help me decide what to write about.

I can see that it may initially seem like cheating to use AI in a creative task. You’re interested in my work (I hope) – not a machine’s. Recognising this is key to using AI well in creative work.

The depth and nuance that humans bring to a discussion can’t be replicated by AI systems. Yet AI systems can generate resources in fractions of the time that humans spend doing the same task.

So writers and AI can work in a symbiotic relationship: AI systems provide useful prompts, humans turn those prompts into high-quality content, and that content in turn helps make AI systems better.

Photo Credits: Pexels

Joe Danbury

Joe is a Philosophy graduate, a cat owner, and a guitarist, but his true passion is AI. Since ChatGPT’s release, he has explored generative AI, eager to share its potential benefits. While its ultimate impact is uncertain, Joe remains committed to helping others harness AI’s positive possibilities.
