Laura Kuenssberg: Should we shut down AI?

Should we worry about artificial intelligence, or embrace the possibilities it brings, asks Laura Kuenssberg.

What do the Pope’s crazy puffa jacket, a student avoiding a parking ticket, a dry government document and Elon Musk warning the robots might come for us have in common?

This is not an April Fool’s joke but a genuine question.

The answer is AI – artificial intelligence – two words we are going to hear a lot about in the coming months.

The picture of the Pope in a Michelin Man-style white coat was everywhere online but was made using AI by a computer user from Chicago.

In Yorkshire, 22-year-old Millie Houlton asked AI chatbot ChatGPT to “please help me write a letter to the council, they gave me a parking ticket” and sent it off. The computer’s version of her appeal successfully got her out of a £60 fine.

Also this week, without much fanfare, the government published draft proposals on how to regulate this emerging technology, while a letter signed by more than 1,000 tech experts including Tesla boss Elon Musk called on the world to press pause on the development of more advanced AI because it poses “profound risks to humanity”.

You are not alone if you don’t understand all the terms being bandied about:

A chatbot is, in its basic form, a computer program that’s meant to simulate a conversation you might have with a human on the internet – like when you type a question to ask for help with a booking. The launch, and explosion, of a much more advanced one, ChatGPT, has got tongues wagging in recent months.

Artificial intelligence, in its most simple form, is technology that allows a computer to think or act in a more human way.

That includes machine learning, when, through experience, computers can learn what to do without being given explicit instructions.

It’s the speed at which the technology is progressing that led those tech entrepreneurs to intervene, with one AI leader even writing in a US magazine this week: “Shut it down.”

Estonian billionaire Jaan Tallinn is one of them. He was one of the brains behind internet communication app Skype but is now one of the leading voices trying to put the brakes on.

I asked him, in an interview for this Sunday’s show, to explain the threat as simply as he could.

“Imagine if you substitute human civilisation with AI civilisation,” he told me. “Civilisation that could potentially run millions of times faster than humans… so like, imagine global warming was sped up a million times.

“One big vector of existential risk is that we are going to lose control over our environment.

“Once we have AIs that we a) cannot stop and b) are smart enough to do things like geoengineering, build their own structures, build their own AIs, then, what’s going to happen to their environment, the environment that we critically need for our survival? It’s up in the air.”

And if governments don’t act? Mr Tallinn thinks it’s possible to “apply the existing technology, regulation, knowledge and regulatory frameworks” to the current generation of AI, but says the “big worry” is letting the technology race ahead without society adapting: “Then we are in a lot of trouble.”

It’s worth noting they are not saying they want to put a stop to the lot, but to pause the high-end work that is training computers to be ever smarter and more like us.

On this week’s show are Home Secretary Suella Braverman and Labour’s shadow levelling-up secretary Lisa Nandy. Watch on BBC One this Sunday from 09:00 BST, with live updates in text and video on the BBC News website.

The pace of change and its potential presents an almighty challenge to governments around the world.

Westminster and technology are not always a happy mix and while politics moves pretty fast these days, compared to developments in Silicon Valley, it’s a snail versus an F1 car.

There are efforts to put up some guard rails in other countries. On Friday Italy banned ChatGPT while the EU is working on an Artificial Intelligence Act. China is bringing in laws and a “registry” for algorithms – the step-by-step instructions used in programming that tell computers what to do.

But the UK government’s set of draft proposals this week proposed no new laws, and no new watchdog or regulator to take it on. Even though the White Paper is an effort to manage one of the biggest technological changes in history, blink and you might have missed it.

The government wants, for now, to give existing regulators like the Health and Safety Executive the responsibility of keeping an eye on what is going on. The argument is that AI will potentially have a role in every aspect of our lives, in endless ways, so to create one new big referee is the wrong approach. One minister told me that “it’s a whole revolution” so “identifying it as one technology is wrong”.

Ministers also want the UK to make the most of its undoubted expertise in the field because AI is big business with huge potential benefits.

The government is reluctant to introduce tight regulation that could strangle innovation. The challenge according to the minister is to be “very, very tough on the bad stuff”, but “harness the seriously beneficial bits” too.

That approach hasn’t persuaded Labour’s shadow digital secretary Lucy Powell, who says the government “hasn’t grappled with the scale of the problem” and we are “running to catch up”.

Are existing regulators really up to the task? The Health and Safety Executive wouldn’t say how many staff it had ready to work on the issue or how many were being trained. “We will work with the government and other regulators as AI develops and explore the challenges and opportunities it brings using our scientific expertise,” they told me.

How on earth can any government strike the right balance? Predictions about the potential of technology are often wildly wrong. One MP familiar with the field reckons: “The tech bros have all watched a bit too much Terminator – how does this technology go from a computer program to removing oxygen from the atmosphere?” The MP believes heavier regulation won’t be required for a few years.

One tech firm has told us there is no need to panic: “There are harms we’re already aware of, like deep fake videos impersonating people or students cheating on tests, but that’s quite a leap to then say we should all be terrified of a sentient machine taking control or killing humanity.”

Another senior MP, who has been studying the UK’s proposals, says the risks are not yet “catastrophic” and it’s better to take a careful and gradual approach to any new laws than “take a running jump, and splash into the unknown”.

But to worry about big changes is part of human nature. Clerics worried the printing press would make monks lazy in the 15th Century. Weavers smashed up machines in the 19th Century fearing they’d lose their livelihood.

Even your author snubbed the offer of a mobile phone in 1997 convinced they’d only be for “show-offs” and would never really catch on.

What is certain is that this generation of politicians, and those who follow, will increasingly have to spend their time grappling with this emerging frontier of technology.
