A misunderstanding of AI could see Australia adopt suffocating and unnecessary regulation

Kablamo co-chief executive Allan Waddell.

The Human Rights Commission is currently developing recommendations for the future of artificial intelligence in Australia. This is much needed, but the way forward from here is problematic despite our best intentions.

Artificial intelligence (AI) is one of those technologies that is deeply misunderstood, and maligned by pretty much everyone at some point, from government regulators to tech CEOs (*cough cough* Elon Musk) to the average person on the street.

While this general misunderstanding of AI is a shame, it usually isn’t a big deal…

Now, though, regulators are in the midst of a general push towards tech regulation which threatens to seriously hobble the promise of AI. Because of this, I feel an obligation to say a few words in its defence.

Let’s start by saying the obvious: AI can be exciting and fun.

Case in point: the AI Song Contest, one of many brand-new events to hit the wires in 2020, where the aim was to create the ultimate Eurovision song with AI.

Team Australia took home the crown by combining audio samples of koalas and kookaburras with a melody and lyrics generated by an AI trained on previous Eurovision hits.

It’s not the first time AI has been used to create art.

The website Bot or Not asks visitors to decide whether poems were written by a celebrated author or a computer. It’s not easy to tell.

And maybe this is one of the reasons people are afraid of artificial intelligence in ways they do not fear other tech, and why regulators seem to reflexively respond to this fear.

Too many regulatory proposals stem from an overblown perception of the risks associated with AI, a perception exacerbated in the public dialogue by statements such as Elon Musk’s famous declaration that AI poses the supreme existential risk to human civilisation.

Let’s put some of these fears to rest.

In order to properly understand the true threat posed by AI, it’s necessary to separate the narrow AI which exists today from the general AI of science fiction and Musk’s nightmares.

It might be true that general artificial intelligence, capable of broad reasoning and possessing superhuman computational power, presents a serious threat to the human species. However, the narrow AI which exists today poses roughly the same existential risk to humans as, say, Turritopsis dohrnii, the immortal jellyfish.

‘But,’ an alarmist could say, ‘don’t you understand that this jellyfish never ever dies? Our pitiful human civilisation can never outlast the sheer longevity of these creatures! And god help us all when they evolve into Cthulhus.’

Fine, fair enough. When considering the infinite span of time, many things are possible, and large problems can grow from humble beginnings. But also, don’t forget that what we’re facing today is a jellyfish, not an eldritch horror.

Besides, who knows how much good could be created by studying these little jellies? Perhaps we can harness some of their longevity for ourselves to live longer, healthier lives.

I think, on balance, the possible scientific and humanitarian benefits of studying an undying jellyfish far outweigh whatever existential threat might one day emerge from jellyfish immortality.

In the same way, the development of narrow AI systems has immense potential for good, and very little risk. The narrow artificial intelligence causing all this existential panic is not alarming in the least, nor is there a significant risk of accidentally creating a superintelligence using current AI development techniques.

Today’s AIs are basically complicated linear algebra problems: networks of numbers (weights) that get nudged up or down depending on the data fed in at the front.

These systems are getting pretty ‘smart’ within narrow domains, and amazing things are possible using this simple model, including text-to-speech, image recognition, visual and linguistic generation, classification, preference prediction, superhuman game-playing, and a host of other applications which are pushing tech and business forward around the world.
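To make that concrete, here is a minimal, purely illustrative sketch in Python (a toy example of my own, not taken from any production system) of such a ‘network of numbers’: a couple of matrix multiplications whose weights get nudged towards the data they are fed.

```python
import numpy as np

# A toy "narrow AI": nothing more than a small network of numbers
# (weights) pushed through a couple of matrix multiplications.
rng = np.random.default_rng(0)

W1 = rng.normal(size=(3, 4))   # the "numbers" that go up or down
W2 = rng.normal(size=(4, 1))

def predict(x):
    """Feed data in at the front and multiply it through the network."""
    hidden = np.tanh(x @ W1)   # matrix multiply, then squash
    return hidden @ W2         # one more matrix multiply

# One training-style nudge: shift the output-layer weights slightly
# in whichever direction reduces the error on the example we fed in.
x = np.array([[0.5, -1.0, 2.0]])
target = np.array([[1.0]])
error = predict(x) - target                 # how wrong were we?
W2 -= 0.01 * (np.tanh(x @ W1).T @ error)    # the numbers go down (or up)
```

That is essentially all there is to it; the systems in production are vastly larger, but they are built from the same arithmetic.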

So while AI can beat us at chess and checkers, and even Go, that’s a long way from overthrowing humanity.

More to the point, these tools don’t generalise well beyond their narrow domains, and the sheer amount of data and computational power needed to build a truly superhuman narrow system like AlphaGo, the Go-playing AI, should indicate that a superhuman system spanning several domains isn’t something that will happen by accident.

During the (in)famous Go match with Lee Sedol in 2016, AlphaGo ran on 48 of Google’s first-generation TPUs (tensor processing units), each of which had a clock speed of 700MHz. For reference, that’s about half the clock speed of a high-end GPU (computer graphics card) in 2020.

So, in order to perform at a superhuman level in just one domain, AlphaGo’s team needed to string together the rough equivalent of 24 high-end gaming PCs of the future.
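Where does that figure come from? A minimal sketch of the arithmetic, assuming the clock speeds quoted above and treating clock speed as a very rough proxy for compute:

```python
# Back-of-the-envelope arithmetic behind the "24 high-end gaming PCs" line.
# Assumes clock speed alone is a fair proxy for throughput, which it isn't
# strictly; this only reproduces the rough comparison made above.
tpus = 48
tpu_clock_mhz = 700     # first-generation Google TPU
gpu_clock_mhz = 1400    # ballpark high-end 2020 GPU, per the "about half" claim

gpu_equivalents = tpus * tpu_clock_mhz / gpu_clock_mhz
print(gpu_equivalents)  # 24.0 -- roughly 24 gaming PCs' worth of compute
```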

My point, however, is that if and when the first superintelligent AI is made, it won’t be made by a small local company developing narrow AI, nor is it likely to randomly come to life.

Considering the true risks posed by the narrow AI systems that exist today, both the spirit and the magnitude of most of the regulation being hinted at are inappropriate.

What’s worse, though, is that even if the situation were as dire as believed, the proposed regulations wouldn’t curtail the AI-pocalypse anyway.

After all, AI research is a global phenomenon. If a country like Australia cuts itself off from global research and development, the spooky AI, should it ever be created, will still reach us.

Why not be a leader?

NOW READ: As AI startups focus on time-to-market, ethical considerations should be the priority

NOW READ: “Tools for human hands to build the future with”: Why AI could never fully replace workers
