New government report reveals fears that Australia will fall behind on AI


Australian Industry and Science Minister Ed Husic. Source: AAP Image/Lukas Coch

As artificial intelligence (AI) continues to dominate our news feeds, the Albanese government has launched two papers exploring the ethical future of AI in Australia.

The first is a Safe and Responsible AI in Australia discussion paper that examines existing regulatory and government responses to AI, including the gaps that currently exist. The consultation period opened today and will run for eight weeks.

The second is the National Science and Technology Council’s paper titled Rapid Response Report: Generative AI, which assesses the potential risks and opportunities of AI.

“With the rapid acceleration of the development of AI applications, such as ChatGPT, and indications of increased capability, it is time for Australia to consider whether further action is required to manage potential risks while continuing to foster uptake,” the Rapid Response Report reads.

These papers come in the wake of the Albanese government dedicating $102.5 million to support small businesses integrating quantum and AI technologies into their operations in the most recent budget.

It also committed $41 million to the responsible development of AI through the National AI Centre.

The papers also arrive in the same week that world-leading AI researchers, CEOs and engineers signed a statement on AI risk, which reads: ‘Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war’.

The statement was signed by big names in the space, including OpenAI CEO Sam Altman and the COO of Google DeepMind, Lila Ibrahim.

“Using AI safely and responsibly is a balancing act the whole world is grappling with at the moment. The upside is massive, whether it’s fighting superbugs with new AI-developed antibiotics or preventing online fraud,” Minister for Industry and Science, Ed Husic, said in a statement.

“But as I have been saying for many years, there need to be appropriate safeguards to ensure the safe and responsible use of AI. We’ve made a good start, thanks to the Government’s $41 million investment in AI for industry and our strong advocacy in this space.

“Today is about what we do next to build trust and public confidence in these critical technologies.”

The government is concerned that Australia is lagging behind in AI

One of the potential problem areas that the Rapid Response Report highlights is the concentration of generative AI resources, which it describes as lying with “a small number of large multinational and primarily US-based technology companies.”

It names Google and Microsoft specifically and says that this “poses potential risks to Australia” given how resource-intensive large language models (LLMs) and multimodal foundation models (MFMs) are to build. It goes on to ask whether Australia can be competitive, given both the lack of resources and a shortage of skilled AI labour.

“Australia has capability in AI-related areas like computer vision and robotics, and the social and governance aspects of AI, but its core fundamental capacity in LLMs and related areas is relatively weak.”

And we certainly know that computing power is integral here. Earlier this week Nvidia made headlines for its mammoth quarterly sales results, which triggered a huge stock jump and saw it become a US$1 trillion company.

The surge was largely driven by tech companies scrambling to buy Nvidia’s H100 processor, prized for its at-scale generative AI capabilities.

Still, not everyone agrees with the government’s assessment of the current situation.

Aiden Roberts is the CEO and co-founder of SimConverse, a generative medical AI simulation platform. The startup raised $1.5 million in seed funding back in March.

Referring to the models developed by the likes of OpenAI, Roberts says they are impressive feats but not particularly complicated to build.

“Virtually every advanced sovereign nation has the capability to compete here, as shown by the rapid entry of [for example] UAE into the foundation model space,” Roberts said in an email to SmartCompany.

“Given that LLMs and adjacent technology will enable virtually all of commerce within the next 15 years, every country should want to be in control of its own destiny here.”

Roberts also has hope when it comes to the tech skills needed in this space.

“Australia has lagged historically in teaching the latest and greatest skills, such as deep learning, in our tertiary sector. When it moves however, it always moves to deliver world class education. I have confidence that with targeted resource investment in enabling our tertiary centres to provide education on deep learning (this means GPU compute clusters for every major university), we’ll catch up fast,” he said.

Samantha Lengyel, co-founder and CEO of Decoded.AI, says that the paper is well balanced and represents most viewpoints on AI, but feels it could contribute to further anxiety around LLMs and MFMs.

“I do believe that the benefits of Generative AI outweigh the risks contingent on effective education amongst users and developers and, in particular, we will need to equip workforce participants with ways of thinking which do not induce ‘mathematical anxiety’, invite reticence to critical analysis and do not require widespread re-skilling,” Lengyel said in a message to SmartCompany.

“I believe that, for Australia, a blended strategy of higher investment in AI (such as a sovereign LLM/MFM), as well as self-regulation guided by ‘technologically-neutral’ and risk-specific legislation, is appropriate. We are a relatively highly-educated workforce with a strong culture of risk-adversity and mitigation. This is important because we need to encourage and enable experimentation in AI so that we do not fall behind international standards.”
