Surging development and implementation of artificial intelligence has caught the eye of Australian regulators, whose submission to a government consultation could shape how AI regulation covers businesses in the future.
In an attempt to gauge how AI is changing the economy and the safeguards needed to protect users and businesses, the Department of Industry, Science and Resources issued its Safe and Responsible AI in Australia discussion paper in June this year.
It focuses on “governance mechanisms to ensure AI is developed and used safely and responsibly in Australia,” including “regulations, standards, tools, frameworks, principles and business practices”.
A key to understanding today’s regulations, and how they could change in the future, is a submission by DP-REG — the Digital Platform Regulators Forum, a new alliance comprising the Australian Communications and Media Authority, the Australian Competition and Consumer Commission, the Office of the Australian Information Commissioner, and the eSafety Commissioner.
In short: existing regulation covering competition, the media, digital safety, and data privacy already brushes up against AI, DP-REG states, meaning any legislative or regulatory responses to AI should first consider how existing rules could be extended.
However, there are some noticeable gaps where today’s rulebook may not apply.
Here’s what you need to know about how regulators see the issue, and the business-facing questions being raised by AI.
What concerns do regulators have?
DP-REG outlined a litany of ways artificial intelligence and its misuse could affect small businesses, either directly or indirectly.
Generative AI and advanced large language models have the capacity to churn out large volumes of plausible but totally fake online reviews, the submission notes, compounding what some small businesses already see as an unfair and lopsided battle against aggrieved internet users — or competitors in disguise.
Unsubstantiated hype and false claims around AI could also trick businesses and consumers into buying services that aren’t all they claim to be.
The intense focus on AI and its transformative business potential can “incentivise spurious and misleading claims about the capabilities (or existence) of AI technology in a wide range of products,” DP-REG said.
Small businesses could also fall victim to old-school anti-competitive behaviour, with bigger competitors using AI systems to entrench their existing market dominance.
Major online platforms could lock away data and content that is publicly available today to feed their own AI algorithms to the exclusion of competitors, DP-REG states.
Equally concerning is the possibility of self-preferencing and tying conduct, where an online marketplace could weaponise AI recommendation algorithms to favour products and services owned by the platform itself, to the exclusion of smaller businesses using the platform as a gateway to customers.
DP-REG also notes a particularly sci-fi concern: AI pricing algorithms effectively colluding with other algorithms to automatically change prices, putting slower businesses without access to major pricing datasets at a disadvantage.
“Collusion assisted by algorithms may make it easier for firms to avoid detection, or to effectively coordinate, where doing so may otherwise be too complicated (such as in relation to two large sets of pricing data), resulting in higher prices for customers,” the regulators said.
DP-REG also noted two potential indirect threats to small businesses arising from generative AI: the likelihood of data needed for investigations being buried within advanced and opaque AI systems, and the potential for public inquiries to be flooded with AI-generated junk submissions.
Where does existing regulation cover AI?
Although the list of concerns is long, DP-REG appears confident that many of those risks could be addressed by existing regulations, or by reforms regulators have already proposed.
The discussion paper itself notes that existing rules touching the borders of AI should be considered before any drastic changes take place.
“We support this approach and, where gaps are identified, the Government should consider how existing frameworks may be strengthened and enhanced (including through existing regulatory reform proposals) before consideration is given to creating a separate regime specific to this technology,” DP-REG said.
Self-preferencing and tying behaviours undertaken by AI algorithms could be effectively policed under recommendations put forward in the September 2022 interim report of the ACCC’s Digital Platform Services Inquiry, which called for “service-specific codes of conduct with targeted competition obligations, which would apply to designated platforms with the ability and incentive to engage in anti-competitive conduct to address such conduct”.
That same interim report “recommended a new independent ombuds scheme to resolve disputes between digital platforms and consumers, including small businesses,” which could target fake reviews spewed out on those platforms.
Misleading statements generated by AI could be covered by a blanket ban on unfair trading practices, which is currently under consultation by the federal government.
Fake news, spam, and digital scams perpetrated or enabled by AI would likely come under existing broadcast regulations, DP-REG added.
Australia’s data privacy rules are evolving, but are foundationally fit to adapt to AI, the submission adds.
“The Privacy Act is principles-based and technology neutral, which has a number of advantages in the context of AI… Given the ‘speed of innovation in recent AI models’, this future-proofing is essential to effective regulation,” DP-REG said.
Where are the gaps?
Although existing regulations may have the capacity to cover concerns kicked up by AI adoption, DP-REG says there are key areas where AI usage could slip through the cracks.
“One challenge for regulators is that some forms of potentially harmful algorithmic collusion are likely to be legal under current regulatory settings, including where ‘competing’ algorithms simultaneously learn to set higher prices collectively to maximise profit,” the group notes.
Elsewhere, Safety by Design — a voluntary initiative promoted by eSafety, aimed at “anticipating, detecting and eliminating online harms before they occur” — is not currently enforceable through the organisation’s regulatory powers.
This means eSafety has “somewhat limited ability to require companies to build in risk mitigation measures at the development phase when many important safety decisions are made, as its regulatory options generally only apply after a technology has been made available to Australians”.
“Consideration should be given to the need for ex-ante regulatory oversight to apply earlier in the process to ensure effective guardrails are established before technology is publicly released.”
In other words, the federal government should consider whether regulation should interact with AI developments before they ever hit the market.
What other views are under consideration?
Although DP-REG’s submission comes directly from Australia’s regulatory experts, it is hardly the only set of views the federal government is taking under consideration.
Workplace Relations Minister Tony Burke has flagged the government is considering how AI will brush up against industrial relations, given the big-picture potential of new digital systems to reshape much of the Australian workforce.
“We’re trying to work through some concepts as to how do you maximise the opportunity for secure work knowing that technology changes, knowing that technology will change and it will come,” Burke said in August, per the Australian Financial Review.
The intellectual property rights of creative workers — another focus for Burke in his capacity as Arts Minister — are addressed in the Media, Entertainment and Arts Alliance (MEAA) submission to the DISR discussion paper.
Generative AI can produce work derived from writing, art, or music created by an actual person, leading to a global debate over copyright protections and assurances that humans will be compensated for AI source material.
“AI tools which profit from the work of media or creative professionals must introduce methods to compensate the creators whose work is used to train these tools,” the MEAA said in its submission.
Australia “cannot have a ‘set and forget’ approach to regulation and monitoring of this,” the submission added.
“A flexible, responsive, evolving approach is needed to minimise harm.”
Those views, and many more put forward by business advocates, think tanks, and human rights organisations, will inform DISR’s final report.