The part of the productivity debate that continues to be ignored



Increased economic productivity is the exalted aim of the 21st-century state, the holy grail of every treasurer or Reserve Bank governor. But what does it really mean? At its simplest, an increase in productivity is the ability to produce more with less. Grow more wheat on less land, run a factory with less electricity, but most commonly, create more things with fewer workers.

As the central bank asserts in its chirpy explainer, productivity is good because it “contributes to the economic prosperity and welfare of all Australians”. How? Supposedly by increasing wages, lowering prices, driving up profits, and generally by increasing growth. It’s a great bedtime story: simple, reassuring, with a happy ending. The key point left out is that making more of anything with less labour means less demand for that labour, often leading to fewer jobs and reduced wages.

This year’s Asia-Pacific Economic Cooperation (APEC) Summit is taking place in the tech centre of the world, San Francisco, and among this year’s themes of sustainability, inclusivity, innovation and resilience, there is surprisingly little mention of the growing impact artificial intelligence will have over the coming decade.

I say surprisingly because, as far as economic topics go, AI is the single most important story of the coming decade. Since the release of ChatGPT, almost every major consultancy has published, or is scrambling to pull together, a report on the anticipated economic impact of AI. How will it affect growth, productivity, the labour market, and sales?

PricewaterhouseCoopers (PwC) was one of the first out of the gates with “Sizing the prize”, a study concluding that by 2030 AI could be contributing up to $15.7 trillion to the global economy (to put that into perspective, that’s more than the current output of China and India combined). Meanwhile, a survey by research firm Valoir — of more than 1000 employees across a range of sectors and roles, including finance, HR, IT, marketing, operations, sales and the service industry — concluded that generative AI could replace 40% of the average working day.

This finding is replicated in a similar report by Goldman Sachs. The firm’s analysis of 900 different occupations showed that with just the application of existing AI technology, roughly two-thirds of jobs could be fully or mostly automated. McKinsey Digital was even more optimistic, suggesting that if implemented today, current generative AI (let alone future improvements) has the potential to automate work activities that absorb up to 70% of all employees’ time.

In the past, technological change — including the advent of the digital revolution and the widespread use of computers — led to lay-offs and the mass restructuring of the economy, but it also spawned novel, unforeseen industries and created entire job sectors. The digital revolution disrupted traditional enterprises and left many people behind, yet there is an argument that total labour demand ultimately rose, and that the progress it brought has made us materially better off, though perhaps less happy. Plus, at the end of the day, there really wasn’t much choice. If we hadn’t embraced digital, others would have — and nobody wants to be left behind in the proverbial stone age.

However, the AI revolution will be fundamentally different. It will not repeat the historical patterns that followed the introduction of new technology into the workplace. Unlike previous tools, AI differs in three fundamental ways.

Firstly, it is the first tool in human history that can determine a preferred course of action by itself. Computers were tools. They helped humans perform calculations but did not indicate what calculations should be made. They could help make typing and sending a letter faster, but could not decide if a letter was fit for purpose or should be sent.

Secondly, having determined a decision, next-generation AI will be the first invention in human history capable of agency — able to undertake steps to enact a decision independent of a human actor and more efficiently than a human could. In deciding to build a new model car, it can determine all the actions needed to bring that project to life and can send out instructions, order materials and monitor progress.

Thirdly, it could soon be the first technology with the capacity to form new ideas: novel thoughts and creations, from literature and fine art to invention. The content of the letter it decides to write, and the pathway of actions it takes to complete a goal, will be entirely of its own making.

As an economist, I think the reports by Valoir, PwC, Goldman Sachs and McKinsey grossly underestimate the upheaval that AI will bring. This is not a disruptive tool; it is a dystopian one, at least in the medium term. At our core, humans are social creatures, and we express our utility to the group through our contribution — through meaningful work and production. What is our role in an AI future? As these many reports make clear, the technology’s role is seemingly limitless.

Yet in their zealous enthusiasm to unveil how increases in productivity will drive exponential profits and growth, one key part of the narrative is missing. Where do we, the people, fit in? Profit perhaps, but for whom? And what happens to the rest of us? Economics is a model mapped from reality, a simplified mathematical reflection of social order, in which productivity is not the complete sum of human life. In a new economy where AI not only replaces much of the workforce, but is the inventor, the decision-maker and the determinant of our economic trajectory, what role do we play?

The new era of computational economics is, and should remain, a major topic at APEC this year, at Davos next year, and at international forums for years to come.

This article was first published by Crikey.
