Artificial intelligence will upend our economy. We’re utterly unprepared.
The AI revolution promises a vastly more efficient economy. But who pays the price of massive job losses, soaring inequality and social dislocation?
Every revolution has its costs and benefits, its victims and victors. The industrial revolution of the 19th and 20th centuries generated vast prosperity and created the world we know.
But it also produced some of the greatest inequality in human history as the wealth went to the owners of capital while the huge majority lived in deprivation and squalor.
The industrial revolution lasted for at least 150 years, but the digital revolution has been going for only forty. It has barely begun.
MORE OUTPUT, FEWER WORKERS
Before economists grumble about flatlining productivity, they should perhaps widen their gaze and revise their ideas about what caused productivity first to surge, then to slump. Throughout the developed world, productivity growth – that is, growth in output for the same levels of input – rose sharply through the 1990s and then slowed.
This coincides exactly with the computerisation of enterprises all over the world. Computers weren’t the only reason for the productivity surge – globalisation, removal of trade barriers and other structural reforms were also involved – but the wave of IT investment in commerce was an important, though under-estimated, factor.
We can see what happened from the amount of money being spent on information and communications technology (ICT). Once saturation was reached, ICT spending went largely on maintaining and occasionally upgrading existing equipment rather than deploying new equipment for new tasks.
And that mirrors the productivity pattern. The twin booms in productivity growth and in ICT investment began around 1990 and peaked at around 2000.
Research from the Productivity Commission shows ICT capital services in Australia grew at 24% during the 1990s. It was faster in the decade’s second half, which is consistent with the increase in productivity at that time. And analysis of its data shows that ICT in commercial firms alone was responsible for around 25% of all productivity increase over the decade, and made a substantial contribution to the nation’s economic growth.
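For readers who want to see how a contribution figure like that 25% can be derived, the sketch below runs a very rough growth-accounting calculation. All of the inputs are illustrative assumptions, not the Productivity Commission’s actual estimates.

```python
# Back-of-envelope growth accounting: what share of productivity growth
# came from ICT? All figures below are illustrative assumptions, not the
# Productivity Commission's published numbers.

productivity_growth = 2.0      # annual labour productivity growth, percentage points (assumed)
ict_capital_growth = 24.0      # growth in ICT capital services, percentage points (figure from the text)
ict_income_share = 0.02        # ICT capital's share of total income (assumed)

# Contribution of ICT capital deepening = income share x growth rate
ict_contribution = ict_income_share * ict_capital_growth      # ~0.48 percentage points

share_of_growth = ict_contribution / productivity_growth
print(f"ICT contribution: {ict_contribution:.2f} percentage points")
print(f"Share of total productivity growth: {share_of_growth:.0%}")
# With these assumed inputs, ICT accounts for roughly a quarter of
# productivity growth -- the same order of magnitude as the figure quoted above.
```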
But that did not disrupt the long-term trend of a declining share of GDP going to wages and an increasing share going to profits. And over the past decade, that trend appears to have accelerated.
For 50 years, incremental limitations on employees’ rights of collective bargaining have had their inevitable effect. If the financial gains of increasing productivity are to be fairly shared between capital and labour, both sides of the bargain need similar levels of bargaining power. Without that, employees and their dependants – that is, the vast majority of the population – will lose. Shareholders and company executives will gain. Artificial intelligence will supercharge that transfer of wealth.
WILL A COMPUTER TAKE YOUR JOB?
The initial phase of the digital revolution – computers and better communications – largely made existing jobs easier rather than replacing them altogether. And it created jobs that did not exist before, like data input clerks. There are exceptions, of course, like bank tellers.
AI is different. It will work, and is already working, not by making your job easier but by doing your job.
In 2015, a research paper by Oxford Economics estimated that 630,000 Australian jobs would be replaced by new technologies over the following decade, equivalent to 7.3% of the national workforce. And they estimated that this would produce massive increases in the income from economic activity.
The Oxford Economics project, sponsored by the Cisco technology company, assumed that all of this monetary gain would go to creating new jobs. But history and current experience show no reason for believing that to be likely. A far more probable result is that most of that productivity-derived income will accrue to the people who own the computers – employers and shareholders, not employees.
These 2015 estimates have not yet been reflected in real outcomes. Unforeseeable changes have intervened: the pandemic, the war in Ukraine, commodity prices and Chinese trade disruptions. But the point remains valid: there will be massive job losses, they will happen quite soon, and productivity will soar.
There are many similar estimates for other countries. These include:
- Automation will have displaced 85 million jobs globally by 2025.
- Over 60% of food preparation jobs are at risk of automation.
- By 2030, job losses in the US will reach 73 million, 17 million in Germany and 236 million in China.
- About 40% of low-skilled workers will lose their jobs but only 10% of more highly-educated people will.
These estimates vary widely, but all point in the same direction.
THE WILD NEW WORLD OF COMPUTERS THAT THINK
Alan Turing was a pioneering British scientist who is often regarded as the father of computing. He hypothesised a “universal machine” that could undertake many functions. He could have been describing the laptop computer on which I am writing this.
During World War 2, Turing led the team that created an electromechanical code-breaking machine called the Bombe. With it, they broke the German Enigma code and shortened the war by an estimated two to four years, saving millions of lives.

In 1950, Turing published a paper called Computing Machinery and Intelligence. In it, he postulated what has become known as the Turing test: how to tell whether a machine actually thinks.
It puts a human judge in a text-based chat room with either another person or a computer. The judge can interrogate the other party and carry on a conversation, and is then asked to guess whether that party is a person or a computer. If a computer can consistently fool human judges in this game, it is deemed to be exhibiting intelligence.

We haven’t got there – quite – but soon will. The ChatGPT application would probably fail the Turing test, but only just. Soon, and probably within the decade, artificial intelligence will have progressed so far that ChatGPT will seem as old-fashioned as a 1985 personal computer seems to us now.
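For the curious, here is a minimal sketch of the imitation game’s structure, assuming stand-in reply functions and a placeholder judge. It illustrates the setup described above; it is not Turing’s own formulation.

```python
import random

# A minimal sketch of the imitation game: a judge exchanges text with an
# unseen respondent and must guess whether it is a person or a program.
# The canned replies and the naive judge are illustrative placeholders,
# not part of Turing's 1950 paper.

def human_reply(prompt: str) -> str:
    # Stand-in for a real person at a keyboard.
    return "Let me think about that one for a moment..."

def machine_reply(prompt: str) -> str:
    # Stand-in for a conversational AI system.
    return "That's an interesting question."

def run_trial(questions, judge) -> bool:
    """Return True if the judge mistakes a machine respondent for a human."""
    respondent_is_machine = random.choice([True, False])
    respond = machine_reply if respondent_is_machine else human_reply
    transcript = [(q, respond(q)) for q in questions]
    verdict_is_human = judge(transcript)     # the judge sees only the text
    return respondent_is_machine and verdict_is_human

# A judge that always guesses "human" -- purely to make the sketch runnable.
naive_judge = lambda transcript: True
fooled = run_trial(["What is your earliest memory?"], naive_judge)
print("Machine passed as human" if fooled else "No deception this trial")
```

On Turing’s criterion, a machine that is judged “human” about as often as a real person would be said to be exhibiting intelligent behaviour.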
The development of AI has been many times faster than most experts predicted. The speed has taken industry, populations, regulators and governments by surprise. The world is almost totally unprepared.
A 2016 report by the US National Science and Technology Council for the Obama White House divided the field into ‘narrow’ and ‘general’ artificial intelligence.
Narrow AI is already familiar: “Narrow AI underpins many commercial services,” said the report, “such as trip planning, shopper recommendation systems, and ad targeting, and is finding important applications in medical diagnosis, education, and scientific research.” It’s about machines being able to learn and adapt. Machine learning has been available, at various levels of sophistication, ever since IBM’s Deep Blue computer beat world chess champion Garry Kasparov in 1997.
General AI is about machines that can think like humans. “It refers,” the report said, “to a notional future AI system that exhibits apparently intelligent behaviour at least as advanced as a person across the full range of cognitive tasks. A broad chasm seems to separate today’s narrow AI from the much more difficult challenge of general AI. … The current consensus of the private-sector expert community, with which the NSTC Committee on Technology concurs, is that general AI will not be achieved for at least decades.”
It was bad, though understandable, advice. Few at that time, and nobody on Obama’s committee, thought anything like ChatGPT would be around within six years of handing in their report. They did not predict self-driving cars, self-driving trains, facial recognition, autonomous drones or chatbots.
The estimates of job losses quoted earlier are almost all about the effects of current technology. But when machines can think for themselves, potentially as well or better than humans can, a vastly greater range of jobs will become vulnerable.
Germany’s Bild tabloid newspaper – the biggest-circulation paper in Europe – has announced plans to cut 200 jobs, or 20% of its workforce, to allow AI to do the jobs currently done by journalists.
The CEO of the Axel Springer company, the paper’s publisher, Mathias Döpfner, was quoted by The Guardian as predicting that AI would soon be better at the “aggregation of information” than human journalists and said that only publishers who created “the best original content” – such as investigative journalism and original commentary – would survive.

AI is being used to read and interpret medical images, like X-rays and MRIs, and doing so better than experienced human radiologists.
Radio announcers are being replaced by computers that speak just like real people.
Already, “narrow” AI is doing much of the grunt work in accountancy practices – tax returns, conveyancing and so on. But the next wave of AI threatens almost the entire accountancy profession. Its work will instead be done by computers that can think, perform highly complex tasks of logical deduction and apply an encyclopaedic knowledge of tax law and practice – all much more quickly and cheaply than a human accountant.
We'll still need nurses
The jobs least likely to be replaced are those which require physical presence, regardless of educational levels. Computers will not readily replace nurses, bricklayers, carpenters, panel beaters or motor mechanics.
A COLOSSAL ECONOMIC SHOCK
The rate at which this new technology affects the Australian and global economies will depend both on the speed of development and on the rate at which organisations adopt it.
Current indications suggest that the basic technology of highly advanced AI will probably be achieved within the current decade. In the past, Australian businesses and individuals have quickly embraced new technology – personal computers, mobile phones, the internet, television streaming – and can be expected to do so again. Other countries may be slower.
Nevertheless, neither the new AI systems, nor their application to industry, commerce and government, will happen all at once. It will therefore take some years for the full impact of the new digital revolution to appear. As that point approaches, we can expect profound economic and social changes to grow and accelerate.
Right now, around 14 million people – about 65% of the working-age population – have some kind of paid employment. Household spending accounts for 70% of all demand in the economy and for 50% of gross domestic product. If very large numbers of people were thrown out of work, or even had their hours substantially reduced, the economy would sink into deep and lasting recession or depression.
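To get a feel for the scale involved, here is a crude back-of-envelope calculation. The job-loss rate is the Oxford Economics figure quoted earlier; the size of the economy and the fraction of lost wages that would have been spent are assumptions for the sake of arithmetic, not forecasts.

```python
# Crude illustration of the demand shock described above.
# Several inputs are assumptions made purely for the arithmetic.

employed = 14_000_000            # people in paid work (figure from the text)
gdp = 2_500_000_000_000          # rough size of the economy in dollars (assumed)
household_share_of_gdp = 0.50    # household spending ~50% of GDP (from the text)

job_loss_rate = 0.073            # 7.3% of jobs displaced (Oxford Economics figure)
spending_cut_per_lost_job = 0.6  # assumed fraction of a lost wage that stops being spent

jobs_lost = employed * job_loss_rate
household_spending = gdp * household_share_of_gdp
# Assume spending falls in proportion to the share of workers losing income.
spending_drop = household_spending * job_loss_rate * spending_cut_per_lost_job

print(f"Jobs lost: {jobs_lost:,.0f}")
print(f"Fall in household spending: ${spending_drop/1e9:,.1f} billion")
print(f"Hit to GDP from this channel alone: {spending_drop/gdp:.1%}")
```

Even under these deliberately conservative assumptions, the first-round hit to spending runs to tens of billions of dollars a year, before any multiplier effects.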
Individual enterprises will face a classic collective-action dilemma: what is good for each individually is bad for them all, and vice versa. It would be in the interests of any profit-making outfit to minimise costs and maximise output – and, therefore, to embrace advanced AI and sack as many workers as possible.
But if they all did that, the economy would collapse and most of them would go out of business. Workers are also consumers: if the workers have no income, they can’t spend on the things those companies sell.
Multiply that around the country and around the world. Without adequate and imaginative government intervention, the apparent blessing of artificial intelligence would be the greatest of all economic and social curses.
Very large numbers of people would be newly dependent on social welfare payments – but the taxation revenue to pay for all that welfare would tumble.
Nothing but leisure? For life?
For that to happen, income would have to come from somewhere. The idea of a universal basic payment is not new. It would mean everyone is entitled to a government-sourced payment that would guarantee they stayed above the poverty line. But poverty-level payments for a large part of the population would not drive a modern post-industrial economy. We would still be in permanent depression.
For a universal payment system to work, all or most of the money that now goes to wage-earners out of company earnings would have to go into the universal scheme. So after all that upheaval, would we really be any better off?
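A rough sizing exercise shows why. Every figure below is an illustrative assumption, not a costing or a policy proposal, but it conveys the order of magnitude.

```python
# Rough sizing of a universal payment scheme. All figures here are
# illustrative assumptions, not official statistics.

adults = 20_000_000          # assumed adult population
annual_payment = 26_000      # assumed payment near a poverty-line income

scheme_cost = adults * annual_payment        # ~$520 billion a year

gdp = 2_500_000_000_000      # assumed size of the economy
wage_share = 0.50            # assumed share of GDP currently paid as wages
wages_bill = gdp * wage_share                # ~$1.25 trillion a year

print(f"Cost of the scheme: ${scheme_cost/1e9:,.0f} billion a year")
print(f"National wages bill: ${wages_bill/1e9:,.0f} billion a year")
print(f"Scheme cost as a share of current wages: {scheme_cost/wages_bill:.0%}")
# Even a poverty-line payment would absorb a large slice of what is now
# paid out in wages -- hence the question of whether we'd be better off.
```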
Maybe it’s time we started thinking about this stuff.