Briefing | The future of jobs

The onrushing wave

Previous technological innovation has always delivered more long-run employment, not less. But things can change

IN 1930, when the world was “suffering…from a bad attack of economic pessimism”, John Maynard Keynes wrote a broadly optimistic essay, “Economic Possibilities for our Grandchildren”. It imagined a middle way between revolution and stagnation that would leave the said grandchildren a great deal richer than their grandparents. But the path was not without dangers.

One of the worries Keynes admitted was a “new disease”: “technological unemployment…due to our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour.” His readers might not have heard of the problem, he suggested—but they were certain to hear a lot more about it in the years to come.

For the most part, they did not. Nowadays, the majority of economists confidently wave such worries away. By raising productivity, they argue, any automation which economises on the use of labour will increase incomes. That will generate demand for new products and services, which will in turn create new jobs for displaced workers. To think otherwise has meant being tarred a Luddite—the name taken by 19th-century textile workers who smashed the machines taking their jobs.

For much of the 20th century, those arguing that technology brought ever more jobs and prosperity looked to have the better of the debate. Real incomes in Britain scarcely doubled between the beginning of the common era and 1570. They then tripled from 1570 to 1875. And they more than tripled from 1875 to 1975. Industrialisation did not end up eliminating the need for human workers. On the contrary, it created employment opportunities sufficient to soak up the 20th century’s exploding population. Keynes’s vision of everyone in the 2030s being a lot richer is largely achieved. His belief that they would work just 15 hours or so a week has not come to pass.

When the sleeper wakes

Yet some now fear that a new era of automation enabled by ever more powerful and capable computers could work out differently. They start from the observation that, across the rich world, all is far from well in the world of work. The essence of what they see as a work crisis is that in rich countries the wages of the typical worker, adjusted for cost of living, are stagnant. In America the real wage has hardly budged over the past four decades. Even in places like Britain and Germany, where employment is touching new highs, wages have been flat for a decade. Recent research suggests that this is because substituting capital for labour through automation is increasingly attractive; as a result owners of capital have captured ever more of the world’s income since the 1980s, while the share going to labour has fallen.

At the same time, even in relatively egalitarian places like Sweden, inequality among the employed has risen sharply, with the share going to the highest earners soaring. For those not in the elite, argues David Graeber, an anthropologist at the London School of Economics, much of modern labour consists of stultifying “bullshit jobs”—low- and mid-level screen-sitting that serves simply to occupy workers for whom the economy no longer has much use. Keeping them employed, Mr Graeber argues, is not an economic choice; it is something the ruling class does to keep control over the lives of others.

Be that as it may, drudgery may soon enough give way to frank unemployment. There is already a long-term trend towards lower levels of employment in some rich countries. The proportion of American adults participating in the labour force recently hit its lowest level since 1978, and although some of that is due to the effects of ageing, some is not. In a recent speech that was modelled in part on Keynes’s “Possibilities”, Larry Summers, a former American treasury secretary, looked at employment trends among American men between 25 and 54. In the 1960s only one in 20 of those men was not working. According to Mr Summers’s extrapolations, in ten years the number could be one in seven.

This is one indication, Mr Summers says, that technical change is increasingly taking the form of “capital that effectively substitutes for labour”. There may be a lot more for such capital to do in the near future. A 2013 paper by Carl Benedikt Frey and Michael Osborne, of the University of Oxford, argued that jobs are at high risk of being automated in 47% of the occupational categories into which work is customarily sorted. That includes accountancy, legal work, technical writing and a lot of other white-collar occupations.

Answering the question of whether such automation could lead to prolonged pain for workers means taking a close look at past experience, theory and technological trends. The picture suggested by this evidence is a complex one. It is also more worrying than many economists and politicians have been prepared to admit.

The lathe of heaven

Economists take the relationship between innovation and higher living standards for granted in part because they believe history justifies such a view. Industrialisation clearly led to enormous rises in incomes and living standards over the long run. Yet the road to riches was rockier than is often appreciated.

In 1500 an estimated 75% of the British labour force toiled in agriculture. By 1800 that figure had fallen to 35%. When the shift to manufacturing got under way during the 18th century it was overwhelmingly done at small scale, either within the home or in a small workshop; employment in a large factory was a rarity. By the end of the 19th century huge plants in massive industrial cities were the norm. The great shift was made possible by automation and steam engines.

Industrial firms combined human labour with big, expensive capital equipment. To maximise the output of that costly machinery, factory owners reorganised the processes of production. Workers were given one or a few repetitive tasks, often making components of finished products rather than whole pieces. Bosses imposed a tight schedule and strict worker discipline to keep up the productive pace. The Industrial Revolution was not simply a matter of replacing muscle with steam; it was a matter of reshaping jobs themselves into the sort of precisely defined components that steam-driven machinery needed—cogs in a factory system.

The way old jobs were done changed; new jobs were created. Joel Mokyr, an economic historian at Northwestern University in Illinois, argues that the more intricate machines, techniques and supply chains of the period all required careful tending. The workers who provided that care were well rewarded. As research by Lawrence Katz, of Harvard University, and Robert Margo, of Boston University, shows, employment in manufacturing “hollowed out”. As employment grew for both highly skilled and unskilled workers, the craft workers in the middle lost out. This was the loss to which the Luddites, understandably if not effectively, took exception.

With the low-skilled workers far more numerous, at least to begin with, the lot of the average worker during the early part of this great industrial and social upheaval was not a happy one. As Mr Mokyr notes, “life did not improve all that much between 1750 and 1850.” For 60 years, from 1770 to 1830, growth in British wages, adjusted for inflation, was imperceptible because productivity growth was restricted to a few industries. Not until the late 19th century, when the gains had spread across the whole economy, did wages at last perform in line with productivity (see chart 1).

Along with social reforms and new political movements that gave voice to the workers, this faster wage growth helped spread the benefits of industrialisation across wider segments of the population. New investments in education provided a supply of workers for the more skilled jobs that were by then being created in ever greater numbers. This shift continued into the 20th century as post-secondary education became increasingly common.

Claudia Goldin, an economist at Harvard University, and Mr Katz have written that workers were in a “race between education and technology” during this period, and for the most part they won. Even so, it was not until the “golden age” after the second world war that workers in the rich world secured real prosperity, and a large, property-owning middle class came to dominate politics. At the same time communism, a legacy of industrialisation’s harsh early era, kept hundreds of millions of people around the world in poverty, and the effects of the imperialism driven by European industrialisation continued to be felt by billions.

The impacts of technological change take their time appearing. They also vary hugely from industry to industry. Although in many simple economic models technology pairs neatly with capital and labour to produce output, in practice technological changes do not affect all workers the same way. Some find that their skills are complementary to new technologies. Others find themselves out of work.

Take computers. In the early 20th century a “computer” was a worker, or a room of workers, doing mathematical calculations by hand, often with the end point of one person’s work the starting point for the next. The development of mechanical and electronic computing rendered these arrangements obsolete. But in time it greatly increased the productivity of those who used the new computers in their work.

Many other technical innovations had similar effects. New machinery displaced handicraft producers across numerous industries, from textiles to metalworking. At the same time it enabled vastly more output per person than craft producers could ever manage.

Player piano

For a task to be replaced by a machine, it helps a great deal if, like the work of human computers, it is already highly routine. Hence the demise of production-line jobs and some sorts of book-keeping, lost to the robot and the spreadsheet. Meanwhile work less easily broken down into a series of stereotyped tasks—whether rewarding, as the management of other workers and the teaching of toddlers can be, or more of a grind, like tidying and cleaning messy work places—has grown as a share of total employment.

But the “race” aspect of technological change means that such workers cannot rest on their pay packets. Firms are constantly experimenting with new technologies and production processes. Experimentation with different techniques and business models requires flexibility, which is one critical advantage of a human worker. Yet over time, as best practices are worked out and then codified, it becomes easier to break production down into routine components, then automate those components as technology allows.

If, that is, automation makes sense. As David Autor, an economist at the Massachusetts Institute of Technology (MIT), points out in a 2013 paper, the mere fact that a job can be automated does not mean that it will be; relative costs also matter. When Nissan produces cars in Japan, he notes, it relies heavily on robots. At plants in India, by contrast, the firm relies more heavily on cheap local labour.

Even when machine capabilities are rapidly improving, it can make sense instead to seek out ever cheaper supplies of increasingly skilled labour. Thus since the 1980s (a time when, in America, the trend towards post-secondary education levelled off) workers there and elsewhere have found themselves facing increased competition from both machines and cheap emerging-market workers.

Such processes have steadily and relentlessly squeezed labour out of the manufacturing sector in most rich economies. The share of American employment in manufacturing has declined sharply since the 1950s, from almost 30% to less than 10%. At the same time, jobs in services soared, from less than 50% of employment to almost 70% (see chart 2). It was inevitable, therefore, that firms would start to apply the same experimentation and reorganisation to service industries.

A new wave of technological progress may dramatically accelerate this automation of brain-work. Evidence is mounting that rapid technological progress, which accounted for the long era of rapid productivity growth from the 19th century to the 1970s, is back. The sort of advances that allow people to put in their pocket a computer that is not only more powerful than any in the world 20 years ago, but also has far better software and far greater access to useful data, as well as to other people and machines, have implications for all sorts of work.

The case for a highly disruptive period of economic growth is made by Erik Brynjolfsson and Andrew McAfee, professors at MIT, in “The Second Machine Age”, a book to be published later this month. Like the first great era of industrialisation, they argue, it should deliver enormous benefits—but not without a period of disorienting and uncomfortable change. Their argument rests on an underappreciated aspect of the exponential growth in chip processing speed, memory capacity and other computer metrics: that the amount of progress computers will make in the next few years is always equal to the progress they have made since the very beginning. Mr Brynjolfsson and Mr McAfee reckon that the main bottleneck on innovation is the time it takes society to sort through the many combinations and permutations of new technologies and business models.
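
The point about exponential growth is simple arithmetic. A rough sketch, using illustrative symbols of our own rather than anything from the book (a starting capability C_0 and a fixed doubling period T): if capability doubles every T years, then

\[
C(t) = C_0\,2^{t/T}
\qquad\Longrightarrow\qquad
C(t+T) - C(t) = C_0\,2^{(t+T)/T} - C_0\,2^{t/T} = C(t),
\]

so the gain over the next doubling period equals the total capability built up to date, which for t much larger than T is essentially all the progress made since the very beginning.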

A startling progression of inventions seems to bear their thesis out. Ten years ago technologically minded economists pointed to driving cars in traffic as the sort of human accomplishment that computers were highly unlikely to master. Now that Google cars are rolling round California driver-free, no one doubts such mastery is possible, though the speed at which fully self-driving cars will come to market remains hard to guess.

Brave new world

Even after computers beat grandmasters at chess (once thought highly unlikely), nobody thought they could take on people at free-form games played in natural language. Then Watson, a pattern-recognising supercomputer developed by IBM, bested the best human competitors in America’s popular and syntactically tricksy general-knowledge quiz show “Jeopardy!” Versions of Watson are being marketed to firms across a range of industries to help with all sorts of pattern-recognition problems. Its acumen will grow, and its costs fall, as firms learn to harness its abilities.

The machines are not just cleverer; they also have access to far more data. The combination of big data and smart machines will take over some occupations wholesale; in others it will allow firms to do more with fewer workers. Text-mining programs will displace professional jobs in legal services. Biopsies will be analysed more efficiently by image-processing software than by lab technicians. Accountants may follow travel agents and tellers into the unemployment line as tax software improves. Machines are already turning basic sports results and financial data into good-enough news stories.

Jobs that are not easily automated may still be transformed. New data-processing technology could break “cognitive” jobs down into smaller and smaller tasks. As well as opening the way to eventual automation this could reduce the satisfaction from such work, just as the satisfaction of making things was reduced by deskilling and interchangeable parts in the 19th century. If such jobs persist, they may engage Mr Graeber’s “bullshit” detector.

Being newly able to do brain work will not stop computers from doing ever more formerly manual labour; it will make them better at it. The designers of the latest generation of industrial robots talk about their creations as helping workers rather than replacing them; but there is little doubt that the technology will be able to do a bit of both—probably more than a bit. A taxi driver will be a rarity in many places by the 2030s or 2040s. That sounds like bad news for journalists who rely on that most reliable source of local knowledge and prejudice—but will there be many journalists left to care? Will there be airline pilots? Or traffic cops? Or soldiers?

There will still be jobs. Even Mr Frey and Mr Osborne, whose research speaks of 47% of job categories being open to automation within two decades, accept that some jobs—especially those currently associated with high levels of education and high wages—will survive (see table). Tyler Cowen, an economist at George Mason University and a much-read blogger, writes in his most recent book, “Average is Over”, that rich economies seem to be bifurcating into a small group of workers with skills highly complementary with machine intelligence, for whom he has high hopes, and the rest, for whom not so much.

And although Mr Brynjolfsson and Mr McAfee rightly point out that developing the business models which make the best use of new technologies will involve trial and error and human flexibility, it is also the case that the second machine age will make such trial and error easier. It will be shockingly easy to launch a startup, bring a new product to market and sell to billions of global consumers (see article). Those who create or invest in blockbuster ideas may earn unprecedented returns as a result.

In a forthcoming book Thomas Piketty, an economist at the Paris School of Economics, argues along similar lines that America may be pioneering a hyper-unequal economic model in which a top 1% of capital-owners and “supermanagers” grab a growing share of national income and accumulate an increasing concentration of national wealth. The rise of the middle class—a 20th-century innovation—was a hugely important political and social development across the world. The squeezing out of that class could generate a more antagonistic, unstable and potentially dangerous politics.

The potential for dramatic change is clear. A future of widespread technological unemployment is harder for many to accept. Every great period of innovation has produced its share of labour-market doomsayers, but technological progress has never previously failed to generate new employment opportunities.

The productivity gains from future automation will be real, even if they mostly accrue to the owners of the machines. Some will be spent on goods and services—golf instructors, household help and so on—and most of the rest invested in firms that are seeking to expand and presumably hire more labour. Though inequality could soar in such a world, unemployment would not necessarily spike. The current doldrums in wages may, like those of the early industrial era, be a temporary matter, with the good times about to roll (see chart 3).

These jobs may look distinctly different from those they replace. Just as past mechanisation freed, or forced, workers into jobs requiring more cognitive dexterity, leaps in machine intelligence could create space for people to specialise in more emotive occupations, as yet unsuited to machines: a world of artists and therapists, love counsellors and yoga instructors.

Such emotional and relational work could be as critical to the future as metal-bashing was in the past, even if it gets little respect at first. Cultural norms change slowly. Manufacturing jobs are still often treated as “better”—in some vague, non-pecuniary way—than paper-pushing is. To some 18th-century observers, working in the fields was inherently more noble than making gewgaws.

But though growth in areas of the economy that are not easily automated provides jobs, it does not necessarily help real wages. Mr Summers points out that the prices of things made of widgets have fallen remarkably in past decades; America’s Bureau of Labor Statistics reckons that today you could get the equivalent of an early 1980s television for a twentieth of its then price, were it not that no televisions that poor are still made. However, prices of things not made of widgets, most notably college education and health care, have shot up. If people lived on widgets alone—goods whose costs have fallen because of both globalisation and technology—there would have been no pause in the increase of real wages. It is the increase in the prices of stuff that isn’t mechanised (whose supply is often under the control of the state and perhaps subject to fundamental scarcity) that means a pay packet goes no further than it used to.

So technological progress squeezes some incomes in the short term before making everyone richer in the long term, and can drive up the costs of some things even more than it eventually increases earnings. As innovation continues, automation may bring down costs in some of those stubborn areas as well, though those dominated by scarcity—such as houses in desirable places—are likely to resist the trend, as may those where the state keeps market forces at bay. But if innovation does make health care or higher education cheaper, it will probably be at the cost of more jobs, and give rise to yet more concentration of income.

The machine stops

Even if the long-term outlook is rosy, with the potential for greater wealth and lots of new jobs, it does not mean that policymakers should simply sit on their hands in the meantime. Adaptation to past waves of progress rested on political and policy responses. The most obvious were the massive improvements in educational attainment, brought on first by the institution of universal secondary education and then by the rise of university attendance. Policies aimed at similar gains would now seem to be in order. But as Mr Cowen has pointed out, the gains of the 19th and 20th centuries will be hard to duplicate.

Boosting the skills and earning power of the children of 19th-century farmers and labourers took little more than offering schools where they could learn to read, write and do algebra. Pushing a large proportion of college graduates to complete graduate work successfully will be harder and more expensive. Perhaps cheap and innovative online education will indeed make new attainment possible. But as Mr Cowen notes, such programmes may tend to deliver big gains only for the most conscientious students.

Another way in which previous adaptation is not necessarily a good guide to future employment is the existence of welfare. The alternative to joining the 19th-century industrial proletariat was malnourished deprivation. Today, because of measures introduced in response to, and to some extent on the proceeds of, industrialisation, people in the developed world are provided with unemployment benefits, disability allowances and other forms of welfare. They are also much more likely than a bygone peasant to have savings. This means that the “reservation wage”—the wage below which a worker will not accept a job—is now high in historical terms. If governments refuse to allow jobless workers to fall too far below the average standard of living, then this reservation wage will rise steadily, and ever more workers may find work unattractive. And the higher it rises, the greater the incentive to invest in capital that replaces labour.

Everyone should be able to benefit from productivity gains—in that, Keynes was united with his successors. His worry about technological unemployment was mainly a worry about a “temporary phase of maladjustment” as society and the economy adjusted to ever greater levels of productivity. So it could well prove. However, society may find itself sorely tested if, as seems possible, growth and innovation deliver handsome gains to the skilled, while the rest cling to dwindling employment opportunities at stagnant wages.

This article appeared in the Briefing section of the print edition of January 18th 2014 under the headline "The onrushing wave"
