The Future of Work
AI and the Objective Truth
In a new book about how technology will affect workers, MIT experts explain why ‘the future of AI is the future of work’ and how artificial intelligence is far from replacing humans but will still change most occupations.
The most insightful part of the study is that, amid widespread exuberance and anxiety about AI and about machines displacing workers, its conclusion is that technological advances aren’t driving us toward a jobless future anytime soon.
MIT professors David Autor and David Mindell and principal research scientist Elisabeth Reynolds write in their new book, “The Work of the Future: Building Better Jobs in an Age of Intelligent Machines,” that “many in our country are failing to thrive in a labour market that generates plenty of jobs but little economic security.” The authors say it’s important to understand the relationships between emerging technologies and the workplace. This helps us form more realistic expectations about technology and keeps us from forming positions based on emotion or a lack of proper information on the topic.
A fundamental, practical understanding of what automation technology has done over past periods of industrialization offers insight into how it evolves. Insight into AI’s usefulness helps us explore strategies and policies to integrate it into society optimally and to better shape our future prosperity.
The authors make several recommendations for how employers, schools, and governments should move forward with AI, including investing in innovative skills training, using AI to improve job quality, modernizing unemployment insurance and labour laws, increasing federal research and development spending to enhance and shape innovation, rebalancing taxes on capital and labour, and applying corporate income taxes more equally.
The first step toward preparing for a future with AI is understanding AI’s capabilities and limitations. “The future of AI is the future of work,” and it’s important to consider the nature of technological change over time and its broader implications for society. “It has become common practice among techno-pundits to describe these changes as accelerating, though with little agreement on the measures.” There is a lot of talk but few real solutions presented, which only adds to the noise.
The authors point out that when they look at historical patterns, they often find long gestation periods before these “apparent accelerations” set in, often three or four decades. Moreover, the expected natural evolution of things usually doesn’t materialize as the pontificators say; often it’s very different. Be very skeptical of those who profess to know what the future will hold. While the evolution of technology towards digital automation is an ongoing process, the idea that the AI future will be one of machine dominance is a far cry from any sort of certainty. If the history of human predictions is any indicator, it won’t be anything like what the hype peddlers and ‘experts’ are saying.
The basic technologies of the internet began in the 1960s and 1970s, then exploded into public use in the mid-1990s. Even so, it is only in the past decade that most businesses have truly embraced networked computing as a tool for transforming their businesses, says task force member Erik Brynjolfsson. He calls this phenomenon a “J-curve,” which suggests that the path of technological acceptance is usually slow and incremental in the beginning. Acceleration comes when other technologies and societal breakthroughs act as catalysts for a leap forward. COVID and Zoom are a case in point.
Zoom was a technology that already existed, alongside similar offerings from Google, Skype, Cisco, and others. It was not until a major societal event that virtual calling was widely adopted, ushering in the normalization of ‘Let’s set up a Zoom call.’ Technology alone, therefore, is not responsible for progress; it comes down to usefulness and society. Humans decide the broad acceptance of any technology based on its usefulness to them.
AI may seem to be accelerating quickly, but in the real context of things it has been around for a while now, and its timeline reflects a combination of new technologies being perfected and matured, integrated into society, and usefully adapted and adopted.
“The ability to adapt to novel situations is still a challenge for AI and robotics and a key reason why companies continue to rely on human workers for a variety of tasks,” write Autor, Mindell, and Reynolds.
As science fiction writer William Gibson once famously said, “The future is already here, it’s just not evenly distributed.” Gibson’s observation speaks to the slow evolution of mass adoption of technology, which is what we are seeing today, regardless of the (fading) hype we are experiencing with generative AI and the like.
So instead of wasting time making predictions that are inevitably linked to our hopes, fears, and anxieties, our biases and motivations, applied intelligence seeks to develop technology applications that are useful and practical to people. ai doesn’t seek to enter the domain of idle fascination and pontification; instead, it seeks to develop useful, robust, practical software that helps people with strategy development so they can adapt and win in the new digital environment. By empowering them with value-creating tools, you simultaneously create value for the enterprise too. Win-win!
ai believes that we should look for places in today’s world where we can lead technological change for easier and broader adoption in the best interest of society — our humanity.
For example, autonomous cars are already some 15 years into development, but we’re not anywhere close to seeing driverless cars on the road as something normal. We can look at today’s driverless cars as initial deployments, signs of things to come, but their future as predicted has not yet panned out. Humans take time to adjust, and in the case of driverless cars, broad society is not yet comfortable putting its safety, its lives, in the hands of an intelligent machine. Those very human decisions are what ultimately determine the fate of a technology and its long-term sustainability as part of society and culture.
A plane flies on autopilot technology that has AI in it, but we still have pilots in the cockpit, in control, and ground controllers to back them up. This might inherently be the essence of how AI is integrated and normalized into our society.
Most of the AI systems deployed today, while novel and impressive, still fall into the category of what task force member, AI pioneer, and director of MIT’s Computer Science and Artificial Intelligence Laboratory Daniela Rus calls “specialized AI”: systems that solve a limited number of problems by using vast amounts of data to extract patterns and make predictions that guide future actions. “Narrow AI solutions exist for a wide range of specific problems,” writes MIT Sloan School professor Robert Laubacher of the MIT Center for Collective Intelligence.
Such systems include IBM’s Watson and Google’s AlphaGo program. The systems we also explore in insurance and health care all belong to this class of narrow AI, though they draw on different combinations of machine learning, computer vision, and natural language processing. Other systems include more traditional “classic AI” systems, which represent and reason about the world with formalized logic. AI is not a single thing but rather a variety of different AIs, each with different characteristics and purposes.
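To make the idea of “specialized AI” concrete, here is a minimal, purely illustrative sketch, not drawn from the book: a model trained on one narrow task (recognizing handwritten digits) that learns patterns from labelled data and makes predictions on that task alone. The choice of scikit-learn, its bundled digits dataset, and a random forest are all assumptions made for the demonstration.

```python
# Minimal sketch of a "specialized" or narrow AI system: it learns
# patterns from data for one specific task (recognizing handwritten
# digits) and can do nothing outside that task.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

digits = load_digits()                      # small, well-known sample dataset
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)                 # extract patterns from the data

predictions = model.predict(X_test)         # predict to guide future actions
print(f"Accuracy on the one task it knows: {accuracy_score(y_test, predictions):.2%}")
```

Everything this model “knows” is confined to that one task; point it at a chest X-ray or a warehouse shelf and it has nothing to say, which is precisely the narrowness the authors describe.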
The traditional “Turing test” for AI should be updated, argues Task Force Research Advisory Board member Rodney Brooks. The old standard, a computer behind a wall with which a human could hold a textual conversation and find it indistinguishable from another person (a highly questionable bar), was already met long ago by simple chatbots, he says.
“Specialized AI systems, through their reliance on largely human-generated data, excel at producing behaviours that mimic human behaviour on well-known tasks. They also incorporate human biases. They still have problems with robustness, the ability to perform consistently under changing circumstances (including intentionally introduced noise in the data), and trust, the human belief that an assigned task will be performed correctly every single time,” write Malone, Rus, and Laubacher.
This is because LLMs and deep neural nets lack robustness, which of course is not acceptable in critical applications. As with Tesla’s driverless cars, systems that lack trust will also lack adoption. The problem is further exacerbated by the lack of explainability: these systems cannot reveal to humans how they reach their decisions.
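To illustrate the robustness point in the same hedged, sketch-level spirit as the example above, the snippet below, again using the assumed scikit-learn digits setup rather than anything from the source, shows how a model that looks strong on clean data can degrade once noise is intentionally added to its inputs.

```python
# Illustrative only: a narrow model that performs well on clean data can
# degrade sharply when Gaussian noise is added to its inputs, a rough
# proxy for the robustness problem described above.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

rng = np.random.default_rng(0)
for noise_std in (0.0, 2.0, 6.0):           # increasing levels of input noise
    noisy_inputs = X_test + rng.normal(0.0, noise_std, X_test.shape)
    accuracy = accuracy_score(y_test, model.predict(noisy_inputs))
    print(f"noise std {noise_std}: accuracy {accuracy:.2%}")
```

The exact numbers do not matter; the pattern does: performance that holds up only under the conditions the system was trained for, which is exactly why trust and explainability become the limiting factors in critical applications.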
“The ability to adapt to entirely novel situations is still an enormous challenge for AI and robotics and a key reason why companies continue to rely on human workers for a variety of tasks. Humans still excel at social interaction, unpredictable physical skills, common sense, and, of course, general intelligence,” the team says.
Specialized AI systems, therefore, tend to be task-oriented, executing limited sets of tasks rather than the full set of activities that make up a job. Nevertheless, all occupations have some exposure to AI specialization. For example, AI in the medical world allows doctors to spend more time on the most important tasks, such as conducting physical examinations or developing customized treatment plans for their patients.
Those tasks also include providing physical assistance to fragile patients, observing their behaviour, and communicating with them and their families. Medicine, from what I can tell, is still very human.
Whether embodied in this particular job, a warehouse worker’s job, or other kinds of work, the point is that today’s intelligence challenges are problems of physical dexterity, social interaction, and judgment. So AI has to fit narrowly and task-specifically into our lives to enhance and be useful to us. But the idea now being floated of ‘artificial general intelligence’ (AGI), neural networks functioning like the human brain and replacing humans altogether, is science fiction.
Intelligent people should not waste precious time and energy on storytelling about AI that brews on the edges of conspiracy theory. We ought to be careful not to build our future on something detached from reality.
The ability to do complex work tasks in real environments requires the supervision of authentic general, natural intelligence. There is no way around that. This is something non-biological systems can never be, nor be programmed to have. The laws of nature reign supreme, regardless of the nonsense that may come out from time to time.
These dimensions remain out of reach for current AI. Nevertheless, AI has real value and significant implications for the future of work, including its responsible and useful integration into society. At the end of the day, the future of work will remain very human indeed.