AI Should Augment Human Intelligence, Not Replace It
March 18th, 2021
People and AI both bring different abilities and strengths to the table. The real question is how human intelligence works with artificial intelligence.
In an economy where data is changing how companies create value — and compete — experts predict that using artificial intelligence at a larger scale will add as much as $15.7 trillion to the global economy by 2030. As AI is changing how companies work, many believe that who does this work will change, too — and that organizations will begin to replace human employees with intelligent machines. This is already happening: intelligent systems are displacing humans in manufacturing, service delivery, recruitment, and the financial industry, consequently moving human workers towards lower-paid jobs or making them unemployed. This trend has led some to conclude that in 2040 our workforce may be totally unrecognizable.
Are humans and AI machines really in competition with each other? The history of work — particularly since the Industrial Revolution — is the history of people outsourcing their labor to machines. While that began with rote, repetitive physical tasks like weaving, machines have evolved to the point where they can now do what we might think of as complex cognitive work, such as solving math equations, recognizing language and speech, and writing. Machines thus seem ready to replicate the work of our minds, and not just our bodies. In the 21st century, AI is evolving to be superior to humans in many tasks. With this latest trend, it seems like there's nothing that can't soon be automated, meaning that no job is safe from being offloaded to machines.
This vision of the future of work has taken the shape of a zero-sum game, in which there can only be one winner.
We believe, however, that this view of the role AI will play in the workplace is wrong. The question of whether AI will replace human workers assumes that AI and humans have the same qualities and abilities — but they don’t. AI-based machines are fast, more accurate, and consistently rational, but they aren’t intuitive, emotional, or culturally sensitive. And it’s exactly these abilities that humans possess and which make us effective.
Machine Intelligence vs. Human Intelligence (AI 1 and AI 2)
In general, people recognize today's advanced computers as intelligent because they have the potential to learn and make decisions based on the information they take in. But while we may recognize that ability, it's a decidedly different type of intelligence from the one we possess.
In its simplest form, AI is a computer acting and deciding in ways that seem intelligent. In line with Alan Turing's philosophy, AI imitates how humans act, feel, speak, and decide. This type of intelligence is extremely useful in an organizational setting: because of its imitating abilities, AI can identify informational patterns that optimize trends relevant to the job. In addition, contrary to humans, AI never gets physically tired, and as long as it's fed data it will keep going.
These qualities mean that AI is perfectly suited to be put to work on lower-level routine tasks that are repetitive and take place within a closed management system. In such a system, the rules of the game are clear and not influenced by external forces. Think, for example, of an assembly line where workers are not interrupted by external demands and influences like work meetings. As a case in point, the assembly line is exactly the place where Amazon placed algorithms in the role of managers to supervise human workers and even fire them. Because the work is repetitive and subject to rigid procedures that optimize efficiency and productivity, AI can perform more accurately than human supervisors.
Human abilities, however, are more expansive. Unlike AI, which responds only to the data available to it, humans can imagine, anticipate, feel, and judge changing situations, which allows them to shift from short-term to long-term concerns.
These abilities are unique to humans and do not require a steady flow of externally provided data to work, as is the case with artificial intelligence.
In this way humans represent what we call authentic intelligence. This type of intelligence is needed when open systems are in place. In an open management system, the team or organization interacts with the external environment and therefore must deal with influences from outside. Such a work setting requires the ability to anticipate and work with, for example, sudden changes and distorted information exchange, while at the same time being creative in distilling a vision and future strategy. In open systems, transformation efforts are continuously at work, and effective management of that process requires authentic intelligence.
Although Artificial Intelligence (AI 1) seems opposite to Authentic Intelligence (AI 2), they are also complementary. In the context of organizations, both types of intelligence offer a range of specific talents.
Which talents — operationalized as abilities needed to meet performance requirements — are needed to perform best? It is, first of all, important to emphasize that talent can win games, but often it will not win championships. For this reason, we believe that it is the combination of these talents, working in tandem, that will make the future of intelligent work. It will create the kind of intelligence that allows organizations to be more efficient and accurate, but at the same time also creative and proactive. This other type of AI we call Augmented Intelligence.
Augmented Intelligence (AI 3)
What will Augmented Intelligence (AI 3) be able to offer that AI 1 and AI 2 can't? The second author of this article has some unique insight here: he is known for winning championships, while he also has the distinctive experience of being the first human to lose a high-level game to a machine. In 1997, chess grandmaster Garry Kasparov lost a game to an IBM supercomputer program called Deep Blue. It led him to rethink how the intellectual game of chess could be approached differently, not simply as an individual effort but as a collaborative one. And, after the unexpected victory of Deep Blue, he decided to try collaborating with an AI.
In a match in 1998 in León, Spain, Kasparov partnered with a PC running the chess software of his choice — an arrangement called "advanced chess" — against the Bulgarian Veselin Topalov, whom he had beaten 4-0 a month earlier. This time, with both players supported by computers, the match ended in a 3-3 draw. It appeared that the use of a PC nullified the calculative and strategic advantages Kasparov usually displayed over his opponent.
The match provided an important illustration of how humans might work with AI. After the match, Kasparov noted that the use of a PC allowed him to focus more on strategic planning while the machine took care of the calculations. Nevertheless, he also stressed that simply putting together the best human player and the best PC did not, in his eyes, produce games that were perfect. As with human teams, the power of working with an AI comes from how the person and computer complement each other; the best players and most powerful AIs partnering up don't necessarily produce the best results.
Once again, the chess world offers a useful test case for how this collaboration can play out. In 2005 the online chess site Playchess.com hosted what it called a "freestyle" chess tournament in which anyone could compete in teams with other players or computers. What made this competition interesting is that several groups of grandmasters working with computers also participated. Predictably, most people expected that one of these grandmasters in combination with a supercomputer would dominate the competition — but that's not what happened. The tournament was won by a pair of amateur American chess players using three computers. It was their ability to coordinate and effectively coach their computers that defeated the combination of a smart grandmaster and a PC with great computational power.
This article, by David De Cremer and Garry Kasparov, originally appeared on the Harvard Business Review website.