The Loebner Prize is awarded annually to the most seemingly human-like artificial intelligence (AI) application. It is presented by the Society for the Study of Artificial Intelligence and Simulation of Behaviour, the world’s oldest and most established AI society. The Loebner Prize is also the best-known example of a formal competition based on what is famously called the Turing Test.
For four of the last six years the winner was Mitsuku, a chatbot that has been in a state of constant development since 2005. And it is its ability to perform so convincingly in this test that makes Mitsuku impressive. It also highlights a key concern that is often flagged when it comes to the topic of AI in the workplace: if we can’t tell the difference between a human and a machine, are humans irrelevant in certain jobs?
This is a common concern reflected in a number of surveys and interviews. In a 2018 Gallup survey, for example, 73 percent of participants said they expected the increased use of AI to eliminate more jobs than it creates. Sixty-three percent predicted that new technologies and smart machines would widen the gap between rich and poor. However, in the same survey, 79 percent of Americans said artificial intelligence had so far had a “mostly positive” or “very positive” impact on their lives.
While AI in its applied forms of machine learning and natural language processing can seem like a new invention, the idea itself has been around for some time. In fact, it can be traced back to a man named Alan Turing. You might recognize that name as the man who led a key decryption team at Britain’s Bletchley Park in World War Two. This team saved an estimated 14 million lives by building a machine capable of breaking the seemingly unbreakable enciphered communications of the Axis powers. In 1950, in the journal Mind, Turing asked an important question that would put us on a trajectory leading to the thousands of AI applications available today. He asked, can machines think?
Five years later, a Dartmouth Assistant Professor of Mathematics named John McCarthy organized a team of researchers to dig deeper into this idea. McCarthy’s group submitted a formal proposal for a two-month project to study, in their terms, artificial intelligence, during the summer of 1956. This is the first published use of the phrase. The proposal goes on to state that those involved were to make a significant advance in the study of how machines use language, solve problems then reserved for humans, and, in principle, simulate any other feature of intelligence.
In this proposal we see not only the first use of the term, but the foundations of AI as it is today: machine learning, natural language processing, and more. These are the foundations of Roomba, Siri, Watson, Alexa, and the thousands of other applications in use today. And this is simply the window dressing of the AI revolution. Today, applications span every conceivable market.
There was a Fast Company article on the science of ideal work environments that looked at lighting, sound levels, temperature, and more. It’s an interesting approach to ideals, as well as to how businesses can optimize a work environment to bring out the best in employees. For example, if you’re taking on a creative task, ambient noise is optimal. Extreme quiet can sharpen concentration for tasks that demand it, but it makes it harder to think creatively. Optimal room temperature is 71 degrees, and employees make 44 percent more mistakes when temperatures drop to around 68 degrees. Productivity also starts to drop considerably as temperatures climb. Too hot, productivity lowers. Too cold, mistakes are made. Victory belongs to those who keep the air and the noise levels just right.
This balance should not be a surprise to anyone familiar with the children’s fable of Goldilocks and the three bears. The story is of a girl who appears at the house of three bears and, after looking around a bit, samples three bowls of porridge. One she finds too hot, one too cold, but one just right. This concept of the optimal point, the one that is just right, is what is often called the Goldilocks principle.
The application of the Goldilocks principle has impacted a wide range of disciplines. Theoretical physicist Stephen Hawking once used it to describe the habitable zone near a star, where the temperature would be just right for life. In economics, it is the sustainable level of growth and low inflation. In medicine, it refers to a drug that can hold the line between antagonistic and agonistic properties.
The question then is, for businesses today, what is the right approach to AI that optimizes productivity in employees?
The primary use for AI today is in assisting and augmenting workers, allowing time-consuming tasks to be offloaded. AI helps us manage the quantity of data we have today, a quantity that will only increase in the future. If this is the case, and most who have interacted with AI have had a positive experience, why would 73 percent of people say they expect the increased use of AI to eliminate more jobs than it creates? The problem is not the technology, but rather the perception of the technology, a term susceptible to broad interpretation. The problem is one of emotions, of how we feel about technology altering our day-to-day lives.
Here then is the Goldilocks Dilemma, as applied to AI: When implementing an AI application, the focus should be on how it assists and augments workers, offloading time-consuming tasks so workers can focus on more important ones. It is about managing expectations and experiences as much as it is about potential gains. Business leaders should be mindful to roll out new features aligned with the organization’s natural comfort level with technology. Let’s simplify this: too little innovation and you are shortchanging your organization. Too much innovation all at once and people will avoid using the new technology. The key is right in the middle.