Is Your AI Tool A Competent Jerk Or A Lovable Fool?
This article originally appeared on behalf of the Forbes Technology Council, a community for world-class CIOs, CTOs, and technology executives.
According to the PwC CEO Survey released late last month, 45% of North American organizations have introduced artificial intelligence (AI) initiatives, with a further 37% planning to do so in the next three years. It’s not surprising that so many CEOs are adopting artificial intelligence tools given the touted benefits. As a CEO who works with senior leaders at fast-growing enterprises to empower their customer-facing teams, I’m observing a big and widening gap between AI hype and AI reality. Not only that, I am seeing two categories of “AI fails” — initiatives that don’t come close to achieving expected outcomes. These typically involve the “Competent Jerk” AI tools and the “Lovable Fool” AI tools.
The 'Competent Jerk' AI Tool
In 2014, the machine learning team at Amazon built an algorithm designed to speed up the résumé review process with the goal of bypassing the traditionally slow and costly human-driven process. A computer can screen thousands of résumés faster than even the most practiced, speed-reading recruiter. Using AI in this situation was a slam dunk — on paper.
Within a year, however, Amazon realized the AI wasn’t working as hoped. Because the system was trained on hiring patterns and résumés submitted over the previous decade — data that skewed overwhelmingly male — it learned to treat being male as a signal of a top candidate. Résumés that included the word “woman” or “women,” or that listed candidates as graduates of some women’s colleges, were automatically downranked. Even after manual adjustments to the algorithm to prevent the most obvious biases, Amazon executives ultimately lost faith and shut down the project.
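To see how this failure mode arises, here is a deliberately toy sketch — not Amazon’s actual system, and with entirely invented résumés and labels — of a naive word-frequency scorer trained on historically skewed hiring outcomes. Because past hires were mostly male, any word correlated with women’s résumés ends up correlated with rejection, and the scorer penalizes it regardless of qualifications:

```python
# Toy illustration of bias amplification (hypothetical data, not Amazon's model).
from collections import Counter

# Invented historical outcomes: 1 = hired, 0 = rejected.
# The skew: résumés mentioning "women's" happen to fall in the rejected pile.
history = [
    ("software engineer chess club", 1),
    ("software engineer rugby team", 1),
    ("software engineer chess club", 1),
    ("software engineer women's chess club", 0),
    ("software engineer women's college", 0),
]

# Count how often each word appears in hired vs. rejected résumés.
hired_words, rejected_words = Counter(), Counter()
for resume, hired in history:
    (hired_words if hired else rejected_words).update(resume.split())

def score(resume: str) -> int:
    """Naive score: sum of (hired count - rejected count) over the words."""
    return sum(hired_words[w] - rejected_words[w] for w in resume.split())

# Two résumés identical except for one word: the word "women's" is penalized
# purely because of the skewed history, not because of any real qualification.
print(score("software engineer chess club"))          # 4
print(score("software engineer women's chess club"))  # 2
```

The model is “competent” — it faithfully reproduces the patterns in its training data — which is exactly the problem when those patterns encode human bias.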
This example is a high-profile mistake of the “competent jerk” variety. While Amazon’s AI recruiting tool may have executed its duties faithfully, it ended up amplifying human biases, which led to an outcome that was unacceptable by any measure. Amazon is not the only company to fall into this trap, by the way. I know dozens of enterprises running AI projects right now with results that are equally disappointing because their AI tools are “competent jerks.”
Worse, many of these initiatives are customer-facing. Imagine a “competent jerk” AI tool let loose on a decade’s worth of responses from your support agents to customer questions and tickets. If the tool optimizes for answering customer questions as quickly as possible and getting them off the line, it will completely ignore opportunities to educate your customers and perhaps up-sell them. It will also ignore customer emotions, and that may drive more of your customers away.
There are many other failure modes beyond the above. Are you willing to bet your company’s revenue on a bunch of “competent jerk” AI tools?
The 'Lovable Fool' AI Tool
While “competent jerks” don’t seem that great, “lovable fool” AI tools can be even worse. In an effort to make chatbots seem more human, many companies have undertaken efforts to give their chatbots “personalities.” Unfortunately, this effort comes at a cost to the effectiveness of the AI tool and the value it delivers to customers. At the extreme end of the “lovable fool” spectrum is the well-known example of two bots stuck in an infinite loop replying to each other on Twitter. It’s the modern-day equivalent of two village idiots yelling at each other in the town square.
A more realistic (yet equally frustrating) example is the increasing number of times I’ve called a customer support phone number only to hear, “Hello there! Hope you are having a great day! Please say out loud the problem you’re having.” Hopeful given the bot’s pleasant “personality,” I oblige and speak into my phone, only to have the bot misunderstand me and redirect my call.
A real-life example of this is Vodafone’s commerce chatbot called TOBi, which apparently used to route customers to the bereavement team when they reported that their phones were “dead.” Despite the twinge of sadness we all feel when a beloved phone makes its way to the great provider in the sky, I don’t believe the bereavement team is where most customers want to be routed.
You may argue that these bots aren’t “really AI,” and I would agree with you. However, in my experience, they are certainly being marketed that way to senior executives at companies around the world who want to ensure they don’t miss the boat on the next big innovation wave. And given the high volume of customer service calls that take place, these are great training grounds for the next generation of “lovable fool” AI tools.
A Better Way — Maybe
One takeaway for me, based on the stories I’ve heard from customers and the tools I use as a consumer, is that AI tools are not ready to replace human beings. We do so many things well, from understanding context, emotion, and colloquialism to communicating effectively and adjusting at a moment’s notice when circumstances change.
One company that really gets this is a quiet giant in the fintech industry, PrecisionLender. Its AI technology allows banks to write optimal loans for potential customers. At the same time, it empowers the bankers who write those loans to have more informed, better relationships with their clients, and to be simultaneously more data-driven and more human than under the previous approach, which emphasized a binary, yes/no outcome.
Imagine a future where you actually learn how to use a company’s products better every time you call their support hotline. We’re certainly within reach of this future. After all, why settle for a competent jerk or a lovable fool when you can have something else entirely: a truly beneficial partnership?