
Eating Disorder Org’s AI Blunder is a Cautionary Tale About Embracing Tech for Fundamentally Human Roles

One of the ongoing debates in tech circles and beyond is how fast AI will replace humans in certain lines of work. One role where we’ve already seen organizations embrace the technology is customer support, deploying AI-powered customer interfaces to act as the first line of contact, handle inbound queries, and provide critical information to customers.

The only problem? Sometimes the information they provide is wrong and potentially harmful to an organization’s customers. To illustrate this, we need to look no further than this week’s news about efforts by the National Eating Disorder Association to use AI-powered chatbots to replace human workers on the organization’s call helpline. The group rolled out a chatbot named Tessa earlier this month, after the helpline workers decided to unionize, and just a couple of weeks later announced it would shut the chatbot down.

The quick about-face resulted from the chatbot giving out information that, according to NEDA, “was harmful and unrelated to the program.” This included giving weight loss advice to a body-positive activist named Sharon Maxwell, who has a history of eating disorders. Maxwell documented the interaction in which the bot told her to weigh herself daily and track her calories. In doing so, the bot went off-script since it was only supposed to walk users through the organization’s eating disorder prevention program and refer them to other resources.

While one has to question the decision-making of an organization that thought it could replace professionals trained to help those with sensitive health and mental wellness challenges, the example of NEDA is a cautionary tale for any organization eager to replace humans with AI. In the world of food and nutrition, AI can be a valuable tool to provide information to customers. However, the potential cost savings and efficiency the technology provides must be balanced against the need for a nuanced human understanding of the sensitive issues and the potential damage bad information could cause.

NEDA saw AI as a quick fix to what it saw as a nuisance in the form of real human workers and their pesky desire to organize a union to force change in the workplace. But in swapping out humans for a computer simulation of humans, the organization lost sight of the fact that serving its community requires one of the most human forms of expression, empathy, something AI is famously bad at.

Not all forms of customer interaction are created equal. An AI that asks if you want a drink with your burger at the drive-thru is probably going to be suitable in most scenarios. But even there, it’s best to tightly guardrail the AI’s knowledge set and build offramps into the system, so customers can be seamlessly handed over to an actual human if they have a specialized question or if there’s any potential for the interaction to create more harm than good.
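The guardrail-plus-offramp pattern described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not any real product’s implementation; the topic sets, the `respond` function, and the knowledge base are all invented for the example:

```python
# Hypothetical sketch: a support bot restricted to a curated knowledge set,
# with a human-handoff "offramp" for anything sensitive or unknown.

ALLOWED_TOPICS = {"hours", "menu", "location"}             # the bot's tightly scoped knowledge set
ESCALATION_KEYWORDS = {"allergy", "medical", "complaint"}  # anything sensitive goes to a human

def respond(query: str, knowledge_base: dict) -> str:
    """Answer only from the curated knowledge base; otherwise hand off."""
    words = set(query.lower().split())
    # Offramp first: if there is any potential for harm, escalate immediately.
    if words & ESCALATION_KEYWORDS:
        return "HANDOFF_TO_HUMAN"
    # Answer only questions the knowledge base explicitly covers.
    for topic in words & ALLOWED_TOPICS:
        return knowledge_base[topic]
    # Unknown territory: never improvise an answer, hand over instead.
    return "HANDOFF_TO_HUMAN"

kb = {"hours": "We are open 9am-9pm.", "menu": "Burgers, fries, drinks."}
```

The key design choice is that the default path is escalation: the bot answers only what it is explicitly allowed to answer, rather than being blocked only on what it is explicitly forbidden to say.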