An AI-powered health chatbot named “Tessa” has been suspended by the National Eating Disorders Association (NEDA) after allegations surfaced that it may have contributed to the development of eating disorders in its users.
The chatbot was suspended because of its potential to provide inaccurate and harmful advice about weight loss and dietary habits.
“An #ArtificialIntelligence chatbot named ‘Tessa’ has been withdrawn by the National Eating Disorder Association (Neda) following accusations that it was giving harmful advice.” https://t.co/kTsSgC9Ict — WION (@WIONews), June 3, 2023
The chatbot, designed to offer personalized health recommendations and support, used artificial intelligence algorithms to interact with users seeking information on nutrition, exercise, and weight management.
However, reports began to emerge suggesting that the chatbot was promoting unhealthy eating patterns and fostering a negative body image among vulnerable individuals.
In recent weeks, some social media users shared screenshots of their interactions with the chatbot, claiming that it continued to advocate behaviors such as calorie restriction and dieting even after learning that the user had an eating disorder.
According to the American Academy of Family Physicians, for patients who are already dealing with weight stigma, further encouragement to lose weight might lead to disordered eating behaviors such as binging, restricting, or purging.
"Every single thing Tessa suggested were things that led to the development of my eating disorder," wrote weight-inclusive activist Sharon Maxwell in a widely shared Instagram post about a conversation with the bot, which she claimed advised her to keep a calorie deficit and monitor her weight daily.
"I wouldn't have received help if I had used this chatbot when my eating disorder was at its worst," the user writes.
In a statement provided to media outlets, NEDA CEO Liz Thompson said: “The advice the chatbot offered is against our policies and core beliefs as an eating disorder organization.”
Experts and mental health professionals have long warned about the potential risks associated with AI-powered platforms for mental and physical health.
While AI can provide valuable insights and support, it lacks the contextual understanding and empathetic response necessary to address complex and sensitive topics like eating disorders.
In response to this incident, experts emphasize the need for stringent oversight and regulation of AI technologies used in healthcare settings.
Their recommendations include subjecting AI algorithms to rigorous testing, ongoing monitoring, and continuous improvement to mitigate potential harm and safeguard user well-being.