In July this year, I decided to lose a bit of weight to improve my overall health. I’d just been diagnosed with PCOS and a life-threatening gallbladder disease, so it felt like the right decision at the time.
Prioritising my health over aesthetics while embarking on a ‘weight loss journey’ wasn’t something I’d done before – it felt empowering, as if I was finally taking control and doing things sensibly, rather than crash-dieting or comparing myself to other people.
Sadly, though, I found it impossible not to fall victim to 2020s diet culture – a slightly more insidious and digitally-dominated form of the weight-loss ‘trends’ we endured in the 2000s.
I constantly compared myself to slimmer friends, wondering if I’d have a bit more confidence if I just lost another 10lbs. Some nights I’d stay up until 2am, endlessly scrolling through TikTok and despairing at my upper arms, my thighs and the slight overhang of my tummy.
I’m a bit ashamed to admit that just six weeks after I underwent gallbladder removal surgery in June 2025, I was back at the gym, lifting weights and doing high-intensity cardio. I was a little worried about the month and a half I’d spent on the sofa (you know, letting my body recover), and every negative comment I’d previously heard about my weight began floating around my brain like the bouncing DVD logo.
It was during this period that I turned to ChatGPT for the first time, asking it for advice on how to stay slim after surgery. My advice-seeking soon turned into full-blown body-checking – I became addicted to the bot telling me exactly what I wanted to hear.
Worryingly, I’m not alone. Eating disorder charity Beat told me last week (5th November) that a number of people have spoken to its helpline operators about AI tools like ChatGPT. Some callers described turning to AI with body image concerns instead of seeking help from a GP or therapist; others had sent photos to ChatGPT and asked it to guess their weight.
The charity is concerned that misuse of AI can fuel harmful, eating disorder-related behaviours – through health misinformation, AI-generated content and requests for instructions on how to carry out certain behaviours.
But it also noted that AI can be helpful for signposting people to sources of support, like the charity itself.
Eating disorders can affect anyone of any age, gender, race, sexual orientation or background – and the helpline has received AI-related calls from a range of people.
Jessica*, 27, lived with an eating disorder in the past and has since recovered. She’s concerned her AI use is triggering some old thought patterns and behaviours, and she occasionally uses it to seek reassurance about her body.
She said: “When it comes to keeping fit, I do use ChatGPT to log workouts and nutrition – I don’t think this is a problem, it’s basically like a journal for me.
“I’m on a set number of calories which aligns with my fitness goals at the moment, but I do struggle with that sometimes.
“There was this one specific week where I’d been eating more indulgently than usual.
“I was feeling really paranoid that I’d gained weight and got bigger.
“I hate that’s how my brain works – and of course there shouldn’t be any shame attached to living in a bigger body.”
Like me, Jessica asked the bot to reassure her that she hadn’t gained any weight.
“So, I was panicking and I felt awful; my clothes started fitting weirdly and I went to ChatGPT,” she continued.
“I said: ‘Look, can you just verify for me – am I tripping out or have I got bigger?’
“It cited all my workout and nutrition logs as evidence and told me it was scientifically impossible for me to have gained weight.”
While she felt validated in the moment, Jessica worries it wasn’t something she needed to hear in the long term.
She said: “I felt validated and relieved, and ChatGPT does feel like a voice of reason sometimes, even though it’s not a person.
“But I was also a bit like: ‘Crap – why did I need to hear this?’”
Despite the dangers of using AI to seek reassurance about body image issues, some find it helpful when taken with a pinch of salt. Deborah, 30, says she hasn’t struggled with her body image, but will ask ChatGPT for reassurance about her appearance when she needs a “hype-woman”.
She said: “I like to think of ‘Chat’ as a really good hype-woman, who boosts my confidence and assures me I’m in a great place and a healthy weight-range.
“I’m a competitive athlete who has always had a strong build, so I never focus on the scale – more so how I feel, how my clothes fit and how well I perform in my sports.
“I could definitely see how this could be dangerous if ChatGPT is constantly agreeing with you or telling you what you want to hear, and it needs to be used cautiously.”
OpenAI, the company that developed ChatGPT, says it is continuously looking for ways to improve how its models respond in sensitive interactions, and has been working with more than 170 mental health experts to reduce unhelpful responses.
Other steps the developer is taking include forming an expert council on wellbeing and AI, working with a network of nearly 300 physicians and psychologists to directly inform its safety research, and introducing parental controls.
Tom Quinn, Beat’s Director of External Affairs, said: “We’re very concerned about the misuse of AI to fuel harmful eating disorder behaviours.
“In particular, we’re worried about people coming across health misinformation, sharing harmful or AI-generated content, and learning about harmful behaviours through the technology.
“However, it’s important to note that eating disorders are complex mental illnesses with a variety of causes.
“While AI can exacerbate an existing eating disorder or disrupt recovery, it will never be the sole and direct cause of an eating disorder.
“We also know that responsibly-used AI can be beneficial to our community, such as signposting people to sources of support such as Beat.”
A spokesperson for OpenAI said: “We know people sometimes turn to ChatGPT in sensitive moments.
“Over the last few months, we’ve worked with more than 170 mental health experts around the world and updated our models to help ChatGPT more reliably recognise signs of distress, respond with care, and guide people toward real-world support.
“We’ll continue to evolve ChatGPT’s responses with input from experts to make it as helpful and safe as possible.”
