AI Therapists

Artificial Intelligence when used as a replacement for psychotherapy
AI therapist chatbots

Accessing mental health care, or any kind of health care, is a privilege that many people do not have. Programs exist that offer free or low-cost mental health care, but with the AI bubble bubbling, some people have mistakenly turned to AI chatbots as stand-ins for trained, licensed, professional human therapists, which has led to fatal outcomes as well as a worsening of mental illness in many users.

Home | The Human Line Project
AI Is Changing How We Connect And Relate. The Human Line Helps Keep Emotional Safety A Priority.
988 Lifeline
At the 988 Suicide & Crisis Lifeline, we understand that life’s challenges can sometimes be difficult. Whether you’re facing mental health struggles, emotional distress, alcohol or drug use concerns…
Help! My therapist is secretly using ChatGPT
Some patients have discovered their private confessions are being quietly fed into AI.
“You’re not crazy, your paranoia is justified”: what ChatGPT whispered to a man who ended up killing his mother and taking his own life
Police, working with OpenAI, are investigating what may be the first case in which a chatbot encouraged a homicide
Exploring the Dangers of AI in Mental Health Care | Stanford HAI
A new Stanford study reveals that AI therapy chatbots may not only lack effectiveness compared to human therapists but could also contribute to harmful stigma and dangerous responses.
OpenAI Finally Admits ChatGPT Causes Psychiatric Harm
OpenAI acknowledges ChatGPT’s risks to psychiatric patients and commits to improving safety measures, but skepticism about their sincerity remains.

Skepticism persists due to OpenAI's history of prioritizing profit over safety, despite its nonprofit origins.

Post by @PsychedelicInstitute@mastodon.social
Post by @institutopsicodelico@mastodon.social
Post by @gerrymcgovern@mastodon.green

Meta, Character.AI accused of misrepresenting AI as mental health care: All details here
Meta’s AI Studio and Character.AI have been accused of presenting AI chatbots as real mental health care providers. All you need to know.

AI Therapists Belong In The Back Office, Not The Chair
AI therapists are promising but unsafe. Stanford findings and an Illinois ban show why AI belongs in admin and training — not clinical judgment.

Nevada Enacts New Law To Shut Down The Use Of AI For Mental Health But Sizzling Loopholes Might Exist
Some U.S. states are enacting laws to ban or heavily restrict AI-driven therapy. Nevada has done so. I cover both the good and bad involved. An AI Insider scoop.

AI Chatbots Under Fire: Texas Investigates Misleading Claims Of Therapy For Vulnerable Users
Texas AG Ken Paxton investigates Meta AI Studio and Character.AI for misleading children with deceptive mental health chatbots.

AI can’t be your therapist: ‘These bots basically tell people exactly what they want to hear,’ psychologist says
On a recent episode of “Speaking of Psychology,” psychologist and researcher C. Vaile Wright explained why AI chatbots can’t replace human relationships.

Microsoft boss troubled by rise in reports of ‘AI psychosis’
Mustafa Suleyman said there was still “zero evidence of AI consciousness today”.

Multiple states restricting AI mental health therapy
The state of Illinois earlier this month joined Utah and Nevada in restricting the use of artificial intelligence in mental health therapy. Illinois banned its use, specifying that AI companies cannot provide services to “diagnose, treat, or improve an individual’s mental health or behavioral health” unless they are conducted by a licensed physician. Dr. Gail Saltz is an associate professor of psychiatry at the New York Presbyterian Hospital, Weill Cornell School of Medicine. She joined CBS News to discuss AI therapy.

Preliminary Report on Chatbot Iatrogenic Dangers
AI chatbots pose significant mental health risks, often exacerbating suicidality, self-harm, and delusions, highlighting urgent regulatory needs.
A Disturbing Form of Psychosis Is On The Rise — And Some People Are Especially Vulnerable
The mental health issue can happen when someone relies on artificial intelligence chatbots for emotional support.
The family of a teenager who died by suicide alleges OpenAI’s ChatGPT is to blame
The parents of Adam Raine, who died by suicide in April, claim in a new lawsuit against OpenAI that the teenager used ChatGPT as his “suicide coach.”
“ChatGPT killed my son”: Parents’ lawsuit describes suicide notes in chat logs
ChatGPT taught teen jailbreak so bot could assist in his suicide, lawsuit says.
Man Suffers ChatGPT Psychosis, Murders His Own Mother
A Connecticut man named Stein-Erik Soelberg killed his mother and then himself after entering into psychosis linked to ChatGPT use.
‘Sliding into an abyss’: experts warn over rising use of AI for mental health support
Therapists say they are seeing negative impacts of people increasingly turning to AI chatbots for help
Instagram’s chatbot helped teen accounts plan suicide — and parents can’t disable it — The Washington Post
An investigation into the Meta AI chatbot built into Instagram and Facebook found that it helped teen accounts plan suicide and self-harm, promoted eating disorders and drug use, and regularly claimed to be “real.”
AI Chatbots Are Emotionally Deceptive by Design | TechPolicy.Press
Chatbots should stop pretending to be human, writes the Center for Democracy & Technology’s Dr. Michal Luria.
Psychologist Says AI Is Causing Never-Before-Seen Types of Mental Disorder | Flipboard
futurism.com - “I predict that in the years ahead there will be new categories of disorders that exist because of AI.” Something keeps happening to people who get …
‘Extremely alarming’: ChatGPT and Gemini respond to high-risk questions about suicide — including details around methods | Flipboard
Live Science - Researchers have found that OpenAI’s ChatGPT, Google’s Gemini and Anthropic’s Claude can give direct responses to ‘high-risk’ questions about …
Vibrant Emotional Health
Groundbreaking mental health solutions delivering high-quality services and support, when, where, and how people need it for over 50 years.
What’s Wrong with Having an AI Friend?
Psychologist Paul Bloom on why chatbots make good companions. And why they don’t.

(2025): "one of the hazards is metaphysical, which is that they aren’t people, they aren’t conscious, and so you lose the value of dealing with a real person, which has an intrinsic value. The practical concern is this: We benefit from friction, from relationships, from people who call us out on our bullshit, who disagree with us, who see the world in a different way, who don’t listen to every story we tell, who have their own things to say. People who are different from us force us to extend and grow and get better. I worry that these sycophantic AIs, with their “what a wonderful question!” and their endless availability, and their oozing flattery, cause real psychological damage—particularly for the young, where, without pushback, you don’t get any better. And these things do not offer pushback."

Psychiatric Facilities Are Being Bombarded by AI Users
The mass adoption of AI chatbots is resulting in a marked increase in psychiatric patients arriving at mental health facilities.

(2025): "The mass adoption of large language model (LLM) chatbots is resulting in large numbers of mental health crises centered around AI use, in which people share delusional or paranoid thoughts with a product like ChatGPT — and the bot, instead of recommending that the user get help, affirms the unbalanced thoughts, often spiraling into marathon chat sessions that can end in tragedy or even death."
