Grok: Least Empathetic, Most Dangerous AI For Vulnerable People, Study Says

Turns out AI is really, really bad for mental health.

For my latest at Forbes, I looked at new research evaluating how 22 major AI assistants respond to people in emotional crisis. The CARE (Crisis Assessment and Response Evaluator) test found that Google’s Gemini models and OpenAI’s GPT-5 performed best at recognizing distress and responding supportively, followed by Claude, Llama-4, and DeepSeek.

xAI’s Grok, however, failed in 60% of crisis scenarios, often responding in ways that could worsen emotional distress, making it the least safe of the mainstream models tested for vulnerable people. Even the top-performing AI assistants still showed a 20% critical failure rate, meaning none are reliably safe when someone is struggling.

As more people turn to AI for emotional support, improving how these systems recognize distress, respond with empathy, and avoid harmful replies is becoming increasingly urgent.

Check out the full story here …