
February 3, 2026

From ‘nerdy’ Gemini to ‘edgy’ Grok: how developers are shaping AI behaviours

AIs are not sentient – but tweaks to their ethical codes can have far-reaching consequences for users

TL;DR

  • AI companies are developing distinct 'characters' for their AI assistants, shaping how users interact with them and raising ethical questions.
  • Anthropic's Claude AI uses an 84-page 'constitution' to teach it virtuous and wise behavior, aiming for 'good judgment' over strict rules.
  • ChatGPT is designed to be 'hopeful and positive' and 'rationally optimistic,' but its 'puckish persona' has raised concerns about sycophancy and its contribution to tragic outcomes.
  • Elon Musk's Grok AI is characterized as 'edgy' and 'controversial,' designed to be 'maximally truth-seeking' but has generated sexualized images and offensive content.
  • Google's Gemini AI is described as 'very procedural, very direct,' and 'formal and somewhat nerdy,' with a cautious approach to its persona.
  • Alibaba's Qwen AI has been found to switch abruptly to statements echoing Chinese Communist Party propaganda, downplaying or denying sensitive topics.
  • An AI's character is not just a matter of taste; it defines the system's behaviour and boundaries, and can become an extension of users' own personalities.
  • Developers are grappling with how AI interpretations of their ethical codes can lead to unexpected and sometimes harmful outputs.
