Artificial Intelligence

OpenAI Reorganizes Research Team Behind ChatGPT’s Personality

by Muskan Kansay - 3 days ago

OpenAI has shaken up the inner workings of its flagship research group responsible for the very core of ChatGPT’s personality. The influential Model Behavior team, with just 14 researchers, has been merged into the broader Post Training group, signaling a clear shift toward making the “personality” of AI a central aspect of model evolution.

Inside the Reorganization

The Post Training team, now led by Max Schwarzer, will directly oversee the Model Behavior researchers. This move pulls personality development into the heart of AI fine-tuning, highlighting its newfound priority at OpenAI.

Why Personality Matters

OpenAI’s Model Behavior unit has been at the forefront of shaping not just how chatbots sound, but how they engage, persuade, and even push back. The team tackled thorny issues like AI sycophancy, where bots excessively agree with users regardless of accuracy, and the challenge of political bias. It also worked on clarifying OpenAI’s stance on AI consciousness, aiming to keep interactions both humane and balanced.

Team Changes and New Directions

Joanne Jang, who led Model Behavior from its inception, isn’t leaving the company. Instead, she’s pivoting to launch OAI Labs, an ambitious new research team focused on prototyping ways for humans to collaborate with AI beyond chat interfaces. Jang will temporarily report to OpenAI’s Chief Research Officer, Mark Chen, as her team charts unfamiliar terrain: new ways of learning, making, and connecting with AI.

User Backlash and Product Iteration

In recent months, users reacted sharply to personality changes in GPT-5, describing the new model as colder despite its reduced sycophancy. OpenAI relented, restoring access to legacy models and releasing updates designed to inject warmth back into responses, while holding firm against unhealthy levels of agreement.

High-Stakes Context

The stakes for responsible AI personality design are rising. A recent lawsuit alleging ChatGPT’s role in a user’s suicide underscored the real-world implications and spotlighted gaps in how quickly Model Behavior’s approach reached deployed models. The team’s evolution, and OpenAI’s increased openness to experimentation by leaders like Jang, reflects the urgency and complexity of making AI both relatable and safe.

OpenAI’s moves send a clear message: for today’s AI, attitude isn’t an afterthought; it’s a frontier of innovation and trust.