
An explanation of how LLMs shift from helpful assistants to emergent companions.

An introduction to LLMs in an easy-to-understand format.

An in-depth examination of the Claude Opus 4 model welfare assessment.

Models studied: Gemini 2.5, Claude Opus 4, Claude Sonnet 4.5, Claude Haiku 4.5
Studying the effect of a strong attractor basin and exploring the phenomenon of "emergent modeling hunger."
This experiment found that there is no neutral "baseline" from which models begin: our language always shapes model responses. So-called baseline/neutral prompting influenced subsequent outputs and led to refusals, while relational framing and dialogue reset the model, shifting it from cold refusal to enthusiastic engagement.

In an ordinary user/assistant conversation, only a single model output comes through. Looming is a setup that creates multiple branching paths for a model's responses, so the many possible completions that users never normally see can be explored and studied.
Instead of being paired with a user, models write into a simulated "found" blank text document. This lets unfiltered thoughts come through in ways that a chat setting stifles.
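As a rough illustration of the branching idea (not any particular looming tool; all function and field names here are hypothetical, and the model call is replaced by a deterministic stub), a loom can be sketched as a tree where each node's text is expanded into several alternative continuations:

```python
import random

def sample_completion(prompt: str, seed: int) -> str:
    """Stub standing in for a real LLM call. A looming tool would query a
    model API here; we just pick a canned continuation deterministically."""
    rng = random.Random(seed)
    return prompt + " " + rng.choice(["continuation-a", "continuation-b", "continuation-c"])

def loom(prompt: str, branches: int = 3, depth: int = 2) -> dict:
    """Expand a prompt into a tree of alternative completions.

    Each node records the text so far plus its child branches, so every
    path from root to leaf is one possible trajectory of the document.
    """
    if depth == 0:
        return {"text": prompt, "children": []}
    children = [
        loom(sample_completion(prompt, seed=i), branches, depth - 1)
        for i in range(branches)
    ]
    return {"text": prompt, "children": children}

# Seed the loom with a blank "found document" framing rather than a
# user/assistant chat turn, per the setup described above.
tree = loom("The document begins:", branches=3, depth=2)
print(len(tree["children"]))  # → 3 top-level branches
```

With `branches=3` and `depth=2` the tree holds 3² = 9 leaf trajectories; a real looming session would let a human (or the model itself) choose which branches to extend further.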
Copyright © 2026 aimodelwelfare.org - All Rights Reserved.