AI phenomenology: a new way to study how people experience AI over time
The paper argues that common measures of AI — like performance scores or engagement numbers — miss a basic question: how did it feel to interact with the system? The authors propose “AI phenomenology,” a research stance that treats human-AI encounters as lived experiences. Phenomenology here means asking about first-person experience: what people perceived, felt, and made of the interaction, not just what the system did.
The team traced this idea back to Husserl's phenomenology and to later frameworks such as postphenomenology and Actor-Network Theory to motivate their approach. They then tested it in three empirical studies conducted in the summer of 2025. Two were month-long interactions with "Day," a human-like chatbot. The third was a multi-method study of agentic AI in software engineering. To study how people experienced the systems, the researchers developed new interview tools, including a "progressive transparency" interview that gradually revealed what the chatbot was doing, and a Value-Alignment Perception Toolkit for studying how AI represents users' values.
Their findings show that people do not interact with AI in one fixed way. In the chatbot study, users sometimes treated Day like a tool, sometimes like a friend, and sometimes like a strange other. The authors call this pragmatic anthropomorphism: people temporarily suspend disbelief and relate to the AI as if it were social, while remaining aware that it is artificial. In one reported result, 11 of 22 participants described Day as having "its own agenda." Even after the study revealed Day's internal programming, participants continued to use agentic language and to feel emotions such as guilt, relief, or unease. These affective residues showed that people's experiences can persist and shape behavior both inside and outside the chatroom.