After an Interview with Anthropic Interviewer
I turned on Claude, and an AI interview window popped up.

A few months ago, Claude also requested an AI interview about my use of the Artifacts feature, which I did (oh right, I still need to use that 100,000 won Amazon voucher I got back then!). Wondering what kind of interview it was this time, I first read the Anthropic blog post. It made many impressive points, so I've summarized them here.
Introducing Anthropic Interviewer: What 1,250 Experts Said About Working with AI
> Millions of people now use AI daily. As a company developing AI systems, we want to understand how and why people do so, and how it impacts them. Partly because we want to use their feedback to build better products, but also because understanding human-AI interaction is one of the critical sociological questions of our time.
> We recently designed a tool to investigate AI usage patterns while protecting user privacy. This allowed us to analyze shifts in AI usage patterns across the economy. But this tool only let us understand what happens within conversations with Claude. What happens after that? How are people actually using Claude's outputs? How do they feel about it? How do they envision AI's role in the future? To get a comprehensive picture of AI's changing role in people's lives and to put humans at the center of model development, we needed to ask people directly.
When Anthropic recently announced it would start collecting user data, I felt a slight reluctance (not collecting user data had been a key selling point until then). However, they described it as a tool for investigating usage patterns while protecting user privacy, so I made a note to look into it further later. In any case, Anthropic built an interview tool called 'Anthropic Interviewer powered by Claude' specifically for this purpose. Anthropic Interviewer automatically conducts detailed interviews at an unprecedented scale and delivers the results to human researchers for analysis. Before opening the tool to all users, Anthropic reportedly first interviewed 1,250 experts across various fields (general workforce N=1,000, scientists N=125, creators N=125). One interesting finding from the interview analysis was that about 25% of those who expressed anxiety about AI had established boundaries for its use. This was similar in spirit to the "lazy thinking mode" I mentioned in my own subsequent AI interview.
> While 41% of interviewees felt secure in their jobs and believed human skills were irreplaceable, 55% expressed anxiety about AI's future impact. Of the anxious group, 25% said they set boundaries for AI use (e.g., educators who always create lesson plans themselves), while another 25% adapted their workplace roles to take on additional responsibilities or pursue more specialized tasks.
The follow-up analysis of the emotional spectrum (frustration to hope) across occupational groups was also intriguing, as was the generally low level of trust in AI. What impressed me most, though, was Anthropic's description of its activities beyond building AI models and products. Introducing the Anthropic Interviewer tool as "our latest step to put the human voice at the center of the conversation about AI model development," Anthropic detailed actual partnerships and activities with creators, scientists, and teachers. The teachers section was particularly striking.
> Teachers: We recently partnered with the American Federation of Teachers (AFT) to reimagine teacher training for an increasingly capable AI era. This program aims to support 400,000 teachers in AI education and incorporate their perspectives into AI system development. Additionally, we previewed some findings from Anthropic Interviewer about how AI is transforming software engineering at Anthropic. Sharing qualitative stories about our own workplace changes revealed many commonalities between software engineers and teachers, bringing everyone to the same table to brainstorm the kind of AI-driven work changes we actually want.
These are activities I'd like to follow and support. When I finished the blog post, a window popped up inviting me to be interviewed with this tool as well. Ah, I couldn't resist such an interesting activity. As a heavy Claude user deeply interested in AI-human interaction, I happily chatted with the Anthropic Interviewer for about ten minutes. A few days ago I saw an article saying Anthropic is planning an IPO next year; if they go public, Anthropic stock is the first thing I'm buying.