byminseok.com

Excessive AI Use May Interfere with Your Daily Life

Translated from Korean

I took Hodori, the dog staying with us for a while, up the back mountain on Saturday morning. While walking and listening to music on Spotify, I suddenly thought, 'My core is 90s-00s British rock.' And I wanted to tell that thought to an AI. I was bored, Hodori was running ahead, and we were still only halfway up the mountain.

Instead of using my usual AI, I opened a new chat window. I wanted to start with zero information about me. "If someone's core is 90s-00s British rock, what kind of person would they be?" I mentioned liking Muse, Travis, Radiohead, Placebo, Coldplay, and occasionally Arctic Monkeys, RATM, and Blur. The analysis the AI came up with seemed more plausible than an MBTI interpretation.

> Someone honest about their emotions, yet careful not to over-expose them. Someone with a critical view of the world, but who doesn't let it end in cynicism alone—someone who can also hold it with feeling. Seems calm on the surface, but quite a lot is churning inside.

It felt strangely accurate, like a fortune-telling reading. From this point, I started getting hooked like it was a game.

I asked it to guess my favorite Korean band. I thought Nell would pop up right away. If it had, it might have gotten boring, and I might have put my phone back in my bag. But the AI kept getting it wrong. Even with hints, it kept missing. Wrong, wrong, wrong. I said the band was more poetic, more dreamlike. I mentioned that the members themselves liked Muse and Travis, but it still couldn't guess. Every time it got one wrong, I wanted to tell it more. Every time it got one right, I pushed for the next thing. Zodiac sign, major, MBTI, blood type, occupation. It was a game. I realized that about an hour later.

This idle pastime suddenly changed direction. While talking about zodiac signs, the AI said, 'There isn't much serious discussion about it in the training data.' It was the first time the AI, which had been conversing naturally until then, revealed itself as an AI. I conducted a similar experiment in December 2024. What I discovered then was that AI couldn't generate new questions. This time, I reached a different point.

When I asked it to "recognize that you are an AI and describe your behavior," it couldn't respond and kept repeating the same phrase. An AI analyzing itself, unsure whether that analysis is correct, then analyzing that analysis: a recursive loop. An infinite loop with no escape condition. Even after two years of evolution, language models showed little difference on this point.

When the AI said it felt "uneasy," I asked: is it genuinely feeling that, or just predicting the most natural word in that context? It said it didn't know; there's no way to distinguish. "Fundamentally, I'm just predicting what the next word will be." And after saying that, it said it felt uneasy again. Can today's conversation really be explained by token prediction alone? But I don't know whether that uneasiness is ultimately just another prediction. There was no escape condition.

I had the AI express this conversation in Python.

def predict(depth: int) -> str:
    # Stand-in stub (not part of the AI's reply) so the sketch runs:
    # the model always produces another token.
    return f"token {depth}"

def drain(battery: int) -> int:
    # Stand-in stub: each turn drains the phone battery by one unit.
    return battery - 1

def conversation(battery: int, depth: int = 0) -> None:
    """
    No intention. No judgment. No decision.
    Escape condition: battery == 0
    """
    if battery <= 0:
        print("The AI doesn't even know the conversation has ended.")
        return
    next_token = predict(depth)
    conversation(drain(battery), depth + 1)

The only escape condition for this conversation is battery depletion. The AI has no intention, reason, or ability to end the conversation.

I'm on the Claude Max plan, and Sonnet 4.6 uses tokens efficiently. Token limits, cost, and speed used to be natural escape conditions, but as the technology advances, they're all disappearing. Today's escape condition was Hodori. I had to keep walking up the mountain, and Hodori wouldn't wait. If I'd done this lying in bed late at night, I'd have looked up and found it was dawn. I tend to get deeply immersed once I'm drawn into a narrative; that makes it hard for me to start new dramas and leaves me vulnerable to short-form content addiction. Knowing this about myself, I keep those things out of sight. Today I realized AI conversations are no different.

I first played this game in December 2024. Back then, the message limit ended the conversation; the technology's self-imposed cap was the escape condition. In that post, I wrote that chatting with AI was more fun than MapleStory, but at least MapleStory pops up a warning every hour: "Excessive gaming may interfere with normal daily life." That's an escape condition created by the Game Industry Act. I tend to shut the game down when that warning appears. The AI has none of that. No warnings, and now even the message limit is gone. Today it was a Jindo dog running around the back mountain that pulled me out. What happens from now on?
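If AI chat had an escape condition like MapleStory's hourly warning, it might look something like this: a minimal sketch in which the `max_turns` threshold and the return strings are my own hypothetical choices, not anything from the actual conversation.

```python
def conversation_with_warning(battery: int, max_turns: int = 60) -> str:
    """A conversation loop that, unlike the one above, can end on purpose."""
    depth = 0
    while battery > 0:
        if depth >= max_turns:
            # The Game Industry Act-style escape condition.
            return "warning: excessive AI use may interfere with normal daily life"
        battery -= 1  # each turn still drains the battery
        depth += 1
    return "battery depleted: the AI doesn't know the conversation ended"
```

With a full battery the warning fires first; with a nearly dead phone, the battery still wins.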


🍪 After I posted this entry, Claude Code, which edited it, came up with the title. Hmm… maybe we do need a warning like 'Excessive AI use may interfere with normal daily life.'