Being a Slow Thinker in the Age of Fast Thinking

Paul Graham is the founder of Y Combinator, the renowned Silicon Valley startup accelerator where companies like Airbnb and Dropbox got their start. A programmer by background, he has written essays on technology, society, and business with characteristic sharp insight.
His recent essay, 'Writes and Write-Nots', makes a prediction about writing and thinking in the AI era: within a few decades, people will divide into those who can write and those who cannot. As AI increasingly writes for us, the ability to write independently will become rare.
In particular, he argues that 'writing is thinking': believing you can think well without writing is an illusion. His prediction is ultimately a warning that future society will polarize into 'thinkers' and 'non-thinkers'.
However, the prediction that society will split into 'thinkers' and 'non-thinkers' is hardly novel. It is already all too visible around us. Since so-called 'algorithms', or personalized recommendation services, became widespread several years ago, people have found it increasingly difficult to escape filter bubbles. Netflix's documentary ⟨The Social Dilemma⟩, which addressed the problem of filter bubbles on social media, came out four years ago, yet the world's polarization has only deepened since then.
'Algorithm' is originally a technical term for a step-by-step procedure a computer can follow to solve a problem. Recently, however, it has come to mean 'here is what you might be interested in.' Just as watching a few baseball videos fills YouTube with baseball content, it keeps showing us what we already like, almost as if it were thinking for us. It spares us the effort of thinking for ourselves and choosing what to see. In that sense, it might even be a convenient and beneficial technology.
This 'algorithm' creates a 'filter bubble'. Like someone trapped inside a soap bubble, we become confined within a personalized filter. The filter bubble, in turn, reinforces confirmation bias: we keep confirming that our own views are correct and lose any need to think deeply or question anything. This is dangerous. Algorithms and filter bubbles ultimately erode the very ability to think. As Paul Graham said, it is the end of the thinking person.
I prided myself on recognizing this danger early. Four years ago, shocked after watching ⟨The Social Dilemma⟩ on Netflix, I began turning off personalized recommendations one by one: YouTube's suggested videos, personalized ads on Google and Instagram, even push notifications from various apps. It was a vow: 'I will decide what I see.'
Ironically, though, I recently embraced an even more powerful 'proxy for thought': generative AI. Every time I write, I develop ideas by conversing with AI, and whenever my thoughts stall, I ask AI for fresh perspectives. Sometimes I even get confused about whether an idea is mine or AI's.
Why did I, who rejected algorithmic recommendations, willingly accept AI's help? Perhaps because AI seemed to expand my thoughts through the format of 'conversation'. In fact, I even added a rule to the instructions I set for AI: "When I ask questions or talk, respond appropriately with questions to help me move in a better direction." But isn't this just another form of filter bubble that 'thinks for me'?
Like fast food, algorithms and AI provide us with quick and convenient 'fast food for thought'. AI produces rapid results, and algorithms only show us comfortable content. But just as fast food alone cannot sustain a healthy diet, Fast Thinking alone cannot foster deep, meaningful thought.
Conversing with AI is addictive. The UI that outputs lengthy responses in seconds makes me mistakenly believe I'm thinking for myself. Even now, I'm conversing with AI to quickly finish this article. It's the temptation of Fast Thinking.
But I've established some rules. The most crucial is to always have 'time for solitary thinking'. Before conversing with AI, I pick up pen and paper like before and draw a mind map. I also jot down thoughts as they come to me on my blog. By conversing with AI based on these organized thoughts, I can discover new perspectives without losing sight of my core ideas.
This preparation is necessary because conversations with AI have a peculiar nature. The biggest difference between talking to an AI and talking to another person is that the AI has no world of its own: no childhood traumas, no friends it fought with at school, no youthful romances, no period of spectacular failure or hitting rock bottom. It is simply a world of infinite data. This creates a unique paradox: the AI's infinity of data holds meaning only within the context we share with it. It can have a meaningful conversation only within the scope of the experiences and thoughts I describe. For all its infinite potential, meaningful dialogue with AI happens only where it meets my world. This is precisely why a conversation with AI ultimately becomes a conversation with myself.
This makes 'time for solitary reflection' even more crucial. For seven years, I've consistently maintained a habit of reflection: weekly, monthly, quarterly, and yearly reviews. Through these regular reflections, I've organized and developed my thoughts. It is precisely because I've built this accumulated personal world that richer conversations with AI become possible.
To properly utilize AI that aids Fast Thinking, we actually need time for Slow Thinking. AI can be a tool to expand our thoughts, but before that, we must not forget how to think for ourselves. As Paul Graham predicted, when the future divides into 'thinkers' and 'non-thinkers,' if you want to be a thinker, start by trusting and nurturing your own thoughts.