As a technology consultant, I exist at the intersection of two rapidly accelerating forces. On one side, I help clients build software, run engineering teams, and integrate new technologies like LLMs into their strategic roadmaps. On the other, I’m witnessing the breathtaking acceleration of AI development—new papers, frameworks, and breakthroughs that are emerging at an astonishing pace.
This acceleration creates a peculiar challenge.
Keeping up with the field – tracking new developments, understanding their implications, and translating them into actionable insights for clients – consumes enormous bandwidth. The faster AI accelerates, the shallower my thinking risks becoming.
Earlier this year, this tension hit a breaking point. Despite being more “informed” than ever, I found myself paralyzed. My thinking was stuck in the shallows – like wading in a knee-deep swimming pool – not getting anywhere. The quality of my output declined noticeably, precisely when deep thinking mattered most.
At its core, this is a physics problem: acceleration versus depth. My time and attention were finite resources being pulled apart by the rapid acceleration in the amount of information I was processing.
My decision-making framework had utterly broken down in the face of this information velocity. FOMO – the fear of missing out – had me chasing every new development, afraid to miss the next breakthrough that could benefit my clients or my company.
Counterintuitively, the solution came from pushing back against this acceleration. After revisiting Cal Newport’s “Deep Work,” I realized that meaningful thought requires resistance to the pull of constant updates. What surprised me was discovering more value in deeply reading books like Melanie Mitchell’s “AI: A Guide for Thinking Humans” or Ethan Mollick’s “Co-Intelligence” than skimming hundreds of tweets.
By shifting my attention to longer-form writing – carefully selected Substacks and long-form podcasts – I created space for ideas to resonate and combine. I began valuing my attention more than money, which made me comfortable investing in high-quality information sources. These weren’t expenses but investments in deeper thinking.
We must develop strategies to maintain depth as AI “slop” multiplies online. Yes, AI can help summarize and learn, but the fundamental challenge of deep thinking remains human.
My approach applies the Lindy Effect (via Nassim Nicholas Taleb’s Antifragile): the longer an idea has already survived, the longer it’s likely to remain relevant. If an idea has made it into a book, it’s likely to hold more value than a tweet or a blog post. This heuristic works for me, but something else might work better for you.
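As a toy illustration (not from the original), the Lindy-style filter can be sketched as a simple scoring rule that weights a reading source by how long its ideas have survived, damped for ephemeral formats. The `Source` type, the format weights, and the ages are all hypothetical, chosen only to make the heuristic concrete:

```python
from dataclasses import dataclass

@dataclass
class Source:
    title: str
    kind: str         # "book", "blog", "tweet", etc. (hypothetical categories)
    age_years: float  # roughly how long the ideas have been in circulation

def lindy_score(src: Source) -> float:
    """Crude Lindy heuristic: expected remaining relevance grows with
    how long the idea has already survived, damped for ephemeral formats."""
    format_weight = {"book": 1.0, "blog": 0.5, "tweet": 0.1}.get(src.kind, 0.3)
    return src.age_years * format_weight

reading_list = [
    Source("AI: A Guide for Thinking Humans", "book", 6.0),
    Source("Hot-take thread on a new model", "tweet", 0.01),
    Source("A thoughtful Substack essay", "blog", 1.0),
]

# Read the longest-surviving, least ephemeral sources first.
for src in sorted(reading_list, key=lindy_score, reverse=True):
    print(f"{lindy_score(src):5.2f}  {src.title}")
```

The exact weights don’t matter; the point is that the rule is deliberately biased toward material that has already paid a survival cost.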
The critical point is this: as knowledge workers and decision-makers, we need strategies to maintain deep thought amid accelerating change. Sometimes, this means doing something counterintuitive—like deleting Twitter/X or avoiding Hacker News—to create space for reflection.
It’s ironic that in trying to keep up with AI developments, I had to slow down to speed up. But that’s precisely what helped me move from drowning in information to making sense of it. Your mileage may vary.