# Interesting Reads on the Web
> [!metadata]- Metadata
> **Updated:** [[2025-11-19|November 19, 2025]]
> **Tags:** #reading
The web is full of signal and noise. This page is my attempt to amplify the signal: a collection of links and articles that stuck with me long after I closed the tab. No algorithm, no feed, just the discoveries I keep coming back to and share in conversation. Some taught me something new, others changed how I think about a topic, but all of them earned their place here.
## AI as a Reading Partner, Not Just a Writer
**Link:** [How to Use Claude Code as a Second Brain](https://every.to/podcast/how-to-use-claude-code-as-a-thinking-partner)
> This article describes the workflow I've been living for the past six months, except Noah articulated the "thinking mode not writing mode" frame I wish I'd discovered earlier. His point about AI's reading ability being undervalued versus writing clicked hard: after destroying my Obsidian vault in 2024 with crude prompts, I learned LLMs need boundaries, but Noah shows how to create *structured exploration* where Claude Code safely roams 1,500 notes because the scaffolding guides it. The mobile setup (Tailscale + terminal + SSH) mirrors mine exactly, and that "stopped by a pond to fix code" moment resonates; I've done that multiple times now. What I'm realizing: I've focused on using Claude Code to [[18 Months of Learning to Build Software with LLMs|build things]], but I'm underusing its ability to surface connections in what I've already captured. Time to explore those higher gears.
## TypeScript's Rise Shows Where Developers Actually Work
**Link:** [Octoverse: A new developer joins GitHub every second as AI leads TypeScript to #1](https://github.blog/news-insights/octoverse/octoverse-a-new-developer-joins-github-every-second-as-ai-leads-typescript-to-1/)
> TypeScript hitting #1 validates my entire "PM-turned-shipper" existence. It's not just a language preference; it's a survival mechanism when you're letting AI write production code. As someone who relies on an `infra-expert` sub-agent to decipher compilation errors, I've learned that strong typing is the insurance policy that keeps my logic from breaking the build. The stat that 94% of LLM errors are type-check failures explains exactly why my Next.js projects succeed where my early Python attempts flailed. It confirms that "developer" no longer means "syntax expert"; it means someone who can direct agents to ship. My terminal-first, agent-heavy workflow isn't an anomaly anymore; it's just the future arriving faster than expected. Plus, as noted in [this Reddit thread](https://www.reddit.com/r/LLMDevs/comments/1mv0j7p/is_typescript_starting_to_gain_traction_in_aillm/), this shift signals that AI is moving from research (Python's stronghold) to actual application building, where TypeScript's tooling is essential for validating error-prone LLM output.
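A minimal sketch of the failure mode strict typing catches. This example is mine, not from the Octoverse report; the `User` interface and function names are illustrative. The point is that an agent's plausible-but-wrong field name dies at `tsc` time instead of in production.

```typescript
// An interface pins down the shape the rest of the code can rely on.
interface User {
  id: number;
  name: string;
  email: string;
}

function formatGreeting(user: User): string {
  return `Hello, ${user.name} <${user.email}>`;
}

// Agent-generated data that matches the contract: compiles and runs.
const fromAgent: User = { id: 1, name: "Ada", email: "ada@example.com" };
console.log(formatGreeting(fromAgent));

// A classic LLM slip: inventing a near-miss field name.
// formatGreeting({ id: 2, name: "Bob", mail: "bob@example.com" });
// ^ rejected by the compiler: 'mail' does not exist in type 'User',
//   and 'email' is missing. The error never reaches the build.
```

In an untyped language the bad call above would run and fail silently (or print `undefined`); the type checker turns it into a build-blocking error, which is exactly the safety net the article credits for TypeScript's rise.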
## The Zero-Click Paradox: Why Google is the Real AI Threat
**Link:** [How Google and AI are Killing Travel Blogs Like Mine](https://www.dangerous-business.com/how-google-and-ai-are-killing-travel-blogs-like-mine/)
> This travel blogger's 40% traffic drop feels personal, but the data tells a more complex story. While we blame ChatGPT and Perplexity, they account for less than 1% of web traffic. The real predator is Google itself adopting the "answer engine" model. With 58.5% of searches now ending in "zero clicks," Google has effectively become the AI wrapper we feared. As someone who uses Perplexity MCP to scrape answers for my own research without ever visiting the source domains, I have to admit I'm part of this ecosystem shift. The game isn't about ranking #1 anymore; it's about becoming the "trusted source" that the AI cites in its summary. We're moving from an economy of *clicks* to an economy of *citations*, and most publishers (including me) aren't ready for that math.
## How a Gruff Mentor Teaches You to Revise Your Life
**Link:** [Second and Long - The American Scholar](https://theamericanscholar.org/second-and-long/)
> I've been thinking about how Yarbrough describes Whitehead's relentless pursuit of perfection extending beyond just manuscripts, into "utterances and actions, of slights and omissions." That detail stuck with me because it captures something I hadn't considered before: the best teachers aren't the ones who just critique your work, they're the ones who model a kind of rigorous self-examination in how they live. The way Whitehead second-guesses himself about whether that first meeting was a "command performance," even after the conversation was long over, shows someone who has internalized that revision instinct into his whole life. It made me realize why certain mentors haunt you long after you leave them.
## Scaffolding First, AI Second
**Link:** [Vibing a Non-Trivial Ghostty Feature](https://mitchellh.com/writing/non-trivial-vibing)
> What stands out in Mitchell's workflow is the structure. He starts with manual planning, breaks work into small pieces (UI, backend, integration), and cleans up constantly. When agents fail (and they do, spending multiple sessions stuck on a titlebar bug), he stops prompting and either solves it manually or pivots strategy. The scaffold-first approach is the key pattern: write incomplete functions with descriptive names, add TODO comments explaining what needs to happen, then let AI complete it. This works because the agent has enough context to succeed without understanding the full scope. The cleanup sessions matter more than they seem. Moving code to better locations, adding documentation, restructuring data models: these steps force you to understand what the AI wrote and create better foundations for future sessions. Agents are excellent at generating test scenarios and simulations even when the output is messy, which is fine for non-shipping code. The "what else am I missing" prompt at the end consistently surfaces real issues. The real unlock isn't speed; it's async work during non-coding time.
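The scaffold-first pattern above can be sketched like this. Everything here is a hypothetical illustration, not code from Ghostty: the human writes the precise names, types, and TODO comments, and the agent only has to fill in small, well-bounded bodies (shown completed).

```typescript
// Scaffold written by hand; TODO comments give the agent its context.
interface LogEntry {
  timestamp: string;
  level: "info" | "warn" | "error";
  message: string;
}

// Human scaffold: descriptive name + TODO stating the contract.
function parseLogLine(line: string): LogEntry | null {
  // TODO(agent): parse "<ISO timestamp> <level> <message>".
  // Return null for malformed lines instead of throwing.
  const match = line.match(/^(\S+)\s+(info|warn|error)\s+(.*)$/);
  if (!match) return null;
  return {
    timestamp: match[1],
    level: match[2] as LogEntry["level"],
    message: match[3],
  };
}

// The integration point is scaffolded separately, keeping each piece small.
function parseLog(text: string): LogEntry[] {
  // TODO(agent): parse every line, silently dropping malformed ones.
  return text
    .split("\n")
    .map(parseLogLine)
    .filter((entry): entry is LogEntry => entry !== null);
}
```

The design choice mirrors the article's claim: the agent never needs the full scope, because each TODO is independently completable, and the typed signatures make a wrong completion fail fast.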
## Talk About Their Life, Not Your Product
**Link:** [The Mom Test for Better Customer Interviews](https://www.looppanel.com/blog/customer-interviews)
> Questions about the future are optimistic lies. "Would you pay for X?" gets polite yeses. "How much does the problem cost you now?" reveals whether they actually care. The Mom Test forces better questions by never mentioning your idea. Walk through what happened last time the problem came up. If they haven't searched for solutions, they won't buy yours. Commitment (time, money, intros) separates real interest from politeness.
## Why Your PKM Needs a Darkroom
**Link:** [Mike's Idea System 2.0](https://thesweetsetup.com/mikes-idea-system-2-0/)
> Mike identifies the exact problem in most PKM workflows: the gap between networked notes and finished output. I've hit this in Obsidian many times: a web of connected ideas that looks great in graph view but doesn't turn into clear writing. Backlinks show you what's related, but they don't give you structure or order. That's where mind mapping fits. It's the step that forces you to take scattered, connected thoughts and shape them into something with flow. The darkroom metaphor works: raw ideas need processing before they're ready for anyone else. The curation filter matters too. Obsidian's strength is in good connections, which means not every captured idea should be permanent. Letting ideas sit creates the distance needed to judge them fairly, and that filter keeps your PKM valuable instead of overwhelming.
## Why Home Assistant Built a Moat Around Itself
**Link:** [The little smart home platform that could](https://www.theverge.com/24135207/home-assistant-announces-open-home-foundation)
> The foundation structure is smart defensiveness disguised as growth strategy. Home Assistant hit a million users by being the platform you graduate to after outgrowing Big Tech options, but that path has a ceiling. Matter changes the game: it makes local control and interoperability accessible to people who don't want to learn YAML. Home Assistant sees the window closing. The Open Home Foundation legally prevents acquisition and keeps Nabu Casa at arm's length, which protects the core while letting them chase mainstream distribution. What makes this work is that they're not just protecting principles for the sake of it; they're building consumer-facing products, selling on Amazon, and simplifying onboarding. The "Home-approval factor" research is the tell. They know the platform is too complex for most households, and they're willing to split the UI if needed. The risk is becoming SmartThings: easier but neutered. The Swiss legal structure is the insurance policy that lets them try.
## Copying Competitors is Easier than Solving Problems
**Link:** [Why I Left Google to Join Grab](https://medium.com/p/86dfffc0be84)
> Yegge nails why large companies lose their edge: politics, risk aversion, arrogance, and competitor obsession. Google employees are individually brilliant but collectively incapable of shipping anything that matters. The incentive structure rewards launches, so teams copy competitors because it's safer than solving customer problems. The Grab narrative works because it highlights what's missing: urgency and customer contact. The land rush in Southeast Asia is massive because the fundamentals are different: half a billion people without credit cards, but everyone has smartphones. Ride-hailing isn't just cheaper transport, it's economic infrastructure. The "go to the ground" mantra matters because innovation requires proximity to real problems. Amazon does it once a year. Google never. Grab builds it into daily operations.