# Interesting Reads on the Web

> [!metadata]- Metadata
> **Updated:** [[2025-10-30|October 30, 2025]]
> **Tags:** #🌐 #reading

The web is full of signal and noise. This page is my attempt to amplify the signal: a collection of links and articles that stuck with me long after I closed the tab. No algorithm, no feed, just the discoveries I keep coming back to and share in conversation. Some taught me something new, others changed how I think about a topic, but all of them earned their place here.

## When the Tools You're Competing Against Steal Your Content

**Link:** [How Google and AI are Killing Travel Blogs Like Mine](https://www.dangerous-business.com/how-google-and-ai-are-killing-travel-blogs-like-mine/)

> I didn't realize that AI companies literally trained their models on people's work without permission until reading this. The detail about Google deciding it would be "too complicated" to give publishers a choice really got me, because that's not a technical limitation; it's a choice to prioritize speed over fairness. It also explains why this blogger's traffic dropped 40% in one year: ChatGPT can give you the travel advice directly instead of sending you to the blog that originally created it.

## How a Gruff Mentor Teaches You to Revise Your Life

**Link:** [Second and Long - The American Scholar](https://theamericanscholar.org/second-and-long/)

> I've been thinking about how Yarbrough describes Whitehead's relentless pursuit of perfection extending beyond manuscripts, into "utterances and actions, of slights and omissions." That detail stuck with me because it captures something I hadn't considered before: the best teachers aren't the ones who just critique your work; they're the ones who model a kind of rigorous self-examination in how they live.
> The way Whitehead second-guesses himself about whether that first meeting was a "command performance," even after the whole conversation, shows someone who has internalized that revision instinct into his whole life. It made me realize why certain mentors haunt you long after you leave them.

## Scaffolding First, AI Second

**Link:** [Vibing a Non-Trivial Ghostty Feature](https://mitchellh.com/writing/non-trivial-vibing)

> What stands out in Mitchell's workflow is the structure. He starts with manual planning, breaks work into small pieces (UI, backend, integration), and cleans up constantly. When agents fail, and they do, spending multiple sessions stuck on a titlebar bug, he stops prompting and either solves it manually or pivots strategy.
>
> The scaffold-first approach is the key pattern: write incomplete functions with descriptive names, add TODO comments explaining what needs to happen, then let the AI complete them. This works because the agent has enough context to succeed without understanding the full scope.
>
> The cleanup sessions matter more than they seem. Moving code to better locations, adding documentation, restructuring data models: these steps force you to understand what the AI wrote and create better foundations for future sessions. Agents are excellent at generating test scenarios and simulations even when the output is messy, which is fine for non-shipping code. The "what else am I missing" prompt at the end consistently surfaces real issues.
>
> The real unlock isn't speed; it's async work during non-coding time.

## Talk About Their Life, Not Your Product

**Link:** [The Mom Test for Better Customer Interviews](https://www.looppanel.com/blog/customer-interviews)

> Questions about the future are optimistic lies. "Would you pay for X?" gets polite yeses. "How much does the problem cost you now?" reveals whether they actually care. The Mom Test forces better questions by never mentioning your idea. Walk through what happened the last time the problem came up.
> If they haven't searched for solutions, they won't buy yours. Commitment (time, money, intros) separates real interest from politeness.

## Why Your PKM Needs a Darkroom

**Link:** [Mike’s Idea System 2.0](https://thesweetsetup.com/mikes-idea-system-2-0/)

> Mike identifies the exact problem in most PKM workflows: the gap between networked notes and finished output. I've hit this in Obsidian many times: a web of connected ideas that looks great in graph view but doesn't turn into clear writing. Backlinks show you what's related, but they don't give you structure or order.
>
> That's where mind mapping fits. It's the step that forces you to take scattered, connected thoughts and shape them into something with flow. The darkroom metaphor works: raw ideas need processing before they're ready for anyone else.
>
> The curation filter matters too. Obsidian's strength is in good connections, which means not every captured idea should be permanent. Letting ideas sit creates the distance needed to judge them fairly, and that filter keeps your PKM valuable instead of overwhelming.

## Why Home Assistant Built a Moat Around Itself

**Link:** [The little smart home platform that could](https://www.theverge.com/24135207/home-assistant-announces-open-home-foundation)

> The foundation structure is smart defensiveness disguised as a growth strategy. Home Assistant hit a million users by being the platform you graduate to after outgrowing Big Tech options, but that path has a ceiling. Matter changes the game: it makes local control and interoperability accessible to people who don't want to learn YAML. Home Assistant sees the window closing.
>
> The Open Home Foundation legally prevents acquisition and keeps Nabu Casa at arm's length, which protects the core while letting them chase mainstream distribution. What makes this work is that they're not just protecting principles for the sake of it; they're building consumer-facing products, selling on Amazon, and simplifying onboarding.
> The "Home-approval factor" research is the tell. They know the platform is too complex for most households, and they're willing to split the UI if needed. The risk is becoming SmartThings: easier but neutered. The Swiss legal structure is the insurance policy that lets them try.

## Copying Competitors is Easier than Solving Problems

**Link:** [Why I Left Google to Join Grab](https://medium.com/p/86dfffc0be84)

> Yegge nails why large companies lose their edge: politics, risk aversion, arrogance, and competitor obsession. Google employees are individually brilliant but collectively incapable of shipping anything that matters. The incentive structure rewards launches, so teams copy competitors because it's safer than solving customer problems.
>
> The Grab narrative works because it highlights what's missing: urgency and customer contact. The land rush in Southeast Asia is massive because the fundamentals are different: half a billion people without credit cards, but nearly everyone with a smartphone. Ride-hailing isn't just cheaper transport; it's economic infrastructure.
>
> The "go to the ground" mantra matters because innovation requires proximity to real problems. Amazon does it once a year. Google never does. Grab builds it into daily operations.