We ❤️ Open Source
A community education resource
What submarines and lemon juice reveal about building real AI expertise
Why vibe coding and AI hype mirror historic tech disasters and failed robberies.
Using cutting-edge technology without proper training has consequences, from sinking submarines to failed bank robberies. In his presentation at All Things Open, Chris Heilmann from Microsoft shares how a 1945 German submarine sank on its maiden voyage because nobody properly understood its state-of-the-art toilet system, and why this perfectly captures the current rush to adopt AI tools without understanding their limitations or risks.
Chris opens with the story of submarine U-1206, which had revolutionary toilet technology that could flush underwater. An engineer opened the wrong valve on the high-tech toilet, flooding the battery compartment with seawater and waste and forcing an emergency ascent, after which the surfaced boat was immediately attacked by aircraft and lost. Three people died and 46 were captured, all because state-of-the-art technology met improper training. This connects directly to what Chris calls vibe coding, where developers rely on AI tools they don't fully understand to ship code faster without considering the consequences.
Read more: The third decade of open source: New rules for developers
The talk explores the Dunning-Kruger effect, illustrated by a 1995 bank robber who covered his face in lemon juice believing it would make him invisible to cameras, since lemon juice works as invisible ink on paper. He walked confidently into two banks, stared straight at the security cameras, and was promptly arrested. This cognitive bias, where people with little expertise assume they have superior knowledge, now defines the AI landscape. LinkedIn is full of self-proclaimed "AI experts," and LLMs confidently deliver wrong answers before apologizing without learning, creating an interaction model that digitizes overconfidence at scale.
Chris argues that while AI is useful and here to stay, current implementations represent the antithesis of open. Chat systems scrape content without permission or attribution. AI-aided development creates subscription dependence, leaving developers unable to work when services go down. AI overviews replace search with corporate-controlled truth. Vibe coding produces dubious legacy code. Agents with credit card access create security nightmares, undoing years of browser-hardening work. The open source community protected the web from corporate takeover before and needs to do it again, making AI efficient and transparent rather than magical and opaque.
Read more: Open source won, so why are we still fighting?
Key takeaways
- Rushing to deploy technology without understanding it creates disasters. From submarines to vibe coding, the pattern repeats when adoption speed exceeds comprehension and proper training.
- AI tools digitize the Dunning-Kruger effect. Confident wrong answers and instant expertise claims mask fundamental gaps in understanding and create false confidence in flawed outputs.
- Open source principles must guide AI development. The community needs to demand transparency, proper attribution, and real understanding over magical thinking and corporate control.
AI can make developers more efficient, but it can’t make anyone an expert in everything just because a machine says so. Chris challenges the community to stop accepting magical explanations and start applying the same dedication to openness that protected the web. Understanding where the screws are and when not to flush matters more than having the latest state-of-the-art tools.
More from We Love Open Source
- Want to get into AI? Start with this.
- Deep dive into the Model Context Protocol
- The secret skill every developer needs to succeed with AI today
- The third decade of open source: New rules for developers
- Open source won, so why are we still fighting?
The opinions expressed on this website are those of each author, not of the author's employer or All Things Open/We Love Open Source.