We ❤️ Open Source
A community education resource
Stop guessing, start measuring developer engagement
Creating checkpoints that reveal exactly where your onboarding fails.
Most companies create developer content and documentation but struggle to measure whether any of it actually works. In this episode, Jono Bacon, CEO of Stateshift, joins the We Love Open Source podcast to share why developer engagement requires a systematic approach built around measurable gates, and how AI is revealing nuances in onboarding experiences that would have been impossible to detect before.
Jono breaks down developer onboarding into specific gates that track real behavior, not vanity metrics. For example, gate one captures website visits and signups. Gate two measures a single API call, proof someone actually did something. Gate three shows 10 API calls, meaning they completed a quickstart guide. Gate four reaches 100 API calls, indicating production testing. Gate five hits 500 API calls, signaling active engagement. This systematic breakdown identifies exactly where people drop off, but historically you needed deep expertise to analyze that data. You had to know how to ask the right questions.
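The gate model above is easy to express in code. Here is a minimal sketch, assuming each developer is identified by a key and tracked by API call count; the gate names and thresholds come from the episode, while the function and variable names are illustrative:

```python
# Gates from the episode, ordered highest threshold first so the
# first match is the furthest gate a developer has reached.
GATES = [
    (500, "Gate 5: active engagement"),
    (100, "Gate 4: production testing"),
    (10,  "Gate 3: completed quickstart"),
    (1,   "Gate 2: first API call"),
    (0,   "Gate 1: visited / signed up"),
]

def gate_for(api_calls: int) -> str:
    """Return the furthest gate reached for a given API call count."""
    for threshold, name in GATES:
        if api_calls >= threshold:
            return name
    return GATES[-1][1]

def funnel(users: dict[str, int]) -> dict[str, int]:
    """Count how many developers sit at each gate, to spot drop-off."""
    counts = {name: 0 for _, name in GATES}
    for calls in users.values():
        counts[gate_for(calls)] += 1
    return counts
```

A sharp drop between two adjacent gates (say, many developers at Gate 2 but few at Gate 3) points directly at the step of the onboarding experience that needs work.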
AI changed that equation. Jono’s team loads specific data into AI tools and discovers nuances they would never detect manually. In one example, AI identified that a company’s social media messaging was slightly off, something the team wouldn’t have caught otherwise. This democratizes insights that previously required expensive consultants. Jono also uses tools like Peec to track how often brand names and content appear in LLM outputs, running daily prompts to monitor visibility. Their brand recognition was struggling, but tracking that metric helped focus their content strategy and improve the results.
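The visibility metric Jono describes boils down to a simple rate: of the LLM responses you collect each day, how many mention your brand? This sketch is not Peec itself, just an illustration of the underlying measurement, assuming you have already gathered the day’s responses as plain strings:

```python
import re

def mention_rate(brand: str, responses: list[str]) -> float:
    """Fraction of collected LLM responses that mention the brand.

    Case-insensitive substring match; `brand` and `responses` are
    placeholders for whatever name and daily prompt outputs you track.
    """
    if not responses:
        return 0.0
    pattern = re.compile(re.escape(brand), re.IGNORECASE)
    hits = sum(1 for r in responses if pattern.search(r))
    return hits / len(responses)
```

Logging this rate daily turns "are LLMs recommending us?" from a hunch into a trend line you can act on.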
The biggest mistake organizations make is not measuring at all. Companies attend conferences like All Things Open but can’t quantify the value because they don’t track whether developers actually use their products afterward. Without systematic measurement, there’s no way to improve. Jono’s advice: embrace iteration and get comfortable with failure, because failure delivers useful information. Like stand-up comedians who bomb repeatedly in small clubs before perfecting their Netflix special, product teams need to iterate constantly to figure out what works.
Read more: How I use AI agents to automate my workflow and save hours
Key takeaways
- Break onboarding into measurable gates. Track specific behaviors like first API call, completing quickstart, production testing, and active usage to identify exactly where people drop off.
- AI democratizes engagement insights. Loading onboarding data into AI tools reveals nuances that would be impossible to detect manually or would require expensive consultants.
- Measure or you won’t improve. Companies can’t quantify impact without systematic measurement across the entire developer journey.
Developer engagement isn’t mysterious; it just requires breaking the experience into measurable steps and using the right tools to understand what’s working. Jono’s systematic approach turns developer onboarding from guesswork into a process you can actually optimize.
More from We Love Open Source
- Using metrics to improve open source communities
- How I use AI agents to automate my workflow and save hours
- Get started in open source with the CHAOSS Education Project
- Measuring open source community health with Savannah
The opinions expressed on this website are those of each author, not of the author's employer or All Things Open/We Love Open Source.