AI Amplified Development: Beyond Vibe Coding - Event Recap
Thanks to everyone who joined us in Chicago for an open conversation about AI-amplified development and its role in our industry. These discussions are only becoming more important as the ground beneath software development keeps shifting.
Looking forward to keeping the momentum going and seeing where it all leads next.
Below is a recap of what stood out to us.
1. Shifting Mindsets
- It’s clear a major cultural shift is underway: developers are learning to let go of treating code as precious. Instead of clinging to large codebases, the focus is moving toward the "what" and "why" of software, not just the "how."
- There’s a growing willingness to start over—to explore a partially working solution, then reframe or rebuild it with fresh context.
- The long-term mindset: adapt and evolve with tools, but rely on your own ability to learn. Our friend Dustin Anderson had perhaps the quote of the day: “The model to stick with long-term is yourself.”
2. Prompting, Specificity, and Workflow Success
- AI tools like Claude Code are highly dependent on specificity and context. The better the prompt and problem framing, the better the results.
- One group discussed using workflow tools (like NAM?) to compartmentalize steps and tune processes, improving outcomes from LLMs.
- The importance of structure and breaking work into discrete, contextual pieces came up repeatedly (a rough sketch of that idea follows this list).
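To make that last point concrete, here is a minimal Python sketch (using the Anthropic SDK) of breaking one feature request into discrete, tightly framed steps, with each step’s output carried forward as context for the next. The step list, prompts, and model alias are illustrative assumptions, not something prescribed at the event.

```python
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment
MODEL = "claude-sonnet-4-20250514"  # assumption: substitute whichever model alias you use

# Instead of one vague "build me a signup feature" prompt, each step gets its own
# narrowly framed request, and the previous step's output becomes the next step's context.
steps = [
    "List the edge cases a signup form for a B2B SaaS app must handle. Output a bullet list only.",
    "Write plain-prose validation rules that cover every edge case below.",
    "Write Python functions implementing the validation rules below, with type hints and docstrings.",
]

context = ""
for prompt in steps:
    response = client.messages.create(
        model=MODEL,
        max_tokens=1024,
        system="You are helping with one narrow step of a larger feature. Stay within the step.",
        messages=[{"role": "user", "content": f"{prompt}\n\n---\n{context}"}],
    )
    context = response.content[0].text  # this step's output feeds the next step
    print(f"--- step output ---\n{context}\n")
```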
3. Navigating Tensions
- There was discussion about potential intergenerational divides in how AI tools are adopted:
  - Older professionals may bring depth and experience but less familiarity with modern AI tools.
  - Younger folks may experiment more, but questions remain: Will they still need to learn to code? When does AI replace or augment that skill?
- This led to a broader theme: When do we rely on expertise, structure, and process vs AI-driven agility?
- One historical analogy was made to Visual Basic: non-programmers could build real apps—useful, if limited. We’re now facing the same blurred lines with AI.
4. Tool Fragmentation and Future-Proofing
- Attendees raised concerns about tool fragmentation and uncertainty: it’s hard to say which tools will still be around in five years.
- There’s a need for long-term thinking:
  - Contracts with out-clauses
  - Migration strategies
  - Code portability
  - Standards that scale from startup to enterprise
- Balancing morale vs velocity is another leadership challenge—how to move fast while keeping teams grounded and motivated.
5. Claude Code & Emerging Workflows
- Claude Code impressed many and is being used creatively:
  - One attendee described using Cursor to write user stories, then Claude to build features.
  - Another mentioned Claude auto-generating commit history and invoices.
- Several groups discussed agentic development—where an AI agent orchestrates tooling.
- One example involved crypto platforms using APIs like CoinGecko:
  - A new MCP server was added to the agent interface, removing the need to work directly with complex APIs. “That was a game changer.” (A rough sketch of this pattern follows below.)
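Here is a minimal sketch of that idea: a server that wraps CoinGecko’s public /simple/price endpoint behind a single tool, using the official Python MCP SDK (`pip install mcp httpx`). The server name, tool, and parameters are illustrative assumptions; this is not the setup the attendee described.

```python
import httpx
from mcp.server.fastmcp import FastMCP

# Once registered in an MCP-capable client's configuration, the agent discovers
# this server's tools by name and can call them directly.
mcp = FastMCP("coingecko-prices")

@mcp.tool()
def get_price(coin_id: str, vs_currency: str = "usd") -> dict:
    """Return the current price of a coin (e.g. 'bitcoin') in the given currency."""
    resp = httpx.get(
        "https://api.coingecko.com/api/v3/simple/price",
        params={"ids": coin_id, "vs_currencies": vs_currency},
        timeout=10.0,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio for the agent to call
```

Once registered, the agent calls get_price instead of reasoning about raw HTTP endpoints, which is the “game changer” effect described above.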
6. Trust, Quality, and System Reliability
- Across industries, especially in healthcare and enterprise, the central challenge is trust and reliability.
- One line summed it up: “It will always boil down to reliability and trust.”
- Developers and engineers have to be accountable for code generated by AI.
- Questions were raised about:
  - How to ensure consistent quality
  - How to vet or select tools
  - How to scale processes across teams with different work styles
7. Checks and Balances
- Some teams are exploring LLM-on-LLM supervision: using one large language model to monitor or verify the output of another.
- For example: one LLM generates code, another flags issues or corrects it. This concept echoes the need for checks and balances, potentially reducing hallucinations and increasing reliability. (A minimal sketch of this pattern follows below.)
- Tools like Claude Code’s code reviewer or Gemini Code Assist can be helpful for first-pass reviews.
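Here is a minimal Python sketch of that generator/reviewer pattern, using the Anthropic SDK for both roles. The prompts, the single review pass, and the model alias are illustrative assumptions rather than a production pipeline, and a human still owns the final decision.

```python
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment
MODEL = "claude-sonnet-4-20250514"  # assumption: substitute whichever model alias you use

def ask(system: str, user: str) -> str:
    """Send one prompt and return the model's text reply."""
    response = client.messages.create(
        model=MODEL,
        max_tokens=2048,
        system=system,
        messages=[{"role": "user", "content": user}],
    )
    return response.content[0].text

# Pass 1: one model call generates a draft implementation.
task = "Write a Python function that parses ISO-8601 dates and rejects anything else."
draft = ask("You are a developer. Return only code.", task)

# Pass 2: a second call reviews the draft against the original task, flagging bugs,
# missing edge cases, and anything that looks like a hallucinated API.
review = ask(
    "You are a strict code reviewer. List concrete defects and suggest fixes. "
    "If something looks invented or unverifiable, say so.",
    f"Task:\n{task}\n\nDraft:\n{draft}",
)

print(review)  # an accountable human still decides what ships
```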
8. Open Questions
- At what point does a product or idea need to transition from low-code or AI-built prototypes into a more “real” language or professional engineering hands?
- How do we create systems of integrity around AI-assisted development before it scales too far without oversight?
- Will non-developers continue to build increasingly complex tools—and should they?
From rethinking long-held engineering mindsets to experimenting with emerging tools like Claude Code, it’s clear the way we build is changing fast. These conversations are rooted in curiosity, healthy skepticism, and real-world application, and they’re essential as we navigate what comes next.
Stay tuned for future gatherings as we continue to explore, question, and shape the future of AI-amplified development together.

Rob Volk
Rob Volk is Foxbox Digital’s founder and CEO. Prior to starting Foxbox, Rob helped Fortune 500 clients, including Pfizer, USPS, and Morgan Stanley, build and scale enterprise apps. He was the CTO of Beyond Diet, where he implemented technology that scaled to 350k+ customers, and was the CTO and Co-Founder of Detective (detective.io), a venture-backed intelligence platform that amassed 200k+ users in a short time frame.