
Why AI Bootcamps actually work

    Chantelle Gallagher, Lead Business Analyst
    Published: March 3, 2026

    Last month we ran our AI Bootcamp again: another iteration of our AI-enhanced approach to building software. Learning stayed the primary goal; delivering a working resource allocation tool was a useful secondary target. We made progress on the tool, reinforced what works in our process, and surfaced what we still want to improve. The takeaway we keep seeing from these Bootcamps: they work well, and we learn something new every time.

    Learning First, Delivery Second

    We run these bootcamps with learning as the main goal. Our delivery target this time was a resource allocation tool, but our primary focus was on how we adapt SDLC practices for AI tooling, evolve roles and skills, and use a shared language for "AI-enhanced" versus "standard" projects. This time we:

    • Applied our three-stage process (Requirements → Solution Definition → Code Creation)
    • Updated our documented AI behaviour and best practices
    • Used our baseline workflow for AI-assisted development

    What Actually Worked

    Several things held up again:

    • Structured process — Our repeatable 7-step workflow within each stage keeps requirements, solution design, and code creation from becoming ad hoc.
    • Context before creation — Setting clear context and deliverables before asking the AI to generate anything continued to improve quality.
    • Scaffolding — Creating file structures and headers first, then filling in content section-by-section, gave us control and made verification easier (see the sketch after this list).
    • Separating stable requirements from changeable UI — Kept the "what" stable while we iterated on the "how."
    • AI-assisted development environment — The environment itself, especially its contextual callouts, noticeably boosted day-to-day efficiency.
    • Real stakeholder problems — Tackling a genuine problem with real stakeholders in the room kept the work grounded and relevant.
    • External expertise and dedicated on-call support — Sharpened focus and alignment.
    • Cross-functional collaboration — Brought diverse perspectives and better outcomes.
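
    To make the scaffolding point concrete, here is a minimal sketch of the "structure first, content later" shape. The module, class, and function names below are hypothetical, invented for illustration rather than taken from our actual resource allocation tool. The AI is first asked to produce only a skeleton like this; we verify it against the agreed deliverables, then fill in one function at a time in separate, focused requests.

        # allocation.py: hypothetical skeleton for a resource allocation module.
        # Step 1: generate only this structure (headers, signatures, docstrings)
        #         and verify it before any logic is written.
        # Step 2: fill in one function at a time, each in its own focused request.

        from dataclasses import dataclass
        from datetime import date

        @dataclass
        class Assignment:
            """A person allocated to a project for a date range (illustrative)."""
            person: str
            project: str
            start: date
            end: date

        def load_assignments(path: str) -> list[Assignment]:
            """Read current assignments from storage."""
            raise NotImplementedError  # filled in during a later pass

        def find_conflicts(assignments: list[Assignment]) -> list[tuple[Assignment, Assignment]]:
            """Return pairs of assignments that overlap for the same person."""
            raise NotImplementedError  # filled in during a later pass

        def propose_allocation(assignments: list[Assignment], request: Assignment) -> bool:
            """Check whether a requested assignment fits without conflicts."""
            raise NotImplementedError  # filled in during a later pass

    Because each function is filled in separately, every generation request stays small, and review happens at the same granularity as the scaffold.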

    In short, structure, context, scaffolding, and human-led direction keep making AI useful instead of noisy.

    What We Learned (The Good and The Tricky)

    We reinforced and updated lessons we already use:

    • AI is intelligent but not smart — We verify understanding by asking for summaries and agreeing on deliverables before generation.
    • Scaffolding before creating and phased creation beat "generate everything at once."
    • New AI conversation per distinct task reduced unwanted context carryover and scope creep.
    • Focused questions get detailed answers; broad questions hit token limits and stay shallow.
    • Impact analysis — When we change one artefact, we explicitly ask the system to find and update all affected files (see the prompt sketch after this list).
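
    The conversation-hygiene lessons above are easy to state and easy to forget, so it helps to treat the key prompts as reusable templates. The sketch below shows the idea; the wording is illustrative, not the exact phrasing from our guidelines.

        # Illustrative prompt templates for the lessons above. The wording is
        # hypothetical; the structure is what matters.

        # Impact analysis: when one artefact changes, ask explicitly for the
        # blast radius before anything is edited.
        IMPACT_ANALYSIS = (
            "The requirement '{requirement}' has changed as follows: {change}. "
            "List every file and document affected, explain why each is affected, "
            "and propose the minimal update to each. Do not make any changes yet."
        )

        # Scope guardrail: state what must NOT be created, not just what should be.
        SCOPE_GUARDRAIL = (
            "Implement only {task}. Do not create new files, functions, or "
            "dependencies beyond those listed. If something seems missing, stop "
            "and ask instead of adding it."
        )

        if __name__ == "__main__":
            # Each distinct task gets its own fresh conversation, seeded with a
            # focused, template-based prompt rather than one broad question.
            print(IMPACT_ANALYSIS.format(
                requirement="REQ-014 booking window",
                change="maximum horizon extended from 30 to 90 days",
            ))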

    We also saw again where we overcorrect:

    • Over-reliance on AI for initial requirements and guidance dulled critical thinking and ownership. The balance we aim for is human-led requirements and decisions, with AI assisting: augmenting, not replacing.
    • Hands-on participation beat passive observation; we keep designing sessions to favour doing over watching.
    • Aligning training with real stakeholder needs kept the bootcamp relevant and participants engaged.

    What We're Still Refining

    We're upfront about what we're still improving:

    • Coordination in parallel development — Developers still step on each other's toes; we've named the problem and are improving our coordination protocols.
    • Requirements vs solution boundaries — These can blur; we use a principle (stable business logic in requirements, changeable UI in solution design) but need to keep reinforcing it. For example, "an allocation must not exceed a person's capacity" belongs in requirements, while how conflicts are highlighted on screen belongs in solution design.
    • Scope creep from the AI — "Doing more than asked" still requires discipline: new conversations for new tasks and explicit "do not create" instructions.
    • Tool fit — Our AI development tool isn't the right fit for UI design; we use dedicated design tools for UI work and the main AI tool for code and contextual help.
    • Process flexibility — We maintain a baseline to experiment with while leaving room for different project and team needs.
    • Placeholder use cases — Scenarios such as requirement changes mid-build and working in existing codebases are still placeholders in our documentation; we're filling them in over time.

    We have a working process and a clear list of improvements. That's the norm for us: honest about what we're still refining.

    Where We Go From Here

    Each bootcamp feeds the next. We don't treat this as a one-off.

    Next:

    • Document updates to our 7-step workflow and baseline for AI-enhanced projects
    • Refresh prompting guidelines and team coordination protocols

    Right after that:

    • Share the process with other teams for feedback
    • Refine terminology and templates
    • Test the workflow beyond greenfield product creation

    Longer term:

    • Keep evaluating and refining our AI-SDLC
    • Evolve role definitions for AI-enhanced work
    • Develop metrics for AI tool effectiveness

    We're also acting on our lessons-learned appendix: future bootcamps will continue to include a "control and direction" module, more non-greenfield scenarios, and structured lessons on AI techniques beyond basic chatbot use.

    Why This Matters for Anyone Adopting AI in Development

    For teams building or refining their own approach:

    • Start with clear context and deliverable agreement
    • Use a "structure first, content second, section by section" approach for requirements definition and development, with frequent verification
    • Draw a clear line between requirements and solution design
    • Assume coordination challenges in parallel work and plan for them
    • Keep humans in the lead on requirements and key decisions; use AI to augment, not replace
    • Prefer hands-on, interactive learning over passive observation, and give people time to practice and reflect
    • Use the right tool for the job and retain what works while adapting from real feedback

    Our AI Bootcamps keep showing that AI-enhanced development works when it's structured, human-led, and treated as a learning exercise. We have a working process, a shared vocabulary, and a steady list of improvements. We're not declaring victory; we're continuing to refine. It works well, and we still have more to learn.

    If you're exploring AI-enhanced development, running or designing similar bootcamps, or rethinking how your teams work with AI, we're happy to share what we've learned: the workflows we use, the pitfalls we've hit, and the next iterations we're running. Get in touch and let's continue the conversation.
