
Gif by TwinSpires on Giphy
Your 12-month roadmap is lying to you. It assumes you know what will work at the end of the year. It locks you into a plan based on assumptions you made months ago. The problem? When you're building in the AI era, assumptions made in January are probably wrong by March. The capabilities you need will change, costs will change, user behavior will change, and your team's capacity will change.
Over the last 18 months we've been continually reminded that, when it comes to AI, planning months ahead simply doesn't work anymore. The landscape keeps changing, and new tools and capabilities land daily.
That's why we've adopted and adapted the betting table approach from Basecamp's book Shape Up. We've been doing it intuitively for over a year, but now we have language for it, and we're planning for uncertainty instead of pretending it doesn't exist.
The roadmap trap
Traditional roadmaps make four big assumptions:
Priority - You know what the value is upfront (e.g., "this feature is more important than that feature")
Timeline - Estimates are accurate (e.g., "this will take 3 months")
Scope - Requirements won't change (e.g., "here's exactly what we're building")
Feasibility - Technical complexity is understood (e.g., "we know how to build this")
All four assumptions collapse with AI. Costs are evolving, user behavior shifts every month, and the technical capabilities of AI tools are moving even faster. Between models, APIs, and integration patterns, what's "impossible" today becomes trivial next month.
Roadmaps can't keep pace.
Place your bets
As Shape Up puts it, betting one 6-week cycle at a time prevents commitments that lock in future direction. The core tension is that stakeholders typically want roadmaps because they want to know "What are we building and when?" The honest answer right now is, "We know what we're building this cycle. The next cycle depends on what we learn and how things evolve."
Shape Up outlines how good bets include three key components:
Meaningful payout: Teams deliver finished and specific outcomes rather than incremental progress toward vague goals.
Actual commitment: Uninterrupted focus for a specific amount of time.
Capped downside: The most you can lose is what you invested in one 6-week cycle.
Working in these 6-week cycles balances value with deadline visibility: long enough to complete substantial work, short enough that the team feels the deadline throughout.
Confidence, not priority
Instead of ranking features by priority, allocate resources by confidence. Asking "What are we confident we can ship in 6 weeks?" instead of "What's most important?" changes the conversation.
In a traditional planning cycle, teams commit to delivering ten things in ten months, then ship six things that are half-done (or the landscape changes) and everyone's frustrated.
In a betting cycle, you commit to finishing a few high-confidence things. Everyone stays aligned, and you actually ship value.
Why this matters more with AI
Traditional software development has some uncertainty. Will users like the experience? Is our technical approach scalable? Did we estimate complexity correctly?
AI product development has more uncertainty. Will users actually trust the AI output enough to act on it? Will variable token costs make this unprofitable at scale? Will the model capabilities we're building on still exist in 6 months? What should we even charge for AI-powered features when the market hasn't settled on pricing patterns yet?
Making a 6-week bet matches the uncertainty level. A roadmap has no room for "we learned this doesn't work," but a betting cycle does.
What we're still refining
We don't have a perfect facilitation playbook for this; we're still learning. We're adapting Shape Up's betting table format for our context and figuring out how to get stakeholders to honestly assess confidence levels without optimism bias. We're working through what happens when a high-confidence bet depends on medium-confidence external factors. And we're helping partners understand why "we'll decide next cycle" is better than locking into a 12-month plan.
We're running experiments with new AI capabilities right now and testing different LLM providers for specific use cases. We’re continually working through tradeoffs and building systems to test/validate our bets and surface the right learnings.
Putting this into action
Look at your roadmap right now. For each item, ask: "Are we confident we can deliver this, or are we hoping we can?" If you're hoping more than you're confident, you need a betting table. A few things to try:
Pick a bet you can confidently deliver in the next 6 weeks. Gather the initiatives or ideas you're considering. Instead of ranking them by priority, assign confidence levels:
High - We know how to do this. We've done it before.
Medium - We think it'll work. We need to validate assumptions.
Low - We're not sure, but we need to learn.
Allocate your team based on confidence, not priority. High confidence gets full resources and commitment to ship. Medium confidence gets parallel track with contingency. Low confidence gets research spikes, not full builds.
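As a rough sketch of that triage, here is how confidence-based allocation might look in code. All names, bets, and allocation rules below are illustrative assumptions, not a prescribed tool:

```python
# Hypothetical sketch: group candidate bets by confidence level, not priority.
# The allocation rules mirror the guidance above and are assumptions, not a spec.

ALLOCATION = {
    "high": "full team, commit to ship this cycle",
    "medium": "parallel track with a contingency plan",
    "low": "research spike only, no full build",
}

def triage(bets):
    """Group (name, confidence) pairs into a betting table with allocations."""
    table = {"high": [], "medium": [], "low": []}
    for name, confidence in bets:
        table[confidence].append((name, ALLOCATION[confidence]))
    return table

# Example candidate bets (entirely made up for illustration).
bets = [
    ("Ship usage-based billing", "high"),
    ("Add AI summarization", "medium"),
    ("Evaluate a new LLM provider", "low"),
]
```

The point of the sketch is the shape of the decision: the output is one table per cycle, not a ranked backlog stretching out twelve months.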
Define "done" for each bet before you start. What does shipped look like for high confidence? What does "enough learning to decide" look like for low confidence?
Set a "circuit breaker": no extensions. If you don't hit the deadline in 6 weeks, you stop. Force scope cuts, not timeline extensions.
Run your cycle, then run a retro. Did your confidence levels match reality? What did you learn? Use that to inform your next bet.
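The retro step can be sketched as a small calibration check: compare each bet's predicted confidence with what actually happened. The helper name and result format are hypothetical, shown only to make the idea concrete:

```python
# Hypothetical retro helper: flag bets whose outcome contradicted the
# confidence level assigned at the betting table, to calibrate the next cycle.

def retro(results):
    """results: list of (bet, predicted_confidence, shipped) tuples.
    Returns the bets that surprised you in either direction."""
    surprises = []
    for bet, confidence, shipped in results:
        if confidence == "high" and not shipped:
            surprises.append((bet, "high-confidence bet missed"))
        if confidence == "low" and shipped:
            surprises.append((bet, "low-confidence bet shipped"))
    return surprises
```

A missed high-confidence bet usually means optimism bias crept into the table; a shipped low-confidence bet means you can bet bigger on that area next cycle.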
Wrapping up
Roadmaps assume you know; betting tables assume you'll learn. When you're building AI products (or honestly anything in this fast-moving market), pretending to know is expensive. It's far better to make bets and learn fast.
This is how we plan at StealthX now. We’re continually refining the process and iterating as we learn.
TL;DR - If your roadmap is making promises your confidence level can't back, you might need a betting cycle instead.
Onward & upward,
Drew
P.S. If we haven't met yet, hello! I'm Drew Burdick, Founder and Managing Partner at StealthX. We work with brands to design & build great customer experiences that win. I share ideas weekly through this newsletter & over on the Building Great Experiences podcast. Have a question? Feel free to contact us, I'd love to hear from you.