Nail this first...

Get this right, and everything else on your AI roadmap gets easier.


Most mid-sized companies start their AI journey in the wrong place. They jump straight to the shiny stuff like copilots, custom agents, and automated workflows. It looks great on a whiteboard, but dies fast in the real world. Why? Because no one stopped to solve the first, most obvious problem: can your people actually talk to their docs and data to do their daily work?

Startups can often skip this step. They’ve got smaller teams, less red tape, fewer systems, and tighter feedback loops. They can experiment and iterate fast.

But mid-sized companies don’t have that luxury. You’ve got legacy tools, compliance requirements, silos, and a dozen places where knowledge hides. If your employees can’t get fast, trusted answers to basic questions, they’ll never adopt the fancy stuff.

So before you fantasize about the possibilities, nail the foundation. We’ve been helping quite a few companies tackle exactly this problem this year. Here’s what we’ve learned along the way.

1. Set your north star

Almost every successful talk-to-your-data project starts with the same goal: give employees quick, reliable answers so they spend less time searching and more time solving. That’s your north star. Decisions about what data to include, which vendor to pick, and what the user experience should look like all flow from it. You’re not building a chatbot. You’re designing a better way for people to find and use what your company already knows.

2. Define the non-negotiables

Mid-sized companies live and die by trust and ease of use. If answers are wrong (or even if they feel wrong) or the experience is cumbersome, people won’t use it. Every AI pilot needs several key guardrails baked in:

  • Security: No accidental data leaks or shadow access.

  • Accuracy: Every answer should cite its source and be explainable.

  • Ease of use: If non-technical folks can’t use it, you’ve already failed.

Most early-stage pilots stall out not because of model quality, but because the answers aren’t trustworthy or the UX sucks.

3. Be focused (and boring)

Forget the enterprise-wide AI strategy. Start boring. Pick 3–5 everyday workflows that quietly waste everyone’s time. Things like:

  • Drafting or reviewing contracts or writing RFP responses

  • Searching for policy or product info

  • Doing market or competitive research

Boring is good. Boring scales, and boring is usually where AI’s fastest, most immediate impact is hiding.

4. Take inventory of your stuff

Most mid-sized companies underestimate how messy their data sprawl is. Before you plug anything in, make an inventory:

  • Which systems matter most?

  • Who owns them?

  • What permissions exist?

Then start with the highest-value, lowest-risk sources (i.e., the things that are widely used and already governed). Don’t dump everything in. Quality beats quantity.
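
For illustration, a first-pass inventory might look something like this (the systems and ratings below are made-up examples, not recommendations):

  System         Owner      Data type          Access level     Sensitivity
  Intranet wiki  IT         Policies, how-tos  Company-wide     Low
  CRM            Sales ops  Account notes      Sales team only  Medium
  HR system      HR         Employee records   HR only          High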

5. Design a good pilot

Aim for 4 weeks of usage with 10 to 15 real users. Connect a few key data sources and make sure you can:

  • Show citations (i.e., show where answers come from)

  • Track usage (i.e., track what people ask and how often they ask it)

  • Measure accuracy and usefulness (i.e., survey and/or interview pilot users before and after)

That’s how you’ll learn quickly without burning trust. Each week, share a quick digest with things like top queries for the week, recent changes to the model (fine-tuning), and any new data sources added. It helps build momentum and visibility at the same time.
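
As a sketch, that weekly digest can be dead simple (the items below are hypothetical):

  • Top 5 queries this week, and whether they got good answers

  • Changes shipped (e.g., re-indexed the policy library, tightened the contract-review prompts)

  • New data sources added (e.g., the RFP archive)

  • One user quote, good or bad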

6. Calculate the real impact

Most teams stop at hours saved from AI tools. That’s a start, but not the whole story. The real impact comes from what those hours unlock: faster client responses, quicker deals, shorter onboarding, and happier employees who spend their time on creative, high-value work instead of digging through files and folders. When you translate time saved into business results, skeptics listen.
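
To make that concrete, here’s a rough, illustrative calculation (every number is an assumption; swap in your own):

  15 pilot users × 2 hours saved per week × $75/hour fully loaded ≈ $2,250 per week, or roughly $117,000 per year

Even that undercounts the second-order wins like faster deals and quicker client responses.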

7. Scale only once it works

Once adoption and ROI are clear, expand carefully. Add more teams and data sources, but only if the foundation holds. If usage dips or trust erodes, fix that before scaling.

AI systems don’t fail because of the tech. They fail because the output isn’t good (garbage in, garbage out), which causes people to lose trust.

Putting this into action

  • Write down a north star vision statement, e.g., “Any employee can get a trusted, cited answer from our own docs in under a minute.” If you can’t explain how it helps employees do real work faster, it’s not clear enough.

  • Poll your team and ask them what questions they have to ask at least 2-3 times every week. Those are your first use cases.

  • Make a simple table with columns for system, owner, data type, access level, and sensitivity (like the example sketched in step 4 above). If you can’t fill that in for a system, it’s probably not pilot-ready.

  • Ask your team (and yourself), “If we gave you 2 hours back every week, what would you do with it?” That’s your real metric.

  • Run a small pilot with a select group of users. If they don’t like it or don’t use it, ask why. You’ll find your adoption blockers immediately. Don’t try to prove perfection. Prove traction. If people are coming back every week, you’re on the right track.

  • Don’t scale until the majority of your pilot users say they’d be disappointed if the tool went away.

Some example AI knowledge tools

If you’re struggling to get started, here are some SaaS products built to help mid-sized & enterprise companies with AI-powered knowledge:

Wrapping up

Mid-sized companies don’t fail at AI because of bad models. They fail because they try to leapfrog the basics. Start by making it easy for your team to talk to your data and get the right answers quickly, with strong security and real adoption. Then go build the copilots and custom agents.

Nail the foundation before you get fancy.

Onward & upward,
Drew

P.S. What did you think about this newsletter? If you’ve got thoughts, questions, or suggestions for future editions, reply and I’ll email you back personally. Also, if you found this helpful, forward it to someone else you think would benefit 😊