
The 10 Questions That Reveal a Fake AI Strategy in 20 Minutes

Most AI pitches look compelling in a demo. These questions are designed to find out what's actually behind them.

Mar 23, 2026

The deck is polished. The demo runs smoothly. The service provider talks about "unlocking value" and "accelerating transformation," and the numbers on the slide look like they were reverse-engineered to get a yes. Then you sign, the engagement starts, and six months later your team is maintaining a fragile proof of concept that no one outside the original pilot group is using, and that your CFO can't connect to any line on a P&L.

In The GenAI Divide: State of AI in Business 2025, MIT's NANDA initiative found that despite $30–40 billion in enterprise investment, 95% of enterprise AI pilots are falling short, delivering little to no measurable impact on P&L. The core problem wasn't model quality. It was flawed enterprise integration and misaligned priorities.

For mid-market companies, that dynamic is more costly than it looks. There's no dedicated AI lab to absorb the misfire, no parallel initiative to fall back on. A poorly scoped engagement doesn't just burn the budget. It ties up engineering for months, makes the next internal AI conversation harder to have, and delays the work that would have actually moved the needle.

The question worth asking isn't whether AI can help. At this point, the evidence is fairly settled. The question is whether the proposal in front of you is based on real diagnostic work or on a playbook the service provider runs the same way regardless of who's in the room.

The gap between what sells and what ships

Most service providers are optimized for the sale. A polished demo is fast to build. A production system, one that integrates with existing data architecture, handles edge cases, gets adopted by actual users, and produces numbers a finance team trusts, takes longer to scope and doesn't fit neatly into a 45-minute pitch.


The gap between a compelling demo and a system your team actually uses six months later is rarely a technology problem. Almost always, a solution was chosen before the problem was clearly defined. 

The firms most likely to waste your time arrive with a fixed answer. They've decided what they're selling. The operating partners who tend to produce results start with a diagnostic. They want to understand where your operations are actually losing time or money before proposing anything. These 10 questions are built to tell those two apart.

10 questions to ask before you sign anything

Strategy and diagnosis (questions 1–5)

  1. Which specific process are you replacing, and what does it cost today?

Vague answers here ("streamlining workflows," "improving efficiency across the organization") are a signal that the diagnostic work hasn't been done. A serious proposal names a specific process, estimates what it costs in time or headcount or error rate, and explains what metric improves and by how much. If that conversation hasn't happened yet, you'd be funding it during implementation.

  2. How did you identify this use case as the highest-value starting point?

This is the 80/20 question, and it tends to be revealing. A rigorous engagement should start by mapping where operations are most inefficient or where data is actionable enough to support a reliable model, then targeting the slice of processes that drives disproportionate value. Ask the partner to walk through that prioritization. Did they spend time with your team asking about workflow and data before this meeting? Or did they arrive with a pre-built use case they run for every company in your sector? Those are very different engagements.

  3. What does your discovery process look like before you propose a solution?

Serious operating partners don't skip this step. The diagnostic work, mapping pain points, understanding data flows, identifying where the organization actually loses time, is what determines whether what gets built will survive contact with real operations. A proposal that arrives without it was written before anyone understood the problem.

  4. What business outcome are you committing to, and how will we measure it?

Push past "increased productivity." Ask for a defined reduction in processing time, cost per transaction, or error rate, with a named methodology and a realistic timeline. If the partner hedges or pivots to qualitative outcomes, pay attention to that. Operating partners who have shipped production systems know what's measurable. The ones who are improvising usually don't.

  5. Have you worked with companies at our stage, in our industry, with our data environment?

Generic AI strategies misfire on the details that turn out to matter most. A healthcare company's data governance requirements are not the same as a legal tech firm's. An agtech platform's data quality problems are different from a fintech company's compliance constraints. Industry experience isn't just a resume line. It changes what gets scoped, what gets deprioritized, and how long the data preparation work actually takes.

Implementation and production (questions 6–10)

  6. What's your deployment methodology, and what are the most common failure points you've seen between pilot and production?

This question is harder to fake than it looks. A team that has shipped real systems under real conditions can answer it without hesitating: data pipelines that weren't as clean as expected, integrations that broke at handoff, user adoption that stalled because the workflow change wasn't accounted for. If the answer stays at the level of process diagrams and reassurances, that's information. Operating partners who have navigated the gap between a working demo and a stable production system tend to lead with the problems they've learned to anticipate, not with how smooth their methodology is.

  7. What does your data pipeline look like before this AI touches it?

Ask who is responsible for cleaning, structuring, and validating your data before anything is trained or deployed. Research published in 2025 on the effects of data quality on machine learning performance found that model accuracy degrades directly and measurably as data quality drops, in ways that aren't visible until the system is under real operational load. If the proposal doesn't include an explicit data readiness assessment, the timeline and cost you were quoted are probably both wrong. And if the answer to who owns data preparation amounts to "we'll figure that out later," assume the rest of the plan was scoped with the same rigor.
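A data readiness assessment doesn't have to be elaborate to be useful. As a rough illustration only (the file name and column names below are hypothetical, and pandas is assumed), even a basic profiling pass surfaces the gaps that quietly wreck timelines later:

```python
import pandas as pd

def readiness_report(df: pd.DataFrame, key_columns: list[str]) -> dict:
    """Rough data-readiness profile: row count, duplicates, and completeness
    of the columns a model would actually depend on."""
    present = [c for c in key_columns if c in df.columns]
    return {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "missing_columns": [c for c in key_columns if c not in df.columns],
        "null_rate_by_column": df[present].isna().mean().round(3).to_dict(),
    }

# Hypothetical example: an invoice export pulled from an ERP system.
invoices = pd.read_csv("erp_invoice_export.csv")
print(readiness_report(invoices, ["invoice_id", "amount", "issue_date", "customer_id"]))
```

Anything that shows up in a pass like this, high null rates, duplicate records, columns that simply don't exist, is work someone has to do before the model ever sees real data, and it should be priced and scheduled as such in the proposal.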

  8. What's the fallback when the model is wrong?

Every AI system produces errors. The question is whether your team has a defined process for catching them, correcting them, and feeding that signal back into the system. Ask how the solution handles low-confidence outputs. Ask what the escalation path looks like when something goes wrong at 11 p.m. Ask how errors are logged and who reviews them. A partner who hasn't thought this through is building you something that will work fine in a demo and create problems on a Tuesday.
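The mechanics behind a good answer aren't exotic. Here is a minimal sketch of the pattern, with the threshold, names, and review hook purely illustrative rather than any particular vendor's method: score the output, route low-confidence cases to a person, and log every decision either way.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("model_fallback")

CONFIDENCE_THRESHOLD = 0.85  # illustrative; tune against the real cost of an error

@dataclass
class Prediction:
    label: str
    confidence: float  # 0.0 to 1.0, as reported by the model

def send_to_review_queue(record_id: str, pred: Prediction) -> None:
    # Placeholder: in practice this creates a task in whatever system
    # the team already works in (ticketing, CRM, ops queue).
    logger.info("queued %s for human review", record_id)

def handle_prediction(record_id: str, pred: Prediction) -> str:
    """Auto-accept confident outputs, escalate the rest, log everything."""
    if pred.confidence >= CONFIDENCE_THRESHOLD:
        logger.info("auto-accepted %s as %r (confidence %.2f)", record_id, pred.label, pred.confidence)
        return pred.label
    logger.warning("low confidence on %s (%.2f); escalating", record_id, pred.confidence)
    send_to_review_queue(record_id, pred)
    return "pending_review"

# One confident output is applied; a borderline one goes to human review.
handle_prediction("INV-1042", Prediction(label="approved", confidence=0.93))
handle_prediction("INV-1043", Prediction(label="approved", confidence=0.61))
```

The specifics will vary by system, but the shape shouldn't: a defined threshold, a defined human path, and a log that someone actually reviews.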

  9. How does this integrate with the tools your team already uses?

The AI implementations that actually get adopted are the ones that plug directly into existing workflows rather than requiring teams to log into a separate platform or change habits they've built over years. If your team has to leave their CRM or their reporting stack to access AI output, most of them won't. Ask for a specific integration architecture, not a general assurance that integrations are "supported."

  10. Who owns this after the engagement ends?

A service provider hands you a system. An operating partner makes sure someone on your team can operate, maintain, and iterate on it without calling them every time something needs to change. Ask who owns the system post-launch. Ask what training is included. Ask what happens six months from now if you need to modify a workflow or retrain on new data. Some dependency on the partner is fine. Open-ended dependency is a different contract from the one you thought you signed on day one.

What a real answer sounds like

The difference between a service provider and a strategic partner isn't visible in the pitch. It shows up in what happened before the pitch, and in what they're accountable for after it.

At Making Sense, some engagements start with a full discovery process: mapping which workflows are consuming the most time for the least return, identifying where data is trapped in systems nobody queries, and understanding why the last technology initiative stalled. That diagnostic work often turns up something unexpected: the process the executive flagged as the priority isn't the highest-leverage starting point.

Others start faster. When the problem is already well-defined and the data foundation is solid, Making Sense's AI Jumpstart Kits are designed to move from alignment to live deployment in weeks, not months. The 80/20 model behind them, 80% proven no-code tooling, 20% custom development, exists precisely for companies that don't need another six-month roadmap before seeing results.

The entry point varies. But the questions in this list exist to make sure that wherever you start, you're starting on solid ground, with a partner who understands your business before they propose anything for it.


Before you greenlight the next initiative

Run through this before the next conversation with a potential AI partner:

  • Can they name the specific process being replaced and its current measurable cost?
  • Do they have a prioritization rationale, or just an industry use case they're recycling?
  • Is there a formal discovery phase before solution design, or does the proposal come pre-built?
  • Is the committed outcome specific and measurable, with a named methodology?
  • Do their production references hold up, or are they citing pilot completions?
  • Is the data pipeline plan in the proposal, or is it "TBD during implementation"?
  • Have they mapped error handling and escalation clearly, or is that a later conversation?
  • Does the integration architecture connect to systems your team already lives in?
  • Is post-launch ownership, training, and support explicitly defined?

As MIT's NANDA initiative found in The GenAI Divide, the companies getting meaningful returns from AI tend to pick one specific pain point, execute it well, and partner with firms that have genuine domain expertise, while the majority stall on initiatives that were never properly scoped.

A real AI strategy starts with an honest answer to a straightforward question: where is the business actually losing value today? The tools, the models, the integrations, those are downstream. Getting that first answer right is what separates the 5% from everyone else.

Ready to figure out where AI can create measurable impact for your business? Explore Making Sense's AI & Data Strategy approach or reach out directly to start with a discovery conversation.

