
Your AI Strategy is 100% Correct and 0% Actionable. Here’s Why.

[Illustration: a conference room reimagined as a three-lane virtual highway, with a green “Fast Lane” for productivity, a yellow “Standard Lane” for customer experience, and a red “Controlled Lane” for high-risk AI.]

An AI customer support agent at Cursor, the company behind an AI-powered coding assistant, invented a company policy that didn’t exist. When users contacted support about unexpected logouts, the AI bot confidently explained that the logouts were “expected behavior” under a new policy restricting each subscription to a single device. The response sounded definitive and official. Users didn’t suspect that “Sam,” the support agent, wasn’t human.

But there was no such policy. The AI had hallucinated the entire explanation. Within hours, the fabricated policy spread across Reddit and Hacker News. Frustrated users publicly canceled their subscriptions. The company’s co-founder scrambled to correct the record, explaining it was an “incorrect response from a front-line AI support bot.” The fallout was negative publicity, customer confusion, and damaged trust.

This story now comes up in every conversation about AI adoption, and it provokes two equally dangerous responses. One is paralysis: “We’ll wait until AI is safer.” The other is recklessness: “We’ll move fast and fix problems as they arise.”

But maybe you already have an AI strategy. Perhaps your consultants delivered a framework. Gartner gave you a roadmap. Your leadership team nodded along in the presentation. And precisely nothing is happening.

Why? Because those frameworks give you all of “the what” and none of “the how.” They tell you AI is important, but they don’t tell you how to move fast on the safe stuff while being appropriately careful with the dangerous stuff.

While you’re stuck in strategic planning, your competitors are pulling ahead. Some are deploying AI recklessly and will pay for it. Others have figured out something you haven’t.

The existential risk here is falling so far behind that you become irrelevant. The companies in pole position aren’t the ones with the best AI strategy documents; they’re the ones who’ve closed the execution gap.

The execution gap (why nothing is happening)

Most organizations now have AI strategy frameworks. Many came from the same sources: Gartner, McKinsey, internal strategic planning sessions. They cover vision and objectives, use case prioritization, technology architecture, governance principles, and success metrics.

These frameworks are 100% correct and 0% actionable. They tell you what to do but not how to do it. This creates what we call the “execution gap.”

What the execution gap looks like in practice:

The endless pilot: Your team has been “piloting” an AI chatbot for 9 months. It works. Everyone agrees it should go to production. But it’s still in pilot because no one knows who approves it, what the security review process is, or what “production-ready” even means for AI.

The innovation theater: Your Chief Digital Officer gives quarterly updates on AI initiatives. Lots of activity. Lots of exploration. Zero deployed applications actually serving customers or improving operations.

The governance gridlock: Someone proposes using AI for document summarization. It goes to the IT governance board. They send it to the security team. Security has questions for legal. Legal wants to review the vendor contract. Six months later, employees are using ChatGPT anyway, with zero oversight.

Why traditional governance fails for AI

Your governance processes were built for infrastructure projects with defined requirements and predictable behavior. AI is different.

Traditional IT governance asks: What are the requirements? What does the system do? How do we test that it works correctly? For AI, these questions don’t have clean answers. Requirements evolve. Behavior is probabilistic. “Works correctly” is context-dependent.

This problem isn’t unique to enterprises. Traditional governance models are too slow and lack the mechanisms required for dealing with AI’s unique capabilities. We see this at the national legislative level too – governments struggle to keep up with AI’s pace and characteristics. Your monthly governance committee faces the same problem.

This comes with a real cost. While you’re perfecting your governance process, your competitors are learning. They’re failing small, iterating fast, and building organizational AI literacy. You’re building the perfect framework for a technology that will have evolved twice over by the time you implement it.

The competitive risk of moving too slow

Most AI discussions focus on one type of risk: “What if our AI makes a mistake?”

But there’s a bigger risk that gets less attention: “What if our competitors figure this out while we’re still planning?”

The market reality

Companies in your industry are already deploying AI. Your competitors are using AI to automate customer service, deploying AI for fraud detection, using AI to personalize customer experiences in real-time. Some are doing it well. Some are doing it badly. All of them are learning.

Every AI deployment – successful or failed – teaches the organization how to write effective prompts, what use cases actually deliver value, how customers interact with AI, what governance is actually needed versus theoretical governance, and how to integrate AI into existing workflows.

Your competitors are climbing this learning curve. You’re not.

The “wait for perfection” trap

Some CEOs think: “We’ll let others make the mistakes, then we’ll deploy the mature version.”

This worked for enterprise software. It doesn’t work for AI because:

  • AI is not “feature complete”; it’s continuously evolving
  • The competitive advantage comes from organizational learning, not technology selection
  • By the time AI is “safe enough,” the market opportunity will have shifted

The calculation you should be making

Instead of asking “How do we eliminate AI risk?” ask: “What’s the cost of being 18 months behind our competitors in organizational AI literacy?”

You don’t need to deploy high-risk AI applications tomorrow. But you need to be deploying something. Building organizational muscle. Learning what works. Creating feedback loops.

The dangerous middle ground

The worst position: having an AI strategy but no execution. You get the budget commitment (you’re spending money on AI initiatives), zero competitive advantage (nothing is deployed), board frustration (quarterly updates with no measurable outcomes), and employee workarounds (shadow AI usage with no oversight). This creates urgency, but urgency without understanding is recklessness.

So what do you need to understand about AI that changes how you should govern it?

Why AI is fundamentally different – generative vs. agentic

There’s a fundamental difference between AI that gives advice and AI that takes action. Your governance approach must account for this.

Generative AI: Ask and Answer

Generative AI involves an “ask and answer” query that produces a static response. Think ChatGPT answering questions, AI writing draft emails, document summarization, research assistance.

The control point: Human judgment sits between AI output and action. If the AI hallucinates, you catch it before anything happens.

Risk profile: Contained. The AI can be wrong, but humans control what happens next. This is familiar territory – it’s like having a very smart assistant who sometimes gives bad advice.

Agentic AI: Ask and Act

Agentic AI involves an “ask and act” command, allowing it to take “autonomous action across systems and channels to complete tasks on a user’s behalf.”

Examples: Updating billing addresses across multiple systems. UK open banking initiatives where voice commands transfer money between accounts. AI orchestrating multiple tools to complete complex workflows.

The control point: The AI acts first. By the time you realize there’s a problem, the action already happened.

Risk profile: Fundamentally different. The difference between “here’s a recommendation” and “I’ve already done it” is everything.

This is where the Cursor hallucination incident sits. The AI didn’t just draft a policy suggestion for a human to review; it answered customers directly, acting on the policy it had invented. The mistake wasn’t theoretical. It was operational, with immediate business consequences.

Why this matters for your strategy

If your AI strategy doesn’t distinguish between generative and agentic AI, you’re planning for the wrong risk profile.

Generative AI governance questions:

  • How do we ensure output quality?
  • How do we prevent inappropriate content?
  • How do we maintain data privacy in prompts?

Agentic AI governance questions:

  • What actions can the AI take without human approval?
  • How do we monitor actions in real-time?
  • What’s our rollback plan when AI takes the wrong action?
  • How do we ensure the AI has appropriate permissions (and nothing more)?
  • What happens when the AI makes a mistake we can’t undo?

The market trajectory

The UK open banking initiative is heading toward agentic interactions. Voice commands to transfer money. AI agents conducting financial transactions. This is where the market is going. Your competitors will deploy agentic AI. The question is whether you’ll have the governance architecture for it.

Current API architectures weren’t built for this. Traditional APIs assume a human is orchestrating the calls. Agentic AI calls multiple APIs autonomously. This requires what we call a “control plane” – monitoring and control systems on top of existing infrastructure.
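To make the idea concrete, here is a minimal sketch of a control plane, assuming a simple two-tier risk policy. Every name below is illustrative rather than a reference to any specific product: the agent can only request actions; the control plane logs each request, executes low-risk actions through a path that captures rollback data, and holds high-risk actions for human approval.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ActionRequest:
    agent_id: str
    tool: str    # e.g. "update_billing_address", "transfer_funds"
    args: dict
    risk: str    # "low" or "high", assigned by policy, never by the agent

@dataclass
class ControlPlane:
    audit_log: list = field(default_factory=list)
    pending_approvals: list = field(default_factory=list)

    def submit(self, request: ActionRequest) -> dict:
        # Log every attempted action, whether or not it executes.
        self.audit_log.append((datetime.now(timezone.utc), request))

        # High-risk actions never execute directly; a human releases them.
        if request.risk == "high":
            self.pending_approvals.append(request)
            return {"status": "held_for_approval"}

        # Low-risk actions execute through the control plane, which
        # captures an undo handle before anything changes downstream.
        return {"status": "executed", "rollback_token": self._execute(request)}

    def _execute(self, request: ActionRequest) -> str:
        # Placeholder: call the real downstream API here.
        return f"undo:{request.tool}:{len(self.audit_log)}"
```

The property that matters is separation of powers: the agent never holds credentials to the downstream systems, so “what can the AI do without approval?” becomes a policy question answered in code, not a hope.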

The governance implication is that you need to build this architecture before you need it. After the hallucination incident is too late.

The catalog of risks – what actually goes wrong

Understanding the difference between generative and agentic AI explains the “what.” Now you need to understand the “how” – what actually goes wrong when AI fails.

These aren’t theoretical risks. These are real failure modes from deployed systems.

Risk 1: Unpredictable behavior and hallucinations

AI invents information and presents it as fact. The Cursor incident – AI creating a non-existent company policy – is a textbook example.

Why this happens: These are “probabilistic systems” where you can’t fully predict outputs. AI doesn’t know what it doesn’t know. It generates plausible-sounding responses based on patterns, not truth.

Traditional software testing validates deterministic behavior. You test inputs, verify outputs. AI produces different outputs for the same input. How do you test that?
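One workable answer, sketched below with deliberately naive checks: stop asserting exact outputs and start asserting invariants over many samples. You run the same prompt repeatedly and measure how often a property you care about is violated. The `call_model` function is a stand-in for whichever model API you actually use.

```python
KNOWN_POLICIES = {"30-day refund", "free tier limits"}  # illustrative ground truth

def call_model(prompt: str) -> str:
    """Stand-in for a real model API call; outputs vary run to run."""
    raise NotImplementedError

def violates_invariants(response: str) -> bool:
    # Naive check: a response that talks about a "policy" must
    # reference one we actually have.
    text = response.lower()
    return "policy" in text and not any(p in text for p in KNOWN_POLICIES)

def hallucination_rate(samples: int = 50) -> float:
    """Sample the same prompt many times; report the violation rate."""
    prompt = "Why was I logged out of my account?"
    failures = sum(violates_invariants(call_model(prompt)) for _ in range(samples))
    return failures / samples
```

Note what changed: the test produces a failure rate, not a pass/fail bit. Deciding what rate is acceptable, and for which applications, is a governance decision as much as an engineering one.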

Business consequence: Reputational damage when AI gives customers false information. Customer confusion and loss of trust. Potential legal liability if customers act on false information. With agentic AI, the consequences are worse – actions taken based on hallucinated information.

The black-box nature of AI decision-making is unacceptable for high-stakes applications like air traffic control. You can’t always explain why the AI produced a specific output.

Risk 2: Data exposure and privacy violations

AI agents access and potentially expose data they shouldn’t: picture an AI agent pulling up a user’s spouse’s financial data without appropriate permissions.

Why this happens: This is not a bug but a consequence of how agentic systems work. AI agents need broad access to be effective. But broad access creates exposure risk.

Traditional role-based access control assumes you can enumerate in advance exactly what each user needs. Lock an AI agent down that tightly and it can’t do its job; give it broad access and you’ve created privacy and security exposure.

Business consequence: Regulatory fines (GDPR, CCPA violations), lawsuits from affected customers, loss of customer trust. In regulated industries, this can be company-threatening.

Traditional systems have defined data flows. You know what data each system accesses. AI agents dynamically determine what data they need. You can’t predict in advance what data the AI will access to complete a task.
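One mitigation pattern, sketched here as an illustration rather than a prescription: give the agent no standing data access at all, and mint short-lived grants scoped to the exact records a single task may touch. Reads outside the grant fail closed, no matter what the agent decides it “needs.”

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class DataGrant:
    """A short-lived permission tied to one task, not to the agent."""
    task_id: str
    allowed_records: frozenset  # exact record IDs, resolved before the task runs
    expires_at: datetime

class ScopedDataStore:
    def __init__(self, records: dict):
        self._records = records

    def read(self, grant: DataGrant, record_id: str):
        # Fail closed: outside the grant means no data, full stop.
        if datetime.now(timezone.utc) >= grant.expires_at:
            raise PermissionError("grant expired")
        if record_id not in grant.allowed_records:
            raise PermissionError(f"{record_id} not in task scope")
        return self._records[record_id]

# A grant for one support task: this customer's records only, so a
# spouse's data is structurally out of reach, not just "not needed".
grant = DataGrant(
    task_id="task-42",
    allowed_records=frozenset({"cust-1-billing", "cust-1-profile"}),
    expires_at=datetime.now(timezone.utc) + timedelta(minutes=10),
)
```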

Risk 3: Malicious input and prompt injection

Attackers craft inputs that trick the AI into executing unauthorized actions.

Why this happens: Traditional applications validate inputs against known patterns. AI systems process natural language, making them much harder to secure against adversarial inputs.

If the AI has privileges to conduct financial transactions, prompt injection could lead to unauthorized money transfers, data exfiltration, or system compromise.
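Because you can’t reliably filter adversarial language out of natural-language input, the defenses that hold are the ones enforced outside the model. A minimal sketch, assuming a deny-by-default policy: an allow-list of tools per deployment plus hard session limits that no prompt, malicious or not, can talk its way past.

```python
# Hard limits live in ordinary code, outside the model's reach.
# No prompt, malicious or otherwise, can rewrite these values.
TOOL_POLICY = {
    "summarize_document": {"allowed": True},
    "send_email":         {"allowed": True, "recipients": "internal_only"},
    "transfer_funds":     {"allowed": False},  # not in this deployment, period
}

MAX_ACTIONS_PER_SESSION = 20

def authorize(tool_name: str, actions_so_far: int) -> bool:
    """Deny by default: the model can request an action, never grant one."""
    if actions_so_far >= MAX_ACTIONS_PER_SESSION:
        return False
    policy = TOOL_POLICY.get(tool_name)
    return bool(policy and policy["allowed"])
```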

Business consequence: Financial loss, security breach, regulatory scrutiny, potential criminal liability.

Risk 4: Compliance across jurisdictions

Your AI governance works in the US but violates EU AI Act requirements. Or vice versa.

Why this happens: The US and the EU have fundamentally different regulatory approaches to AI. A governance framework compliant with one may violate the other.

Business consequence: Limited ability to deploy AI globally. Competitive disadvantage versus companies that solve this. Potential regulatory action in jurisdictions where you’re non-compliant.

Risk 5: Lack of explainability

AI makes a decision. The customer or regulator challenges it. You can’t explain why the AI made that specific choice.

Why this matters: Regulatory requirements for explainability exist in many industries. Customer trust – people want to understand why decisions were made. Legal liability if you can’t justify AI decisions.

Business consequence: In regulated industries, this can prevent AI deployment entirely.

The key insight from cataloging these risks

Not all AI faces all these risks. The risks depend on whether it’s generative or agentic, what it has access to, what actions it can take, how customer-facing it is, and whether it operates in regulated domains.

This is why one-size-fits-all governance fails.

Not all AI is created equal – the value dimension

Beyond understanding what can go wrong, you need to understand what you’re trying to accomplish. Not all AI initiatives have the same strategic purpose. This matters for how you think about risk, investment, and governance.

Incremental Value AI

Incremental value AI improves operational efficiency. It frees up existing resources. It supports your current business model.

Examples: HR chatbot answering employee questions. AI summarizing customer service calls. Automated expense report processing. Meeting transcription and summarization.

Characteristics: Internal operational improvements. Measurable ROI in cost savings or time savings. Doesn’t change your competitive position. Supports existing processes more efficiently.

Business case: Straightforward. Does it pay for itself in X months through productivity gains?

Strategic implication: Important for efficiency, but won’t fundamentally change your market position. These are “better, faster, cheaper” initiatives.

Gamechanger Value AI

Gamechanger AI creates new value propositions or products. It fundamentally changes how you execute core competencies. These are typically external, customer-facing applications that generate revenue or transform operations.

Examples: Automated fraud detection that fundamentally changes risk management. AI that creates new customer-facing products or services. Systems that transform how you deliver core services.

Characteristics: Changes competitive position. Creates new revenue streams or transforms existing ones. Requires strategic investment. Often customer-facing or mission-critical.

Business case: Rather than ROI in months, think about competitive positioning. Does this create a defensible advantage? Does it change how customers interact with us? Does it transform our core operations?

Strategic implication: This is where AI creates competitive differentiation. This is where market shifts happen.

Why the distinction matters for governance

Incremental AI:

  • Usually lower risk (internal, contained)
  • Should move fast (competitive disadvantage if too slow)
  • Lightweight governance appropriate
  • Measured on efficiency gains

Gamechanger AI:

  • Almost always high risk
  • Requires strategic discussion and executive sponsorship
  • Comprehensive governance required
  • Measured on competitive impact

The pattern that emerges

Different AI initiatives have different risk profiles (what can go wrong), different AI types (generative vs. agentic), different strategic values (incremental vs. gamechanger), and different competitive urgency (some you can’t afford to delay).

One-size-fits-all governance treats all of these the same. That’s why low-risk initiatives sit in review for months while high-risk initiatives don’t get the scrutiny they need.

The AI Governance Highway – a framework for variable speed

Not all AI carries the same risk. The governance that makes sense for AI-powered financial decisions is overkill for AI document summarization.

We have to treat different risks differently and match the speed of deployment to the risk level. Think of it as a highway with three lanes, each moving at different speeds for different types of traffic.

Fast Lane: Low-Risk AI

What belongs here: Internal productivity tools. Generative AI with human review. Incremental value applications.

Example: Document summarization using Google Gemini for internal research.

Why it’s low risk: Internal use only. Humans review outputs before any action. No customer-facing decisions. Mistakes are easily caught and corrected. No regulatory implications.

Governance approach: Pre-approved for use. IT provides the tools, sets basic guardrails, and employees use them like any other productivity software.

Decision speed: Days to weeks. These should not sit in governance review for months.

The competitive implication: If your employees are waiting 6 months to use AI for meeting summaries while your competitor’s employees have been using it for a year, your competitor’s organization is simply more productive.

Standard Lane: Medium-Risk AI

What belongs here: Customer-facing applications with moderate consequences. These affect customer experience but don’t make critical decisions.

Examples: Customer-facing chatbots. Real-time website personalization. Applications that interact with customers but have human oversight.

Why it’s medium risk: Customer-facing (reputational risk). Limited scope of action. Non-critical business processes. Mistakes are visible but recoverable.

Governance approach: Defined review process for each use case. Not automatic approval, but not full governance board either. Clear criteria, specific approver, reasonable timeline.

Decision speed: Weeks to months, depending on complexity. The key is having a defined process, not an ad-hoc “we’ll figure it out” approach.

The competitive implication: Your competitors are using AI chatbots to handle customer queries 24/7. If you’re still in “planning phase,” you’re losing market share to companies with better customer experience.

Controlled Lane: High-Risk AI

What belongs here: AI that makes decisions affecting finances, compliance, safety, or core business operations. Failure leads to lawsuits, regulatory action, or significant business damage.

Examples: Credit decisioning at financial services companies. Healthcare diagnostics. Any application where failure has severe consequences.

Why it’s high risk: Critical business decisions. Regulatory implications. Financial consequences. Potential for lawsuits. Could literally cost lives (e.g. in healthcare).

Governance approach: Comprehensive oversight with ongoing monitoring. This is where your rigorous governance process belongs.

Decision speed: Months. These require careful vetting, testing, monitoring architecture, and ongoing oversight.

The competitive implication: This is where moving too fast kills you. The Cursor hallucination incident? That was controlled lane AI deployed with fast lane governance.

Why the AI Governance Highway solves the execution gap

Current state: Everything goes through the same governance process. Low-risk AI waits months. High-risk AI doesn’t get adequate scrutiny because the process is overwhelmed.

With the AI Governance Highway: Fast lane AI moves at “the speed of business innovation.” Standard lane AI has a defined, reasonable process. Controlled lane AI gets the comprehensive governance it actually needs. Resources are allocated appropriately. Clear “lane markers” tell you where each initiative belongs.

The competitive advantage

While your competitors are either paralyzed (everything stuck in the controlled lane) or reckless (treating everything as fast lane), you’re deploying strategically. Building organizational AI literacy with fast lane deployments. Moving carefully on controlled lane AI where mistakes are catastrophic. Gaining competitive advantage without courting disaster.

What building the highway requires

Building the AI Governance Highway requires:

  • Clear lane markers (criteria for categorizing AI initiatives)
  • Defined governance processes for each lane
  • Infrastructure for monitoring agentic AI in the controlled lane (the “control plane”)
  • Organizational alignment on which lane each initiative belongs in
  • Mechanisms that work across jurisdictions
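As a starting point for those lane markers, here is a minimal sketch of what codified criteria could look like. The questions and thresholds are illustrative; yours should fall out of the risk and value dimensions discussed above.

```python
def assign_lane(initiative: dict) -> str:
    """Map an AI initiative to a governance lane from yes/no criteria."""
    # Any one of these puts an initiative in the controlled lane.
    if initiative["regulated_domain"] or initiative["irreversible_actions"]:
        return "controlled"
    # Agentic or customer-facing, without regulatory stakes: standard lane.
    if initiative["agentic"] or initiative["customer_facing"]:
        return "standard"
    # Internal, human-reviewed, easily corrected: fast lane.
    return "fast"

# An internal document summarizer lands in the fast lane.
assert assign_lane({"agentic": False, "customer_facing": False,
                    "regulated_domain": False, "irreversible_actions": False}) == "fast"
```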

This is the “how” that’s missing from your strategy. Your current framework told you to “govern AI appropriately.” The AI Governance Highway shows you what appropriate governance actually looks like for different types of AI.

The hallucination incident revisited

That AI that made up a company policy at Cursor? It was controlled lane AI deployed with fast lane governance. Or possibly no governance at all.

The opposite mistake is when your team’s document summarization pilot has been stuck in review for 9 months. That’s fast lane AI trapped in controlled lane governance processes.

Both are execution failures caused by the same root problem: an inability to match governance to actual risk. Putting every initiative in the same lane regardless of what it actually needs.

The real competitive threat

Your competitors aren’t all going to fail spectacularly. Some will figure this out.

They’ll be deploying fast lane AI while you’re still planning. They’ll be building organizational AI literacy while your employees use ChatGPT in the shadows. They’ll be learning what works while you’re perfecting frameworks that never get implemented.

The gap between companies that have clear lane markers and companies that treat everything the same will compound monthly.

What’s coming in Part 2

Understanding why you need the AI Governance Highway is different from implementing it.

Part 2 will deliver a synthesized practical approach – the operational framework that addresses the execution gaps in strategic models. It will show you how to actually build and implement the three-lane system, rooted in existing standards and informed by real-world deployments.

Not more strategy. The practical implementation that makes the rubber meet the road.

Six months from now, your AI strategy will still be a document. The question is whether it will be a document that’s executing or a document that’s obsolete. Your competitors are making that choice right now. Some are choosing reckless speed. Some are choosing paralysis. A few are building governance architecture.

The companies that win will be the ones who closed the execution gap.

That’s what the AI Governance Highway is designed to do, and it’s coming up in Part 2…

Frank Oelschlager is a Partner and Managing Director at Ten Mile Square. He has helped businesses close the execution gap for more than 30 years. If you want to avoid both reckless speed and falling so far behind that you become irrelevant, Frank is the first person you should consult. Schedule a discovery call.
