
What the IBM Moment Reveals About AI and Legacy Systems

A closer look from our Head of Innovation at how AI is transforming legacy modernization, and what it means for CTOs over the next 24 months.

Feb 27, 2026

I’ve been thinking about the IBM move since it happened, mostly because it felt like a very specific kind of market reaction. It wasn’t a generic “AI is hot” rally or panic. It looked like investors were trying to price something technical that usually stays invisible until a due diligence process forces it into the open.

What made it interesting to me is that it touched a belief I still see everywhere: legacy complexity slows change, and that delay buys time. If AI starts to shorten the “time-to-understand” of large systems, even a little, that belief becomes less reliable.

What really moved the market

After the sell-off, COBOL became the convenient label in almost every conversation. I get why. But it didn’t match what I was seeing. What felt more relevant was the possibility that AI is reducing the cost of understanding legacy complexity, and what happens when that friction compresses.

In practice, complexity often behaves like inertia. When a system has grown through acquisitions, quick fixes, and years of one-off integrations, even basic change requires a level of certainty most teams can’t reach quickly.

Anthropic’s claim challenged that buffer directly. If AI tools can take a meaningful slice of the exploration and analysis work that used to eat up months, the hidden cost of complexity becomes harder to justify.

That’s why I read the market reaction as a signal about friction, not about a language, a mainframe, or a single vendor.

What AI can actually automate today in legacy environments

The fastest way to lose credibility with CTOs is to talk about AI like it’s either magic or useless. The practical reality sits in a narrower band, and that band is still extremely important.

Today, AI can assist with:

  • Large-scale codebase analysis, especially when you need to build a fast mental model of what exists
  • Dependency mapping, including patterns that are hard to spot when knowledge is distributed across teams
  • Refactoring support, particularly for repetitive transformations and “mechanical” cleanup work
  • Automated test generation, as scaffolding and coverage expansion, not as a substitute for intent
  • Documentation reconstruction, where the system “works” but nobody can explain why

Those are not headline-grabbing capabilities, but they change the early phase of modernization. They reduce the time teams spend searching for context.
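To make the first two bullets concrete, here is a minimal sketch of what a first-pass dependency map looks like. It is deliberately naive, a static pass over Python imports, and real tooling (AI-assisted or not) goes much further with semantic and cross-language analysis; the module names are invented for illustration.

```python
# Illustrative sketch: a first-pass dependency map of a small codebase.
# The point is the shape of the "build a mental model" step, not the tool.
import ast
from collections import defaultdict

def imported_modules(source: str) -> set[str]:
    """Return the top-level module names a piece of source imports."""
    found = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            found.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split(".")[0])
    return found

def dependency_graph(modules: dict[str, str]) -> dict[str, set[str]]:
    """Map each module to the sibling modules it depends on."""
    graph = defaultdict(set)
    for name, source in modules.items():
        # Keep only edges into modules we own; stdlib and third-party
        # imports are noise for an internal dependency map.
        graph[name] = imported_modules(source) & modules.keys()
    return dict(graph)

if __name__ == "__main__":
    codebase = {
        "billing": "import ledger\nfrom audit import log_event\n",
        "ledger": "import json\n",
        "audit": "import ledger\n",
    }
    for module, deps in sorted(dependency_graph(codebase).items()):
        print(f"{module} -> {sorted(deps)}")
```

Even a toy pass like this answers the question teams spend weeks on informally: what actually depends on what.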

At the same time, AI still cannot independently own:

  • Architectural redesign, including tradeoffs around performance, cost, scalability, and operability
  • Regulatory accountability, because compliance is rarely a pure code problem
  • Systemic risk management, especially when a workflow spans many systems and exceptions
  • Business logic reinterpretation, because intent lives in people, policies, and edge cases

In other words, AI can accelerate understanding and some execution. It doesn’t remove the need to decide what matters and what can safely change.

AI-assisted refactoring is becoming the new baseline

To me, the shift isn’t primarily about AI writing code. It’s about how quickly teams can get to clarity, and how that changes what “reasonable timelines” look like.

I’ve watched modernization efforts stall in the same place over and over. Not because the team can’t build, but because they’re still trying to understand what’s really in front of them. The first couple of weeks usually go to the unglamorous work: mapping real end-to-end flows, chasing edge cases that only show up in prod, and agreeing on which rule is actually the source of truth.

AI is starting to compress that stage. When a team can build a usable map of a system faster, the conversation moves earlier from “What do we have?” to “What are we going to do about it?”

That’s where standards and ownership start to matter even more. The teams that get the upside are the ones who treat AI as part of a disciplined engineering system, with clear review practices, testing expectations, and architectural guardrails. Without that, velocity turns into noise.
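The "mechanical cleanup plus review" pattern can itself be sketched in a few lines: a codemod that rewrites a deprecated call everywhere, but emits a diff for human review instead of writing files directly. This is an assumption-laden toy, and the function names (old_log, log_event) are hypothetical.

```python
# Illustrative sketch of "mechanical" refactoring with a review guardrail:
# propose the change as a diff rather than applying it silently.
import ast
import difflib

class RenameCall(ast.NodeTransformer):
    """Rewrite calls to `old_name(...)` as `new_name(...)`."""
    def __init__(self, old_name: str, new_name: str):
        self.old_name, self.new_name = old_name, new_name

    def visit_Call(self, node: ast.Call) -> ast.Call:
        self.generic_visit(node)
        if isinstance(node.func, ast.Name) and node.func.id == self.old_name:
            node.func = ast.Name(id=self.new_name, ctx=ast.Load())
        return node

def propose_rewrite(source: str, old: str, new: str) -> str:
    """Return a unified diff of the mechanical change, for code review."""
    tree = RenameCall(old, new).visit(ast.parse(source))
    rewritten = ast.unparse(ast.fix_missing_locations(tree)) + "\n"
    return "".join(difflib.unified_diff(
        source.splitlines(keepends=True),
        rewritten.splitlines(keepends=True),
        fromfile="before.py", tofile="after.py"))

if __name__ == "__main__":
    code = "def handler(evt):\n    old_log(evt)\n"
    print(propose_rewrite(code, "old_log", "log_event"))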

The shift in modernization economics

What I keep coming back to is that friction always shows up in the business model. It shapes cost, speed, and how much change the organization can absorb in a year.

When friction drops:

  • Speed expectations rise. Stakeholders stop accepting long “orientation periods” as inevitable.
  • Competitive benchmarks shift. “We’ll modernize eventually” becomes a risky position when peers iterate faster.
  • Tolerance for stagnation shrinks. What used to be survivable technical debt starts to look like strategic drag.

I also think we’re underestimating how quickly AI-readiness becomes a competitive signal in PE-backed environments. Not because every firm suddenly becomes an AI expert, but because diligence teams follow incentives. If AI is being adopted broadly across enterprises, the next question becomes operational: can this company convert AI experiments into production outcomes without breaking core workflows?

That’s why I believe AI-readiness will increasingly show up in technical due diligence over the next 12–24 months, not as a buzzword, but as a proxy for modernization velocity and architectural coherence.

I’m aware that citing 2024 data can feel like ancient history in AI terms. I’m using it anyway because it captures the direction of travel: Stanford’s 2025 AI Index Report shows that enterprise AI adoption jumped from 55% of organizations in 2023 to 78% in 2024, and corporate AI investment reached $252.3B in 2024. The point isn’t the specific number; it’s what it does to expectations when AI becomes normal inside companies.

The hidden layer: technical debt AI can now expose

One of the most valuable things I’m seeing AI do in legacy environments is surface debt teams have learned to live with. The patterns are usually familiar, even when the system is different.

AI can reveal:

  • Duplicate logic that accumulated quietly over time
  • Dependencies no one documented because they felt obvious at the time
  • Structural inconsistencies across environments
  • Performance bottlenecks that hide behind “it usually works”
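The first bullet, duplicate logic, is surprisingly mechanical to surface. A minimal sketch, assuming Python source: hash each function's AST with identifiers normalized away, so copy-pasted rules group together even after variables were renamed. AI tooling adds semantic matching on top; this is the static baseline.

```python
# Illustrative sketch: find structurally identical functions by hashing
# their ASTs with names blanked out, so renamed copies still match.
import ast
import copy
import hashlib
from collections import defaultdict

def _normalized_dump(fn: ast.FunctionDef) -> str:
    """Serialize a function with its name, args, and variables blanked."""
    clone = copy.deepcopy(fn)  # don't mutate the original tree
    clone.name = "_"
    for node in ast.walk(clone):
        if isinstance(node, ast.Name):
            node.id = "_"
        elif isinstance(node, ast.arg):
            node.arg = "_"
    return ast.dump(clone)

def find_duplicate_functions(source: str) -> list[list[str]]:
    """Group function names whose normalized structure is identical."""
    groups = defaultdict(list)
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            digest = hashlib.sha1(_normalized_dump(node).encode()).hexdigest()
            groups[digest].append(node.name)
    return [names for names in groups.values() if len(names) > 1]

if __name__ == "__main__":
    code = """
def discount_a(price, rate):
    return price - price * rate

def discount_b(amount, pct):
    return amount - amount * pct

def tax(price, rate):
    return price + price * rate
"""
    print(find_duplicate_functions(code))  # the two discount variants group
```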

That’s where the temptation kicks in. Once you can see the debt clearly, it’s easy to assume the path to fixing it is equally clear. It rarely is.

Finding three versions of the same rule doesn’t tell you which one the business truly relies on. Mapping a dependency doesn’t tell you how risky it is to remove it. Spotting a bottleneck doesn’t automatically produce a safe redesign for a system that has to run tomorrow morning.
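One practical hedge at this point is characterization testing: record what the code does today, and require any refactor to prove it preserves that behavior before it ships. A minimal sketch, with `legacy_price` as a hypothetical stand-in for whichever undocumented rule is currently in production:

```python
# Illustrative sketch of characterization tests: pin down today's behavior
# of an undocumented rule before anyone is allowed to "clean it up".

def legacy_price(qty: int, member: bool) -> float:
    """The rule as it runs today, quirks included (hypothetical example)."""
    total = qty * 9.99
    if member and qty >= 10:
        total *= 0.9  # undocumented bulk discount for members
    return round(total, 2)

def record_golden(fn, cases):
    """Capture current outputs for a sample of inputs."""
    return {args: fn(*args) for args in cases}

def regressions(fn, golden):
    """Return the inputs where a candidate implementation diverges."""
    return [args for args, expected in golden.items() if fn(*args) != expected]

if __name__ == "__main__":
    golden = record_golden(legacy_price, [(1, False), (10, True)])
    # Any refactor must come back empty here before it ships.
    print(regressions(legacy_price, golden))  # []
```

The golden record doesn't tell you which behavior is *right*, but it turns "did we break anything?" from a debate into a check.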

AI makes the terrain clearer, but it also forces sharper choices. If I’m honest, this is the part that worries me more than the tooling. Once the system becomes legible, you still have to sequence decisions under real constraints: what you fix first, what you contain, what you rebuild, and what you deliberately leave alone because the organization isn’t ready to absorb the change yet. That’s when architecture stops feeling academic and starts feeling like operational responsibility.

What comes next for modernization

I don’t think this moment means legacy systems suddenly become easy to modernize. I also don’t think it means incumbents are doomed. What it does mean, in my view, is that the cost of understanding is starting to move, and that shifts how quickly technology organizations are expected to act.

AI is helping in two places at once now: it speeds up how quickly teams can understand a legacy system, and it also speeds up how quickly they can change it, through refactoring support, test scaffolding, and faster iteration. That combination is powerful, and it’s where the risk shifts. When both the diagnosis and the build cycle compress, it becomes easier to move forward without doing enough of the groundwork: agreeing on the source of truth, locking down data definitions, clarifying ownership, and sequencing changes around operational risk. That’s when you start seeing the downstream symptoms: reports that don’t reconcile, integrations that get brittle, and teams spending time re-litigating what a “customer” or an “order status” actually means.

The risk isn’t speed itself. It’s speed that outruns alignment.

Over the next 12 to 24 months, I expect AI-readiness to show up less as a buzzword and more as an execution signal, especially in PE-backed environments. The diligence question won’t be “Do you have AI?” It’ll be whether the company can ship change safely at a higher velocity, without letting core data definitions, ownership, and architectural coherence drift.

From where I sit, the partners who will matter are the ones who evolve their methods as the baseline changes, combining AI fluency with architectural discipline and governance. The teams that don’t adjust won’t necessarily fail immediately, but the gap will surface faster, through rework, operational drag, and decisions that get harder to defend.

