What Can We Actually Do with AI?

The starting point for AI adoption isn't choosing the right technology — it's defining the right problem. Here's how to ask better questions and a practical framework for problem definition.

Tags: AX · AI Transformation · Problem Definition · Consulting

"Shouldn't we be adopting AI too?" The moment this question comes up, things are already heading in the wrong direction.

Everyone's Asking 'How'

There's one question I hear more than any other in corporate settings these days.

"What can we do with AI?"

It echoes through every conference room. Executives grow anxious watching competitors announce AI initiatives. Teams scramble to figure out which tools to use right now. ChatGPT, Copilot, autonomous agents — the options are endless, yet nothing quite sticks.

Why?

Because they picked the tool first. When you're holding a hammer, everything looks like a nail. When you pick up AI first, every task looks like an "automation opportunity." But the companies that deliver real results start somewhere else entirely.

They start by defining the problem.

Why Is Problem Definition So Hard?

Everyone knows problem definition matters. But when you actually try to do it, most organizations fall into the same traps.

They can't tell the difference between an annoyance and a problem

"Writing reports takes way too long" is an annoyance, not a problem. The real problem is hiding behind it. Is the report slow because the data is scattered across systems? Because unclear decision criteria lead to endless revisions? Or because the report shouldn't exist at all? You have to dig that deep before the actual problem reveals itself.

They mistake solutions for problems

"We need a chatbot" isn't a problem — it's already a solution. The real problem might be: "60% of customer inquiries are repetitive, which prevents our support team from focusing on high-value conversations." Define it this way, and suddenly a chatbot might be the answer — or it might be redesigning the FAQ, or fixing the product UX altogether.

They chase problems they can't measure

"We want to improve operational efficiency" is a goal, not a problem definition. A problem needs to be specific and measurable. What is happening, how much of it, to whom, and what outcome is it producing? Without these four elements, there's no way to prove results even after AI is deployed.

The Anatomy of a Good Problem Definition

Here's a framework you can put to work immediately.

"Who + in what situation + because of what + is experiencing what outcome."

Let me show you what this looks like.

  • Bad example: "The sales team isn't productive enough"
  • Good example: "When responding to new leads, sales reps have to manually pull customer history and product information from three or more systems, resulting in an average initial response time of four hours — during which 30% of leads defect to competitors"

The second version makes it crystal clear where AI can step in. You could automate the data aggregation. You could use a predictive model to prioritize leads. You could generate initial response templates with generative AI. When the problem is sharp, the solution options surface on their own.
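The framework can even be treated as a checklist. Here is a minimal sketch, assuming a hypothetical `ProblemStatement` structure (the class and field names are illustrative, not a standard schema), that encodes the four elements and flags any that are missing:

```python
from dataclasses import dataclass, fields

# Hypothetical sketch of the "who + situation + cause + outcome" framework.
@dataclass
class ProblemStatement:
    who: str        # who experiences the problem
    situation: str  # in what situation
    cause: str      # because of what
    outcome: str    # what measurable outcome it produces

    def missing_elements(self) -> list[str]:
        """Return the names of any elements left blank."""
        return [f.name for f in fields(self) if not getattr(self, f.name).strip()]

# The "good example" above, restated in this structure:
stmt = ProblemStatement(
    who="sales reps responding to new leads",
    situation="customer history and product info live in three or more systems",
    cause="the data must be pulled together manually",
    outcome="average first response takes four hours; 30% of leads defect",
)
print(stmt.missing_elements())  # an empty list means all four elements are filled in

# The "bad example" fails the same check:
vague = ProblemStatement(who="the sales team", situation="", cause="", outcome="")
print(vague.missing_elements())
```

The point of the sketch is the check, not the class: a problem definition that cannot fill all four slots is not yet a problem definition.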

Three Practices for Defining Problems

1. Ask 'why' five times on the ground

This is a classic technique borrowed from the Toyota Production System, but it's even more powerful in the age of AI.

"Why is it slow?" → "Why is it manual?" → "Why are the systems separate?" → "Why haven't they been integrated?" → "Why wasn't that budget approved?"

By the fifth "why," you sometimes discover the issue isn't something AI should solve at all — it's an organizational structure or process problem. That's an important finding too.

2. Start where the data is

AI runs on data. No matter how well you define a problem, if the relevant data doesn't exist or isn't accessible, AI is powerless.

After defining your problem, always follow up with this question: "Do we actually have the data that describes this problem right now?" If the answer is no, collecting that data becomes your first project.

3. Assign an owner to the problem

Problem definition isn't a document — it's a responsibility. Someone needs to own the outcome personally, tying it to their performance. Otherwise, the problem definition will quietly dissolve into nothing.

The most common failure pattern in AI adoption projects is starting as "everyone's problem" and ending as "nobody's result."

AI Is Great at Answering. That's Why the Questions Matter More.

The capabilities of generative AI are evolving every day. It writes code, summarizes documents, creates images, and proposes strategies. But all of these capabilities only matter when paired with the right question.

Attach a perfect AI to the wrong problem, and you'll end up doing useless work with remarkable efficiency.

The starting point for AI transformation isn't adopting technology. It's figuring out what problems your organization truly needs to solve, and defining them clearly and measurably. This is the first and most serious thing any organization should do in the age of AI.

The tools will keep getting better. But the ability to ask the right questions will always be a human job.