1/26/26

A conversation on AI, data, and building what actually scales

What Works. What Doesn’t. What’s Next.

Make AI [real] for your business

The room in Nashville was full of people who had already started.
They weren’t asking whether to use AI. They were asking why it wasn’t working the way they’d been promised.

The room settles quickly once the conversation starts.

On one side, Steve Russell, [up]start.13 CEO, who spends most of his time helping companies make AI usable rather than impressive. On the other, James Whitaker, who has spent the better part of 25 years inside Microsoft and is now responsible for data and AI systems that support millions of machines across Azure.

No slides. No demos queued up.

James didn’t start with models or tools. He started with where things actually stood.

They were past experimentation. Past standing up infrastructure. Past the excitement phase.

They were deep in it.

“We’re about 90% done consolidating the data… we’ve established the governance model, and we have a fair number of AI agents stood up and functioning right now.”

And yet, even at that scale, the hardest lessons weren’t about technology. They were about people, process, and trust.

The Starting Point

When James stepped into his role, the organization wasn’t broken. It was functioning. But it was fragile.

“We had somewhere north of a thousand different data sources… spreadsheets, Kusto databases, SQL databases, notepads, sticky notes — everything else.”

Teams had done what teams always do under pressure: they built what they needed to keep moving. Individually, those decisions made sense. Collectively, they created a problem no one could see clearly yet.

That problem didn’t fully surface until AI entered the picture.

As teams began rolling out chatbots, dashboards, and AI-driven insights, something subtle but unsettling emerged. The systems were working. The outputs were polished. But the answers didn’t line up.

What first looked like minor variance revealed a deeper issue. The same questions, asked in different places, produced different truths. Dozens of tools, trained on slightly different interpretations of the same data, each confidently telling its own version of the story.

That was the real problem. Certainty built on inconsistency.

“AI is very authoritative and very confident to tell you that it’s right — even when it’s not.”

That’s the moment when speed stops being an advantage and starts becoming risk.

Why Moving Fast Didn’t Work Anymore

As Steve pushes on what failed early, the answer isn’t surprising. Autonomy without alignment worked until it didn’t. Teams optimized locally, but no one owned the global truth.

At the same time, expectations around AI were being shaped by demos rather than delivery. Leaders would see a polished walkthrough and assume production systems should look the same and come together just as quickly.

James doesn’t dismiss that ambition. He reframes it.

“That demo was six months of work behind the scenes for one data source — not a thousand.”

Another trap was the belief that everything had to be perfect before anything could begin. Clean all the data first. Then build.

That approach stalled more initiatives than it saved.

“If you try to plan for two years before executing, a year will go by and new products will come out — and a lot of it will be irrelevant.”

Waiting for certainty turned out to be riskier than moving forward deliberately.

The Shift

The breakthrough was a decision about structure.

Instead of choosing between full centralization or total decentralization, the team separated concerns. Data would be centralized. Execution would remain federated.

Everyone would start from the same truth. Domain experts would still build what they needed.

“We wanted to centralize the data so we’re all starting from a single source of truth — but federate development so domain owners could still move fast.”

That balance helped move the focus to proving value early. Small, visible wins that leadership and teams could see side by side with the old world.

“If you go to leadership and say, ‘I need two years before you see anything,’ that’s not a great conversation.”

So they didn’t.

They built a beachhead. They showed the delta. And once people saw it working, resistance faded.

Rebuilding Trust

Old systems weren’t shut off overnight. New ones ran alongside them. Reports were compared. Differences were investigated with domain owners in the loop.

Sometimes the new system was wrong. Sometimes the old one had been wrong for years.

That process mattered more than speed. It replaced debate with evidence.
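To make that concrete, here is a minimal sketch of what a parallel-run check might look like, assuming two hypothetical report extracts keyed by the same metric names, one from a legacy pipeline and one from the consolidated source. The metric names, figures, and tolerance below are invented for illustration; they are not from the conversation.

```python
# Illustrative only: a hypothetical parallel-run check that compares a legacy
# report against the new, consolidated one before the old system is retired.

# Hypothetical extracts keyed by metric name; values are the reported figures.
legacy_report = {"active_machines": 1_204_311, "patch_compliance": 0.971, "open_incidents": 84}
new_report    = {"active_machines": 1_204_887, "patch_compliance": 0.971, "open_incidents": 79}

TOLERANCE = 0.01  # assumed 1% relative tolerance; real thresholds would come from domain owners

def compare_reports(legacy: dict, new: dict, tolerance: float = TOLERANCE) -> list[str]:
    """Return a list of metrics whose values diverge beyond the tolerance."""
    flagged = []
    for metric in sorted(set(legacy) | set(new)):
        if metric not in legacy or metric not in new:
            flagged.append(f"{metric}: present in only one report")
            continue
        old_val, new_val = legacy[metric], new[metric]
        baseline = max(abs(old_val), abs(new_val), 1e-9)
        if abs(old_val - new_val) / baseline > tolerance:
            flagged.append(f"{metric}: legacy={old_val} vs new={new_val}")
    return flagged

# Anything flagged goes to the domain owner for investigation;
# sometimes the new pipeline is wrong, sometimes the old one was.
for issue in compare_reports(legacy_report, new_report):
    print(issue)
```

The point isn’t the code. It’s that disagreements surface as a concrete list to review with domain owners, not a debate.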

Governance followed the same philosophy: making the right thing the easiest thing.

Templates replaced reinvention. Standards were enforced quietly. AI itself was used to monitor reporting at scale.

Governance became continuous, largely invisible, and effective because it supported momentum rather than fighting it.

The Part No One Can Skip

As the conversation winds down, one reality becomes clear. None of this works without leadership alignment.

Grassroots teams felt the pain first. Leadership made the fix possible.

James puts it plainly: “People see the problems. They don’t always see the solution. That’s not their job.”

Someone had to own the system, the trade-offs, and the delivery. And leadership had to stand behind that ownership when rebuilding caused friction.



What Comes Next

The next phase isn’t better dashboards or smarter recommendations. It’s AI systems that act.

Agentic AI raises the stakes. Decisions happen faster than humans can intervene. Errors propagate instantly.

The response isn’t fear or hesitation. It’s discipline. Parallel validation. Monitoring. Clear controls. Human oversight where it matters.

The conversation doesn’t end with a bold prediction. It ends with something more grounded.

Start. Find your beachhead. Build the foundation. Prove value early. Iterate.

That’s what works.
That’s what doesn’t.
And that’s what comes next.

Most teams stop at the plan. This one didn’t.

Let's make AI [real] together.