1/23/26

What Works, What Doesn't, What's Next: Real Talk on AI and Data at Scale

A fireside chat with upstart13 CEO Steve Russell and James Whitaker (Microsoft)

Make AI [real] for your business

If you've ever watched a 20-minute demo and thought "why can't we just do that?" — this conversation is for you.

We cut through the AI hype to talk about what's actually happening inside data organizations trying to scale intelligently. James Whitaker, who manages AI and data deployment for Microsoft Azure's hardware division (north of 7 million computers), sat down with our CEO Steve Russell to discuss the messy reality of enterprise AI implementation.


The Setup: Where Microsoft Azure Actually Is

James's team is deep into their AI transformation. They've stood up Microsoft Fabric as their data ecosystem, consolidated 90% of their division's data sources, established governance models, and deployed multiple AI agents. Right now, those agents generate insights and recommendations that people still act on manually.

The next phase? Agentic AI that takes autonomous action.

But getting here wasn't clean. And that's the point.

What Doesn't Work: The YouTube Expert Problem

"There's a lot of jobs where you can watch a YouTube video and then be an expert," James said. "I needed to rebuild the engine on my boat — watched three or four videos, became an expert, rebuilt the engine. A lot of people see data engineering the same way."

The result? Microsoft's hardware division had over 1,000 different data sources being used internally. Spreadsheets, Kusto databases, SQL databases, notepads, sticky notes — all creating their own version of truth.

The Compounding Problem

When individuals built their own data models to "move fast" and "solve business problems," small discrepancies compounded over time. They'd filter data slightly differently. Define metrics with subtle variations. The team had roughly 50 chatbots internally, and asking three or four of them the same question would return different answers.

"AI is very authoritative and very confident to tell you that it's right, even when it's not."

That's when leadership noticed: multiple reports from different groups were showing different answers to the same questions.

What Works: Centralize Data, Federate Development

The solution wasn't to centralize everything — that's not realistic at enterprise scale. Microsoft's approach:

Centralize the data. One source of truth. Unified semantic layer. Clean foundation.

Federate the development. Let domain experts with business context build the reporting and AI solutions they need — but from that centralized foundation.
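To make that split concrete, here's a minimal sketch of what a shared semantic layer can look like. This is illustrative Python, not Microsoft's implementation; the module, column names, and metric are hypothetical:

    # semantic_layer.py - hypothetical central module owned by the platform team.
    # Canonical filters and metrics are defined exactly once, so federated teams
    # can't drift into subtly different definitions.
    import pandas as pd

    def production_fleet(df: pd.DataFrame) -> pd.DataFrame:
        """The one agreed-upon definition of 'in-production hardware'."""
        return df[(df["status"] == "active") & (~df["is_test_device"])]

    def failure_rate(df: pd.DataFrame) -> float:
        """Share of production devices that failed, on the canonical filter."""
        return production_fleet(df)["failed"].mean()

    devices = pd.DataFrame({
        "status": ["active", "active", "retired"],
        "is_test_device": [False, True, False],
        "failed": [True, False, False],
    })
    print(failure_rate(devices))  # 1.0: one production device, and it failed

A domain team's report imports failure_rate instead of re-deriving it; fifty chatbots pointed at one definition give one answer.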

The Execution Reality

  • Started with full inventory of 1,000+ data sources

  • Consolidated down to ~600 manageable sources

  • Spent 8 months standing up infrastructure and ingesting data into Fabric

  • Built parallel systems to compare old reports vs. new reports

  • Created baseline Power BI reports for each group as templates

  • Established governance through training, roadshows, and documentation

The governance model: Microsoft created AI agents that scan thousands of reports and flag issues — connections to unsupported data sources, non-standard filters, housekeeping problems. Automated governance at scale.
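The conversation didn't get into how those agents are built, but the rule-based half of a scan like that is easy to picture. A minimal sketch, in which the report metadata shape, approved-source list, and standard-filter names are all assumptions:

    # governance_scan.py - flag reports that drift from the governed foundation.
    # Everything here is illustrative; the source describes agents flagging these
    # issues but not their implementation.

    APPROVED_SOURCES = {"fabric_lakehouse", "fabric_warehouse"}  # hypothetical allow-list
    STANDARD_FILTERS = {"production_fleet", "current_quarter"}   # hypothetical shared filters

    def scan_report(report: dict) -> list[str]:
        """Return governance flags for one report's metadata."""
        flags = []
        for source in report.get("data_sources", []):
            if source not in APPROVED_SOURCES:
                flags.append(f"unsupported data source: {source}")
        for filt in report.get("filters", []):
            if filt not in STANDARD_FILTERS:
                flags.append(f"non-standard filter: {filt}")
        if not report.get("owner"):
            flags.append("housekeeping: no owner assigned")
        return flags

    reports = [
        {"name": "Fleet Health", "data_sources": ["fabric_lakehouse"],
         "filters": ["production_fleet"], "owner": "data-platform"},
        {"name": "Legacy Capacity", "data_sources": ["team_spreadsheet"],
         "filters": ["my_custom_filter"], "owner": None},
    ]
    for r in reports:
        for flag in scan_report(r):
            print(f"{r['name']}: {flag}")  # only 'Legacy Capacity' gets flagged

The specific rules matter less than the pattern: governance runs as code against every report, continuously, instead of depending on manual review.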

The Buy-In Problem (And How They Solved It)

Convincing teams to pause their work, rebuild reports they'd spent a year developing, and lose some control over their backend systems? That's a tough sell.

What Broke Through

Executive sponsorship. Their CVP was already frustrated by data quality issues and divergent reports. Top-down support mattered.

Early wins and beachheads. Don't plan for two years then execute. Show success in small areas first. Build credibility. Demonstrate value quickly.

Clear communication framework. When someone asked for something, James's answer was always "yes" — but qualified:

  • Yes, no problem, we can fit that in

  • Yes, but I have to delay another project

  • Yes, but I need to bring in a partner like upstart13 to meet the deadline

  • Yes, but it's a temporary solution that builds technical debt

This honest framing helped people understand tradeoffs instead of just hearing "no" or empty promises.

The Timeline Shift

First six months: constant pushback. "Why are you doing this? We're fine the way we are."

Now: "We can't run fast enough. People are coming to us saying 'onboard our data next, unblock our solution next.'"


Critical Lessons for Enterprise AI

1. You Can't Wait for Perfect Data

"A lot of people get stuck on 'we need to clean the data, spend forever getting it 100% accurate before we move to the next step,'" James explained.

Instead: Create data pipelines that ingest sources, start cleaning during the process, then iterate. Get started. Build momentum.
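In code, the pattern is "land it raw, clean what you know, ship, repeat." A minimal sketch with pandas, where the columns and cleaning rules are hypothetical:

    # pipeline.py - land data raw, clean what you know today, iterate tomorrow.
    import pandas as pd

    # Rules accumulate as issues surface; ingestion never waits for this list
    # to be "complete".
    CLEANING_RULES = [
        lambda df: df.drop_duplicates(),
        lambda df: df.dropna(subset=["device_id"]),             # need a key to join on
        lambda df: df.assign(region=df["region"].str.upper()),  # normalize casing
    ]

    def clean(df: pd.DataFrame) -> pd.DataFrame:
        """Apply whatever rules exist today; add more on the next pass."""
        for rule in CLEANING_RULES:
            df = rule(df)
        return df

    # Stand-in for a freshly ingested raw source (a CSV, a Kusto export, etc.).
    raw = pd.DataFrame({
        "device_id": ["d1", "d1", None],
        "region": ["wus", "wus", "eus"],
    })
    print(clean(raw))  # one clean row survives: good enough to build on

The rule list is the point: it starts short and grows with every iteration, so ingestion never blocks on a perfect-data milestone.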

2. Technology Changes Faster Than Perfect Plans

If you plan for two years the way organizations did 20 years ago, new products will launch mid-plan and render your roadmap irrelevant. Execute quickly. Iterate constantly.

3. Training Is Non-Negotiable

Microsoft invested heavily in roadshows, training videos, template reports, and documentation. They empowered data teams from within business units to do data engineering work — which reduced pressure on the central team and accelerated progress.

Without this education infrastructure, teams take shortcuts. Six months later, you're back where you started.

4. Parallel Systems Reduce Risk

When centralizing data that thousands of people depend on, Microsoft didn't just flip a switch. They:

  • Built parallel systems

  • Compared old reports vs. new reports

  • Worked with domain owners to understand discrepancies

  • Sometimes discovered the old report had been wrong for years

This comparison period built confidence before full migration.
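That comparison step can be surprisingly little code. A minimal sketch, assuming both systems can export the same metric keyed the same way (all names and numbers are illustrative):

    # compare_reports.py - diff the same metric from the legacy system and the
    # new one before cutting anyone over. Names and tolerances are illustrative.
    import pandas as pd

    def compare_metric(old: pd.DataFrame, new: pd.DataFrame,
                       key: str, metric: str, tolerance: float = 0.01) -> pd.DataFrame:
        """Return rows where old and new disagree beyond a relative tolerance."""
        merged = old.merge(new, on=key, suffixes=("_old", "_new"))
        diff = (merged[f"{metric}_new"] - merged[f"{metric}_old"]).abs()
        relative = diff / merged[f"{metric}_old"].abs().clip(lower=1e-9)
        return merged[relative > tolerance]

    old = pd.DataFrame({"product_line": ["A", "B"], "failure_rate": [0.010, 0.020]})
    new = pd.DataFrame({"product_line": ["A", "B"], "failure_rate": [0.010, 0.031]})
    print(compare_metric(old, new, key="product_line", metric="failure_rate"))
    # flags product line B; a domain owner decides which system is right

Every flagged row goes to a domain owner for a ruling, which is exactly how those years-old errors in legacy reports surfaced.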

5. Bring In Trusted Partners

Even with a 50-person expert team, Microsoft brought in multiple vendors to augment capacity and inject specialized expertise. James was direct: "Don't be afraid to reach out for help."

What's Next: The Agentic AI Challenge

The current AI agents generate insights. Humans still make decisions.

The next 12 months? AI agents that take autonomous action.

The new problem: When AI makes decisions faster than humans can, it can also cause problems faster than humans can catch them. If an agent detects an issue and turns off half your fleet or pulls down customer capacity based on bad data, that's unacceptable.

The solution requires:

  • Extremely high data quality standards

  • Robust alerting and monitoring systems

  • Domain owners actively using reports to catch anomalies

  • Teams dedicated to monitoring AI agent behavior

  • Telemetry in place before agents go autonomous (see the sketch below)
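The chat didn't detail what those safeguards look like in code, but the shape of a blast-radius guardrail follows from the fleet example above. A minimal sketch; the thresholds, fields, and action are all assumptions:

    # guardrails.py - gate an agent's proposed action behind pre-agreed limits.
    # All thresholds and field names are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class ProposedAction:
        description: str
        affected_fraction: float  # share of the fleet this action touches
        data_age_min: int         # age of the data behind the decision, in minutes

    MAX_BLAST_RADIUS = 0.02  # an agent never touches more than 2% of the fleet
    MAX_DATA_AGE_MIN = 15    # stale telemetry means no autonomous action

    def approve(action: ProposedAction) -> bool:
        """Allow autonomous execution only inside tight, pre-agreed limits."""
        if action.affected_fraction > MAX_BLAST_RADIUS:
            return False  # too big a blast radius: escalate to a human
        if action.data_age_min > MAX_DATA_AGE_MIN:
            return False  # decision based on stale data: escalate
        return True

    action = ProposedAction("power-cycle rack 14", affected_fraction=0.001, data_age_min=5)
    print("executing" if approve(action) else "escalating", action.description)

The design choice that matters: the limits are agreed on and enforced in code before the agent is allowed to act, not reconstructed after an incident.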

The Bottom Line

Building AI that actually scales isn't about watching demos and implementing them next week. It's about:

  • Centralizing your data foundation while federating development to domain experts

  • Starting with a beachhead and iterating rather than planning perfectly for years

  • Investing in governance and education as hard infrastructure

  • Building credibility through early wins before asking for enterprise-wide change

  • Being honest about timelines and tradeoffs instead of overpromising

As James put it: "Establish that beachhead. Start pulling your data together. Don't feel like you should plan for two years and then execute."

The companies that will succeed with AI aren't the ones with the flashiest demos. They're the ones building the unsexy foundation that makes everything else possible.

Want to talk about what this looks like for your organization? Reach out! We've helped mid-market companies and enterprises like Microsoft build AI infrastructure that actually works — not just looks good in a demo.

Most teams stop at the plan. This one didn’t.

Let's make AI [real] together.