Transcripts

AI Enablement Roadmap

We’re already using generative AI—summarizing documents, working through spreadsheets, and moving faster every day. That is not going anywhere. The problem is that much of this activity is happening off to the side. Things drift outside of controls, definitions do not line up, ownership becomes unclear, and good ideas never make it past the experiment stage. At the same time, data is scattered everywhere, so the pressure builds. We need a way to keep moving fast without everything becoming messy.

The challenge is not AI. It is treating cloud, data, analytics, and AI as separate efforts instead of one connected approach. The interesting part is that AI can actually help build that strategy—generating code, context, and insight along the way—so it speeds things up instead of making them more complicated.

It starts with a secure cloud foundation: clear access, visible data flows, shared services, and governance built in from day one. This is the layer that defines how things actually work—who can use what, how data moves, and how everything stays governed.

On top of that sits the stitching—the layer that connects cloud, internal systems, and third-party tools. It provides repeatable patterns so organizations can scale up, scale down, build, or buy without constantly reworking everything. As tools and vendors change, the stitching adapts through governed paths, not fragile one-off integrations that keep breaking.

Organizations do not always need to replace the chat tools people are already using. They need to add structure so the work is brought under control and supports where the business is going. This typically plays out in three steps.

First, chat capture. Prompts, replies, and files move into managed storage so the work becomes visible and can generate insight.

Second, workspaces. That same activity becomes structured, repeatable, and reusable across teams instead of being trapped in individual browser sessions.

Third, integration with internal tools. Questions and actions move through a governed path into APIs, warehouses, and systems of record, so answers come from company data—not just the model.

The goal is simple: meet people where they are and move that behavior onto trusted rails so it drives real progress.

From there, generative AI moves inside the enterprise along two paths.

First, governed exploration across connected sources—spreadsheets, PDFs, warehouse tables, APIs, and operational systems—creating customized generative experiences that evolve with the business.

Second, product-grade analytics with embedded AI—turning insights into reusable, well-managed components that power real applications. One supports exploration, the other supports execution, and both run on the same foundation.

As organizations continue working in structured generative BI, prototyping becomes the mechanism for progress—but it cannot look the way it used to. It is no longer about proving something is possible with a quick demo or a compelling interface. The goal is a real path to production.

AI helps accelerate that process by generating code, context, and insight, so prototypes become more complete—not just faster. They must validate business value, architecture, data, governance, ownership, and cost at the same time. They must also span multiple use cases, identifying reusable capabilities rather than solving a single problem in isolation.

As real capabilities begin to emerge, patterns start to show up. Not just better generative AI, but repeatable ways to build.

This includes document and pipeline intelligence—turning documents, data, and workflow outputs into structured, usable information—and grounded natural language interaction, connecting that data into AI-powered experiences within internal applications.

Over time, these same patterns enable intelligence, automation, and multi-step workflows—not uncontrolled agents, but practical, well-managed assistants embedded directly into operations.

The goal is not to launch another AI program. It is to create one way of operating where cloud, data, governance, analytics, and AI work together. Everything builds on itself. Every prompt, file, and interaction contributes to something instead of getting lost.

In the near term, this reduces pressure—less shadow IT, clearer ownership, stronger compliance, and faster decisions. Over time, it compounds. Every prototype, integration, and reporting improvement strengthens the same foundation.

That is how innovation becomes repeatable, trust builds naturally, and AI becomes something that actually sticks. It becomes practical—not by chasing autonomy, but by building on patterns that can be reused.

The Data Enablement Strategy

Before we talk about AI, we need to talk about the foundation underneath it.

Most organizations are not struggling with access to AI. They are struggling with what AI is exposing: fragmented data, poor quality, disconnected systems, and low trust. AI did not create these problems—it magnified them at enterprise scale. Faster decisions require better information, better automation requires stronger control, and better outcomes require greater trust.

This will not be solved with the modern data stack, where vendors are pieced together to create a bloated architecture. It is solved by making the puzzle less complex through a simplified approach that brings speed, confidence, and enterprise value.

It creates clarity across six connected areas: how information moves, where it lives, whether it can be trusted, whether it can be found and understood, how business meaning is standardized, and how leaders interact with insight.

These are not separate investments. They are one connected system.

When they work together, they create a virtuous cycle. Better data creates better insight. Better insight creates better decisions. Better decisions create stronger business performance—and the need for even better insight. AI also strengthens that cycle by helping prepare data for its own consumption.

Most organizations think AI starts with the model. It does not. It starts with how information moves.

If information is delayed, fragmented, or difficult to trust, AI simply scales the problem. What is needed is coordinated movement across operations, documents, workflows, and decisions.

Information must move reliably. It must be visible. It must be reusable so every new initiative builds on what already exists instead of starting over. This is also where AI creates immediate value—accelerating workflows, reducing manual effort, and shortening time to execution.

The goal is simple: move faster with less friction and more confidence.

Even with strong pipeline capabilities, not all information should be centralized—and in many cases, it is not practical to do so.

Some information must remain close to the teams that use it. Some lives inside operational systems, documents, and local files. Some changes too quickly or too specifically to force into one platform.

But when access becomes fragmented, definitions drift, reports stop aligning, and leadership receives conflicting answers, hesitation follows—and hesitation creates cost.

What is needed is unified access. Not necessarily a centralized platform, but one trusted channel to information. A place where both structured and unstructured information are governed consistently, security is clear, and duplication is reduced.

AI helps accelerate that foundation by defining shared business objects, generating governance and control patterns, and identifying reusable queries and request patterns that improve performance and consistency across disparate sources.

The goal is confidence in the information behind every decision.

Even with strong access, none of it matters if the information cannot be trusted.

Poor quality creates invisible costs: rework, reporting errors, missed opportunities, and operational inefficiency. At AI scale, that risk grows. Mistakes do not stay isolated—they multiply through automation.

Quality must move from cleanup to prevention. From fixing issues after the fact to identifying them before they impact the business.

This means stronger controls around completeness, consistency, and accuracy—and increasingly, using AI itself to improve quality upstream.

Because trust is not built in the presentation. It is built in the process.

Even trusted information has no value if people cannot find it or understand it.

This is where organizations lose speed. People rebuild reports, duplicate effort, and create workarounds because the right information is too difficult to access.

What is needed is clarity. Who owns this information? Where did it come from? Can it be trusted?

A strong catalog creates that visibility. It improves discoverability, lineage, context, and accountability. AI strengthens this even further, making discovery faster and governance stronger.
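The questions above (who owns it, where it came from, whether it can be trusted) map naturally onto a catalog record. A minimal sketch in Python, where the field names and the notion of "trusted" are illustrative assumptions rather than any particular catalog product's schema:

```python
from dataclasses import dataclass, field

# Sketch of a catalog entry. Fields (owner, source system, lineage) are
# illustrative assumptions, not a specific product's schema.
@dataclass
class CatalogEntry:
    name: str            # business-friendly name people can search for
    owner: str           # who is accountable for this information
    source_system: str   # where it originates
    lineage: list = field(default_factory=list)   # upstream datasets
    description: str = ""

    def is_trusted(self):
        # "Trusted" here simply means ownership and lineage are documented.
        return bool(self.owner) and bool(self.lineage)

revenue_entry = CatalogEntry(
    name="Monthly Revenue",
    owner="finance-data-team",
    source_system="erp",
    lineage=["erp.orders", "erp.invoices"],
    description="Recognized revenue aggregated by calendar month.",
)

print(revenue_entry.is_trusted())  # True
```

Even this small amount of structure answers the three questions directly: the owner field names accountability, lineage shows provenance, and the trust check is just those two facts being present.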

Even when information is available, another problem appears. Different parts of the business define success differently.

Revenue. Performance. Productivity.

The same words with different meanings.

That creates conflicting reporting and strategic misalignment.

What is needed is consistency—a shared business language.

This is what the semantic layer provides. It standardizes business definitions and simplifies underlying data models so dashboards, analytics, and AI all operate from the same understanding.

AI helps by standardizing metrics, translating technical names into business terms, and simplifying complex joins into clear semantic relationships.

That shared language creates the context LLMs need, improving accuracy, consistency, and trust.
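One way to picture that semantic layer is as a shared dictionary of governed metric definitions, with synonym resolution so the same business word always lands on the same logic. The metric names, synonyms, sources, and SQL fragments below are illustrative assumptions, not a real schema:

```python
# Sketch of a semantic layer: business terms mapped to governed definitions.
# Metric names, synonyms, sources, and SQL fragments are illustrative assumptions.
SEMANTIC_LAYER = {
    "revenue": {
        "definition": "Recognized revenue, net of refunds.",
        "sql": "SUM(amount) - SUM(refund_amount)",
        "source": "finance.recognized_revenue",
        "synonyms": ["sales", "turnover"],
    },
    "exposure": {
        "definition": "Total open exposure across counterparties.",
        "sql": "SUM(open_amount)",
        "source": "risk.counterparty_positions",
        "synonyms": ["risk exposure", "open exposure"],
    },
}

def resolve_metric(term):
    """Map a business word (or one of its synonyms) to the governed metric,
    so the same word always means the same thing."""
    term = term.lower().strip()
    for name, spec in SEMANTIC_LAYER.items():
        if term == name or term in spec["synonyms"]:
            return name
    return None

print(resolve_metric("turnover"))  # revenue
print(resolve_metric("velocity"))  # None
```

The point of the sketch is the contract, not the code: "turnover" resolves to the one governed definition of revenue, and a term the business has not defined resolves to nothing rather than to a guess.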

Because the value of AI is not speed alone—it is trusted answers built on the right understanding.

Now that governance is in place, the focus shifts to visibility.

Historically, analytics meant dashboards, reports, and waiting. But business does not wait. Markets move, risk changes, and questions evolve daily.

Leaders need flexibility, not another static report.

Generative analytics changes that. It allows people to ask questions directly in natural language and receive insight grounded in trusted enterprise information—not just charts.

This drives faster decisions and stronger execution.

This is where the system compounds value.

Better insight creates better questions. Better questions improve the information. Better information strengthens the business.

AI is not replacing strategy. It is accelerating it.

Each of these capabilities matters, but the real value is how they work together.

Information moves efficiently. It lives in a governed environment. Its quality improves continuously. It is easy to find and understand. Its meaning is consistent. And it becomes instantly accessible through natural interaction.

That is the shift—from fragmented analytics to a connected intelligent system. From isolated AI use cases to a strategy that compounds value over time.

That is what a 360-degree data strategy enables.

Not just better data—but better decisions, stronger execution, and sustainable enterprise value.

And an added advantage: organizations can use AI to help get there.

Production Ready Prototyping

Most organizations are already experimenting with AI. Building something new is not today’s problem. The challenge is moving from isolated experiments—often without a clear path for reuse, governance, or production—to repeatable enterprise value.

Leadership is no longer asking whether AI can work. They are asking how to make it scalable, how to make it governable, and how to ensure each investment strengthens the next.

Today, the way organizations prototype needs to change.

Historically, prototypes were narrow: one workflow, one stack, one bet. Teams spent weeks proving a single concept, hoping that if it worked, they could get it to production and business value would follow.

That made sense when engineering moved slowly. But in the age of generative AI, that model no longer fits. The challenge is no longer whether functionality can be built—it can.

The real challenge is identifying capabilities that solve multiple use cases as we build, so prototypes create reusable enterprise value instead of isolated solutions. The goal is not just to prove something is possible. It is to ensure what we build today is sustainable and makes the next use case easier tomorrow.

That is where real enterprise value begins.

Generative AI creates a powerful opportunity to accelerate this.

First, it generates code. Teams can stand up services, workflows, APIs, interfaces, and automation dramatically faster. What once took weeks can now be explored in days.

Second, it generates context. It helps summarize documents, explain architecture, create shared understanding, and align business and technical teams faster. Less time is spent understanding the problem, and more time is spent solving it.

Third, it generates insight. It helps create narratives, identify patterns, compare options, and surface hypotheses before heavy engineering investment. Weaker paths are cut earlier. Better paths are funded sooner.

Together, code, context, and insight compress time and widen exploration. The same team can evaluate far more possibilities in the same calendar window.

But speed alone creates risk. Rapid prototyping without structure becomes fast chaos.

That is why every prototype must stay anchored in use cases.

Use cases define who the audience is, what workflow matters, what success looks like, and what governance boundaries must be respected. Prototypes are not demos—they are vehicles for validating real business outcomes.

Prototyping must also become multidimensional.

It cannot simply show impressive functionality connected to a model. Engineering, architecture, security, FinOps, and delivery planning must move together.

A prototype should not be treated like a disposable branch. It should be treated like a controlled slice of the eventual production ecosystem.

Architecture ensures what we build can scale.

Security ensures it can be trusted, with the right controls and enterprise boundaries from the beginning.

FinOps ensures exploration remains measurable and sustainable.

Delivery planning ensures we know what lands when—and how today’s investment reduces tomorrow’s effort.

This is what turns prototypes into production pathways.

This is where the concept of capabilities becomes critical.

A capability is not just technology, and not just a business objective. It is the pairing of both—business functionality joined to technical ability.

This is where GenAI’s ability to provide context and insight on the code it generates becomes powerful. It can look across codebases, understand how modules relate, and connect them to the business functions they support.

For example, document intake is functionality.

OCR, contextualization, and governed AI workflows are the technical abilities that make it real.

That pairing is the capability.

When those pairings become explicit, product, engineering, and finance are finally talking about the same thing—not just platforms like Amazon Bedrock or Snowflake Cortex, and not just better insights, but clearly defined capabilities that connect business outcomes to platform investment.
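That pairing can be made explicit in something as simple as a capability registry. The entries below are illustrative assumptions meant to show the shape, not a real inventory:

```python
# Sketch of a capability registry: business functionality paired with the
# technical abilities that make it real. All names are illustrative assumptions.
CAPABILITIES = {
    "document_intake": {
        "business_functionality": "Ingest contracts and invoices for review",
        "technical_abilities": ["ocr", "contextualization", "governed_ai_workflow"],
        "used_by": ["claims_processing", "vendor_onboarding"],
    },
    "grounded_qna": {
        "business_functionality": "Answer questions from company data",
        "technical_abilities": ["semantic_layer", "retrieval", "governed_ai_workflow"],
        "used_by": ["finance_reporting"],
    },
}

def reuse_count(capability):
    """How many use cases already lean on this capability. A rising count
    is the 'every implemented use case strengthens the next' effect made visible."""
    return len(CAPABILITIES[capability]["used_by"])

print(reuse_count("document_intake"))  # 2
```

Notice that both entries share `governed_ai_workflow`: the registry is what makes that overlap visible, which is exactly the reuse signal that should drive prioritization.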

That fundamentally changes prioritization. Because when capabilities are the focus, every implemented use case strengthens the next.

As delivery accelerates, cost understanding becomes just as important.

If AI allows us to build faster, we must be able to measure, attribute, and govern costs just as fast.

Cloud and token spend cannot become a monthly surprise. We need attribution by application, by workload, by document, and by prototype so leadership understands where money is going, what value it supports, and which investments deserve deeper funding.
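That kind of attribution only works if every AI call is tagged at the source, so spend can be rolled up along any dimension on demand. A sketch, using integer cents, a made-up rate, and invented tag names as assumptions:

```python
from collections import defaultdict

# Sketch of cost attribution: each AI call carries tags so spend can be
# rolled up by application, workload, or prototype. The rate and the
# event data are illustrative assumptions.
CENTS_PER_1K_TOKENS = 1

usage_events = [
    {"app": "claims_assistant", "workload": "summarize", "tokens": 12_000},
    {"app": "claims_assistant", "workload": "extract",   "tokens": 8_000},
    {"app": "finance_qna",      "workload": "query",     "tokens": 5_000},
]

def spend_by(dimension):
    """Aggregate token spend (in cents) along one attribution dimension."""
    totals = defaultdict(int)
    for event in usage_events:
        totals[event[dimension]] += event["tokens"] // 1000 * CENTS_PER_1K_TOKENS
    return dict(totals)

print(spend_by("app"))       # {'claims_assistant': 20, 'finance_qna': 5}
print(spend_by("workload"))  # {'summarize': 12, 'extract': 8, 'query': 5}
```

The same event stream answers both the leadership question (which application is spending) and the engineering question (which workload is spending), with no month-end reconciliation.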

Investment is no longer judged as a single project. It advances shared capabilities that make future use cases faster, cheaper, and easier to deliver.

That makes prioritization less political, less brittle, and far more structured.

This only works when capabilities are stitched into a shared ecosystem.

Not scattered apps. Not isolated pilots.

A stitched ecosystem where OCR connects to contextualization, context connects to insight, insight connects to decisions, and every new experience builds on shared architecture instead of starting over.

The value is not in building more prototypes. It is in how those capabilities compose.

It is the stitching that creates the necessary elasticity. It allows organizations to explore broadly while deepening intentionally—to experiment widely without losing enterprise coherence.

GenAI’s ability to generate code, context, and insight helps organizations get there.

That is the real opportunity.

Not faster prototypes in isolation, but broader innovation on shared rails, with clear economics and deliberate paths to production.

That is how AI moves from experimentation to operational intelligence—and from isolated innovation to sustainable enterprise advantage.

The Stitching

Organizations have always had to make trade-offs between building and buying—custom capabilities or third-party platforms. That is not new.

What has changed is AI.

AI has dramatically increased the speed of innovation, the number of available solutions, and the demand for integration across systems right now. Build, buy, integrate, and innovate are all happening at the same time—and at a much faster pace.

But they are not happening in a coordinated way. Over time, that creates tension, complexity, and fragmentation across the enterprise.

Instead of choosing between speed and control, organizations must follow an evolutionary path. But this path is not rigid—it is elastic.

It expands as new capabilities are introduced and contracts as systems become more aligned. Organizations move from early AI usage, to structured capabilities, to fully governed enterprise integration. Each step builds on the last, strengthening the system without locking it in place.

Fortunately, AI also accelerates the journey itself—allowing organizations to start faster, evolve more quickly, and implement change with far greater precision than ever before.

But this is not a linear progression, and it is not a fixed architecture. It is a system designed to adapt—elastic enough to absorb new tools, new workflows, and new demands without breaking or fragmenting.

Because without elasticity, every new capability creates tension, and that tension leads to complexity.

Beneath this entire evolution is the cloud foundation.

It is the operational backbone where workloads run, data is governed, and access is securely managed. It defines the environment in which AI capabilities scale and remain controlled.

As adoption grows, it keeps the organization stable, secure, and aligned—allowing innovation to accelerate without creating fragmentation, risk, or technical debt.

The foundation provides the structure for sustainable growth. It creates clear boundaries between what is externally accessible and what remains securely governed.

It centralizes shared capabilities like storage, data and document processing, and access control. As the ecosystem expands, it grows within a consistent framework—not as disconnected solutions.

But structure alone is not enough.

The environment is constantly evolving. New tools, new systems, and new integrations are continuously being introduced across the enterprise. Without a consistent way to connect them, flexibility quickly becomes fragmentation.

What should enable innovation instead creates complexity, and the organization begins to lose cohesion.

This is where a critical layer emerges: the stitching.

It sits between the cloud foundation and the internal and third-party systems that power AI across the enterprise. Accelerated by GenAI’s ability to generate code, context, and insight, it is the connective layer that enables growth and adaptation without losing structure or control.

It links systems through consistent patterns and shared standards. It enables integration without constant redesign and ensures governance extends across the entire ecosystem as technology and business needs evolve.

Connection strengthens the ecosystem, but it is not enough.

Integration solves for today—not for what is still missing.

Organizations need the ability to extend capabilities, close critical gaps, and innovate on top of existing platforms. Not everything should be built, and not everything should be limited by what was bought.

The goal is strategic flexibility—the ability to build and buy as needed.

Capabilities can be layered in, evolved, and reduced over time without disrupting the broader architecture. At the same time, none of this works without strong information governance: data quality and management, unified access to data and documents, a consistent taxonomy, and information organized in business terms.

This makes enterprise systems far more effective for LLM interpretation and stronger decision-making.

Governance does not need to be perfected upfront.

The elasticity of the stitching allows it to begin early and mature over time—strengthening control and trust without slowing innovation.

That elasticity creates a virtuous cycle.

Early exploration drives real learning. That learning shapes strategy: what to standardize, what to scale, and what to govern.

That strategy improves operational processes, creating more consistent and repeatable ways of working.

Those outcomes generate new insight, which fuels the next cycle of exploration.

Over time, exploration becomes more intentional, operations become more effective, and strategy becomes more grounded.

Innovation strengthens governance, and governance accelerates innovation.

Together, the foundation and the stitching create controlled elasticity.

Organizations can evolve continuously as both business needs and technology landscapes change—without losing alignment, governance, or control.

That is what allows AI to scale as a coordinated enterprise capability, not as a collection of disconnected efforts with limited shelf life.

The AI Acceleration Toolkit

Most organizations are already using AI, so adoption is not the problem. The challenge is fragmentation—isolated experiments, disconnected systems, and no coordinated path from innovation to enterprise impact. Organizations do not have the luxury of waiting for a perfect data strategy or fully built internal AI engineering capabilities because the pace of change is not slowing down.

The question becomes: how do organizations release the pressure building inside the business before it breaks the system, without disrupting long-term strategy? How do they move from scattered AI usage to something structured, repeatable, and scalable—and better yet, use those once-scattered activities to improve the strategy itself?

It starts with realizing that AI adoption is not a single implementation. It is an evolution—moving deliberately from exploration, to structured workflows, to integrated data, and ultimately to operational intelligence. That journey begins by meeting users where they already are, inside familiar GenAI platforms, guided by customized governance and supported by a secure cloud foundation.

AI initiatives often begin as independent exploration, with teams moving quickly to capture value. But AI cannot evolve in isolation. It must be intentionally coordinated and aligned to broader business strategy.

This is where an important component emerges: the stitching—a flexible connective layer built on the cloud foundation that aligns AI, data, and systems, turning independent innovation into cohesive enterprise intelligence.

Early in the evolution, users work in familiar GenAI platforms like Claude and ChatGPT. But instead of isolated conversations, every prompt, response, file, and interaction is captured with strong governance, creating structured and observable history with token usage, context, and provenance.

All of it operates behind the scenes on the cloud foundation through the stitching, turning informal experimentation into observable, governable enterprise capability.
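Concretely, a captured exchange can be as simple as a structured record with token usage and provenance attached. The field names and values below are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

# Sketch of a chat-capture record: prompt, response, files, and token usage
# stored with provenance. Field names and values are illustrative assumptions.
@dataclass
class ChatCapture:
    user: str
    platform: str        # e.g. "claude" or "chatgpt"
    prompt: str
    response: str
    tokens_in: int
    tokens_out: int
    attachments: list = field(default_factory=list)
    captured_at: str = ""

def capture(user, platform, prompt, response, tokens_in, tokens_out, attachments=()):
    """Turn an informal exchange into an observable record. Here it is just
    returned as a dict; in practice it would land in managed storage."""
    record = ChatCapture(
        user=user, platform=platform, prompt=prompt, response=response,
        tokens_in=tokens_in, tokens_out=tokens_out,
        attachments=list(attachments),
        captured_at=datetime.now(timezone.utc).isoformat(),
    )
    return asdict(record)

rec = capture("analyst-7", "claude", "Summarize Q3 variances", "(summary)",
              tokens_in=350, tokens_out=120, attachments=["q3_variances.xlsx"])
print(rec["tokens_in"] + rec["tokens_out"])  # 470
```

Once exchanges land in this shape, the token usage, context, and provenance described above stop being abstractions: they are queryable fields.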

As AI evolves, a virtuous cycle begins. Strategic alignment and governance create visibility, turning experimentation into learning. Those learnings improve data, strengthen systems, and create reusable capabilities.

The stitching enables these insights to connect and flow, so each step builds on the last, accelerating enterprise intelligence.

As AI evolves, conversations become curated, collaborative workspaces. Shared prompts, datasets, and outputs create repeatable, reusable workflows. The stitching enables context to flow from storage into AI and back again.

This is where the economics become bounded: curated workspaces improve consistency, reuse, and token efficiency, turning individual exploration into collaborative enterprise capability.
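A simple way to see the reuse and token-efficiency effect: a workspace holding shared prompt templates, instead of every user composing their own from scratch. The class and template names here are illustrative assumptions:

```python
# Sketch of a curated workspace: shared prompt templates make work
# repeatable and cut duplicated token spend. Names are illustrative assumptions.
class Workspace:
    def __init__(self, name):
        self.name = name
        self.templates = {}
        self.renders = 0   # crude reuse counter

    def add_template(self, key, template):
        self.templates[key] = template

    def render(self, key, **params):
        self.renders += 1
        return self.templates[key].format(**params)

ws = Workspace("finance-close")
ws.add_template("variance", "Explain the variance in {metric} for {period}.")

print(ws.render("variance", metric="revenue", period="Q3"))
# Explain the variance in revenue for Q3.
```

Every render of a shared template is one fewer hand-written prompt, and the reuse counter is the smallest possible version of the workspace-level observability described above.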

As AI usage within the organization grows, demand for consistent data and system integration increases. The next evolution introduces governed access to internal systems, ensuring coordinated, reliable interactions and improved token economics through consistent context.

The stitching connects AI to enterprise tools through a single governed surface. Built on the cloud foundation, AI moves from curated workspaces to governed enterprise execution.

As AI usage matures, organizations learn from real data, governance, and user behavior. That maturity enables internalized applications with embedded AI, governed data, and purpose-built experiences.

Supported by the stitching and cloud foundation, AI becomes scalable, reusable, and the foundation for future AI innovation.

Generative Business Intelligence

Eventually, some organizations outgrow the governed public GenAI strategy.

It is no longer about making public GenAI safer. It becomes about asking: how do we use the insight gained from captured prompts, workspaces, and governed usage to build something more intentional inside the enterprise? And how do we do that without tearing down the data and service infrastructure built to get us here?

Some enterprises simply do not have the appetite to include public GenAI in their strategy at all. Regardless of how the enterprise got here, the internal implementation must emulate the public GenAI user experience to be valuable.

That is where Generative Business Intelligence begins—and where the evolutionary journey continues.

Generative BI must not treat a business question like a chatbot prompt. It must treat it like a managed analytic workflow.

The system plans. It runs structured steps against governed data, and it records what happened.

That includes trusted enterprise sources and the working files teams use every day—the uploaded spreadsheet, the partner extract, the reconciliation file.

This is not moving forward simply because that is what the model answered. It is repeatable, reviewable, and tied to evidence.
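The plan, execute, and record loop can be sketched in a few lines. The source names, step names, approval list, and numbers below are illustrative assumptions:

```python
# Sketch of a managed analytic workflow: plan the steps, run only those
# against approved sources, and record evidence for every result.
# Sources, step names, and numbers are illustrative assumptions.
APPROVED_SOURCES = {"warehouse.revenue", "upload.partner_extract"}

def run_workflow(question, plan):
    log = {"question": question, "steps": [], "evidence": []}
    for step in plan:
        if step["source"] not in APPROVED_SOURCES:
            # Governance boundary: unapproved sources are blocked, and the
            # block itself is recorded.
            log["steps"].append({"name": step["name"], "status": "blocked"})
            continue
        result = step["run"]()
        log["steps"].append({"name": step["name"], "status": "ok"})
        log["evidence"].append({"source": step["source"], "result": result})
    return log

log = run_workflow("Why did exposure move?", [
    {"name": "pull_revenue",      "source": "warehouse.revenue",      "run": lambda: 1200},
    {"name": "join_partner_file", "source": "upload.partner_extract", "run": lambda: 85},
    {"name": "scrape_web",        "source": "web.unvetted",           "run": lambda: None},
])

print([s["status"] for s in log["steps"]])  # ['ok', 'ok', 'blocked']
```

Note what the log gives you: the uploaded spreadsheet sits beside governed warehouse data on equal footing, the unvetted source is refused rather than silently used, and every answer can be traced back to its evidence.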

That is the difference between generative AI and enterprise decision-making.

To be on par with the public GenAI user experience, the architecture builds on a simple idea: people should be able to ask business questions in the words they already use.

What is driving revenue? Where is risk increasing? Why did exposure move?

The hard part is not the question. It is the answer.

Because real answers live across disparate sources—governed data, operational systems, documents, uploads, and especially the local files people rely on every day.

These local files are often the fastest path to context, even when they have not yet been fully integrated into the broader data strategy.

The business cannot wait for every source to be perfectly integrated, so the goal is one governed path from natural language, across varied sources, to answers people can actually trust.

Whether the stitching was built through the public GenAI journey or is being established now as part of Generative BI, it becomes the foundation—the governance, the semantics, and the trusted path between natural language and enterprise decisions.

Generative BI does not replace the stitching. It depends on it.

It uses the same controls, access, policy, lineage, and shared business definitions. Who is allowed to see the data? Which sources are approved? Which definitions matter? What lineage needs to be preserved?

Just as important, the same words must mean the same thing—and they must be understandable by the business.

Revenue. Risk. Exposure. Loss.

These cannot be reinvented by the assistant each time someone asks, and they cannot depend on technical table names or column labels.

The stitching provides that semantic layer: shared metrics, joins, glossary, and business definitions, so natural language does not become loose interpretation. It becomes a business-friendly path to governed logic that dramatically improves trust and dramatically improves LLM reasoning.

Because without governance and semantics, natural language feels easy—but the answers become unsafe and inconsistent.

That is how the business gets flexibility without creating chaos.

Public GenAI introduced the behavior. Generative BI operationalizes it.

It is not just a prompt window. It is a cohesive workflow to reach the answer—not simply ask and respond, but plan, execute, and refine.

That is the difference between a simple chatbot response and a managed analytic process.

Generative BI does not answer once and move on.

It works across governed data, documents, and local uploads like spreadsheets and reports, giving the business flexibility to work with what exists today—not just what was formally modeled months ago.

That means teams can adapt quickly without waiting for every source to become a formal pipeline. But everything still follows the same structured analytic flow.

The work stays governed. The logic stays traceable. And the answer stays tied to evidence. Narrative follows facts—not the other way around.

Local files can sit beside governed data without becoming a new source of truth.

Finally, the system has to be observable.

Not just what answer came back, but what ran, which sources were used, which rules applied, and what evidence supported the result.

That matters for trust. It matters for cost. And it matters for scale.

Because if hundreds of people are going to ask questions this way, the organization needs more than fast responses. It needs a record.

Generative BI creates that record so the business can move faster while staying accountable.

Generative BI allows three things to strengthen each other: clear definitions, reliable data, and the ability to ask questions naturally while still getting trusted answers.

When people know what the data means, they make better decisions.

When the data arrives consistently, those decisions move faster.

And when better questions are asked, gaps in definitions and quality become visible.

That creates the virtuous cycle—continuously improving the data, improving the decisions, and improving the business over time.