
The models work. The cloud infrastructure is in place. The data teams are capable. Yet the results from AI are still wanting, and the timelines are slipping. This is not an unusual situation; it is the expected reality for most enterprises deploying AI and machine learning in production environments. The problem is rarely with the AI itself. It is with the architecture feeding it. Most organizations have invested in a machine learning platform and a cloud-based data warehouse without first solving a more fundamental problem: how does data move from one system to another, with what latency, and with what guarantees? Until those questions are answered, no amount of model tuning will close the gap.
The answer is an event-driven architecture (EDA), an architectural style that has become the defining differentiator between organizations seeing AI success and those experiencing continued disappointment.
Milan Parikh is a Lead Enterprise Data Architect with 15 years of experience in cloud-native platforms, enterprise integration architecture, and AI-ready data infrastructure. A Fellow of the British Computer Society (FBCS) and Secretary of the BCS South Wales Branch and Enterprise Architecture Group, Milan specializes in Microsoft Dynamics 365, Azure iPaaS, Power Platform, and Microsoft Fabric. He is an International Keynote Speaker and Session Chair at IEEE World Conferences on AI, a judge at the CES Innovation Awards, and author of multiple research papers published in IEEE Xplore.
"Many organizations assume the AI model is at fault when results disappoint," says Milan. "But the model is only as good as the data being fed to it, and that data is usually stale, incomplete, or arriving too late to matter."
Milan emphasizes that until organizations understand how data moves through their systems (at what latency, with what guarantees, and under what domain ownership), AI will always operate on stale or untrustworthy data. No amount of model tuning closes that architectural gap.
Traditional architectures operate on a request-response model: systems ask other systems for data. Event-driven architectures flip this model entirely. Systems publish events: facts that something has happened. Other systems subscribe to those streams. A payment processed, a patient record updated, an inventory level breached: each is a durable, replayable event on a stream.
Platforms like Apache Kafka, Azure Event Hubs, and AWS Kinesis handle these streams at enterprise scale. An AI model consuming a payment processing stream does not need to query the payment system directly; it simply reads the stream. It requires no knowledge of the source system's schema, availability window, or internal implementation.
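The mechanics can be illustrated without a broker. The sketch below is a minimal in-process analogue of a durable, replayable event stream; the `Event` and `EventLog` names are invented for illustration and are not a real Kafka or Event Hubs API. A production system would use one of the platforms above, but the contract is the same: producers append facts, consumers read by offset with no knowledge of the source system's internals.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Event:
    """An immutable fact: something that has already happened."""
    topic: str
    payload: dict

@dataclass
class EventLog:
    """Append-only log per topic. Consumers read by offset, so any
    subscriber can replay history independently of the producer."""
    _topics: dict = field(default_factory=dict)

    def publish(self, event: Event) -> int:
        log = self._topics.setdefault(event.topic, [])
        log.append(event)
        return len(log) - 1  # offset at which the event was stored

    def read(self, topic: str, offset: int = 0):
        """Replay all events on `topic` from `offset` onward."""
        return self._topics.get(topic, [])[offset:]

log = EventLog()
log.publish(Event("payments", {"id": "p1", "amount": 120.0, "status": "processed"}))
log.publish(Event("payments", {"id": "p2", "amount": 35.5, "status": "processed"}))

# A fraud model and a reporting job read the same stream independently;
# neither queries the payment system directly.
payment_events = log.read("payments")
```

Because the log is append-only and addressed by offset, a new consumer added months later can replay the full history, which is exactly the property that makes streams suitable for both inference and retraining.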
"The same stream that feeds real-time inference feeds model training," Milan explains. "Online and offline features share the same lineage. That is the capability gap organizations discover the hard way after their first production AI failure."
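That shared-lineage idea can be made concrete. In the sketch below (function names and data are invented for illustration, not drawn from any specific feature store), a single stream of payment events derives both the online feature view used for real-time inference and the offline rows used for training:

```python
# One stream of payment events, assumed already consumed from the platform.
events = [
    {"customer": "c1", "amount": 50.0},
    {"customer": "c2", "amount": 20.0},
    {"customer": "c1", "amount": 75.0},
]

def online_features(stream):
    """Latest value per key, as a low-latency serving store would hold it."""
    latest = {}
    for ev in stream:
        latest[ev["customer"]] = ev["amount"]
    return latest

def offline_training_rows(stream):
    """Full replay of the same stream, materialized for batch training."""
    return [(ev["customer"], ev["amount"]) for ev in stream]

serving = online_features(events)          # latest amount per customer
training = offline_training_rows(events)   # complete history, same lineage
```

Because both views are derived from the same log, a feature served at inference time and the same feature computed for training cannot silently diverge, which is the failure mode batch-fed pipelines tend to discover in production.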

Whether in healthcare data platforms, financial services integration, or manufacturing modernization, Milan identifies the same recurring failure points for organizations without event-driven infrastructure.
Poorly implemented event-driven architecture creates its own category of integration debt: undocumented streams, inconsistent schemas, unclear ownership, and absent governance. Milan has distilled four principles to prevent this.
"Estate-wide migration to event-driven architecture in a single program is exactly how organizations stall," says Milan. "The path forward is narrower, but faster."
Milan advises organizations to identify two or three business domains where data latency is already a documented constraint for an AI application in the current program of record roadmap. These become the starting points, not because they are easiest, but because they have a deliverable attached to them.
For legacy systems that cannot naturally publish events, change data capture tools such as Debezium can extract row-level changes from database transaction logs and stream them to the event platform with no application code modification required. This single capability eliminates the most common justification for why legacy systems block EDA adoption. Work on reinforcement learning for dynamic workflow optimization in CI/CD pipelines demonstrates that adaptive pipeline execution is achievable even within existing infrastructure constraints (Parikh et al., "Reinforcement Learning for Dynamic Workflow Optimization in CI/CD Pipelines," IEEE, 2025).
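The change-data-capture pattern itself is simple to sketch. The code below is a deliberately simplified simulation of what a tool like Debezium does; the log-entry format and function names here are invented for illustration, whereas Debezium reads the database's real transaction log (for example the PostgreSQL WAL) rather than an in-memory list:

```python
# Simulated transaction-log entries: row-level changes written by the
# legacy application, with no modification to that application's code.
transaction_log = [
    {"op": "INSERT", "table": "orders", "row": {"id": 1, "status": "new"}},
    {"op": "UPDATE", "table": "orders", "row": {"id": 1, "status": "shipped"}},
    {"op": "DELETE", "table": "orders", "row": {"id": 1}},
]

def capture_changes(log_entries):
    """Turn row-level log entries into change events destined for the
    event platform, one topic per source table."""
    for entry in log_entries:
        yield {
            "topic": f"cdc.{entry['table']}",
            "op": entry["op"].lower(),
            "data": entry["row"],
        }

change_events = list(capture_changes(transaction_log))
```

The essential point survives the simplification: the legacy system never publishes anything itself; its ordinary writes become a stream as a side effect of its own transaction log.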
The approach is incremental by design. Build the first AI feature pipeline from the stream. Compare inference latency and accuracy against the batch-fed baseline. The resulting business case funds the next domain, and the one after. The schema registry and event catalog grow as shared infrastructure over time, with governance implemented one domain at a time, before the platform can accumulate the undocumented backlog it was built to replace.
As organizations invest in AI transformation, Milan offers a practical architectural roadmap that leaders can begin executing today.
If your AI outcomes are disappointing, audit the data pipeline before adjusting the model. Identify the latency at each stage, map who owns each data flow, and determine whether training and inference data share a common lineage. In most cases, the gap is in the architecture, not the algorithm.
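That audit can start as a spreadsheet-level exercise. The sketch below (stage names, owners, and latency figures are all hypothetical) totals per-stage latency and identifies where a feature's freshness budget is actually being spent:

```python
# Hypothetical stages of a batch-fed AI data pipeline, with measured
# latency and a named owner for each data flow.
pipeline = [
    {"stage": "source system export", "owner": "payments team", "latency_s": 3600},
    {"stage": "nightly ETL load",     "owner": "data platform", "latency_s": 14400},
    {"stage": "feature computation",  "owner": "ML platform",   "latency_s": 600},
]

def audit(stages, budget_s):
    """End-to-end latency, whether it fits the freshness budget,
    and which stage (and owner) dominates the total."""
    total = sum(s["latency_s"] for s in stages)
    worst = max(stages, key=lambda s: s["latency_s"])
    return {
        "total_s": total,
        "within_budget": total <= budget_s,
        "dominant_stage": worst["stage"],
        "dominant_owner": worst["owner"],
    }

report = audit(pipeline, budget_s=300)  # a five-minute freshness budget
```

An audit like this usually shows that most of the latency sits in one or two stages owned by teams who have never seen the AI requirement, which is precisely the architectural gap, not an algorithmic one.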
Do not attempt a full platform migration. Select domains where latency has a documented business cost: fraud detection, clinical decision support, supply chain sensing. These are your highest-ROI starting points and the clearest business cases for continued EDA investment.
Stand up a schema registry and event catalog before the second domain joins the platform. Governance retrofitted to a mature event environment is exponentially harder than governance built into the foundation. The overhead of doing it early is minimal; the cost of not doing it compounds rapidly.
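A schema registry need not start as a heavyweight platform. The sketch below is a minimal illustration of the idea (class and method names are invented; a real deployment would use something like the Confluent Schema Registry): schemas are versioned per topic, and publishers validate events before they ever reach the stream.

```python
class SchemaRegistry:
    """Versioned schemas per topic; producers validate before writing,
    so malformed events never enter the stream."""

    def __init__(self):
        self._schemas = {}  # topic -> list of schema versions

    def register(self, topic, required_fields):
        versions = self._schemas.setdefault(topic, [])
        versions.append(set(required_fields))
        return len(versions)  # 1-based version number

    def validate(self, topic, event):
        versions = self._schemas.get(topic)
        if not versions:
            raise KeyError(f"no schema registered for topic {topic!r}")
        missing = versions[-1] - event.keys()
        if missing:
            raise ValueError(f"event missing fields: {sorted(missing)}")
        return True

registry = SchemaRegistry()
registry.register("payments", ["id", "amount", "currency"])
registry.validate("payments", {"id": "p1", "amount": 10.0, "currency": "GBP"})
```

Even this toy version enforces the governance property that matters: no topic exists without a declared schema, and no event reaches consumers without conforming to it.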
Every event stream should have a named owner, a documented schema, an SLA, and a known set of consumers. Streams without ownership become the undocumented data debt of tomorrow. When streams are treated with the same rigor as APIs or database schemas, the entire AI platform becomes more trustworthy and more maintainable.
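Those four properties can be enforced mechanically rather than by convention. The sketch below (field and function names are illustrative) models a catalog entry that refuses to admit any stream missing an owner, a schema reference, an SLA, or a consumer list:

```python
from dataclasses import dataclass, field

@dataclass
class StreamEntry:
    """Catalog record: a stream carries the same rigor as an API."""
    name: str
    owner: str                  # named team accountable for the stream
    schema_ref: str             # pointer into the schema registry
    sla: str                    # e.g. "p99 end-to-end latency < 2s"
    consumers: list = field(default_factory=list)

def admit(catalog, entry):
    """Reject streams that would become tomorrow's undocumented debt."""
    if not (entry.owner and entry.schema_ref and entry.sla and entry.consumers):
        raise ValueError(f"stream {entry.name!r} is missing required metadata")
    catalog[entry.name] = entry

catalog = {}
admit(catalog, StreamEntry(
    name="payments.processed",
    owner="payments-platform team",
    schema_ref="payments.processed:v1",
    sla="p99 end-to-end latency < 2s",
    consumers=["fraud-model", "finance-reporting"],
))
```

Making admission to the catalog a gate, rather than documentation written after the fact, is what keeps the event estate auditable as it grows.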
EDA adoption requires investment decisions that span data engineering, platform architecture, and product roadmaps. Business leaders need to understand that improving AI outcomes is not solely a model problem; it is an infrastructure and governance decision. The organizations seeing AI ROI today made that investment 12 to 18 months ago.
Milan is clear that the shift to event-driven architecture is not a matter of if, but when. The question for every enterprise is whether that transition is proactively designed, governed, and delivering returns, or reactive, executed under pressure to rescue an AI program already in distress.
"The models, the people, and the budget are not the problem. The architecture is. Get the architecture right, and the rest of the AI strategy becomes a heck of a lot more doable."
