
Eyal Donath
As artificial intelligence, cloud computing, and data-heavy applications accelerate, public discussion has largely centered on software breakthroughs and increasingly powerful models. Where physical infrastructure enters the conversation at all, the focus has been on data centers and compute capacity. Far less attention is paid to the network infrastructure that connects these systems to the real world.
Most progress in AI depends on four core inputs: algorithms, compute, data, and energy. Each has well-understood constraints. Algorithms advance through research. Compute scales with hardware. Data depends on access and quality. Energy is constrained by cost and availability. What is often overlooked is that all four depend on reliable, high-capacity connectivity to function outside controlled environments.
While data centers generate and process intelligence, networks deliver it. Fiber routes, wireless access, and last-mile broadband determine whether AI systems can move data quickly, consistently, and close enough to users to support real-time applications. For workloads that require low latency, high bandwidth, or continuous data exchange, network design becomes a defining constraint.
In practice, network performance is becoming a binding constraint. Broadband infrastructure connects data centers, enterprises, and end users, yet it rarely receives the same strategic attention as compute or software. As AI systems move from experimentation into everyday use, the performance of this underlying connectivity will increasingly shape what is technically and economically possible.
Modern digital systems move far more data than networks were designed to support even a decade ago. Artificial intelligence is no longer limited to batch processing or offline analysis. Increasingly, it operates in real time. Models generate content on demand, process live inputs, and perform inference continuously as users interact with systems.
These workloads behave very differently from traditional web traffic. Real-time translation, image generation, autonomous systems, immersive media, and enterprise analytics all require sustained bandwidth and predictable latency. As more intelligence is pushed closer to users, network performance becomes a determining factor in whether these applications function as intended.
"Compute often gets the attention, but connectivity determines what actually reaches the real world," said Eyal Donath, a technology and infrastructure strategist who has worked across large-scale telecommunications organizations and emerging digital infrastructure initiatives. "If data cannot move quickly and consistently between systems and users, even the most advanced models fall short of their potential."
Industry analysis consistently shows that data traffic continues to rise rapidly as cloud services and AI adoption expand. While large data centers attract significant investment, the access networks that connect businesses, communities, and end users often evolve more slowly. According to Donath, this gap increasingly defines where innovation can scale and where it cannot.
The core challenge is delivering reliable, real-time performance as digital systems become more data-intensive and time-sensitive. Modern applications depend on several factors working together, including sustained capacity, low latency, redundancy, and resilience. Increasingly, success depends on consistency under load, not just peak speeds in ideal conditions.
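As a rough illustration of that point, the short Python sketch below compares two hypothetical links: one with a higher peak rate but heavy jitter, the other slower but steady. The link names and all figures are invented for illustration, not drawn from any operator's data.

import random
import statistics

random.seed(42)

PAYLOAD_MBITS = 8.0   # one 1 MB real-time payload, expressed in megabits
REQUESTS = 10_000

def transfer_ms(rate_mbps: float, mean_jitter_ms: float) -> float:
    """Delivery time for one payload: serialization delay plus random queuing jitter."""
    serialization = PAYLOAD_MBITS / rate_mbps * 1000.0
    queuing = random.expovariate(1.0 / mean_jitter_ms)
    return serialization + queuing

# Hypothetical links: a fast but jittery path and a slower, steadier one.
link_a = [transfer_ms(rate_mbps=1000.0, mean_jitter_ms=15.0) for _ in range(REQUESTS)]
link_b = [transfer_ms(rate_mbps=400.0, mean_jitter_ms=1.0) for _ in range(REQUESTS)]

for name, samples in (("link_a (1 Gbps, jittery)", link_a),
                      ("link_b (400 Mbps, steady)", link_b)):
    samples.sort()
    p50 = statistics.median(samples)
    p99 = samples[int(0.99 * len(samples))]
    print(f"{name}: p50 = {p50:5.1f} ms   p99 = {p99:5.1f} ms")

Under these assumed conditions, the faster link wins on median delivery time but loses badly at the 99th percentile, which is what users of real-time applications actually feel.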
Many next-generation use cases push this requirement further. Real-time AI inference, interactive simulation, advanced robotics, immersive collaboration, and dynamically generated content all rely on near-instant data exchange. As models become more complex, small increases in latency can create outsized performance degradation. In practice, delays compound quickly, while movement toward near-zero latency unlocks disproportionate gains.
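A back-of-the-envelope sketch makes the compounding effect concrete. The figures below are assumptions chosen for illustration: an interactive AI feature that needs a dozen sequential network exchanges per user action, each paired with a fixed amount of server-side compute.

# All figures below are assumptions for illustration, not measurements.
SEQUENTIAL_EXCHANGES = 12      # network round trips needed per user action
INFERENCE_MS_PER_STEP = 25.0   # server-side compute per exchange

def end_to_end_ms(rtt_ms: float) -> float:
    """User-perceived delay when each exchange must finish before the next starts."""
    return SEQUENTIAL_EXCHANGES * (rtt_ms + INFERENCE_MS_PER_STEP)

for rtt in (5.0, 20.0, 50.0):
    print(f"round-trip time {rtt:4.0f} ms  ->  response in {end_to_end_ms(rtt):5.0f} ms")

# A 45 ms difference per round trip (5 ms vs. 50 ms) becomes a 540 ms
# difference per user action, roughly the gap between feeling instant
# and feeling visibly laggy.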
"These systems are extremely sensitive to network behavior," Donath says. "You can have world-class models and compute, but if data movement is inconsistent or delayed, performance collapses at the application level."
Infrastructure design has therefore shifted away from maximizing headline bandwidth alone. The focus is now on end-to-end performance across the entire network path. That means understanding how data flows through core networks, regional aggregation points, and local access infrastructure, and designing those components to operate as a cohesive system.
This systems-level view is becoming essential as enterprises distribute workloads across multiple cloud regions and edge locations. When connectivity between those environments is uneven, even well-architected software platforms struggle to deliver reliable real-world results.
The consequences of network limitations are most visible outside major urban centers. Many businesses, hospitals, schools, and local governments in smaller cities and rural regions rely on infrastructure that was never designed for continuous, data-heavy workloads.
"Connectivity gaps don't just affect entertainment or convenience," Donath says. "They shape where companies can operate, where talent can live, and which communities can fully participate in the digital economy."
Industry analysts consistently note that improving network capacity in underserved areas creates compounding benefits. Stronger connectivity supports remote and hybrid work, enables telemedicine and advanced education tools, and lowers barriers for local entrepreneurship and small business growth. In many regions, broadband access has become a prerequisite for economic competitiveness rather than a secondary utility.
Building infrastructure in these markets, however, is not a scaled-down version of urban deployment. Network density, permitting processes, terrain, and demand patterns differ significantly, requiring tailored technical and operational approaches.
"Deployment economics and regulatory conditions vary widely by geography," Donath says. "Addressing these gaps requires a combination of engineering judgment, disciplined capital planning, and long-term operational commitment. Capital alone is not enough."
As demand patterns evolve, infrastructure planning is changing with them. Networks are no longer designed solely as static utilities built for predictable traffic. Instead, operators are increasingly treating them as adaptive systems that must scale and respond alongside modern applications.
This shift is driven by the need to support multiple performance requirements simultaneously. Many networks now combine multiple transmission approaches to balance capacity, responsiveness, and cost, while relying more heavily on active monitoring and forward-looking capacity planning to avoid bottlenecks before they appear.
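As a minimal sketch of what forward-looking capacity planning can look like in practice, the snippet below projects utilization on a few hypothetical links under assumed growth rates and estimates how much lead time remains before an upgrade threshold is crossed. The link names, utilization figures, and growth rates are all invented.

def months_until_threshold(current_util: float, monthly_growth: float,
                           threshold: float = 0.7) -> int:
    """Months until projected utilization crosses the upgrade threshold."""
    util, months = current_util, 0
    while util < threshold:
        util *= 1.0 + monthly_growth
        months += 1
    return months

# Hypothetical aggregation links: (name, current utilization, monthly traffic growth).
links = [
    ("metro-core-1",    0.45, 0.03),
    ("regional-agg-7",  0.55, 0.06),
    ("rural-access-12", 0.30, 0.10),
]

for name, util, growth in links:
    lead_time = months_until_threshold(util, growth)
    print(f"{name}: ~{lead_time} months of headroom before 70% utilization "
          f"at {growth:.0%} monthly growth")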
"What's changing is the mindset," Donath says. "Connectivity is no longer a background consideration. Network design directly shapes which technologies can be deployed, how reliably they perform, and where they can operate."
Organizations that align infrastructure planning with application requirements are better positioned to deploy advanced digital tools at scale without disruption.
As AI systems scale, performance gains increasingly depend on how quickly and reliably data can move, not just how efficiently it can be processed. Model complexity and data volumes continue to grow at exponential rates, while many applications now assume responses that feel effectively instantaneous.
That mismatch creates a new set of constraints. Even small delays compound when systems rely on constant feedback loops, real-time inference, and distributed workloads. In those environments, network design becomes a determining factor in whether advanced tools function as intended or fail under real-world conditions.
"The technology curve is steepening," Donath says. "As systems demand faster feedback and higher data throughput, the tolerance for latency and inconsistency keeps shrinking. The infrastructure underneath has to evolve just as quickly."
As digital services expand beyond traditional technology hubs and into everyday business operations, education, healthcare, and manufacturing, the role of network infrastructure is becoming harder to ignore. In many cases, the difference between theoretical capability and practical deployment comes down to a simple question: how efficiently can data move from point to point?
