
The Nervous System of AI: Why the “Best Effort” Era is Over
I remember the early days of Wi-Fi, developing some of the industry’s first 802.11a/b/g transceivers. Back then, the mission was singular and remarkably simple: cut the wire.
Wireless has always evolved around its biggest pain point. First speed, then density, then IoT. Every era shifts when a new problem becomes the one we can’t ignore.
In the early years, the entire industry was engaged in a breathless race to make the air look like Ethernet. We obsessed over modulation schemes and channel widths, fighting physics to push throughput from 2 Mbps to 11 Mbps to 54 Mbps, and eventually toward Gigabit performance. Companies stacked on proprietary “Turbo Modes” and pre-standard features to squeeze out every bit and position themselves competitively.
And we won. The speed gap closed. Wi-Fi didn’t just catch wired performance at the residential edge and the enterprise edge — in many places it surpassed it.
Once raw throughput was “good enough,” the priority shifted. We moved from chasing speed to chasing density:
- Can we make this work in a packed stadium?
- On a subway platform in Tokyo?
- In a high-rise where 200 access points sit next to and on top of one another?
That era led us to borrow techniques from cellular: OFDMA, MU-MIMO, BSS Coloring — tools to solve the wireless “cocktail party problem,” the RF equivalent of a noisy room where many devices speak at once and the network must separate overlapping conversations.
Then came the third wave: the Internet of Things. Suddenly, the devices connecting to our networks weren’t just laptops and phones; they were sensors, cameras, thermostats, wearables, industrial controllers, and all kinds of headless endpoints no one wants to update until it’s too late. The number of “things” began to outpace the number of people.
We realized that hauling all that data back to the cloud was often wasteful, so we started pushing compute outward — toward gateways, access points, and edge nodes — processing data closer to where it was created. The mindset shifted from performance to outcomes. Sensor networks don’t require much bandwidth, and no one cares what protocol they are using; they care about how the data is being used to make their lives better.
Today, we are hitting a new inflection point — one that makes the previous shifts look incremental.
In many enterprise environments, human client growth is no longer the main scaling driver. The next explosion in networking isn’t coming from people watching Netflix or scrolling Instagram. It is coming from autonomous agents. And unlike people, AI agents do not forgive “best effort.”
To see why, imagine a modern fulfillment center. Not humans pushing carts, but a hive of hundreds of Autonomous Mobile Robots weaving past each other at speed. Each robot negotiates right-of-way with a central controller, with safety systems watching for conflicts — a single distributed organism connected by an invisible wireless tether.
If that tether stretches into a noticeable hiccup — tens of milliseconds in the wrong moment — the system doesn’t “buffer.” It stops. A momentary disruption becomes a full-aisle shutdown. This is where “best effort” becomes a business risk rather than a minor annoyance.
The New Consumer: Human vs. Machine
To understand why the network architecture must change, you have to understand the difference between a human user and an AI agent.
Humans are incredibly adaptive. If you are on a Teams call and the video freezes for 500 milliseconds, you might grimace and cry out to your deity of choice, but your brain fills in the gap. If a web page takes an extra second to load, you wait. We are built to tolerate variance. Our networks were designed around this tolerance; we built best-effort systems that prioritized maximum throughput over consistent timing.
AI agents (robots, autonomous logistics bots, digital twins, and XR interfaces) are not adaptive in the same way. They require precision.
If a warehouse robot loses reliable connectivity at the wrong moment, it doesn’t “buffer”; it performs a safety stop. If an XR experience slips into noticeable lag, the user gets disoriented, or nauseous (“clean up on aisle 3”). These “users” don’t care about peak speed. To an AI agent, performance isn’t measured in gigabits per second; it’s measured in bounded variance.
Determinism means engineering to strict upper bounds on latency, jitter, and packet loss, and then meeting those bounds every time. “Good” is no longer a high average throughput. “Good” is the mathematical guarantee that 99.9999% of packets will arrive within a fixed window (e.g., 10 ms), regardless of RF congestion, multipath, or compute/buffer delay.
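This is why average throughput is the wrong yardstick. A minimal sketch, with hypothetical names and thresholds, of checking a deterministic-latency bound over a window of per-packet delivery times:

```python
# Hypothetical SLA check: does the observed tail meet a fixed delivery window?
# Function name, bound, and target are illustrative, not from any product.

def meets_deterministic_sla(latencies_ms, bound_ms=10.0, target=0.999999):
    """Return True if the fraction of packets delivered within bound_ms
    meets the target (e.g., 99.9999%). Record lost packets as
    float('inf') so they count against the bound."""
    if not latencies_ms:
        return False
    within = sum(1 for t in latencies_ms if t <= bound_ms)
    return within / len(latencies_ms) >= target

# A best-effort network can have an excellent average yet fail the SLA:
samples = [2.0] * 999 + [45.0]           # one 45 ms outlier in 1000 packets
print(sum(samples) / len(samples))       # average looks fine (~2.04 ms)
print(meets_deterministic_sla(samples))  # False: the tail violates the bound
```

The point of the sketch: one outlier in a thousand barely moves the mean, but it alone breaks a six-nines bound.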
We are moving from an era of bandwidth to an era of determinism.
Wireless as the AI Nervous System
If the modern data center — with its massive GPU clusters — is the brain of the AI revolution, the wireless edge is the nervous system.
A brain in a jar is useless. To function, intelligence needs sensory input from the physical world. It needs to know who is in the room, where the asset is, what the environmental context is, and what the expected action (intent) will be.
This is the new mandate for the wireless edge. We must pivot from building “dumb pipes” that simply move data to building a sensory fabric that feeds context and intent to the enterprise AI.
This shift requires three fundamental architectural changes.
- Determinism over throughput
We need to stop marketing “fast” and start engineering “predictable.” The industry is acknowledging this reality, and Wi-Fi 8 is shaping up to emphasize ultra-high reliability in hostile RF environments, not just another massive jump in peak PHY rate.
This is a tacit admission that the race for raw speed is no longer the primary battle. The future of wireless lies in scheduling the air with the same seriousness we apply to wired switching: prioritization, admission control, traffic classification, roaming behavior that doesn’t spike tail latency, and continuous measurement of what the network is actually delivering.
Whether via private 5G or reliability-focused Wi-Fi evolution, the network must support SLA-like behavior for latency-sensitive machine traffic. For network designers, this flips the planning model: instead of asking “How fast can we make it?” we now ask “What is the worst-case delay this robot, vehicle, or agent can survive?” Determinism becomes the budget we engineer around.
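The budget framing above can be made concrete. A small illustrative sketch, with all component names and numbers hypothetical: start from the worst-case delay the application can survive and see what is left for the air.

```python
# Illustrative "determinism as a budget" calculation: subtract every wired
# and compute component from the application deadline; the remainder is the
# worst-case delay the wireless hop is allowed to consume.

def remaining_air_budget_ms(app_deadline_ms, components_ms):
    """Return the worst-case latency budget left for the wireless hop."""
    return app_deadline_ms - sum(components_ms.values())

budget = remaining_air_budget_ms(
    app_deadline_ms=20.0,          # e.g., a robot safety loop tolerates 20 ms
    components_ms={
        "sensor_and_encode": 3.0,
        "wired_backhaul": 2.0,
        "controller_compute": 5.0,
    },
)
print(budget)  # 10.0 ms left for the air, worst case
```

Planning then becomes a question of whether the RF design can meet that remainder at the required percentile, not whether the peak rate is impressive.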
- Identity as the control plane
In a world of autonomous agents, the distinction between “Wi-Fi” and “cellular” is often a distraction. The agent doesn’t care about the protocol; it cares about the outcome. We need a unified identity layer that can abstract away the radio physics.
A security robot moving from the parking lot (5G) into a warehouse (Wi-Fi) shouldn’t experience a policy gap. The policy must follow the identity, not the port.
In practice, this means policies can no longer live primarily in VLANs or subnets. They must live with the identity itself — tied to a device, workload, or agent — and remain consistent as it roams across spectrum, transport, topology, and physical location.
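A toy sketch of what "policy follows identity" means in practice. Every name here is hypothetical; the point is only that the lookup is keyed by device identity, and transport and VLAN are deliberately irrelevant to the verdict.

```python
# Hypothetical identity-keyed policy table: the same answer regardless of
# which radio or VLAN the device happens to be on right now.

POLICIES = {  # identity -> destinations this identity may reach
    "robot-sec-017": {"fleet-controller", "video-archive"},
}

def is_allowed(identity, destination, transport, vlan):
    # transport and vlan are accepted but intentionally ignored:
    # the decision travels with the identity, not the port.
    return destination in POLICIES.get(identity, set())

# Same verdict as the robot roams from outdoor 5G to warehouse Wi-Fi:
print(is_allowed("robot-sec-017", "fleet-controller", "5g", vlan=40))    # True
print(is_allowed("robot-sec-017", "fleet-controller", "wifi", vlan=12))  # True
```

Real systems carry far richer attributes (posture, workload, time), but the inversion is the same: location-derived fields become telemetry, not policy keys.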
- The network as an immune system (distributed zero trust)
When humans click on phishing links, we train them to be better. You cannot “train” an infected thermostat or a compromised sensor. As we flood our networks with headless devices, the attack surface expands exponentially.
Security can no longer be a perimeter overlay; it must be intrinsic to the fabric. In this model, the chain of trust starts at the edge. The access point stops being a passive pipe and becomes an enforcement point: identity-based segmentation, continuous verification, and rapid containment at the first hop.
Architecturally, the edge is no longer a passive on-ramp; it is the first line of defense that can shrink blast radius immediately and feed high-fidelity telemetry into centralized policy and response.
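A minimal sketch of first-hop containment, with hypothetical identities, destinations, and verdicts: the access point compares a flow against the device's expected behavior and quarantines locally rather than waiting for a perimeter appliance.

```python
# Illustrative first-hop enforcement: a headless device has a narrow,
# knowable behavior profile, so any deviation is contained at the edge.

EXPECTED = {  # identity -> destinations this device class ever talks to
    "thermostat-412": {"building-mgmt"},
}

def enforce_at_first_hop(identity, destination):
    """Return the access point's verdict for a single flow."""
    allowed = EXPECTED.get(identity, set())
    if destination in allowed:
        return "forward"
    # Unknown destination from a headless device: contain immediately,
    # then surface telemetry to centralized policy and response.
    return "quarantine"

print(enforce_at_first_hop("thermostat-412", "building-mgmt"))  # forward
print(enforce_at_first_hop("thermostat-412", "unknown-host"))   # quarantine
```

The design choice worth noting: because the verdict is rendered one hop from the device, the blast radius of a compromised sensor is a single port, not a subnet.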
The Opportunity
We spent the last 20 years building networks that were excellent at delivering content to people. The next 20 years will be about building networks that deliver context from the physical world to AI models.
This is not just an upgrade cycle. It is a fundamental reimagining of why we build networks in the first place. The edge is no longer just about connectivity. It is the sensory interface for the AI era.
If you’re a network or infrastructure leader looking at this shift, the key question isn’t “how fast can the wireless network go?” The question is: can we support real-time, deterministic applications? Can we make policy follow identity across domains? Can we contain threats where they originate, not after they spread?
The technology to build this exists today. The “things” are already here. The agents are waking up.
We are done designing for human patience. Now, we must build the nervous system for machine precision. The “Best Effort” era is over. The Deterministic era has begun.