
Quantum computing conversations tend to get pulled toward the exotic: superposition, quantum entanglement, and what a world with exponentially faster compute will look like. But at the Xcelerated Computing Conference in New York, Solidigm’s Jeniece Wnorowski and I spoke with Burns Healey, quantum infrastructure lead for Dell, who offered a grounded perspective. For quantum technology to matter, it first needs a place to land, and that place is working alongside the classical data center.
It’s a framing that shapes everything about how Dell is approaching the market, and one with practical implications for technology decision-makers weighing when and how quantum fits into their roadmaps.
Our conversation started with reconsidering the terms we use when we talk about quantum technology. “Quantum computers are almost a bit of a misnomer,” Burns said. “When you say quantum computer, I prefer to use the term quantum accelerator, because really that’s what they are. They’re an add-on to HPC or data center infrastructure that give you specialized options for computing specific workloads.”
This perspective, that quantum technology is best understood as an extension of high-performance computing environments, can be helpful for enterprise leaders who feel pressure to engage with quantum. Organizations attempting to adopt quantum before they’ve pushed classical computing to its limits are, in his view, getting ahead of themselves.
“Going to a quantum computer before you’ve attempted to use classical HPC or large data center environments is a bit like trying to run before you’ve walked,” he said. “Only once you hit those limits in your data center, in your HPC environment, will you start to think about what quantum can do that you can’t currently do.”
Much of the early quantum conversation focused on physical qubit (quantum bit) counts and error rates, but the discussion has recently shifted toward logical qubits and error correction as the field works out what usable quantum computing will look like.
Burns drew a direct analogy to classical computing. Just as classical error-correcting codes let applications run reliably despite occasional bit errors in memory or storage, logical qubits aim to provide a stable, abstracted layer above the noisy physical qubit substrate.
“The way we use them from a vendor and hardware supplier viewpoint is that we are going to aim to abstract away a lot of that physical layer complexity from the end user,” he said. “It’s a lowering-the-barrier-to-entry question in my mind, and the best way we can help onboard new people to the technology.”
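To make that analogy concrete, here is a minimal sketch of the simplest error-correction idea, a three-qubit bit-flip repetition code, written in Qiskit. It is an illustrative toy of my own choosing, not Dell’s or any vendor’s scheme: one logical qubit is spread across three physical qubits, and parity measurements on ancilla qubits let a classical decoder locate a single flip without the software above ever touching the physical layer.

```python
# Toy 3-qubit bit-flip repetition code in Qiskit (illustrative only;
# not a production error-correction scheme).
from qiskit import QuantumCircuit

qc = QuantumCircuit(5, 2)  # qubits 0-2: data, qubits 3-4: syndrome ancillas

# Encode one logical qubit across three physical qubits: |0> -> |000>
qc.cx(0, 1)
qc.cx(0, 2)

# Deliberately flip qubit 1 to stand in for a physical bit-flip error.
qc.x(1)

# Syndrome extraction: ancilla 3 records the parity of qubits 0 and 1,
# ancilla 4 records the parity of qubits 1 and 2.
qc.cx(0, 3)
qc.cx(1, 3)
qc.cx(1, 4)
qc.cx(2, 4)
qc.measure([3, 4], [0, 1])

# A syndrome of '11' points at qubit 1; a classical decoder would apply a
# corrective X there, so layers above only ever see one stable logical qubit.
print(qc.draw())
```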
When you think of quantum computers as quantum accelerators, the infrastructure that allows quantum and classical systems to work together seamlessly becomes paramount. Rather than building quantum processing units (QPUs), Dell is helping produce the ecosystem and infrastructure appliances that will make quantum devices usable within real data center environments. A major challenge in that area is the latency between quantum and classical systems, and Burns pointed to Dell’s collaboration with NVIDIA as a current example of work to address it.
NVIDIA has developed a framework called NVQLink, designed to minimize the round-trip latency between QPUs and classical compute. Using NVQLink on Dell PowerEdge servers, the two companies recently demonstrated sub-4-microsecond latency, a result Burns described as meaningful progress toward the kind of tight integration that real quantum workloads will require.
“We’re really looking at what the technology needs in terms of specifications and hitting those targets to make this infrastructure usable for real quantum computing,” he said.
Dell also works with quantum partners including QuEra and IQM, and has a joint research initiative with Ernst & Young, all documented on Dell’s hybrid quantum-classical computing page.
When asked what needs to happen technically and operationally for quantum to move from research settings to deployable infrastructure, Burns identified two parallel tracks.
On the software side, progress is already underway. Frameworks like IBM’s open-source Qiskit are helping developers work with quantum gates and algorithms today. The next meaningful shift will come when developers can work at a higher, Python-like level of abstraction, or eventually through application-specific tools that require no quantum expertise at all.
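For readers who have not seen it, here is a minimal sketch of what that gate-level work looks like in Qiskit today: a two-qubit Bell-state circuit run on a local simulator. The circuit and the AerSimulator backend are illustrative choices of mine, not anything discussed in the conversation; in a hybrid setup, a quantum accelerator would eventually stand in for the simulator.

```python
# Minimal gate-level Qiskit example (assumes qiskit and qiskit-aer are installed).
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

# Build a two-qubit Bell-state circuit at the gate level.
qc = QuantumCircuit(2, 2)
qc.h(0)           # put qubit 0 into superposition
qc.cx(0, 1)       # entangle qubit 0 with qubit 1
qc.measure([0, 1], [0, 1])

# Run on a local classical simulator; a quantum accelerator would
# eventually replace this backend in a hybrid deployment.
backend = AerSimulator()
counts = backend.run(transpile(qc, backend), shots=1000).result().get_counts()
print(counts)     # roughly half '00' and half '11'
```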
On the hardware side, cabling is one of the more pressing unsolved problems. Superconducting qubit systems require analog signals routed to each individual qubit. At 50 or 100 qubits, that is manageable. At thousands or millions of qubits, the cabling architecture becomes an issue. Ideas to address this include embedding classical components inside dilution refrigerators and more sophisticated multiplexing approaches, both of which introduce their own challenges.
Dell’s positioning in the quantum space is as perceptive as you would expect from one of the world’s classical computing giants. Rather than competing with QPU vendors, the company is focused on the infrastructure layer that will make quantum systems usable in real enterprise environments.
Burns’s framing of quantum as an accelerator, not a computer, is a useful corrective for organizations trying to calibrate their engagement with the technology. For most enterprises, the near-term question is not whether to adopt quantum, but how to ensure that classical infrastructure is ready when quantum workloads become viable. The organizations with the strongest HPC foundations will be best positioned to take advantage of it.
Listen to our conversation in the full podcast episode, and find more information about Dell’s hybrid quantum-classical computing work on Dell’s quantum computing site.