

Quantinuum's H-Series team has hit the ground running in 2023, achieving a new performance milestone. The H1-1 trapped-ion quantum computer has achieved a Quantum Volume (QV) of 32,768 (2^15), the highest in the industry to date.

The team previously increased the QV of the System Model H1 to 8,192 (2^13) in September, less than six months ago. The next goal was a QV of 16,384 (2^14). Continuous improvements to the H1-1's controls and subsystems advanced the system enough to reach 2^14 as planned, and then to go one step further and reach a QV of 2^15.
The Quantum Volume test is a full-system benchmark that produces a single-number measure of a quantum computer's general capability. The benchmark takes into account qubit number, fidelity, connectivity, and other quantities important in building useful devices. While other measures such as gate fidelity and qubit count are significant and worth tracking, neither is as comprehensive as Quantum Volume, which better represents the operational ability of a quantum computer.
Dr. Brian Neyenhuis, Director of Commercial Operations, credits reductions in the phase noise of the computer's lasers as one key factor in the increase.
"We've had enough qubits for a while, but we've been continually pushing on reducing the error in our quantum operations, specifically the two-qubit gate error, to allow us to do these Quantum Volume measurements," he said.
The Quantinuum team also reduced memory error and improved elements of the calibration process.
"It was a lot of little things that got us to the point where our two-qubit gate error and our memory error are both low enough that we can pass these Quantum Volume circuit tests," he said.
The work of increasing Quantum Volume means improving all the subsystems and subcomponents of the machine individually and simultaneously, while ensuring all the systems continue to work well together. Such a complex task takes a high degree of orchestration across the Quantinuum team, with the benefits of the work passed on to H-Series users.
To illustrate what this five-digit Quantum Volume milestone means for the H-Series, here are five perspectives from Quantinuum teams and H-Series users.
Dr. Henrik Dreyer is Managing Director and Scientific Lead at Quantinuum's office in Munich, Germany. In the context of his work, an improvement in Quantum Volume matters because of its close relationship to gate fidelity.
"As application developers, the signal-to-noise ratio is what we're interested in," Henrik said. "If the signal is small, I might run the circuits 10 times and only get one good shot. To recover the signal, I have to do a lot more shots and throw most of them away. Every shot takes time."
"The signal-to-noise ratio is sensitive to the gate fidelity. If you increase the gate fidelity by a little bit, the runtime of a given algorithm may go down drastically," he said. "For a typical circuit, as the plot shows, even a relatively modest 0.16-percentage-point improvement in fidelity could mean that it runs in less than half the time."
To demonstrate this point, the Quantinuum team has been benchmarking System Model H1 performance on circuits relevant to near-term applications. The graph below shows repeated benchmarking of the runtime of these circuits before and after the recent improvement in gate fidelity. The result of this moderate change in fidelity is a 3x change in runtime. The runtimes calculated below are based on the number of shots required to obtain accurate results from the benchmarking circuit; the example uses 430 arbitrary-angle two-qubit gates and an accuracy of 3%.
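The compounding effect described above is easy to sketch numerically. The snippet below is an illustrative calculation only: the two gate-fidelity values and the assumption that shot counts scale inversely with a power of circuit-level fidelity are ours, not Quantinuum's published runtime model.

```python
# Illustrative sketch: how a small per-gate fidelity gain compounds over a
# deep circuit, and how that scales the shots (and runtime) needed.
# The gate fidelities below are assumed for illustration.

def circuit_fidelity(gate_fidelity: float, n_gates: int) -> float:
    """Crude model: circuit-level fidelity as the product of per-gate fidelities."""
    return gate_fidelity ** n_gates

def runtime_ratio(f_before: float, f_after: float, n_gates: int,
                  signal_power: int = 2) -> float:
    """Relative shot count before vs. after an improvement, assuming the shots
    needed for fixed accuracy scale as 1 / F_circuit**signal_power."""
    return (circuit_fidelity(f_after, n_gates) /
            circuit_fidelity(f_before, n_gates)) ** signal_power

N_GATES = 430                        # arbitrary-angle two-qubit gates, as in the post
f_before, f_after = 0.9963, 0.9979   # assumed values, 0.16 percentage points apart

print(f"circuit fidelity before: {circuit_fidelity(f_before, N_GATES):.3f}")
print(f"circuit fidelity after:  {circuit_fidelity(f_after, N_GATES):.3f}")
print(f"runtime ratio:           {runtime_ratio(f_before, f_after, N_GATES):.1f}x")
```

Under these assumptions the ratio lands in the 2x-4x range depending on how the signal enters the shot count, consistent in spirit with the roughly 3x runtime change reported here.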

Dr. Natalie Brown and Dr. Ciaran Ryan-Anderson both work on quantum error correction at Quantinuum. They see the QV advance as an overall boost to this work.
"Hitting a Quantum Volume number like this means that you have low error rates, a lot of qubits, and very long circuits," Natalie said. "And all three of those are wonderful things for quantum error correction. A higher Quantum Volume most certainly means we will be able to run quantum error correction better. Error correction is a critical ingredient to large-scale quantum computing. The earlier we can start exploring error correction on today's small-scale hardware, the faster we'll be able to demonstrate it at large scale."
Ciaran said that the H1-1's low error rates let scientists improve error correction and begin exploring decoding options.
"If you can have really low error rates, you can apply a lot of quantum operations, known as gates," Ciaran said. "This makes quantum error correction easier because we can suppress the noise even further and potentially use fewer resources to do it, compared to other devices."
"This accomplishment shows that gate improvements are getting translated to full-system circuits," said Dr. Charlie Baldwin, a research scientist at Quantinuum.
Charlie specializes in quantum computing performance benchmarks, conducting research with the Quantum Economic Development Consortium (QED-C).
"Other benchmarking tests use easier circuits or incorporate other options like post-processing data. That can make it more difficult to determine which part improved," he said. "With Quantum Volume, it's clear that the performance improvements come from the hardware, which are the hardest and most significant improvements to make."
"Quantum Volume is a well-established test. You really can't cheat it," said Charlie.
Dr. Ross Duncan, Head of Quantum Software, sees Quantum Volume measurements as a good way to show overall progress in the process of building a quantum computer.
"Quantum Volume has merit, compared to any other measure, because it gives a clear answer," he said.
"This latest increase reveals the extent of combined improvements in the hardware in recent months and means researchers and developers can expect to run deeper circuits with greater success."
Quantinuum's business model is unique in that the H-Series systems are continuously upgraded through their product lifecycle. For users, this means immediate access to the latest breakthroughs in performance. The reported improvements were not made on an internal testbed; they were implemented on the H1-1 system, which is commercially available and used extensively by customers around the world.
"As soon as the improvements were implemented, users were benefiting from them," said Dr. Jenni Strabley, Sr. Director of Offering Management. "We take our Quantum Volume measurement intermixed with customers' jobs, so we know that the improvements we're seeing are also being seen by our customers."
Jenni went on to say, "Continuously delivering increasingly better performance shows our commitment to our customers' success with these early small-scale quantum computers, as well as our commitment to accuracy and transparency. That's how we accelerate quantum computing."
This latest QV milestone demonstrates how the Quantinuum team continues to boost the performance of the System Model H1, making improvements to the two-qubit gate fidelity while maintaining high single-qubit fidelity, high SPAM fidelity, and low cross-talk.
The average single-qubit gate fidelity for these milestones was 99.9955(8)%, the average two-qubit gate fidelity was 99.795(7)% with fully connected qubits, and state preparation and measurement fidelity was 99.69(4)%.
For both tests, the Quantinuum team ran 100 circuits with 200 shots each, using standard QV optimization techniques, yielding an average of 219.02 arbitrary-angle two-qubit gates per circuit on the 2^14 test and 244.26 arbitrary-angle two-qubit gates per circuit on the 2^15 test.
The Quantinuum H1-1 passed the Quantum Volume 16,384 benchmark by outputting heavy outcomes 69.88% of the time, and passed the 32,768 benchmark by outputting heavy outcomes 69.075% of the time. The heavy-output frequency is a simple measure of how well the measured outputs from the quantum computer match the results of an ideal simulation. Both results are above the two-thirds passing threshold with high confidence. More details on the Quantum Volume test can be found .
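To make the heavy-output idea concrete, here is a minimal sketch of the ideal-simulation side of the test. This is a simplified toy, not the exact QV protocol: it builds model circuits from random qubit permutations and Haar-random two-qubit unitaries on fixed pairs, and it omits the confidence-interval statistics used for the pass/fail decision.

```python
import numpy as np

rng = np.random.default_rng(7)

def haar_unitary(dim: int) -> np.ndarray:
    """Haar-random unitary via QR decomposition of a complex Gaussian matrix."""
    z = (rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))          # fix the phases so the result is Haar

def permute_qubits(m: int, perm: list[int]) -> np.ndarray:
    """Matrix that rearranges the m qubit wires of a 2**m statevector."""
    dim = 2 ** m
    P = np.zeros((dim, dim))
    for x in range(dim):
        bits = [(x >> j) & 1 for j in range(m)]
        y = sum(bits[perm[j]] << j for j in range(m))
        P[y, x] = 1.0
    return P

def heavy_output_probability(m: int) -> float:
    """Ideal heavy-output mass of one random m-qubit, m-round model circuit."""
    psi = np.zeros(2 ** m, dtype=complex)
    psi[0] = 1.0
    for _ in range(m):                   # m rounds, as in the QV protocol
        psi = permute_qubits(m, list(rng.permutation(m))) @ psi
        layer = np.eye(1)
        for _ in range(m // 2):          # random SU(4) on each disjoint pair
            layer = np.kron(layer, haar_unitary(4))
        psi = layer @ psi
    p = np.abs(psi) ** 2
    return p[p > np.median(p)].sum()     # probability mass above the median

hops = [heavy_output_probability(4) for _ in range(5)]
print(np.mean(hops))
```

For an ideal (noiseless) device the heavy-output probability concentrates near (1 + ln 2)/2 ≈ 0.85 per circuit; a noisy device drifts toward 0.5, and the QV test demands staying above 2/3.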


Quantum Volume data and analysis code can be accessed on . Contemporary benchmarking data can be accessed at .
Quantinuum, the world's largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum's technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents.
By Konstantinos Meichanetzidis
When will quantum computers outperform classical ones?
This question has hovered over the field for decades, shaping billion-dollar investments and driving scientific debate.
The question has more meaning in context, as the answer depends on the problem at hand. We already have estimates of the quantum computing resources needed for Shor's algorithm, which has a superpolynomial advantage for integer factoring over the best-known classical methods, threatening cryptographic protocols. Quantum simulation allows one to glean insights into exotic materials and chemical processes that classical machines struggle to capture, especially when strong correlations are present. But even within these examples, estimates change surprisingly often, carving years off expected timelines. And outside these famous cases, the map to quantum advantage remains hazy.
Researchers at Quantinuum have taken a fresh step toward drawing this map. In a new theoretical framework, Harry Buhrman, Niklas Galke, and Konstantinos Meichanetzidis introduce the concept of "queasy instances" (quantum-easy): problem instances that are comparatively easy for quantum computers but appear difficult for classical ones.

Traditionally, computer scientists classify problems according to their worst-case difficulty. Consider Boolean satisfiability, or SAT, where one is given a set of variables (each assigned a 0 or a 1) and a set of constraints, and must decide whether there exists a variable assignment that satisfies all the constraints. SAT is a canonical NP-complete problem, so in the worst case both classical and quantum algorithms are expected to perform badly, meaning the runtime scales exponentially with the number of variables. Factoring, on the other hand, is believed to be easier for quantum computers than for classical ones. But real-world computing doesn't deal only in worst cases. Some instances of SAT are trivial; others are nightmares. The same is true for optimization problems in finance, chemistry, and logistics. What if quantum computers have an advantage not across all instances, but only for specific "pockets" of hard instances? This could be very valuable, but worst-case analysis is oblivious to it and simply declares that there is no quantum advantage.
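As a toy illustration of what a SAT instance looks like and why the worst case is hard, here is a brute-force checker (our example, not from the paper); real solvers are far more sophisticated, but the exponential sweep over assignments is the point:

```python
from itertools import product

# A CNF formula: each clause is a list of literals; literal k means
# "variable k is true", -k means "variable k is false".
def satisfiable(n_vars: int, clauses: list[list[int]]) -> bool:
    """Brute-force SAT: try all 2**n_vars assignments (exponential worst case)."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(satisfiable(3, [[1, 2], [-1, 3], [-2, -3]]))  # True: e.g. x1=1, x2=0, x3=1
print(satisfiable(1, [[1], [-1]]))                  # False: x1 can't be both
```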
To make that idea precise, the researchers turned to a tool from theoretical computer science: Kolmogorov complexity. This is a way of measuring how "regular" a string of bits is, based on the length of the shortest program that generates it. A simple string like 0000000000 can be described by a tiny program ("print ten zeros"), while the shortest program that generates a patternless random string is as long as the string itself. From there, the notion of instance complexity was developed: instead of asking "how hard is it to describe this string?", we ask "how hard is it to solve this particular problem instance (represented by a string)?" For a given SAT formula, for example, its polynomial-time instance complexity is the size of the smallest program that runs in polynomial time and decides whether the formula is satisfiable. This smallest program must also answer consistently on every other instance, though it is allowed to declare "I don't know".
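A hypothetical example (ours, not from the paper) makes the definition concrete: the following short polynomial-time program correctly decides some SAT instances and declares "don't know" on the rest. An instance's polynomial-time instance complexity is the length of the shortest such program that decides it, so instances this snippet handles have very low complexity.

```python
def easy_sat(clauses: list[list[int]]):
    """A short, fast, *partial* SAT decider: returns True/False on instances it
    recognizes and "don't know" otherwise (it never answers incorrectly)."""
    if not clauses:
        return True                 # no constraints at all: trivially satisfiable
    if any(len(c) == 0 for c in clauses):
        return False                # an empty clause can never be satisfied
    lits = {lit for c in clauses for lit in c}
    if all(-lit not in lits for lit in lits):
        return True                 # no variable occurs with both signs:
                                    # set every appearing literal to true
    return "don't know"             # this program is too short to decide this one

print(easy_sat([[1], [2, 3]]))      # True  (set x1, x2, x3 all true)
print(easy_sat([[1, -2], [-1, 2]])) # "don't know"
```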
In their new work, the team extends this idea into the quantum realm by defining polynomial-time quantum instance complexity as the size of the shortest quantum program that solves a given instance and runs in polynomial time. This makes it possible to directly compare quantum and classical effort, in terms of program description length, on the very same problem instance. If the quantum description is significantly shorter than the classical one, that problem instance is one the researchers call "queasy": quantum-easy and classically hard. These queasy instances are the precise places where quantum computers offer a provable advantage, one that may be overlooked under a worst-case analysis.
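In symbols (our notation, paraphrasing the definition above): writing $\mathrm{ic}(x)$ for the classical polynomial-time instance complexity of an instance $x$ and $\mathrm{Qic}(x)$ for its quantum counterpart, an instance is queasy when

```latex
\mathrm{Qic}(x) \;\ll\; \mathrm{ic}(x),
```

and one natural measure of how queasy it is would be the gap $\mathrm{ic}(x) - \mathrm{Qic}(x)$ between the two description lengths.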
The playful name captures the imbalance between classical and quantum effort. A queasy instance is one that makes classical algorithms struggle: the shortest efficient classical programs that decide it are long and unwieldy, while a quantum computer can handle the same instance with a much simpler, shorter program. In other words, these instances make classical computers "queasy," while quantum computers find them easy. The key test of these definitions lies in demonstrating that they yield reasonable results for well-known optimisation problems.
By carefully analysing a mapping from integer factoring to SAT (possible because factoring is in NP and SAT is NP-complete), the researchers prove that there exist infinitely many queasy SAT instances. SAT is one of the most central and well-studied problems in computer science, with numerous real-world applications. The significant realisation this framework highlights is that SAT is not expected to yield a blanket quantum advantage; within it, however, lie islands of queasiness: special cases where quantum algorithms decisively win.

Finding a queasy instance is exciting in itself, but there is more to this story. Surprisingly, within the new framework it is demonstrated that when a quantum algorithm solves a queasy instance, it does much more than solve that single case. Because the program that solves it is so compact, the same program can provably solve an exponentially large set of other instances as well. Interestingly, the size of this set depends exponentially on the queasiness of the instance.
Think of it like discovering a special shortcut through a maze. Once you've found the trick, it doesn't just solve that one path; it reveals a pattern that helps you solve many other similarly built mazes, too (even if not optimally). This property is called algorithmic utility, and it means that queasy instances are not isolated curiosities. Each one can open a doorway to a whole corridor of other doors, behind which quantum advantage might lie.
Queasy instances are more than a mathematical curiosity; the new framework provides a language for quantum advantage. Even though the quantities defined in the paper are theoretical, involving Turing machines and viewing programs as abstract bitstrings, they can be approximated in practice by taking an experimental, engineering approach. This work serves as a foundation for pursuing quantum advantage by targeting problem instances, and it proves that in principle this can be a fruitful endeavour.
The researchers see a parallel with the rise of machine learning. The idea of neural networks existed for decades, along with small-scale analogue and digital implementations, but only when GPUs enabled large-scale trial and error did they explode into practical use. Quantum computing, they suggest, is on the cusp of its own heuristic era. Heuristics will be prominent in finding queasy instances, whose structure classical methods struggle with but quantum algorithms can exploit, eventually arriving at solutions to typical real-world problems. After all, quantum computing is well suited to small-data, big-compute problems, and the framework quantifies exactly that: instance complexity captures both the size of an instance and the amount of compute required to solve it.
Most importantly, queasy instances shift the conversation. Instead of asking the broad question of when quantum computers will surpass classical ones, we can now rigorously ask where they do. The queasy framework provides a language and a compass for navigating this rugged computational landscape, pointing researchers, engineers, and industries toward quantum advantage.
From September 16th to 18th, the Quantum World Congress (QWC) brought together visionaries, policymakers, researchers, investors, and students from across the globe in Tysons, Virginia, to discuss the future of quantum computing.
Quantinuum is forging the path to universal, fully fault-tolerant quantum computing with our integrated full stack. With our quantum experts on site, we showcased the latest on Quantinuum Systems, the world's highest-performing, commercially available quantum computers; our new software stack, featuring the key additions of Guppy and Selene; our path to error correction; and more.
Dr. Patty Lee Named the Industry Pioneer in Quantum
The Quantum Leadership Awards celebrate visionaries transforming quantum science into global impact. This year at QWC, Dr. Patty Lee, our Chief Scientist for Hardware Technology Development, was named the Industry Pioneer in Quantum! This honor celebrates her more than two decades of leadership in quantum computing and her pivotal role advancing the world's leading trapped-ion systems.
Keynote with Quantinuum's CEO, Dr. Rajeeb Hazra
At QWC 2024, Quantinuum's President & CEO, Dr. Rajeeb "Raj" Hazra, took the stage to showcase our commitment to advancing quantum technologies through the unveiling of our roadmap to universal, fully fault-tolerant quantum computing by the end of this decade. This year at QWC 2025, Raj shared the progress we've made over the last year in advancing quantum computing on both commercial and technical fronts, along with exciting insights into what's to come from Quantinuum.
Panel Session: Policy Priorities for Responsible Quantum and AI
As part of the Track Sessions on Government & Security, Quantinuum's Director of Government Relations, Ryan McKenney, discussed "Policy Priorities for Responsible Quantum and AI" with Jim Cook from Actions to Impact Strategies and Paul Stimers from Quantum Industry Coalition.
Fireside Chat: Establishing a Pro-Innovation Regulatory Framework
During the Track Session on Industry Advancement, Quantinuum's Chief Legal Officer, Kaniah Konkoly-Thege, and Director of Government Relations, Ryan McKenney, discussed the importance of "Establishing a Pro-Innovation Regulatory Framework".
In the world of physics, ideas can lie dormant for decades before revealing their true power. What begins as a quiet paper in an academic journal can eventually reshape our understanding of the universe itself.
In 1993, nestled deep in the halls of Yale University, physicist Subir Sachdev and his graduate student Jinwu Ye stumbled upon such an idea. Their work, originally aimed at unraveling the mysteries of "spin fluids", would go on to ignite one of the most surprising and profound connections in modern physics: a bridge between the strange behavior of quantum materials and the warped spacetime of black holes.
Two decades after the paper was published, it was pulled into the orbit of a radically different domain: quantum gravity. Thanks to work by renowned physicist Alexei Kitaev in 2015, the model found new life as a testing ground for the mind-bending theory of holography: the idea that the universe we live in might be a projection from a lower-dimensional reality.
Holography is an exotic approach to understanding reality in which scientists describe higher-dimensional systems in one less dimension. So, if our world is 3+1 dimensional (three spatial directions plus time), there exists a 2+1-dimensional description of it. In the words of Leonard Susskind, a pioneer in quantum holography, "the three-dimensional world of ordinary experience—the universe filled with galaxies, stars, planets, houses, boulders, and people—is a hologram, an image of reality coded on a distant two-dimensional surface."
The "SYK" model, as it is known today, is considered a quintessential framework for studying strongly correlated quantum phenomena, which occur in everything from superconductors to strange metals, and even in black holes. The SYK model has also been used to study one of physics' true final frontiers, quantum gravity, with the authors of the paper calling it "a paradigmatic model for quantum gravity in the lab."
The SYK model involves Majorana fermions, a type of particle that is its own antiparticle. A key feature of the model is that these fermions are all-to-all connected, leading to strong correlations. This connectivity makes the model particularly challenging to simulate on classical computers, where such correlations are difficult to capture. Our quantum computers, however, natively support all-to-all connectivity, making them a natural fit for studying the SYK model.
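To make the structure concrete, here is a small numerical sketch (our construction, using a standard Jordan-Wigner encoding; conventions such as the coupling variance vary across the literature): N Majorana operators live on N/2 qubits, and the Hamiltonian sums all-to-all four-Majorana terms with Gaussian random couplings.

```python
import numpy as np
from functools import reduce
from itertools import combinations

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def majoranas(n_fermions: int) -> list[np.ndarray]:
    """Jordan-Wigner Majoranas, two per qubit, normalized so that
    chi_a chi_b + chi_b chi_a = 2 * delta_ab * I."""
    n_qubits = n_fermions // 2
    ops = []
    for k in range(n_qubits):
        for P in (X, Y):                      # Z-string, then X or Y on qubit k
            factors = [Z] * k + [P] + [I2] * (n_qubits - k - 1)
            ops.append(reduce(np.kron, factors))
    return ops

def syk_hamiltonian(n_fermions: int, J: float = 1.0, seed: int = 0) -> np.ndarray:
    """H = sum_{i<j<k<l} J_ijkl chi_i chi_j chi_k chi_l, with Gaussian
    couplings of variance 3! J^2 / N^3 (one common convention)."""
    rng = np.random.default_rng(seed)
    chi = majoranas(n_fermions)
    dim = chi[0].shape[0]
    H = np.zeros((dim, dim), dtype=complex)
    sigma = np.sqrt(6.0 * J**2 / n_fermions**3)
    for i, j, k, l in combinations(range(n_fermions), 4):
        H += rng.normal(0.0, sigma) * (chi[i] @ chi[j] @ chi[k] @ chi[l])
    return H

H = syk_hamiltonian(8)      # 8 Majoranas -> 4 qubits -> a 16 x 16 matrix
print(H.shape)
```

The sum runs over all C(N, 4) quadruples of fermions, which is exactly the all-to-all connectivity that makes the model expensive classically and costly to compile on hardware without native all-to-all gates.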
Now, 10 years after Kitaev's watershed lectures, we've made new progress in studying the SYK model, reported in a new paper. By exploiting our system's native high fidelity and all-to-all connectivity, as well as our scientific team's deep expertise across many disciplines, we were able to study the SYK model at a scale three times larger than the previous best experimental attempt.
While this work does not exceed classical techniques, it comes very close to the classical state of the art. The largest classical study to date was done on 64 fermions, while our recent result, run on our smallest processor (System Model H1), included 24 fermions. Modelling 24 fermions costs only 12 qubits (plus one ancilla), making it clear that we can quickly scale these studies: our System Model H2 supports 56 qubits (roughly 100 fermions), and Helios, which is coming online this year, will have over 90 qubits (roughly 180 fermions).
However, working with the SYK model takes more than just qubits. The SYK model has a complex Hamiltonian that is difficult to work with when encoded on a computer, quantum or classical. Studying the real-time dynamics of the SYK model means first representing the initial state on the qubits, then evolving it properly in time according to an intricate set of rules that determine the outcome. This means deep circuits (many circuit operations), which demand very high fidelity; otherwise an error will occur before the computation finishes.
Our cross-disciplinary team worked to ensure that we could pull off such a large simulation on a relatively small quantum processor, laying the groundwork for quantum advantage in this field.
First, the team adopted the TETRIS algorithm to run the simulation. By using random sampling, among other methods, TETRIS computes the time evolution of a system without the pernicious discretization errors or sizable overheads that plague other approaches. TETRIS is particularly suited to simulating the SYK model: because of the high level of disorder in the material, simulating the SYK Hamiltonian means averaging over many random Hamiltonians. With TETRIS, one generates random circuits to compute the evolution (even for a deterministic Hamiltonian), so for every shot one can generate a random instance of the Hamiltonian and a random TETRIS circuit at the same time. This approach reduces the gate count required per shot, meaning users can run more shots, naturally mitigating noise.
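The post doesn't spell out TETRIS's internals, but the per-shot pattern it describes, drawing a fresh random Hamiltonian instance together with a fresh random evolution circuit, can be sketched with a generic randomized product formula. The sketch below is qDRIFT-style and is our stand-in for illustration, not the actual TETRIS algorithm; the toy one-qubit "disordered" Hamiltonian is also ours.

```python
import numpy as np

rng = np.random.default_rng(1)

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_exp(P: np.ndarray, theta: float) -> np.ndarray:
    """exp(-i * theta * P) for any P with P @ P = I (e.g. a Pauli string)."""
    return np.cos(theta) * np.eye(P.shape[0]) - 1j * np.sin(theta) * P

def random_evolution_circuit(terms, t: float, n_steps: int) -> np.ndarray:
    """qDRIFT-style sampling for H = sum_j h_j P_j: pick term j with
    probability |h_j| / lam, apply exp(-i sign(h_j) * (lam t / n_steps) * P_j)."""
    coeffs = np.array([h for h, _ in terms])
    lam = np.abs(coeffs).sum()
    tau = lam * t / n_steps
    U = np.eye(terms[0][1].shape[0], dtype=complex)
    for j in rng.choice(len(terms), size=n_steps, p=np.abs(coeffs) / lam):
        h, P = terms[j]
        U = pauli_exp(P, np.sign(h) * tau) @ U
    return U

def one_shot(t: float = 0.3, n_steps: int = 50) -> np.ndarray:
    """Per shot: draw a fresh random Hamiltonian instance (the disorder
    average) AND a fresh random circuit (the randomized formula) together."""
    terms = [(rng.normal(), X), (rng.normal(), Z)]
    return random_evolution_circuit(terms, t, n_steps)

U = one_shot()
print(np.allclose(U @ U.conj().T, np.eye(2)))  # each shot is a valid unitary
```

Averaging measurement outcomes over many such shots performs the disorder average and the randomized time evolution in one pass, which is the economy the paragraph above describes.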
In addition, the team "sparsified" the SYK model, "pruning" the fermion interactions to reduce complexity while maintaining the model's crucial features. By combining sparsification with the TETRIS algorithm, the team significantly reduced the circuit complexity, allowing the simulation to run on our machine with high fidelity.
They didn't stop there. The team also proposed two new noise-mitigation techniques, ensuring that they could run circuits deep enough without the results devolving entirely into noise. Both techniques worked well, and the team showed that their algorithm, combined with the noise mitigation, performed significantly better and delivered more accurate results. The perfect agreement between the circuit results and the true theoretical results is a remarkable feat, the product of a co-design effort between algorithms and hardware.
As we scale to larger systems, we come closer than ever to realizing quantum gravity in the lab, and thus, answering some of science鈥檚 biggest questions.