The central question preoccupying our team has been:
“How can quantum structures and quantum computers contribute to the effectiveness of AI?”
In previous work we made notable advances in answering this question. This article is based on our most recent work in the new papers [, ], and most notably on the experiment in [].
This article is one of a series that we will be publishing alongside further advances – advances that are accelerated by access to the most powerful quantum computers available.
Large Language Models (LLMs) such as ChatGPT are having an impact on society across many walks of life. However, as users have become more familiar with this new technology, they have also become increasingly aware of deep-seated and systemic problems that come with AI systems built around LLMs.
The primary problem with LLMs is that nobody knows how they work: as inscrutable “black boxes” they aren’t “interpretable”, meaning we can’t reliably or efficiently control or predict their behavior. This is unacceptable in many situations. In addition, modern LLMs are incredibly expensive to build and run, costing serious – and potentially unsustainable – amounts of power to train and use. This is why more and more organizations, governments, and regulators are insisting on solutions.
But how can we find these solutions, when we don’t fully understand what we are dealing with now?1
At ҹɫֱ, we have been working on natural language processing (NLP) using quantum computers for some time now. We are excited to have recently carried out experiments [] which demonstrate not only how it is possible to train a model for a quantum computer in a scalable manner, but also how to do this in a way that is interpretable for us. Moreover, we have promising theoretical indications of the usefulness of quantum computers for interpretable NLP [].
In order to better understand why this could be the case, one needs to understand the ways in which meanings compose together throughout a story or narrative. Our work towards capturing them in a new model of language, which we call DisCoCirc, has been reported on extensively elsewhere.
In new work referred to in this article, we embrace “compositional interpretability” as proposed in [] as a solution to the problems that plague current AI. In brief, compositional interpretability boils down to being able to assign a human-friendly meaning, such as natural language, to the components of a model, and then being able to understand how they fit together2.
A problem currently inherent to quantum machine learning is training at scale. We avoid this by making use of “compositional generalization”: we train small, on classical computers, and then at test time evaluate much larger examples on a quantum computer. There now exist quantum computers that are impossible to simulate classically, and to train models for such computers, compositional generalization currently seems to provide the only credible path.
1. Text as circuits
DisCoCirc is a circuit-based model for natural language that turns arbitrary text into “text circuits” [, , ]. When we say that arbitrary text becomes text circuits, we mean that lines of text, which live in one dimension, are converted into text circuits, which live in two dimensions: one dimension tracks the entities of the text, the other the events unfolding in time.
To see how that works, consider the following story. In the beginning there are Alex and Beau. Alex meets Beau. Later, Chris shows up, and Beau marries Chris. Alex then kicks Beau.
The content of this story can be represented as the following circuit:
Figure 1. A text circuit for a simple story involving three actors, Alex, Beau and Chris, who have a number of interactions with one another – the circuit is to be read from top to bottom.
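To make this concrete in code, here is a minimal sketch of a text circuit as a plain data structure – our own illustrative Python, not the actual DisCoCirc software – in which each entity is a wire and each event is a gate acting on one or more wires, ordered in time:

```python
from dataclasses import dataclass, field

@dataclass
class Gate:
    """An event in the story, acting on one or more entity wires."""
    label: str      # the word driving the event, e.g. "meets"
    wires: tuple    # which entities the event acts on

@dataclass
class TextCircuit:
    """Entities are wires (one dimension); events are gates ordered in time (the other)."""
    entities: list
    gates: list = field(default_factory=list)

    def apply(self, label, *entities):
        self.gates.append(Gate(label, entities))
        return self

# The story from Figure 1, read from top to bottom.
story = TextCircuit(entities=["Alex", "Beau", "Chris"])
story.apply("meets", "Alex", "Beau") \
     .apply("marries", "Beau", "Chris") \
     .apply("kicks", "Alex", "Beau")
```

Reading the gates in order recovers the story; following a single wire recovers everything that happened to that entity.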
2. From text circuits to quantum circuits
Such a text circuit represents how the ‘actors’ in it interact with each other, and how their states evolve in doing so. Initially, we know nothing about Alex and Beau. Once Alex meets Beau, we know something about their interaction; after Beau marries Chris and Alex kicks Beau, we know quite a bit more about all three, and in particular about how they relate to each other.
Let’s now take those circuits to be quantum circuits.
In the last section we will elaborate on why this could be a very good choice. For now it is enough to know that we simply follow the current paradigm of using vectors for meanings, in exactly the same way that this works in LLMs. Moreover, if we then also want to faithfully represent the compositional structure in language3, we can rely on theorem 5.49 from our book Picturing Quantum Processes, which informally can be stated as follows:
If the manner in which meanings of words (represented by vectors) compose obeys linguistic structure, then those vectors compose in exactly the same way as quantum systems compose.4
In short, a quantum implementation enables us to embrace compositional interpretability, as defined in our recent paper [].
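To illustrate the flavor of this statement (a toy example of ours, not the formal content of the theorem), the snippet below composes two word vectors with a tensor product and lets an interaction word act on the joint space as a unitary, exactly as a two-qubit gate acts on two qubits. The vectors and the gate are made up purely for illustration:

```python
import numpy as np

# Toy meaning vectors for two entities (illustrative values only).
alex = np.array([1.0, 0.0])   # state of "Alex"
beau = np.array([0.6, 0.8])   # state of "Beau"

# Composing the two meanings places the pair in the tensor-product space,
# just as two qubits jointly live in a 4-dimensional Hilbert space.
pair = np.kron(alex, beau)    # shape (4,)

# An interaction word such as "meets" then acts as a linear map (here a unitary)
# on that joint space -- the same way a two-qubit gate acts on two qubits.
theta = 0.3
meets = np.array([
    [1, 0,              0,             0],
    [0, np.cos(theta), -np.sin(theta), 0],
    [0, np.sin(theta),  np.cos(theta), 0],
    [0, 0,              0,             1],
])
pair_after = meets @ pair
print(pair_after)
```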
3. Text circuits on our quantum computer
So, what have we done? And what does it mean?
We implemented a “question-answering” experiment on our ҹɫֱ quantum computers, for text circuits as described above. We know from our new paper [] that this is very hard to do on a classical computer: as the texts get bigger, the task quickly becomes unrealistic to even attempt classically, however powerful the machine. This is worth emphasizing. The experiment we have completed would scale exponentially on classical computers – to the point where the approach becomes intractable.
The experiment consisted of teaching (or training) the quantum computer to answer a question about a story, where both the story and the question are presented as text circuits. To test our model, we created longer stories in the same style as those used in training and questioned these. In our experiment, the stories were about people moving around, and we questioned the quantum computer about who was moving in the same direction at the end of the stories. A harder alternative one could imagine would be a murder mystery, with the computer asked to work out who the murderer was.
And remember: the training in our experiment consists of assigning quantum states and gates to the words that occur in the text.
Figure 2. The question-answering task for the language of text circuits as implementable on a quantum computer from []. Above the dotted line is the text we consider. Below are upside-down text circuits which constitute the question we ask. The boxes with words are parameterized as quantum gates. The diagram on the left constitutes one possible answer to the question, and the one on the right the other. Can you figure out what the text is and what the questions are?
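For readers who want a more concrete picture, here is a small sketch of what parameterizing word boxes as quantum gates, and answering by comparing states, can look like. The words, gate choices and overlap test here are our own simplifications for illustration – they are not the exact ansatz or task from the paper:

```python
import numpy as np

def ry(theta):
    """A one-entity word (e.g. an adjective) as a parameterized rotation on its wire."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def cry(theta):
    """A two-entity word (e.g. a verb) as a parameterized two-qubit gate."""
    g = np.eye(4)
    g[2:, 2:] = ry(theta)
    return g

# One trainable angle per word -- toy values standing in for trained parameters.
params = {"clumsy": 0.7, "sleepy": 1.1, "meets": 0.4}

# Two entities = two qubit wires, both starting in |0>.
zero = np.zeros(4); zero[0] = 1.0

# Story: "Clumsy Alex meets sleepy Beau", read from top to bottom.
story = np.kron(ry(params["clumsy"]), np.eye(2)) @ zero   # adjective on Alex's wire
story = np.kron(np.eye(2), ry(params["sleepy"])) @ story  # adjective on Beau's wire
story = cry(params["meets"]) @ story                      # verb acting on both wires

# A question is an upside-down text circuit: answering amounts to estimating the
# overlap between the story state and the state prepared by a candidate answer.
candidate = np.kron(ry(params["clumsy"]), ry(params["sleepy"])) @ zero
print(f"probability assigned to this answer: {abs(np.vdot(candidate, story))**2:.3f}")
```

On hardware, an overlap like this would be estimated from measurement statistics rather than read off a state vector.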
4. Compositional generalization
The major reason for our excitement is that the training of our circuits enjoys compositional generalization. That is, we can do the training on small-scale ordinary computers, and do the testing – asking the important questions – on quantum computers that can operate in ways not possible classically. Figure 3 shows how, despite only being trained on stories with up to 8 actors, the test accuracy remains high even for much longer stories involving up to 30 actors.
Training large circuits directly in quantum machine learning leads to difficulties which in many cases undo the potential advantage. Critically, compositional generalization allows us to bypass these issues.
Figure 3. A simplified plot from [] showing that increasing the sizes of circuits when testing doesn’t affect the accuracy, after training small-scale on ordinary computers. The number of actors correlates with the text size. H1-1 is the name of the ҹɫֱ quantum computer that was used.
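The essence of the protocol is that parameters are attached to words, not to stories, so the trained parameter set stays the same size however long the test story becomes. The toy simulation below (our own sketch, with a single made-up “trained” parameter) reuses one word gate across stories of increasing length; beyond a few dozen actors the classical state vector no longer fits in memory, which is exactly the regime handed over to the quantum computer:

```python
import numpy as np

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def word_gate(theta):
    # The same two-entity word gate is reused however long the story gets.
    g = np.eye(4)
    g[2:, 2:] = ry(theta)
    return g

def apply_two_qubit(gate, state, i, n):
    # Apply a 4x4 gate to adjacent wires (i, i+1) of an n-wire state vector.
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, [i, i + 1], [0, 1])
    psi = (gate @ psi.reshape(4, -1)).reshape([2] * n)
    psi = np.moveaxis(psi, [0, 1], [i, i + 1])
    return psi.reshape(-1)

theta_trained = 0.42   # stand-in for a parameter learned classically on small stories

for n_actors in (2, 8, 16):
    state = np.zeros(2 ** n_actors); state[0] = 1.0
    for k in range(n_actors - 1):          # a chain of pairwise interactions
        state = apply_two_qubit(word_gate(theta_trained), state, k, n_actors)
    print(f"{n_actors} actors: same trained parameters, larger circuit")
# Around 30 actors the classical state vector no longer fits in memory,
# which is where evaluation moves to quantum hardware.
```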
5. Real-world comparison: ChatGPT
We can compare the results of our experiment on a quantum computer to the performance of a classical LLM, ChatGPT (GPT-4), when asked the same questions.
What we are considering here is a story about a collection of characters who walk in a number of different directions and sometimes follow each other. These are just some initial test examples, but they do show that this kind of reasoning is not particularly easy for LLMs.
The input to ChatGPT was:
What we got from ChatGPT:
Can you see where ChatGPT went wrong?
ChatGPT’s score (in terms of accuracy) oscillated around 50% (equivalent to random guessing). Our text circuits consistently outperformed ChatGPT on these tasks. Future work in this area would involve looking at prompt engineering – for example how the phrasing of the instructions can affect the output, and therefore the overall score.
Of course, we note that ChatGPT and other LLMs will issue new versions that may or may not be marginally better at question-answering tasks, and we also note that our own work may become far more effective as quantum computers rapidly become more powerful.
6. What’s next?
We have now turned our attention to work that will show that using vectors to represent meaning, and requiring compositional interpretability for natural language, takes us natively into the quantum formalism. This does not mean that no efficient classical method exists for solving specific tasks, and it may be hard to prove traditional hardness results whenever machine learning is involved. This may be something we have to come to terms with, just as in classical machine learning.
At ҹɫֱ we possess the most powerful quantum computers currently available. Our recently published roadmap will deliver more computationally powerful quantum computers in the short and medium term, as we extend our lead and push towards universal, fault-tolerant quantum computers by the end of the decade. We expect to show even better (and larger scale) results when implementing our work on those machines. In short, we foresee a period of rapid innovation as powerful quantum computers that cannot be classically simulated become more readily available. This will likely be disruptive, as more and more use cases, including ones that we might not be currently thinking about, come into play.
Interestingly and intriguingly, we are also pioneering the use of powerful quantum computers in a hybrid system that has been described as a ‘quantum supercomputer’, in which quantum computers, HPC and AI work together in an integrated fashion. We look forward to using these systems to advance our work in language processing, and to help solve the problems with LLMs that we highlighted at the start of this article.
1 And where do we go next, when we don’t even understand what we are dealing with now? On previous occasions in the history of science and technology, when efficient models without a clear interpretation have been developed, such as the Babylonian lunar theory or Ptolemy’s model of epicycles, these initially highly successful technologies vanished, making way for something else.
2 Note that our conception of compositionality is more general than the usual one adopted in linguistics, which is due to Frege. A discussion can be found in [].
3 For example, using pregroups here as linguistic structure, which are the cups and caps of PQP.
4 That is, using the tensor product of the corresponding vector spaces.
About ҹɫֱ
ҹɫֱ, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. ҹɫֱ’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, ҹɫֱ leads the quantum computing revolution across continents.
August 28, 2025
Quantum Computing Joins the Next Frontier in Genomics
The Sanger Institute illustrates the value of quantum computing to genomics research
ҹɫֱ supports developments in a field that promises to deliver a profound and positive societal impact
Twenty-five years ago, scientists accomplished a landmark task: the sequencing of the entire human genome.
The Human Genome Project revealed a complete human blueprint comprising around 3 billion base pairs, the chemical building blocks of DNA. It led to breakthrough medical treatments, scientific discoveries, and a new understanding of the biological functions of our body.
Thanks to technological advances in the quarter-century since, what took 13 years and cost $2.7 billion then can now be done in under 12 minutes for a few hundred dollars. Improved instruments such as next-generation sequencers and a better understanding of the human genome – including the availability of a “reference genome” – have aided progress, alongside enormous advances in algorithms and computing power.
But even today, some genomic challenges remain so complex that they stretch beyond the capabilities of the most powerful classical computers operating in isolation. This has sparked a bold search for new computational paradigms, and in particular, quantum computing.
Quantum Challenge: Accepted
The Q4Bio challenge program is pioneering this new frontier. The program funds research to develop quantum algorithms that can overcome current computational bottlenecks. It aims to test the classical boundaries of computational genetics in the next 3-5 years.
One consortium – led by the University of Oxford and supported by prestigious partners including the Wellcome Sanger Institute, the Universities of Cambridge and Melbourne, and Kyiv Academic University – is taking a leading role.
“The overall goal of the team’s project is to perform a range of genomic processing tasks for the most complex and variable genomes and sequences – a task that can go beyond the capabilities of current classical computers” – Wellcome Sanger Institute, July 2025
Selecting ҹɫֱ
Earlier this year, the Sanger Institute selected ҹɫֱ as a technology partner in their bid to succeed in the Q4Bio challenge.
Our flagship quantum computer, System H2, has for many years led the field of commercially available systems for qubit fidelity and consistently holds the global record for Quantum Volume, currently benchmarked at 8,388,608 (2^23).
In this collaboration, the scientific research team can take advantage of ҹɫֱ’s full stack approach to technology development, including hardware, software, and deep expertise in quantum algorithm development.
“We were honored to be selected by the Sanger Institute to partner in tackling some of the most complex challenges in genomics. By bringing the world’s highest performing quantum computers to this collaboration, we will help the team push the limits of genomics research with quantum algorithms and open new possibilities for health and medical science.” – Rajeeb Hazra, President and CEO of ҹɫֱ
Quantum for Biology
At the heart of this endeavor, the consortium has announced a bold central mission for the coming year: to encode and process an entire genome using a quantum computer. This achievement would be a potential world-first and provide evidence for quantum computing’s readiness for tackling real-world use cases.
Their chosen genome, the bacteriophage PhiX174, carries symbolic weight: its sequencing earned Frederick Sanger his second Nobel Prize in Chemistry in 1980. Successfully encoding this genome quantum mechanically would represent a significant milestone for both genomics and quantum computing.
Bacteriophage PhiX174, published under a Creative Commons License https://commons.wikimedia.org/wiki/File:Phi_X_174.png
Sooner than many expect, quantum computing may play an essential role in tackling genomic challenges at the very frontier of human health. The Sanger Institute and ҹɫֱ’s partnership reminds us that we may soon reach an important step forward in human health research – one that could change medicine and computational biology as dramatically as the original Human Genome Project did a quarter-century ago.
“Quantum computational biology has long inspired us at ҹɫֱ, as it has the potential to transform global health and empower people everywhere to lead longer, healthier, and more dignified lives.” – Ilyas Khan, Founder and Chief Product Officer of ҹɫֱ
Glossary of terms: Understanding how quantum computing supports complex genomic research
Algorithms: A set of rules or processes for performing calculations or solving computational problems.
Classical Computing: Computing technology based on binary information storage (bits represented as 0 or 1).
DNA Sequence: The exact order of nucleotides (A, T, C, G) within a DNA molecule.
Genome: The complete set of genetic material (DNA) present in an organism.
Graph-based Genome (Sequence Graph): A non-linear network representation of genomic sequences capturing the diversity and relationships among multiple genomes.
High Performance Compute (HPC): Advanced classical computing systems designed for handling computationally intensive tasks, simulations, and data processing.
Pangenome: A collection of multiple genome sequences representing genetic diversity within a population or species.
Precision Medicine: Tailored medical treatments based on individual genetic, environmental, and lifestyle factors.
ҹɫֱ: The world’s largest quantum computing company; ҹɫֱ systems lead the world for the rigorous Quantum Volume benchmark and were the first to offer commercial access to highly reliable “Level 2 – resilient” quantum computing.
Quantum Bit (Qubit): Basic unit of quantum information, which unlike classical bits, can exist in multiple states simultaneously (superposition).
Quantum Computing: Computing approach using quantum-mechanical phenomena (e.g., superposition, entanglement, interference) for enhanced problem-solving capabilities.
Quantum Pangenomics: Interdisciplinary field combining quantum computing with genomics to address computational challenges in analyzing genetic data and pangenomes.
Every year, the IEEE International Conference on Quantum Computing and Engineering – or IEEE Quantum Week – brings together engineers, scientists, researchers, students, and others to learn about advancements in quantum computing.
This year’s conference, from August 31st to September 5th, is being held in Albuquerque, New Mexico, a burgeoning epicenter for quantum technology innovation and home to our new location that will support ongoing collaborative efforts to advance the photonics technologies critical to furthering our product development.
Throughout IEEE Quantum Week, our quantum experts will be on-site to share insights on upgrades to our hardware, enhancements to our software stack, our path to error correction, and more.
Meet our team at Booth #507 and join the sessions below to discover how ҹɫֱ is forging the path to fault-tolerant quantum computing with our integrated full stack.
September 2nd
Quantum Software 2.1: Open Problems, New Ideas, and Paths to Scale 1:15 – 2:10pm MDT | Mesilla
We recently shared the details of our new software stack for our next-generation systems, including Helios (launching in 2025). ҹɫֱ’s Agustín Borgna will deliver a lightning talk introducing Guppy, our new, open-source programming language based on Python, one of the most popular general-purpose programming languages for classical computing.
September 3rd
PAN08: Progress and Platforms in the Era of Reliable Quantum Computing 1:00 – 2:30pm MDT | Apache
We are entering the era of reliable quantum computing. Across the industry, quantum hardware and software innovators are enabling this transformation by creating reliable logical qubits and building integrated technology stacks that span the application layer, middleware and hardware. Attendees will hear about current and near-term developments from Microsoft, ҹɫֱ and Atom Computing. They will also gain insights into challenges and potential solutions from across the ecosystem, learn about Microsoft’s qubit-virtualization system, and get a peek into future developments from ҹɫֱ and Microsoft.
BOF03: Exploring Distributed Quantum Simulators on Exa-scale HPC Systems 3:00 – 4:30pm MDT | Apache
The core agenda of the session is dedicated to addressing key technical and collaborative challenges in this rapidly evolving field. Discussions will concentrate on innovative algorithm design tailored for HPC environments, the development of sophisticated hybrid frameworks that seamlessly combine classical and quantum computational resources, and the crucial task of establishing robust performance benchmarks on large-scale CPU/GPU HPC infrastructures.
September 4th
PAN11: Real-time Quantum Error Correction: Achievements and Challenges 1:00 – 2:30pm MDT | La Cienega
This panel will explore the current state of real-time quantum error correction, identifying key challenges and opportunities as we move toward large-scale, fault-tolerant systems. Real-time decoding is a multi-layered challenge involving algorithms, software, compilation, and computational hardware that must work in tandem to meet the speed, accuracy, and scalability demands of FTQC. We will examine how these challenges manifest for multi-logical qubit operations, and discuss steps needed to extend the decoding infrastructure from intermediate-scale systems to full-scale quantum processors.
September 5th
Keynote by NVIDIA 8:00 – 9:30am MDT | Kiva Auditorium
During his keynote talk, NVIDIA’s Head of Quantum Computing Product, Sam Stanwyck, will detail our partnership to fast-track commercially scalable quantum supercomputers. Discover how ҹɫֱ and NVIDIA are pushing the boundaries to deliver on the power of hybrid quantum and classical compute – from integrating NVIDIA’s CUDA-Q platform with access to ҹɫֱ’s industry-leading hardware to the recently announced NVIDIA Accelerated Quantum Research Center (NVAQC).
Featured Research at the IEEE Poster Session:
Visible Photonic Component Development for Trapped-Ion Quantum Computing September 2nd from 6:30 - 8:00pm MDT | September 3rd from 9:30 - 10:00am MDT | September 4th from 11:30 - 12:30pm MDT Authors: Elliot Lehman, Molly Krogstad, Molly P. Andersen, Sara Campbell, Kirk Cook, Bryan DeBono, Christopher Ertsgaard, Azure Hansen, Duc Nguyen, Adam Ollanik, Daniel Ouellette, Michael Plascak, Justin T. Schultz, Johanna Zultak, Nicholas Boynton, Christopher DeRose, Michael Gehl, and Nicholas Karl
Scaling Up Trapped-Ion Quantum Processors with Integrated Photonics September 2nd from 6:30 - 8:00pm MDT and 2:30 - 3:00pm MDT | September 4th from 9:30 - 10:00am MDT Authors: Molly Andersen, Bryan DeBono, Sara Campbell, Kirk Cook, David Gaudiosi, Christopher Ertsgaard, Azure Hansen, Todd Klein, Molly Krogstad, Elliot Lehman, Gregory MacCabe, Duc Nguyen, Nhung Nguyen, Adam Ollanik, Daniel Ouellette, Brendan Paver, Michael Plascak, Justin Schultz and Johanna Zultak
Research Collaborations with the Local Ecosystem
In a partnership that is part of a long-standing relationship with Los Alamos National Laboratory, we have been working on new methods to make quantum computing operations more efficient, and ultimately, scalable.
Learn more in our Research Paper:
Our teams also collaborated with Sandia National Laboratories, demonstrating our leadership in benchmarking. In this paper, we implemented a technique devised by researchers at Sandia to measure errors in mid-circuit measurement and reset. Understanding these errors helps us reduce them, while helping our customers understand what to expect when using our hardware.
We’re not just catching up to classical computing, we’re evolving from it
From machine learning to quantum physics, tensor networks have been quietly powering the breakthroughs that will reshape our society. Originally developed by the legendary Nobel laureate Roger Penrose, they were first used to tackle esoteric problems in physics that were previously unsolvable.
Today, tensor networks have become indispensable in a huge number of fields, including both classical and quantum computing, where they are used everywhere from quantum error correction (QEC) decoding to quantum machine learning.
In a recent paper, we teamed up with luminaries from the University of British Columbia, California Institute of Technology, University of Jyväskylä, KBR Inc, NASA, Google Quantum AI, NVIDIA, JPMorgan Chase, the University of Sherbrooke, and Terra Quantum AG to provide a comprehensive overview of the use of tensor networks in quantum computing.
Standing on the shoulders of giants
Part of what drives our leadership in quantum computing is our commitment to building the best scientific team in the world. This is precisely why we hired Dr. Reza Haghshenas, one of the world’s leading experts in tensor networks, and a co-author on the paper.
Dr. Haghshenas has been researching tensor networks for over a decade across both academia and industry. He did postdoctoral work under Dr. Garnet Chan at Caltech, a leading figure in the use of tensor networks for quantum physics and chemistry.
“Working with Dr. Garnet Chan at Caltech was a formative experience for me”, remarked Dr. Haghshenas. “While there, I contributed to the development of quantum simulation algorithms and advanced classical methods like tensor networks to help interpret and simulate many-body physics.”
Since joining ҹɫֱ, Dr. Haghshenas has led projects that bring tensor network methods into direct collaboration with experimental hardware teams — exploring quantum magnetism on real quantum devices and helping demonstrate early signs of quantum advantage. He also contributes to open-source tensor network software, helping the broader research community access these methods.
Dr. Haghshenas’ work sits in a broad and vibrant ecosystem exploring novel uses of tensor networks. Collaborations with researchers like Dr. Chan at Caltech, as well as with NVIDIA, have brought GPU-accelerated tools to bear at the forefront of applying tensor networks to quantum chemistry, quantum physics, and quantum computing.
A powerful simulation tool
Of particular interest to those of us in quantum computing, the best methods (that we know of) for simulating quantum computers with classical computers rely on tensor networks. Tensor networks provide a natural way of representing the entanglement in a quantum algorithm and how it spreads – something that is crucial yet generally difficult for classical algorithms to capture. In fact, it is partly tensor networks’ ability to represent entanglement that makes them so powerful for quantum simulation. Importantly, it is our in-house expertise with tensor networks that makes us confident we are indeed moving past classical capabilities.
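To give a flavor of why tensor networks fit this job so well, here is a small self-contained toy (our own sketch, not ҹɫֱ’s production simulator): an n-qubit state stored as a matrix product state, where memory is governed by the bond dimension – a direct measure of entanglement – rather than by 2^n amplitudes:

```python
import numpy as np

def product_mps(n):
    """An n-qubit |0...0> state as a matrix product state: one (1, 2, 1) tensor per qubit."""
    site = np.zeros((1, 2, 1)); site[0, 0, 0] = 1.0
    return [site.copy() for _ in range(n)]

def apply_one_qubit(mps, gate, i):
    """A single-qubit gate never changes the bond dimension."""
    mps[i] = np.einsum("ab,lbr->lar", gate, mps[i])

def apply_two_qubit(mps, gate, i, max_bond=None):
    """Contract sites i, i+1, apply the 4x4 gate, then split with an SVD.
    The kept singular values set the new bond dimension -- i.e. the entanglement."""
    left, right = mps[i], mps[i + 1]
    theta = np.einsum("lar,rbs->labs", left, right)                 # shape (L, 2, 2, R)
    L, _, _, R = theta.shape
    theta = np.einsum("cdab,labs->lcds", gate.reshape(2, 2, 2, 2), theta)
    u, s, vt = np.linalg.svd(theta.reshape(L * 2, 2 * R), full_matrices=False)
    if max_bond is not None:                                        # truncate = approximate
        u, s, vt = u[:, :max_bond], s[:max_bond], vt[:max_bond]
    mps[i] = u.reshape(L, 2, -1)
    mps[i + 1] = (np.diag(s) @ vt).reshape(-1, 2, R)

# Example: a Hadamard then a CNOT creates a Bell pair; the bond dimension grows to 2.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.eye(4)[[0, 1, 3, 2]]
mps = product_mps(2)
apply_one_qubit(mps, H, 0)
apply_two_qubit(mps, CNOT, 0)
print("bond dimension across the cut:", mps[0].shape[2])            # -> 2
```

Truncating the singular values caps the bond dimension, which is exactly the approximation that makes weakly entangled circuits cheap to simulate classically and highly entangled ones hard.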
A theory of evolution
Tensor networks are not only crucial to cutting-edge simulation techniques. At ҹɫֱ, we're working on understanding and implementing quantum versions of classical tensor network algorithms, from quantum matrix product states to holographic simulation methods. In doing this, we are leveraging decades of classical algorithm development to advance quantum computing.
A topic of growing interest is the role of tensor networks in QEC, particularly in a process known as decoding. QEC works by encoding information into an entangled state of multiple qubits and using syndrome measurements to detect errors. These measurements must then be decoded to identify the specific error and determine the appropriate correction. This decoding step is challenging—it must be both fast (within the qubit’s coherence time) and accurate (correctly identifying and fixing errors). Tensor networks are emerging as one of the most promising approaches for tackling this task.
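As a minimal illustration of what “decoding” means here, the snippet below decodes the three-qubit bit-flip repetition code with a simple lookup table – a deliberately simpler stand-in for the tensor-network decoders discussed above, but it shows the syndrome-to-correction step:

```python
# Syndrome = outcomes of the two parity checks Z0Z1 and Z1Z2.
# Each syndrome points to the single-qubit flip that most likely caused it.
DECODER = {
    (0, 0): None,   # no error detected
    (1, 0): 0,      # flip on qubit 0
    (1, 1): 1,      # flip on qubit 1
    (0, 1): 2,      # flip on qubit 2
}

def syndrome(bits):
    """Measure the two parity checks without looking at the encoded bit itself."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def decode(bits):
    """Identify the most likely error from the syndrome and undo it."""
    corrected = list(bits)
    flip = DECODER[syndrome(bits)]
    if flip is not None:
        corrected[flip] ^= 1
    return corrected

noisy = [0, 1, 0]      # logical 0 encoded as 000, with qubit 1 flipped by noise
print(decode(noisy))   # -> [0, 0, 0]
```

Real codes have far more qubits and noisy syndrome measurements, which is where tensor-network decoders earn their keep.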
Looking forward (and backwards, and sideways...)
Tensor networks are more than just a powerful computational tool — they are a bridge between classical and quantum thinking. As this new paper shows, the community’s understanding of tensor networks has matured into a robust foundation for advancing quantum computing, touching everything from simulation and machine learning to error correction and circuit design.
At ҹɫֱ, we see this as an evolutionary step, not just in theory, but in practice. By collaborating with top minds across academia and industry, we're charting a path forward that builds on decades of classical progress while embracing the full potential of quantum mechanics. This transition is not only conceptual but algorithmic, advancing how we formulate and implement methods that make efficient use of both classical and quantum computing. Tensor networks aren’t just helping us keep pace with classical computing; they’re helping us to transcend it.