Quantinuum

Quantinuum is developing new frameworks for artificial intelligence

February 2, 2024

How do machines “learn”? 

While recent years have seen incredible advances in Artificial Intelligence (AI), no one really knows how these ‘first-gen’ systems actually work. New work at Quantinuum is helping to develop different frameworks for AI that we can understand – making it interpretable and accountable, and therefore far more fit for purpose.

The current fascination with AI systems built around generative Large Language Models (LLMs) is entirely understandable, but lost amid the noise and excitement is the simple fact that AI tech in its current form is basically a “black box” that we can’t look into or examine in any meaningful way. This is because, when computer scientists first set out to make machines that could ‘think’ in a human-like way, they turned to our best model of a thinking machine: the human brain. The human brain essentially consists of neural networks, and so computer scientists developed artificial neural networks.

However, just as we don’t fully understand how human intelligence works, it’s also true that we don’t really understand how current artificial intelligence works – neural networks are notoriously difficult to interpret and understand. This is broadly described as the “interpretability” issue in AI. 

It is self-evident that interpretability is crucial for all kinds of reasons – AI has the power to cause serious harm alongside immense good. It is critical that users understand why a system is making the decisions it does. When we read and hear about ‘safety concerns’ with AI systems, interpretability and accountability are key issues.

At Quantinuum we have been working on this issue for some time – and we began well before AI systems such as generative LLMs became fashionable. Our AI team, based in Oxford, has been focused on developing frameworks for “compositional models” of artificial intelligence. Our aim is to build artificial intelligence that is interpretable and accountable. We do this in part by using a branch of mathematics called “category theory” that has been applied to everything from classical computer programming to neuroscience.

Category theory has proven to be a sort of “Rosetta stone”, as John Baez put it, for understanding our universe in an expansive sense – category theory is helpful for things as seemingly disparate as physics and cognition. In a very general sense, categories represent things and ways to go between things, or in other words, a general science of systems and processes. Using this basic framework to understand cognition, we can build new artificial intelligences that are more useful to us – and we can build them on quantum computers, which promise remarkable computing power.
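To make the “things and ways to go between things” picture concrete, here is a minimal illustration in plain Python, where objects are types and morphisms are functions between them. This is only a toy picture of composition and identity, not the categorical machinery used in our research.

```python
# Toy illustration: objects as Python types, morphisms as functions between them.
# A category only requires composition and identities obeying the usual laws.

def compose(g, f):
    """Morphism composition: (g after f)(x) = g(f(x))."""
    return lambda x: g(f(x))

def identity(x):
    """The identity morphism on any object."""
    return x

length = len                        # a morphism: str -> int
halve = lambda n: n / 2             # a morphism: int -> float

pipeline = compose(halve, length)   # str -> float, built purely by composition
assert pipeline("category") == 4.0
assert compose(length, identity)("abc") == length("abc")  # identity law
```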

Our AI team, led by Dr. Stephen Clark, Head of AI at Quantinuum, has published a paper applying these concepts to image recognition. They used their compositional quantum framework for cognition and AI to demonstrate how concepts like shape, color, size, and position can be learned by machines – including quantum computers.

“In the current environment, with accountability and transparency being talked about in artificial intelligence, we have a body of research that really matters, and which will fundamentally affect the next generation of AI systems. This will happen sooner than many anticipate,” said Ilyas Khan, Quantinuum’s founder.

This work is part of a growing body of research in quantum computing and artificial intelligence, which holds great promise for our future. As the authors say, “the advantages this may bring, especially with the advent of larger, fault-tolerant quantum computers in the future, is still being worked out by the research community, but the possibilities are intriguing at worst and transformational at best.”

About Quantinuum

Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents.

Blog
May 12, 2025
Quantinuum Dominates the Quantum Landscape: New World Record in Quantum Volume

Back in 2020, we committed to increasing our Quantum Volume (QV), a measure of computational power, by 10x per year.

Today, we’re pleased to share that we’ve followed through on our commitment: Our System Model H2 has reached a Quantum Volume of 2²³ = 8,388,608, proving not just that we always do what we say, but that our quantum computers are leading the world forward. 

The QV benchmark was developed by IBM to represent a machine’s overall performance, accounting for things like qubit count, coherence times, qubit connectivity, and error rates. In short:

“the higher the Quantum Volume, the higher the potential for exploring solutions to real world problems across industry, government, and research."
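For readers who want the benchmark unpacked, its operational definition can be summarized roughly as follows (a hedged paraphrase of IBM’s protocol, omitting the precise statistical confidence requirement):

```latex
% Quantum Volume, stated informally: find the largest square random circuit
% (width = depth = m) whose measured heavy-output probability exceeds 2/3,
% then report QV = 2^m. The result announced above corresponds to m = 23.
\[
  \mathrm{QV} = 2^{m},
  \qquad
  m = \max\Bigl\{\, n \;:\;
      \Pr\bigl[\text{heavy output of random } n\text{-qubit, depth-}n\text{ circuits}\bigr]
      > \tfrac{2}{3} \Bigr\}
\]
% Hence 2^{23} = 8{,}388{,}608 for the announcement above.
```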

Our announcement today is precisely what sets us apart from the competition. No one else has been bold enough to make a similar promise on such a challenging metric – and no one else has ever completed a five-year goal like this.

We chose QV because we believe it’s a great metric. For starters, it isn’t easily gamed, unlike some other metrics in the ecosystem. It also brings together the metrics that matter most in the NISQ era for moving toward fault tolerance, such as gate fidelity and connectivity.

Our path to achieve a QV of over 8 million was led in part by Dr. Charlie Baldwin, who studied under the legendary Ivan H. Deutsch. Dr. Baldwin has made his name as a globally renowned expert in quantum hardware performance over the past decade, and it is because of his leadership that we don’t just claim to be the best, but that we can prove we are the best. 

Figure 1: All known published Quantum Volume measurements.

Alongside the world’s biggest Quantum Volume, we have industry-leading specs more broadly. To that point, the table below breaks down the leading commercial spec for each quantum computing architecture.

Table 1: Leading commercial spec for each listed architecture or demonstrated capabilities on commercial hardware.

We’ve never shied away from benchmarking our machines, because we know the results will be impressive. It is this provably world-leading performance that has enabled the results we have demonstrated to date.

As we look ahead to our next generation system, Helios, Quantinuum’s Senior Director of Engineering, Dr. Brian Neyenhuis, reflects: “We finished our five-year commitment to Quantum Volume ahead of schedule, showing that we can do more than just maintain performance while increasing system size. We can improve performance while scaling.”

Helios’ performance will exceed that of our previous machines, meaning that Quantinuum will continue to lead in performance while following through on our promises.

As the undisputed industry leader, we’re racing against no one other than ourselves to deliver higher performance and to better serve our customers.

Blog
May 1, 2025
GenQAI: A New Era at the Quantum-AI Frontier

At the heart of quantum computing’s promise lies the ability to solve problems that are fundamentally out of reach for classical computers. One of the most powerful ways to unlock that promise is through a novel approach we call Generative Quantum AI, or GenQAI. A key element of this approach is the Generative Quantum Eigensolver (GQE).

GenQAI is based on a simple but powerful idea: combine the unique capabilities of quantum hardware with the flexibility and intelligence of AI. By using quantum systems to generate data, and then using AI to learn from and guide the generation of more data, we can create a powerful feedback loop that enables breakthroughs in diverse fields.

Unlike classical systems, our quantum processing unit (QPU) produces data that is extremely difficult, if not impossible, to generate classically. That gives us a unique edge: we’re not just feeding an AI more text from the internet; we’re giving it new and valuable data that can’t be obtained anywhere else.

The Search for Ground State Energy

One of the most compelling challenges in quantum chemistry and materials science is computing the properties of a molecule’s ground state. For any given molecule or material, the ground state is its lowest energy configuration. Understanding this state is essential for understanding molecular behavior and designing new drugs or materials.

The problem is that accurately computing this state for anything but the simplest systems is incredibly complicated. You cannot do it by brute force – testing every possible state and measuring its energy – because the number of possible quantum states grows double-exponentially, making exhaustive search hopeless. This is why we need an intelligent way to search for the ground state energy and other molecular properties.
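A quick back-of-the-envelope sketch shows why (the 10-level discretization of each amplitude is an assumption made purely for illustration):

```python
# Rough arithmetic behind the "double-exponential" claim above.
# Assumption for illustration only: each amplitude is discretized to 10 levels.
for n_qubits in (10, 50, 100):
    dim = 2 ** n_qubits  # Hilbert-space dimension: exponential in the number of qubits
    # An exhaustive grid over the whole state space would need ~10**dim points:
    # exponential in dim, which is itself exponential in n, i.e. double-exponential.
    print(f"{n_qubits} qubits: dimension 2^{n_qubits} ≈ {float(dim):.2e}; "
          f"brute-force grid ≈ 10^(2^{n_qubits}) candidate states")
```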

That’s where GQE comes in. GQE is a methodology that uses data from our quantum computers to train a transformer. The transformer then proposes promising trial quantum circuits – ones likely to prepare states with low energy. You can think of it as an AI-guided search engine for ground states. The novelty is in how our transformer is trained from scratch using data generated on our hardware.

Here's how it works (a toy sketch of the loop follows the list):

  • We start with a batch of trial quantum circuits, which are run on our QPU.
  • Each circuit prepares a quantum state, and we measure that state’s energy with respect to the Hamiltonian.
  • Those measurements are then fed back into a transformer model (the same architecture behind models like GPT-2) to improve its outputs.
  • The transformer generates a new distribution of circuits, biased toward ones that are more likely to find lower energy states.
  • We sample a new batch from the distribution, run them on the QPU, and repeat.
  • The system learns over time, narrowing in on the true ground state.
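As a toy sketch of that loop, the snippet below replaces the transformer with a simple Gaussian proposal distribution and the QPU energy measurement with a classical stand-in cost function. Every name and number here is illustrative, not Quantinuum’s actual API or model – the point is only the shape of the generate, measure, retrain cycle.

```python
# Schematic generate -> measure -> retrain loop, mirroring the bullets above.
# The "generative model" is a Gaussian proposal and the "QPU measurement" is a
# toy cost function -- stand-ins chosen only to make the feedback loop runnable.
import numpy as np

rng = np.random.default_rng(1)

def measure_energy(params):
    # Stand-in for running a circuit on the QPU and estimating <H>;
    # here the lowest-"energy" configuration sits at params = (0.3, -0.7).
    return float(np.sum((params - np.array([0.3, -0.7])) ** 2))

mean, std = np.zeros(2), np.ones(2)   # parameters of the proposal "model"

for step in range(30):
    batch = rng.normal(mean, std, size=(64, 2))               # 1. generate trial "circuits"
    energies = np.array([measure_energy(c) for c in batch])   # 2. "measure" each one
    elite = batch[np.argsort(energies)[:8]]                   # 3. keep the lowest-energy samples
    mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-3  # 4. bias the generator toward them
    # 5. repeat: the proposal narrows in on the minimum over successive batches

print("best parameters:", mean, "energy:", measure_energy(mean))
```

In the GQE work itself, the proposal distribution is a transformer trained on circuit–energy pairs generated on the QPU, but the overall loop has the same shape.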

To test our system, we tackled a benchmark problem: finding the ground state energy of the hydrogen molecule (H₂). This problem has a known solution, which allows us to verify that our setup works as intended. Our GQE system successfully found the ground state energy to within chemical accuracy.
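For context, “chemical accuracy” conventionally means agreement to within 1 kcal/mol, roughly 1.6 millihartree. A verification check of the kind described would look like the sketch below, where both energy values are assumed placeholders rather than our actual measured numbers.

```python
# Illustrative "within chemical accuracy" check. The reference value is the
# commonly quoted full-CI/STO-3G energy for H2 near equilibrium (~ -1.137 hartree);
# the estimate is a made-up placeholder, not a reported result.
CHEMICAL_ACCURACY = 1.6e-3   # 1 kcal/mol expressed in hartree (approx.)
e_reference = -1.1373        # assumed exact benchmark energy, in hartree
e_estimate = -1.1369         # placeholder output of a GQE-style run, in hartree

assert abs(e_estimate - e_reference) < CHEMICAL_ACCURACY
print("estimate agrees with the benchmark to within chemical accuracy")
```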

To our knowledge, we’re the first to solve this problem using a combination of a QPU and a transformer, marking the beginning of a new era in computational chemistry.

The Future of Quantum Chemistry

The idea of using a generative model guided by quantum measurements can be extended to a whole class of problems – from materials discovery to, potentially, even drug design.

By combining quantum computing with AI, we can unlock the full power of both. Our quantum processors can generate rich data that was previously unobtainable; an AI can then learn from that data. Together, they can tackle problems neither could solve alone.

This is just the beginning. We’re already looking at applying GQE to more complex molecules – ones whose properties can’t currently be computed with existing methods – and we’re exploring how this methodology could be extended to real-world use cases. This opens many new doors in chemistry, and we are excited to see what comes next.

Blog
April 11, 2025
Quantinuum’s partnership with RIKEN bears fruit

Last year, we joined forces with RIKEN, Japan's largest comprehensive research institution, to install our hardware at RIKEN’s campus in Wako, Saitama. This deployment is part of RIKEN’s project to build a quantum-HPC hybrid platform consisting of high-performance computing systems, such as the supercomputer Fugaku and Quantinuum systems.

Today marks the first of many breakthroughs to come from this international supercomputing partnership. The team from RIKEN and Quantinuum joined up with researchers from Keio University to show that quantum information can be delocalized (scrambled) using a quantum circuit modeled after periodically driven systems.

"Scrambling" of quantum information happens in many quantum systems, from those found in complex materials to black holes.  Understanding information scrambling will help researchers better understand things like thermalization and chaos, both of which have wide reaching implications.

To visualize scrambling, imagine a set of particles (say bits in a memory), where one particle holds specific information that you want to know. As time marches on, the quantum information will spread out across the other bits, making it harder and harder to recover the original information from local (few-bit) measurements.
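A small numerical toy makes that picture concrete: prepare two copies of a register that differ only on one qubit, scramble both with the same random two-qubit gates, and watch how quickly the difference becomes invisible to measurements on that single qubit. The sketch below is an illustrative brickwork random circuit, not the periodically driven circuit studied in the paper.

```python
# Toy scrambling demo: local distinguishability of one qubit decays as random
# two-qubit gates spread its information across the register.
import numpy as np

def haar_unitary(dim, rng):
    # Haar-random unitary via QR of a complex Gaussian matrix
    z = (rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def apply_two_qubit(state, gate, i, j, n):
    # Apply a 4x4 gate to qubits i and j of an n-qubit statevector
    psi = np.moveaxis(state.reshape((2,) * n), (i, j), (0, 1)).reshape(4, -1)
    psi = (gate @ psi).reshape((2, 2) + (2,) * (n - 2))
    return np.moveaxis(psi, (0, 1), (i, j)).reshape(-1)

def qubit0_rdm(state, n):
    # Reduced density matrix of qubit 0 (trace out the rest)
    psi = state.reshape(2, -1)
    return psi @ psi.conj().T

n, layers, rng = 6, 6, np.random.default_rng(0)
psi_a = np.zeros(2**n, complex); psi_a[0] = 1.0           # |000000>
psi_b = np.zeros(2**n, complex); psi_b[2**(n - 1)] = 1.0  # |100000>: differs only on qubit 0

for layer in range(layers):
    for i in range(layer % 2, n - 1, 2):                  # brickwork gate pattern
        u = haar_unitary(4, rng)
        psi_a = apply_two_qubit(psi_a, u, i, i + 1, n)
        psi_b = apply_two_qubit(psi_b, u, i, i + 1, n)
    diff = qubit0_rdm(psi_a, n) - qubit0_rdm(psi_b, n)
    dist = 0.5 * np.sum(np.linalg.svd(diff, compute_uv=False))  # trace distance
    print(f"layer {layer + 1}: qubit-0 trace distance = {dist:.3f}")
```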

While many classical techniques exist for studying complex scrambling dynamics, quantum computing has long been seen as a promising tool for these studies, thanks to its inherently quantum nature and the ease with which it implements quantum features like entanglement. The joint team proved that to be true with their latest result, which shows not only that scrambling states can be generated on a quantum computer, but that they behave as expected and are ripe for further study.

Thanks to this new understanding, we now know that the preparation, verification, and application of a scrambling state, a key quantum information state, can be consistently realized using currently available quantum computers. Read the paper, and read more about our partnership with RIKEN here.
