Who is responsible for migrating your systems to quantum-safe algorithms? Is it your vendors or your cybersecurity team?
The customers I speak to are not always clear on this question. But from my perspective, the answer is your cybersecurity team. They have the ultimate responsibility of ensuring your organization is secure in a post-quantum future. However, they will need a lot of help from your technology vendors.
This article outlines what you should expect (or demand) from your vendors, and what remains the responsibility of your cyber team.
A general vendor does not offer specific cryptographic services to you. Instead, they provide a business service that uses cryptography to maintain security and resilience.
Consider the enterprise software platform SAP. It is no doubt riddled with cryptography, yet its purpose is to manage your business operations. Therefore, SAP's focus will be on migrating its underlying cryptography to post-quantum technologies while maintaining your business services without interruption.
You should expect a general vendor to share a quantum-safe migration roadmap with you, complete with timelines. They should explain the activities they will complete to address the quantum threat, and how they will impact you as a user.
Although your vendor will not begin migration until the NIST post-quantum algorithms are standardised next year, you should expect them to already have a roadmap in place. If they don’t, this is a cause for concern.
Some vendors may already offer a test version of their product, which uses post-quantum algorithms. This allows your cyber team to experiment with the impact on performance or interoperability.
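When such a test build is available, even a simple timing harness helps your team quantify the overhead before committing to a production rollout. A minimal sketch, with stand-in functions where your real classical and post-quantum handshakes would go (the sleep-based stand-ins are placeholders, not measurements):

```python
import statistics
import time

def benchmark(handshake, label, runs=50):
    """Time repeated handshakes and report the median latency.

    `handshake` is any zero-argument callable; in a real test you would
    pass your connection setup against the vendor's classical build and
    against their post-quantum test build.
    """
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        handshake()
        samples.append(time.perf_counter() - start)
    median_ms = statistics.median(samples) * 1000
    print(f"{label}: median {median_ms:.2f} ms over {runs} runs")
    return median_ms

# Stand-ins: replace with real classical / post-quantum handshakes.
classical = lambda: time.sleep(0.001)
post_quantum = lambda: time.sleep(0.002)  # larger PQC keys often add overhead

baseline = benchmark(classical, "classical handshake")
candidate = benchmark(post_quantum, "post-quantum handshake")
print(f"overhead: {candidate / baseline:.1f}x")
```

The absolute numbers matter less than the ratio: a consistent overhead measured in your own environment is what you bring back to the vendor conversation.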
A cryptographic vendor provides you with services directly related to cryptography, such as network security, data encryption or key management.
The expectations that apply to general vendors also apply to cryptographic vendors. However, you will need more information from your cryptographic vendors to pull off a smooth migration.
Cryptographic vendors must provide you with detailed guidance on how to migrate between their current product suite and the new versions that use post-quantum algorithms. For instance, you might need to understand how to re-process legacy data so that it’s protected by the new algorithms. Similarly, network security vendors will need to provide detailed instructions on migrating traffic flows while maintaining uptime.
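The re-processing of legacy data can be sketched as a decrypt-then-re-encrypt pipeline. The sketch below uses a toy hash-derived XOR keystream purely as a stand-in for the vendor's legacy and post-quantum algorithms; the pipeline structure, not the cipher, is the point:

```python
import hashlib

def keystream_xor(data: bytes, key: bytes, algo: str) -> bytes:
    """Toy symmetric cipher: XOR with a SHA-256-derived keystream.
    Stands in for a real legacy or post-quantum cipher; because XOR is
    its own inverse, the same function encrypts and decrypts."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(
            key + algo.encode() + counter.to_bytes(8, "big")
        ).digest()
        out.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def reencrypt(ciphertext: bytes, old_key: bytes, new_key: bytes) -> bytes:
    """Decrypt under the legacy algorithm, re-encrypt under the new one."""
    plaintext = keystream_xor(ciphertext, old_key, "legacy")
    return keystream_xor(plaintext, new_key, "post-quantum")

record = b"customer ledger entry"
legacy_ct = keystream_xor(record, b"old-key", "legacy")
migrated_ct = reencrypt(legacy_ct, b"old-key", b"new-key")

# The migrated record now decrypts only under the new key and algorithm.
assert keystream_xor(migrated_ct, b"new-key", "post-quantum") == record
```

At scale, the hard questions are operational rather than cryptographic: where the plaintext briefly exists, how to checkpoint a partially migrated archive, and how to retire the old keys. These are exactly the details to press your vendor on.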
I would expect cryptographic vendors to be far more hands-on during your migration. Expect to have discussions of your deployment architecture with their account management teams, and don’t be afraid to ask the hard technical questions.
The flow of information will not be one-way. You should be prepared to share information with your vendors to help them help you.
Having your migration plan developed, at least at a high level, will be critical for meaningful conversations with your vendors. This will allow you to contrast their timelines for migration versus your expectations.
Vendors will also benefit from understanding how you use their products in conjunction with products from other vendors. The goal here is to spot edge cases, where you risk business downtime because the vendor wasn’t anticipating how you were using their product.
Finally, make sure you know the configuration of your deployment. The devil is in the details when it comes to planning migration, so be prepared to tell your vendor which features you are using and how you’ve configured product security settings.
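As a starting point, you can enumerate what your own software stack negotiates by default. A small sketch using Python's standard `ssl` module (treating (EC)DHE suites as "classical key exchange" is a simplification for illustration):

```python
import ssl

# Build the interpreter's default client context and dump the security
# settings a vendor would need to know about during migration planning.
ctx = ssl.create_default_context()

print("minimum TLS version:", ctx.minimum_version.name)
enabled = [c["name"] for c in ctx.get_ciphers()]
print(f"{len(enabled)} cipher suites enabled, e.g. {enabled[0]}")

# Flag suites whose names indicate classical (EC)DHE key exchange; these
# are the ones that will need hybrid or post-quantum replacements.
classical_kx = [name for name in enabled if "ECDHE" in name or "DHE" in name]
print(f"{len(classical_kx)} suites use classical (EC)DHE key exchange")
```

The same exercise, repeated per product and per environment, produces the configuration inventory your vendor conversations will rely on.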
While your vendors should provide a lot of help and guidance, they are not responsible for everything.
Your cybersecurity team will be responsible for planning your overall migration strategy, including prioritising which systems to migrate first. This will involve understanding the relative importance of business systems, and the requirements for data security.
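A common heuristic for this prioritisation is Mosca's inequality: a system is already at risk if its data's shelf life plus its migration time exceeds the estimated time until a cryptographically relevant quantum computer exists, because traffic can be harvested now and decrypted later. A minimal sketch, with hypothetical systems and an assumed ten-year horizon:

```python
# Mosca's inequality: if (data shelf life x) + (migration time y) exceeds
# the estimated years z until a cryptographically relevant quantum
# computer, the system is already exposed to "harvest now, decrypt later".

QUANTUM_HORIZON_YEARS = 10  # assumption: substitute your own threat estimate

systems = [
    # (name, data shelf life in years, estimated migration time in years)
    ("HR records archive", 25, 3),
    ("Public web frontend", 1, 2),
    ("Payment gateway", 7, 4),
]

def at_risk(shelf_life, migration_time, horizon=QUANTUM_HORIZON_YEARS):
    return shelf_life + migration_time > horizon

# Rank by x + y: the systems with long-lived data and slow migrations
# need to move first.
ranked = sorted(systems, key=lambda s: s[1] + s[2], reverse=True)
for name, x, y in ranked:
    flag = "MIGRATE FIRST" if at_risk(x, y) else "monitor"
    print(f"{name}: x+y = {x + y} years -> {flag}")
```

The system names and numbers here are illustrative; the value of the exercise is forcing each business owner to put a shelf life on their data.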
While vendors should provide some guidance for interoperability, ultimately the IT and cybersecurity teams are responsible for ensuring updates to one service do not impact another service.
Finally, you must ensure your IT and cyber teams are leading the conversation with your end users. You cannot rely on vendors to manage the communication with your customers and internal stakeholders.
A good vendor will already be talking to you about their plans for quantum-safe migration.
For mass-market products, this might be via blog posts and thought-leadership articles. For products with a deeper client/vendor relationship, the topic of quantum-safe migration should already be appearing in quarterly business reviews.
For cryptographic vendors, you should also expect test versions to be available today, allowing for experimentation.
Overall, if any vendor is not able to talk about their plans for quantum-safe migration today, even at a high level, then you should flag this as a cause for concern.
Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents.
Quantum computing companies are poised to exceed $1 billion in revenues by the close of 2025, according to McKinsey & Company, underscoring how today’s quantum computers are already delivering customer value in their current phase of development.
This figure is projected to reach upwards of $37 billion by 2030, rising in parallel with escalating demand, as well as with the scale of the machines and the complexity of the problems they will be able to address.
Several systems on the market today are fault-tolerant by design, meaning they are capable of suppressing error-causing noise to yield reliable calculations. However, the full potential of quantum computing to tackle problems of true industrial relevance, in areas like medicine, energy, and finance, remains contingent on an architecture that supports a fully fault-tolerant universal gate set with repeatable error correction—a capability that, until now, has eluded the industry.
Quantinuum is the first—and only—company to achieve this critical technical breakthrough, universally recognized as the essential precursor to scalable, industrial-scale quantum computing. This milestone provides us with the most de-risked development roadmap in the industry and positions us to fulfill our promise to deliver our universal, fully fault-tolerant quantum computer, Apollo, by 2029.
In this regard, Quantinuum is the first company to step from the so-called “NISQ” (noisy intermediate-scale quantum) era towards utility-scale quantum computers.
A quantum computer uses operations called gates to process information in ways that even today’s fastest supercomputers cannot. The industry typically refers to two types of gates for quantum computers: Clifford gates, which can be simulated efficiently on classical computers, and non-Clifford gates, which cannot.
A system that can run both types of gates is classified as universal and has the machinery to tackle the widest range of problems. Without non-Clifford gates, a quantum computer is non-universal and restricted to smaller, easier sets of tasks, and it can always be simulated by classical computers. This is like painting with a full palette of colors, versus only having one or two to work with. Simply put, a quantum computer that cannot implement non-Clifford gates is not really a quantum computer.
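The distinction can be made concrete with a little linear algebra: a Clifford gate conjugates every Pauli operator to another Pauli (which is why Clifford-only circuits are classically simulable, per the Gottesman-Knill theorem), while a non-Clifford gate such as T maps some Paulis outside the Pauli group. A pure-Python sketch:

```python
import cmath
import math

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(m):
    return [[m[j][i].conjugate() for j in range(2)] for i in range(2)]

def proportional_to(a, b, tol=1e-9):
    """True if a = c*b for some scalar c (equality up to global phase)."""
    pairs = [(x, y) for ra, rb in zip(a, b) for x, y in zip(ra, rb)]
    scale = next((x / y for x, y in pairs if abs(y) > tol), None)
    if scale is None:
        return all(abs(x) < tol for x, _ in pairs)
    return all(abs(x - scale * y) < tol for x, y in pairs)

s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]                            # Clifford (Hadamard)
T = [[1, 0], [0, cmath.exp(1j * math.pi / 4)]]   # non-Clifford
X = [[0, 1], [1, 0]]
# The Pauli group generators: I, X, Y, Z.
paulis = [[[1, 0], [0, 1]], X, [[0, -1j], [1j, 0]], [[1, 0], [0, -1]]]

def conjugate(gate, p):
    return matmul(matmul(gate, p), dagger(gate))

# The Clifford gate H maps the Pauli X to another Pauli (in fact, Z)...
assert any(proportional_to(conjugate(H, X), p) for p in paulis)
# ...while the non-Clifford T takes X outside the Pauli group entirely,
# which is what puts T beyond efficient classical simulation.
assert not any(proportional_to(conjugate(T, X), p) for p in paulis)
print("H is Clifford on X; T is not")
```

This is only the defining algebraic property, not a statement about any particular hardware; it is offered as intuition for why the non-Clifford gate is the expensive, essential ingredient.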
A fault-tolerant, or error-corrected, quantum computer detects and corrects its own errors (or faults) to produce reliable results. Quantinuum has the best and brightest scientists dedicated to keeping our systems’ error rates the lowest in the world.
For a quantum computer to be fully fault-tolerant, every operation must be error-resilient, across both Clifford and non-Clifford gates, thus performing a full gate set with error correction. While some groups have performed fully fault-tolerant gate sets in academic settings, these demonstrations were done with only a few qubits and with error rates too high for any practical use.
Today, we have published research that establishes Quantinuum as the first company to develop a complete solution for a universal, fully fault-tolerant quantum computer with repeatable error correction, and error rates low enough for real-world applications.
The first paper describes how scientists at Quantinuum used our System Model H1-1 to perfect magic state production, a crucial technique for achieving a fully fault-tolerant universal gate set. In doing so, they set a record magic state infidelity (7x10^-5), 10x better than any previously reported result.
Our simulations show that our system could reach a magic state infidelity of 10^-10, or about one error per 10 billion operations, on a larger-scale computer with our current physical error rate. We anticipate reaching 10^-14, or about one error per 100 trillion operations, as we continue to advance our hardware. This means that our roadmap is now de-risked.
Setting a record magic state infidelity was just the beginning. The paper also presents the first break-even two-qubit non-Clifford gate, demonstrating a logical error rate below the physical one. In doing so, the team set another record for two-qubit non-Clifford gate infidelity (2x10^-4, almost 10x better than our physical error rate). Putting everything together, the team ran the first circuit that used a fully fault-tolerant universal gate set, a critical moment for our industry.
In the second paper, co-authored with researchers at the University of California, Davis, we demonstrated an important technique for universal fault-tolerance called “code switching”.
Code switching describes moving quantum information between different error correcting codes. The team used the technique to demonstrate the key ingredients for universal computation, this time using a code where we had previously demonstrated full error correction and the other ingredients for universality.
In the process, the team set a new record for magic states in a distance-3 error correcting code, over 10x better than previous demonstrations with error correction. Notably, this process only cost 28 qubits. This completes, for the first time, the ingredient list for a universal gate set in a system that also has real-time and repeatable QEC.
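For readers new to the terminology, the “distance-3” idea can be illustrated with the classical repetition code. This is a deliberately simplified analogy (real quantum codes must also correct phase errors and extract syndromes without measuring the encoded data), but it shows why a distance-3 code corrects exactly one error:

```python
from collections import Counter

def encode(bit):
    """Distance-3 repetition code: one logical bit -> three physical bits."""
    return [bit] * 3

def correct(codeword):
    """Majority vote corrects any single bit-flip error."""
    return Counter(codeword).most_common(1)[0][0]

# One error: within the code's distance, decoding recovers the logical bit.
noisy = encode(1)
noisy[0] ^= 1
assert correct(noisy) == 1

# Two errors exceed what distance 3 can handle, so decoding fails.
noisy2 = encode(1)
noisy2[0] ^= 1
noisy2[1] ^= 1
assert correct(noisy2) == 0
```

In general, a distance-d code corrects up to (d - 1) // 2 errors, which is why pushing to larger distances, with correction applied repeatedly in real time, is central to scaling.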
Innovations like those described in these two papers can reduce estimates for qubit requirements by an order of magnitude, or more, bringing powerful quantum applications within reach far sooner.
With all of the required pieces now, finally, in place, we are fully equipped to become the first company to perform universal fully fault-tolerant computing, just in time for the arrival of Helios, our next-generation system launching this year, which is very likely to remain the most powerful quantum computer on the market until the launch of its successor, Sol, arriving in 2027.
If we are to create next-gen AI that takes full advantage of the power of quantum computers, we need to start with quantum-native transformers. Today we announce concrete progress once again: Quantinuum continues to lead, advancing from theoretical models to real quantum deployment.
The future of AI won't be built on yesterday’s tech. If we're serious about creating next-generation AI that unlocks the full promise of quantum computing, then we must build quantum-native models—designed for quantum, from the ground up.
Around this time last year, we introduced Quixer, a state-of-the-art quantum-native transformer. Today, we’re thrilled to announce a major milestone: one year on, Quixer is now running natively on quantum hardware.
This marks a turning point for the industry: realizing quantum-native AI opens a world of possibilities.
Classical transformers revolutionized AI. They power everything from ChatGPT to real-time translation, computer vision, drug discovery, and algorithmic trading. Now, Quixer sets the stage for a similar leap — but for quantum-native computation. Because quantum computers differ fundamentally from classical computers, we expect a whole new host of valuable applications to emerge.
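For context on what is being reimagined, the core of a classical transformer is scaled dot-product attention. A minimal pure-Python sketch with toy dimensions (real models use batched tensor libraries and learned projection matrices):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: each query attends to all keys and
    returns a weighted mixture of the corresponding values."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# Three tokens with 2-dimensional embeddings (toy numbers).
q = k = v = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
result = attention(q, k, v)
print(result[0])
```

Every query's cost scales with the number of keys, which is one reason resource efficiency matters so much when rebuilding this mechanism from quantum algorithmic primitives rather than copying the classical arithmetic onto circuits.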
Achieving that future requires models that are efficient, scalable, and actually run on today’s quantum hardware.
That’s what we’ve built.
Until Quixer, quantum transformers were the result of a brute force “copy-paste” approach: taking the math from a classical model and putting it onto a quantum circuit. However, this approach does not account for the considerable differences between quantum and classical architectures, leading to substantial resource requirements.
Quixer is different: it’s not a translation – it's an innovation.
With Quixer, our team introduced an explicitly quantum transformer, built from the ground up using quantum algorithmic primitives. Because Quixer is tailored for quantum circuits, it's more resource efficient than most competing approaches.
As quantum computing advances toward fault tolerance, Quixer is built to scale with it.
We’ve already deployed Quixer on real-world data: genomic sequence analysis, a high-impact classification task in biotech. We're happy to report that its performance is already approaching that of classical models, even in this first implementation.
This is just the beginning.
Looking ahead, we’ll explore using Quixer anywhere classical transformers have proven useful, such as language modeling, image classification, quantum chemistry, and beyond. More excitingly, we expect quantum-specific use cases to emerge that are impossible on classical hardware.
This milestone isn’t just about one model. It’s a signal that the quantum AI era has begun, and that Quantinuum is leading the charge with real results, not empty hype.
Stay tuned. The revolution is only getting started.
Our team is participating in ISC High Performance 2025 (ISC 2025) from June 10-13 in Hamburg, Germany!
As quantum computing accelerates, so does the urgency to integrate its capabilities into today’s high-performance computing (HPC) and AI environments. At ISC 2025, meet the Quantinuum team to learn how the highest performing quantum systems on the market, combined with advanced software and powerful collaborations, are helping organizations take the next step in their compute strategy.
Quantinuum is leading the industry across every major vector: performance, hybrid integration, scientific innovation, global collaboration and ease of access.
From June 10–13, in Hamburg, Germany, visit us at Booth B40 in the Exhibition Hall or attend one of our technical talks to explore how our quantum technologies are pushing the boundaries of what’s possible across HPC.
Throughout ISC, our team will present on the most important topics in HPC and quantum computing integration—from near-term hybrid use cases to hardware innovations and future roadmaps.
Multicore World Networking Event
H1 x CUDA-Q Demonstration
HPC Solutions Forum
Whether you're exploring hybrid solutions today or planning for large-scale quantum deployment tomorrow, ISC 2025 is the place to begin the conversation.
We look forward to seeing you in Hamburg!