ORCA Pioneers Hybrid Quantum-Classical Platform for AI and Quantum Innovation at PSNC with NVIDIA CUDA-Q
Near-term and future applications of quantum processors will be hybrid, combining the unique capabilities of both quantum and classical computing. While large portions of these workflows will continue to run on classical supercomputers, integrating quantum co-processors into these environments will allow for solving problems that are beyond the reach of classical computing alone. Given the rapid pace of progress in and adoption of quantum computing, it is important to start building and evaluating the software and hardware infrastructure that will enable the ongoing development of hybrid workflows for real-world use cases.
To this end, ORCA Computing has worked with the Poznan Supercomputing and Networking Center (PSNC) and NVIDIA to bring together their expertise in AI supercomputing, quantum information science, and quantum technology to build and deploy a quantum-classical infrastructure capable of demonstrating quantum-enhanced applications.
A key result of this collaboration has been the development of a quantum-classical machine learning workflow for biological imaging. This workflow uses the NVIDIA CUDA-Q development platform [1] to leverage two ORCA PT-1 photonic quantum computing systems and two NVIDIA H100 Tensor Core GPUs within PSNC’s resource management environment. A hybrid neural network was trained across the quantum and classical processors to solve a classification task on a biological dataset. This is the first demonstration of a fully functional multi-user, multi-QPU, and multi-GPU platform, and it highlights the importance of providing state-of-the-art quantum-classical capabilities to researchers, allowing them to explore algorithms that can scale with the underlying quantum technology. More information about this demonstration can be found at [1].
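To give a flavor of what such a hybrid model can look like, the sketch below shows a toy classifier in PyTorch in which a classical encoder produces beam-splitter angles, a quantum sampler returns photon-count features, and a classical head performs the classification. The `sample_photon_counts` function is a hypothetical stand-in for a call to a PT-1 device or a CUDA-Q simulation; the actual model, dataset, and interfaces used at PSNC are not reproduced here.

```python
# Minimal sketch of a hybrid quantum-classical classifier, assuming photon
# counts sampled from a photonic QPU are used as extra features for a
# classical network. `sample_photon_counts` is a hypothetical placeholder
# for the real QPU/simulator call made through CUDA-Q or the ORCA SDK.
import torch
import torch.nn as nn


def sample_photon_counts(thetas: torch.Tensor, n_modes: int) -> torch.Tensor:
    """Placeholder boson-sampling call parameterized by beam-splitter angles.
    Returns mock per-mode photon counts; a real backend would return measured
    counts (and training would need a surrogate or shift-rule gradient)."""
    return torch.randint(0, 3, (thetas.shape[0], n_modes)).float()


class HybridClassifier(nn.Module):
    def __init__(self, n_features: int, n_modes: int, n_classes: int):
        super().__init__()
        # Classical encoder maps input features to beam-splitter angles.
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.Tanh(),
                                     nn.Linear(32, n_modes - 1))
        # Classical head consumes quantum features alongside the raw input.
        self.head = nn.Linear(n_modes + n_features, n_classes)
        self.n_modes = n_modes

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        thetas = self.encoder(x)                          # angles sent to the QPU
        q_feat = sample_photon_counts(thetas, self.n_modes)
        return self.head(torch.cat([q_feat, x], dim=-1))  # classify on the GPU


model = HybridClassifier(n_features=16, n_modes=8, n_classes=2)
logits = model(torch.randn(4, 16))  # batch of 4 examples -> (4, 2) class logits
```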
As part of this collaboration, we also took a longer view and developed new algorithms that can use this infrastructure to unlock new use cases. This work focused on the intersection of AI and quantum computing. First, we studied how quantum may one day help AI by exploring how quantum processors can be used alongside neural networks to yield high-quality results in a generative AI task. Conversely, we then investigated how AI techniques can be used to design better quantum circuits for larger-scale quantum computers.
Quantum for AI
The continued scaling of AI models to deliver performance improvements places increasing demands on AI supercomputing hardware. As the costs and energy requirements of large AI datacenters increase, there is growing interest in sustainability within supercomputing.
Working with NVIDIA and PSNC, ORCA Computing has further developed its suite of proprietary algorithms for hybrid quantum-classical generative models – with a focus on generative adversarial networks (GANs). Whereas classical GANs rely entirely on classical neural networks, hybrid GANs also draw on a quantum processor as part of a “generator” neural network. Previous work has shown that hybrid GANs can outperform purely classical GANs on several types of problems, with particularly promising results in chemistry and therapeutics [3,4]. Fundamentally built around hybrid GPU and QPU architectures, the NVIDIA CUDA-Q development platform is a natural choice for implementing algorithms such as hybrid GANs.
As part of this project, ORCA worked to build an improved hybrid StyleGAN model [5]. This was trained using 8 NVIDIA A100 Tensor Core GPUs and a quantum processor simulated by CUDA-Q’s GPU-accelerated simulation tools, operating to the specifications of the ORCA PT-2. To be released in 2025, the PT-2 will operate with 16 photons in 32 qumodes, which is a scale appropriate for training GANs on real-world data. For training, we focused on the FFHQ dataset of human faces, which is a well-studied and extensively benchmarked dataset in the literature. We used the ORCA SDK and its integrations with PyTorch to build these large-scale hybrid models.
Some example data produced by our trained model is shown below, demonstrating that high-quality images can be generated with this approach. To quantify the performance of the model, we use the Fréchet Inception Distance (FID) metric, achieving a score of approximately 2.28 on the FFHQ dataset. This score is a new record for a hybrid quantum/classical generative modeling approach. Whereas our previous work directly injected the results of a quantum processor into the generator neural network, in this project we achieved improved results using a “quantum-vector-quantized embedding”. This involves using an embedding lookup for each photon number state in each qumode of the PT Series processor.
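To illustrate the idea, the following sketch shows one way such an embedding lookup could be written in PyTorch: photon counts per qumode index a learned embedding table, and the resulting vectors are concatenated into a latent vector fed to the generator. The sizes and the table shared across qumodes are illustrative assumptions, not the exact architecture used in the hybrid StyleGAN.

```python
# Illustrative sketch of a quantum-vector-quantized embedding: photon-number
# samples per qumode are used as indices into a learned embedding table.
# Table sizes and sharing across qumodes are assumptions for this example.
import torch
import torch.nn as nn


class QuantumVQEmbedding(nn.Module):
    def __init__(self, n_qumodes: int, max_photons: int, embed_dim: int):
        super().__init__()
        self.n_qumodes = n_qumodes
        # One embedding vector per possible photon-number state (0..max_photons).
        self.table = nn.Embedding(max_photons + 1, embed_dim)

    def forward(self, photon_counts: torch.Tensor) -> torch.Tensor:
        # photon_counts: (batch, n_qumodes) integer samples from the QPU/simulator
        counts = photon_counts.clamp(max=self.table.num_embeddings - 1)
        embedded = self.table(counts)                        # (batch, n_qumodes, embed_dim)
        return embedded.reshape(photon_counts.shape[0], -1)  # latent for the generator


qvq = QuantumVQEmbedding(n_qumodes=32, max_photons=16, embed_dim=8)
counts = torch.randint(0, 3, (4, 32))  # mock samples at PT-2 scale (32 qumodes)
latent = qvq(counts)                   # (4, 256) vector injected into the generator
```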
With this work, ORCA has demonstrated that the hybrid GAN approach can scale and leverage a multi-GPU and multi-QPU environment to produce high-quality data. However, for the kind of image data studied here, existing classical models already achieve very high-quality results, and the margin for further improvement from a quantum processor is small. We anticipate that the full potential of these hybrid approaches will be realized in fields such as chemistry and therapeutics, where the performance of existing models is often limited. These scaling experiments, together with the demonstration by ORCA, PSNC, and NVIDIA of a hybrid algorithm running on a biological dataset, are the first steps toward showing the potential of this approach.
AI for Quantum
Through its work with PSNC and NVIDIA, ORCA is proud to unveil a new “Resource State Generator with Pre-trained Transformers” (RS-GPT) algorithm that uses AI to improve the design of photonic quantum processors. This algorithm trains a transformer model, inspired by the open-source Llama large language model, to determine an optimal sequence of photonic operations able to create a specified “resource state” from single photons. In photonic quantum computing, resource states are small, entangled states that can be combined to perform measurement-based universal quantum computation. Finding efficient ways of generating these resource states is an important step towards building large-scale photonic quantum computers.
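To give a flavor of how a circuit can be presented to a transformer, the sketch below serializes a small photonic circuit into integer tokens. The `Op` representation and vocabulary are purely illustrative assumptions and do not reflect RS-GPT's actual encoding.

```python
# Hypothetical serialization of a photonic circuit into tokens that a
# transformer could consume; operation names and encoding are illustrative
# and do not reflect RS-GPT's actual vocabulary.
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass(frozen=True)
class Op:
    kind: str                # "SOURCE", "BS" (beam splitter), or "DETECT"
    modes: Tuple[int, ...]   # qumode indices the operation acts on


def tokenize(circuit: List[Op], vocab: Dict[Tuple[str, Tuple[int, ...]], int]) -> List[int]:
    """Map each operation to an integer id, growing the vocabulary as needed."""
    return [vocab.setdefault((op.kind, op.modes), len(vocab)) for op in circuit]


# Example: photons injected on modes 0-3, two beam splitters, two heralding detectors.
circuit = [Op("SOURCE", (0,)), Op("SOURCE", (1,)), Op("SOURCE", (2,)), Op("SOURCE", (3,)),
           Op("BS", (0, 1)), Op("BS", (2, 3)),
           Op("DETECT", (1,)), Op("DETECT", (2,))]
vocab: Dict[Tuple[str, Tuple[int, ...]], int] = {}
print(tokenize(circuit, vocab))  # [0, 1, 2, 3, 4, 5, 6, 7]
```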
In a manner analogous to how large language models use sequences of letters as building blocks for sentences and paragraphs, RS-GPT uses the building blocks of linear optics — photon sources, beam splitters, and detectors — to form resource-state-generating operations. These components are universal for quantum computation with photonics [2]. We train RS-GPT to lay out photon sources, beam splitters, and detectors into valid circuits that generate Bell states, which are fundamental building blocks of larger entangled states in quantum optics. The following figure shows an example of a Bell state generator, where 4 input photons enter a circuit with 8 channels or “qumodes”, a sequence of beam splitters is applied, and the detection of 2 photons at two locations in the circuit heralds the creation of a Bell state in the remaining qumodes.
We trained the RS-GPT algorithm using NVIDIA H100 GPUs, via a method inspired by the Generative Quantum Eigensolver (GQE) algorithm previously developed by researchers from the University of Toronto, St Jude Children’s Research Hospital, and NVIDIA [6,7]. Logit matching, the loss introduced in GQE, trains the model to bias its sampling distribution towards circuits that achieve lower energies in a ground-state search. For RS-GPT we use the same biasing strategy, with an additional training procedure to ensure that the transformer produces an output that can be translated into a valid photonic circuit. An overview of RS-GPT is shown below.
During training, RS-GPT learns where to place beam splitters and detectors within a circuit to best produce Bell states, given any combination of input photons to the circuit. Allowing input photons to arrive in different combinations of input ports reflects the physically relevant scenario in linear optics in which photon sources are probabilistic, meaning that photons may or may not appear at any given input. Moving forward, RS-GPT can be generalized to more complex resource-state-generation scenarios. This makes it an important tool: although optimal resource-state-generation schemes are known for idealized circuits, no general result exists for scenarios that account for physical constraints such as photon loss or restrictions on circuit topology, and RS-GPT can be applied to these cases as well.
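To make the training idea concrete, below is a minimal PyTorch sketch of a logit-matching-style update in the spirit of GQE [6]: the summed logits of a sampled circuit are regressed toward that circuit's evaluated cost, so that sampling becomes biased toward low-cost circuits. The toy model, the `evaluate_circuit` score, and the use of pre-sampled random sequences are illustrative assumptions; the actual RS-GPT model, vocabulary, and scoring are not shown.

```python
# Minimal sketch of a logit-matching-style training step, assuming the
# summed logits of a sampled circuit are regressed toward its cost
# (e.g. 1 - Bell-state fidelity). Model, sizes, and scoring are illustrative.
import torch
import torch.nn as nn

VOCAB, SEQ_LEN, BATCH = 64, 12, 8                    # toy sizes
layer = nn.TransformerEncoderLayer(d_model=128, nhead=4, batch_first=True)
model = nn.Sequential(nn.Embedding(VOCAB, 128),      # token embeddings
                      nn.TransformerEncoder(layer, num_layers=2),
                      nn.Linear(128, VOCAB))         # per-position logits
opt = torch.optim.Adam(model.parameters(), lr=1e-4)


def evaluate_circuit(token_ids: torch.Tensor) -> torch.Tensor:
    """Placeholder cost per circuit (lower is better); a real implementation
    would decode the tokens into a photonic circuit and score the heralded state."""
    return torch.rand(token_ids.shape[0])


# In practice circuits are sampled autoregressively from the model itself;
# random sequences stand in for them here.
tokens = torch.randint(0, VOCAB, (BATCH, SEQ_LEN))
logits = model(tokens)                                         # (BATCH, SEQ_LEN, VOCAB)
seq_logit = logits.gather(-1, tokens.unsqueeze(-1)).squeeze(-1).sum(dim=-1)
cost = evaluate_circuit(tokens)
loss = ((seq_logit - cost) ** 2).mean()                        # logit matching
opt.zero_grad(); loss.backward(); opt.step()
```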
ORCA is working with PSNC and NVIDIA towards releasing a demonstration of RS-GPT within the CUDA-Q development platform, providing users of hybrid quantum-classical infrastructure hands-on access to the algorithm.
Perspective
This project achieved three key objectives. First, we demonstrated the importance of practical infrastructure able to support multi-user, multi-QPU, and multi-GPU workflows. Second, we showed how this infrastructure can be used to prototype a quantum-enhanced use case in the biological domain. Finally, we developed new algorithms at the intersection of quantum and AI that can exploit such hybrid infrastructures. Our hybrid GAN models can help improve the performance of generative models for image generation or generative chemistry, and our RS-GPT algorithm uses AI to improve the design of photonic quantum processors. We are now working towards onboarding new users and exploring additional use cases that leverage hybrid quantum/classical infrastructures.
References
[1] NVIDIA CUDA-Q platform, https://nvidia.github.io/cuda-quantum/latest/index.html
[2] Pankovich, Brendan, et al. “Flexible entangled-state generation in linear optics.” Physical Review A 110.3 (2024): 032402.
[3] https://orcacomputing.com/bp-and-orca-computing-team-up-to-quantum-powered-computational-chemistry/
[4] https://orcacomputing.com/quantum-enhanced-vaccine-design-on-the-orca-pt-series/
[5] Karras, Tero, Samuli Laine, and Timo Aila. “A style-based generator architecture for generative adversarial networks.” Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2019.
[6] Nakaji, Kouhei, et al. “The generative quantum eigensolver (GQE) and its application for ground state search.” arXiv preprint arXiv:2401.09253 (2024).
[7] https://developer.nvidia.com/blog/advancing-quantum-algorithm-design-with-gpt/