
Benchmarking by ORCA and Toyota Motor Shows that Hybrid Quantum–Classical AI Can Substantially Reduce Energy Consumption Today

As enterprises grapple with the spiraling power costs and carbon footprint of AI, and data centers face a significant energy supply deficit, the industry is seeking alternatives that deliver superior performance per watt on AI models. To this end, ORCA Computing, Toyota Motor and Toyota Tsusho have completed a focused benchmarking study whose initial results demonstrate that photonic quantum processors can reduce AI energy consumption. The results show measurable reductions in classical computational requirements across several image-classification methods when implemented in a hybrid quantum–classical approach, offering one of the clearest examples yet of quantum technology delivering near-term commercial advantage.

Collaboration Focused on Practical Impact

The project explored how ORCA’s photonic quantum systems could be integrated into industry-standard machine-learning workflows. Rather than redesign architectures or introduce exotic pipelines, the team tested common, practical interfaces between quantum processors and classical deep-learning models, including Vision Transformers (ViTs) and Convolutional Neural Networks (CNNs), alongside reservoir computing and experiments where quantum processors assisted in the training of classical neural networks. The shared goal across all work streams was to understand whether quantum systems can reduce energy costs in a way that is immediately useful for industrial AI, while also exploring trade-offs with model accuracy.
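The announcement does not detail the exact integration points, but the general pattern is a drop-in layer sitting between a classical backbone and its classification head. The sketch below (PyTorch) illustrates that pattern with a purely classical stand-in for the quantum layer; the names `SimulatedQuantumLayer` and `HybridCNN`, the cosine feature map, and all layer sizes are illustrative assumptions, not ORCA's API or the models used in the study.

```python
import torch
import torch.nn as nn

class SimulatedQuantumLayer(nn.Module):
    """Classical stand-in for a photonic quantum feature layer (illustrative only).

    In the hybrid setups described above, a block like this would dispatch its
    input to a QPU and return measured features; here a fixed random cosine
    feature map plays that role so the example runs on any machine.
    """
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # Fixed (non-trainable) projection, standing in for a photonic circuit
        self.register_buffer("proj", torch.randn(in_features, out_features) / in_features**0.5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.cos(x @ self.proj)

class HybridCNN(nn.Module):
    """Small CNN whose penultimate features pass through the 'quantum' layer."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.quantum = SimulatedQuantumLayer(16 * 4 * 4, 64)
        self.head = nn.Linear(64, num_classes)  # smaller classical head than a purely classical model

    def forward(self, x):
        return self.head(self.quantum(self.backbone(x)))

if __name__ == "__main__":
    model = HybridCNN()
    logits = model(torch.randn(2, 3, 32, 32))
    print(logits.shape)  # torch.Size([2, 10])
```

The point of the interface is that the surrounding training and inference code is unchanged: the quantum layer is just another module in the forward pass.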

Key Outcomes

One of the most significant findings was that hybrid quantum–classical ViTs and CNNs consistently outperformed their classical baselines while reducing computational operations by over 20 percent. Because these operations closely correlate with GPU load and energy consumption, the improvement offers a direct indication of reduced computational cost.
The study also examined quantum reservoir computing, which delivered notable reductions of over 80 percent in classical compute time. Although model accuracy was lower than that of classical CNNs, the approach points, with further optimization, toward extremely lightweight, energy-efficient inference models.
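The savings in reservoir computing come from training only a small linear readout on top of a fixed, untrained feature map. The minimal sketch below shows that pattern with synthetic data; the random tanh map is a classical stand-in for measurements from a photonic reservoir, and the data, dimensions and ridge penalty are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 1000 flattened "images", 3 classes (illustrative stand-in only)
X = rng.standard_normal((1000, 64))
y = rng.integers(0, 3, size=1000)
Y = np.eye(3)[y]                                  # one-hot targets

# Fixed, untrained "reservoir": in a quantum reservoir computer these features
# would come from measurements of a photonic circuit; here a random tanh map
# stands in so the example is self-contained.
W_res = rng.standard_normal((64, 512)) / np.sqrt(64)
F = np.tanh(X @ W_res)

# Only the linear readout is trained (a single ridge-regression solve), which
# is where the large reduction in classical training compute comes from.
lam = 1e-3
readout = np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ Y)

preds = (F @ readout).argmax(axis=1)
print("train accuracy:", (preds == y).mean())
```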

Finally, quantum-assisted training showed that weight updates for CNNs could be performed on the quantum system while maintaining accuracy comparable to classical training. This shift of workload from GPUs to QPUs highlights a compelling opportunity to reduce energy usage during training, often the most computationally expensive stage of AI development.
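The announcement does not describe how the quantum system produces the weight updates. As a rough, hedged sketch of the division of labor, the loop below updates one layer's weights gradient-free via a placeholder routine (here a simple accept-if-better random search standing in for the QPU) while the rest of the model trains with ordinary SGD; `qpu_propose_update` and everything inside it are hypothetical.

```python
import torch
import torch.nn as nn

def qpu_propose_update(weight: torch.Tensor, loss_fn) -> torch.Tensor:
    """Hypothetical stand-in for a QPU-generated weight update.

    A perturbation is kept only if it lowers the loss; this simple random
    search plays the role of the quantum sampler so the loop runs end to end.
    """
    candidate = weight + 0.05 * torch.randn_like(weight)
    return candidate if loss_fn(candidate) < loss_fn(weight) else weight

# Tiny model: feature weights updated by the QPU stand-in, classical head by SGD
torch.manual_seed(0)
X = torch.randn(256, 20)
y = (X[:, 0] > 0).long()

feature_w = torch.randn(20, 16)                  # gradient-free, "QPU-assisted" weights
head = nn.Linear(16, 2)                          # trained classically
opt = torch.optim.SGD(head.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

for step in range(200):
    # QPU-assisted update of the feature weights (no backpropagation through them)
    feature_w = qpu_propose_update(
        feature_w, lambda w: criterion(head(torch.tanh(X @ w)), y).item()
    )
    # Standard classical update of the head
    opt.zero_grad()
    loss = criterion(head(torch.tanh(X @ feature_w)), y)
    loss.backward()
    opt.step()

print("final loss:", loss.item())
```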

Why These Results Matter Now

Taken together, the findings show that hybrid quantum–classical models can reduce FLOPs, lower classical computation, and improve model performance without requiring any major changes to existing machine-learning pipelines. The immediacy of these benefits is a particularly important milestone: quantum processors are no longer limited to theoretical speed-ups or abstract benchmarks. They are beginning to enhance real workflows today, in ways that matter for industry-scale AI.

This validation strongly supports ORCA’s strategy of building quantum systems that integrate naturally with classical infrastructure, enabling organizations such as Toyota Motor and Toyota Tsusho to extract value early and derive ongoing benefits as quantum systems continue to grow in capability.

Looking Ahead

This collaboration demonstrates the practical and scalable potential of hybrid quantum–classical machine learning. The work between ORCA and Toyota Motor now lays the foundation for larger-scale experiments, including extensions to production-relevant datasets and more complex models in areas such as materials design and real-world image recognition.