In recent years, demand for AI computing power has soared, GPUs are in short supply, and energy consumption has become a hidden worry across the industry. Microsoft Research's Cambridge team, however, chose to redefine computing with light. Their system, the Analog Optical Computer (AOC), is assembled from commercially available parts such as smartphone camera sensors, micro-LEDs, and lenses, yet in experiments it has demonstrated speed and energy efficiency up to 100 times higher than GPUs, and the work has been published in the journal Nature.

Optical computing returns to the stage
The concept of optical computing was proposed as early as the 1960s, but long-standing limitations in manufacturing technology kept it largely theoretical. After four years of development, the Microsoft team integrated optics and analog electronics into a single system: matrix operations are performed with light, while nonlinearities, addition, and subtraction are handled electronically. Each iteration takes just 20 nanoseconds, repeating until the system converges to a "fixed-point" solution.
This architecture not only eliminates the need for costly digital-to-analog conversion but also possesses inherent noise immunity, enabling optical computing to run stably on real hardware for the first time.
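The loop described above can be sketched in a few lines of NumPy. This is a purely digital toy model, not the AOC's actual implementation: the function name, matrix values, and choice of tanh as the nonlinearity are illustrative assumptions. The key idea it shows is iterating x ← f(Wx + b) until successive iterates stop changing, which is the "fixed-point" convergence the article describes.

```python
import numpy as np

def aoc_style_fixed_point(W, b, nonlinearity, tol=1e-9, max_iters=1000):
    """Toy fixed-point iteration x <- f(Wx + b).
    In the AOC, the matrix-vector product would be done optically and the
    nonlinearity/addition electronically; here everything is simulated."""
    x = np.zeros(len(b))
    for i in range(max_iters):
        x_next = nonlinearity(W @ x + b)
        if np.linalg.norm(x_next - x) < tol:   # converged to a fixed point
            return x_next, i + 1
        x = x_next
    return x, max_iters

# A contractive W (small entries) guarantees the iteration converges.
W = np.array([[0.2, 0.1], [0.05, 0.3]])
b = np.array([1.0, 0.5])
x_star, iters = aoc_style_fixed_point(W, b, np.tanh)
```

Because each pass is a single matrix product plus a cheap elementwise step, a hardware loop taking 20 nanoseconds per pass can settle on a solution in well under a microsecond.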
"Our breakthrough work on an analog optical computer points to new ways to solve complex real-world problems with much greater efficiency. Super to see this published today in @Nature. https://t.co/vjx8lYmze5"
- Satya Nadella (@satyanadella), September 3, 2025
Financial and medical practical verification
To prove that AOC is not just a laboratory toy, the Microsoft team selected two major scenarios, finance and healthcare, for verification.
In a transaction settlement optimization problem tackled in collaboration with Barclays Bank, the AOC found the optimal solution in just seven iterations. On the medical side, the team cast compressed-sensing reconstruction of magnetic resonance imaging (MRI) data as an optimization problem the AOC can run. It not only reconstructed 7 × 32 brain slices, but also handled real MRI data with 32 variables through a "digital twin" simulation. This points to the potential to significantly shorten scans that currently take 20 to 30 minutes, reducing the burden on patients.
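Compressed sensing, the technique mentioned for MRI, recovers a sparse signal from fewer measurements than unknowns by solving an optimization problem. A minimal sketch under stated assumptions: the paper does not say which solver the AOC implements, so this uses ISTA (iterative soft-thresholding), a standard compressed-sensing algorithm that happens to be exactly the kind of fixed-point iteration the AOC is built for. The problem size of 32 unknowns loosely mirrors the "32 variables" mentioned above; all other values are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy compressed sensing: recover a sparse x from an underdetermined y = A x.
n, m = 32, 16                               # 32 unknowns, only 16 measurements
A = rng.normal(size=(m, n)) / np.sqrt(m)    # random sensing matrix
x_true = np.zeros(n)
x_true[[3, 10, 27]] = [1.0, -0.8, 0.5]      # only 3 nonzero coefficients
y = A @ x_true

def soft(v, thr):
    """Soft-thresholding: the sparsity-promoting nonlinear step."""
    return np.sign(v) * np.maximum(np.abs(v) - thr, 0.0)

# ISTA: iterate x <- soft(x - t * A^T (A x - y), t * lam) to a fixed point.
t, lam = 0.1, 0.01
x = np.zeros(n)
for _ in range(500):
    x = soft(x - t * (A.T @ (A @ x - y)), t * lam)
```

Each iteration is one matrix-vector product followed by an elementwise nonlinearity, matching the optical-matrix-plus-electronic-nonlinearity split described earlier.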

A new path for AI
Researchers also found that the AOC's fixed-point search mechanism is well suited to AI models that require iterative convergence, such as deep equilibrium networks (DEQs). On MNIST and Fashion-MNIST image classification and on nonlinear regression tasks, the AOC achieved nearly 99% agreement with digital simulation. Furthermore, through time multiplexing, the AOC scaled to the equivalent of 4,096 weights, showing it can run not only small models but also larger AI inference workloads.
Energy efficiency gap of two orders of magnitude
Microsoft estimates that a mature version of the AOC could reach 500 TOPS per watt, far exceeding the NVIDIA H100's roughly 4.5 TOPS per watt: an energy-efficiency gap of about two orders of magnitude. If future modular versions scale to between 0.1 and 20 million weights, the AOC could become new infrastructure for low-power AI inference.
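The "two orders of magnitude" claim follows directly from the two cited figures; the arithmetic is easy to check:

```python
import math

aoc_tops_per_watt = 500.0    # Microsoft's projection for a mature AOC
h100_tops_per_watt = 4.5     # figure cited for the NVIDIA H100

ratio = aoc_tops_per_watt / h100_tops_per_watt   # ~111x
orders_of_magnitude = math.log10(ratio)          # ~2.05, i.e. two orders
```

Note that 500 TOPS/W is a projection for a future, mature device, not a measurement of the current prototype.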
From the lab to the real world
Principal Manager Francesca Parmigiani, Principal Researcher Jiaqi Chu, and machine learning expert Jannes Gladrow led a multidisciplinary team that turned a half-century-old concept into reality. They are also opening up the digital twin to engage more researchers. Research lead Hitesh Ballani stated that their goal is to make the AOC an integral part of future AI infrastructure.
