This breakthrough is achieved through the development of a Self-Powered Artificial Synapse that can mimic the human eye’s selective filtering mechanism to process visual data efficiently.
Challenges in Machine Vision and the Role of Neuromorphic Computing
As artificial intelligence and smart devices continue to evolve, machine vision is becoming a key enabler of modern technologies. Despite much progress, however, machine vision systems still face a major problem: processing the enormous amounts of visual data generated every second requires substantial power, storage, and computational resources. This limitation makes it difficult to deploy visual recognition capabilities in edge devices such as smartphones, drones, and autonomous vehicles.
Interestingly, the human visual system offers a compelling alternative model. Unlike conventional machine vision systems, which capture and process every detail, our eyes and brain selectively filter information, enabling efficient visual processing at minimal power. Neuromorphic computing, which mimics the structure and function of biological neural systems, has thus emerged as a promising approach to overcoming existing hurdles in computer vision. However, two major challenges have persisted: achieving colour recognition comparable to human vision, and eliminating the need for external power sources to minimize energy consumption.
Development of the Self-Powered Artificial Synapse by Tokyo University of Science
Against this backdrop, a research team led by Associate Professor Takashi Ikuno from the School of Advanced Engineering, Department of Electronic Systems Engineering, Tokyo University of Science (TUS), Japan, has developed a groundbreaking solution. Their paper, published in Volume 15 of the journal Scientific Reports on May 12, 2025, introduces a Self-Powered Artificial Synapse capable of distinguishing colours with remarkable precision. The study was co-authored by Mr. Hiroaki Komatsu and Ms. Norika Hosoda, also from TUS.
The researchers created their device by integrating two different dye-sensitized solar cells, each responding to a different range of wavelengths of light. Unlike conventional optoelectronic artificial synapses that require external power sources, the proposed synapse generates its own electricity via solar energy conversion. This self-powering capability makes it particularly suitable for edge computing applications, where energy efficiency is crucial.
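To make this working principle concrete, the toy model below sketches how two photocells wired with opposite polarities and differing spectral sensitivities can produce a net voltage whose sign depends on the incoming colour. The Gaussian sensitivity curves and the centre wavelengths of 450 nm and 650 nm are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

# Toy model: two dye-sensitized cells wired with opposite polarity.
# Each cell's spectral sensitivity is approximated by a Gaussian;
# the centre wavelengths (450 nm, 650 nm) and widths are illustrative
# assumptions, not values reported in the study.

def cell_response(wavelength_nm, centre_nm, width_nm=60.0):
    """Relative photovoltage of one cell at a given wavelength."""
    return np.exp(-((wavelength_nm - centre_nm) ** 2) / (2 * width_nm ** 2))

def net_voltage(wavelength_nm, blue_centre=450.0, red_centre=650.0):
    """Net output: the blue-sensitive cell adds a positive contribution,
    the red-sensitive cell a negative one, so the sign encodes colour."""
    return cell_response(wavelength_nm, blue_centre) - cell_response(wavelength_nm, red_centre)

if __name__ == "__main__":
    for wl in (450, 550, 650):
        print(f"{wl} nm -> net voltage (arb. units): {net_voltage(wl):+.3f}")
```

In this simplified picture, blue-dominated light drives the output positive and red-dominated light drives it negative, which is the bipolar behaviour described below.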
High-Resolution Colour Discrimination and Logical Operations
In extensive experiments, the resulting system distinguished colours with a resolution of 10 nanometres across the visible spectrum, a level of discrimination approaching that of the human eye. The device also exhibited bipolar responses, producing a positive voltage under blue light and a negative voltage under red light. This makes it possible to perform complex logic operations that would typically require multiple conventional devices. "The results show great potential for the application of this next-generation optoelectronic device, which enables high-resolution colour discrimination and logical operations simultaneously, to low-power artificial intelligence (AI) systems with visual recognition," notes Dr. Ikuno.
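The bipolar output suggests how a single element might double as a logic gate. The sketch below is a conceptual illustration rather than the authors' circuit: it assumes blue illumination contributes a fixed positive voltage, red illumination a fixed negative one, and that thresholding the summed response yields a binary logic value.

```python
# Conceptual illustration (not the authors' implementation): treat blue
# light as a positive voltage contribution and red light as a negative
# one, then threshold the summed photovoltage to read out a logic value.

BLUE_V = +0.4   # assumed response to blue light (arbitrary units)
RED_V = -0.4    # assumed response to red light (arbitrary units)

def device_output(blue_on: bool, red_on: bool) -> float:
    """Summed photovoltage when the two illumination channels overlap."""
    return (BLUE_V if blue_on else 0.0) + (RED_V if red_on else 0.0)

def logic_readout(blue_on: bool, red_on: bool) -> int:
    """Example readout: output 1 only when blue is present and red is not,
    i.e. the single device behaves like 'blue AND NOT red'."""
    return int(device_output(blue_on, red_on) > 0.2)

if __name__ == "__main__":
    for blue in (False, True):
        for red in (False, True):
            print(f"blue={blue!s:5} red={red!s:5} -> {logic_readout(blue, red)}")
```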
Real-World Application in Movement Recognition
To demonstrate a real-world application, the team used their device in a physical reservoir computing framework to recognize different human movements recorded in red, green, and blue. The system achieved an impressive 82% accuracy when classifying 18 different combinations of colours and movements using just a single device, rather than the multiple photodiodes needed in conventional systems.
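In physical reservoir computing, the device's nonlinear, time-dependent response acts as the "reservoir", and only a lightweight linear readout is trained on the recorded signals. The sketch below shows what such a readout could look like in principle, using synthetic feature vectors in place of real photovoltage traces and a closed-form ridge-regression classifier; the feature dimension, sample counts, and data are placeholder assumptions, while the 18-class setup mirrors the number of colour-movement combinations reported in the study.

```python
import numpy as np

# Sketch of a physical-reservoir-computing readout (illustrative only).
# In the real system the features would be sampled photovoltage traces
# from the synapse; here synthetic data stand in for them.

rng = np.random.default_rng(0)
n_classes = 18          # colour x movement combinations, as in the study
n_features = 50         # assumed number of sampled reservoir states per clip
n_train_per_class = 20  # assumed number of training clips per class

# Synthetic "reservoir states": each class gets its own mean response.
class_means = rng.normal(size=(n_classes, n_features))
X = np.vstack([m + 0.3 * rng.normal(size=(n_train_per_class, n_features))
               for m in class_means])
y = np.repeat(np.arange(n_classes), n_train_per_class)

# One-hot targets and closed-form ridge-regression readout weights.
Y = np.eye(n_classes)[y]
lam = 1e-2
W = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

# Classify a held-out sample: the predicted class is the arg-max output.
test = class_means[7] + 0.3 * rng.normal(size=n_features)
print("predicted class:", int(np.argmax(test @ W)))  # expected: 7
```

The appeal of this scheme is that all the heavy nonlinear processing happens physically in the device, so only the small readout matrix needs to be trained and stored.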
Impact Across Industries: Autonomous Vehicles, Healthcare, and Consumer Electronics
The implications of this research extend across multiple industries. In autonomous vehicles, these devices could enable more efficient recognition of traffic lights, road signs, and obstacles. In healthcare, they could power wearable devices that monitor vital signs like blood oxygen levels with minimal battery drain. For consumer electronics, this technology could lead to smartphones and augmented/virtual reality headsets with dramatically improved battery life while maintaining sophisticated visual recognition capabilities. "We believe this technology will contribute to the realization of low-power machine vision systems with colour discrimination capabilities close to those of the human eye, with applications in optical sensors for self-driving cars, low-power biometric sensors for medical use, and portable recognition devices," remarks Dr. Ikuno.
Overall, this work represents a significant step toward bringing the wonders of computer vision to edge devices, enabling our everyday devices to see the world more like we do.