Advances in intelligent sensing and edge computing have exposed fundamental limitations of traditional vision systems built on CMOS image sensors, particularly in processing efficiency and energy consumption. In conventional architectures, the sensing, memory, and computing units are physically separate, so data must be transferred frequently between them; this complicates the system while increasing latency and power consumption. Visual neuromorphic computing architectures have been developed to address these challenges: they merge optical sensing with early-stage data processing directly at the sensor level, so that input data can be preprocessed locally in real time. Within such systems, neuron and synaptic devices play pivotal roles in information encoding, weight updating, and computation.
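To make the in-sensor computing idea concrete, the following is a minimal sketch, not drawn from the source, assuming a common toy model in which each pixel's photoresponsivity acts as a programmable synaptic weight: the photocurrents summed on a shared output line then directly yield a weighted sum of incident light intensities, I_out = Σ_i R_i·P_i, so a multiply-accumulate (e.g., a small convolution kernel) is computed during sensing rather than after readout. All names, kernel values, and thresholds here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Incident optical power on a 6x6 pixel patch (arbitrary units).
scene = rng.uniform(0.0, 1.0, size=(6, 6))

# Synaptic weights stored in the sensor as per-pixel responsivities.
# A 3x3 Laplacian-like kernel for edge enhancement; negative entries
# model devices whose photoresponse polarity can be tuned.
kernel = np.array([[-1., -1., -1.],
                   [-1.,  8., -1.],
                   [-1., -1., -1.]])

def in_sensor_convolve(powers, responsivities):
    """Weighted photocurrent summation: each output value is the current
    summed over a kernel-sized receptive field, computed at the sensor
    (I_out = sum_i R_i * P_i) instead of in a separate processor."""
    kh, kw = responsivities.shape
    h, w = powers.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = powers[i:i + kh, j:j + kw]
            out[i, j] = np.sum(responsivities * patch)
    return out

edges = in_sensor_convolve(scene, kernel)

# A simple spiking "neuron" stage: fire when the accumulated current
# exceeds a threshold, encoding the preprocessed feature as binary events.
threshold = 1.0
spikes = (edges > threshold).astype(int)
print(spikes)
```

In this toy model, only the sparse spike events (rather than the full raw frame) would need to leave the sensor, which is the mechanism by which such architectures reduce data movement, latency, and power.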
