A collaborative research team from Tohoku University and the University of the Future in Japan has used a real-time machine-learning framework to train rat cortical neurons to autonomously generate complex temporal signals. The team built a 'closed-loop reservoir computing' system that learns on its own and produces both periodic and chaotic waveforms. Remarkably, the system can perform AI computing tasks without relying on any external input.
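The closed-loop idea can be sketched in silicon as an echo state network whose trained output is fed back as its only drive, so that after training it generates a waveform on its own. This is only a minimal software analogue under assumed parameters (network size, spectral radius, a ridge-regression readout); the team's actual reservoir is living neural tissue, not code.

```python
import numpy as np

# Toy analogue of closed-loop reservoir computing: an echo state
# network with output feedback. All sizes, rates, and the ridge
# readout here are assumptions for illustration only.
rng = np.random.default_rng(0)
N, T = 300, 3000
W = rng.normal(0, 1.0 / np.sqrt(N), (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))       # spectral radius 0.9
w_fb = rng.uniform(-1, 1, N)                    # output-feedback weights

target = np.sin(2 * np.pi * np.arange(T) / 50)  # periodic target signal

# 1) Teacher forcing: the target signal drives the feedback path.
x, states, prev = np.zeros(N), np.empty((T, N)), 0.0
for t in range(T):
    x = np.tanh(W @ x + w_fb * prev)
    states[t] = x
    prev = target[t]

# 2) Fit a linear readout by ridge regression on post-washout states.
S, y = states[200:], target[200:]
w_out = np.linalg.solve(S.T @ S + 1e-4 * np.eye(N), S.T @ y)

# 3) Close the loop: the network's own output now drives it,
#    with no external input.
out = np.empty(500)
prev = target[-1]
for t in range(500):
    x = np.tanh(W @ x + w_fb * prev)
    prev = out[t] = w_out @ x
```

In the biological system, the machine-learning framework plays the role of the trained readout, stimulating the culture in real time based on its recorded activity.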
At the heart of the system are PDMS microfluidic thin films that constrain how the neurons can connect. This approach let the team construct two distinct network structures, lattice and hierarchical, both designed to raise the dimensionality of the network dynamics and thereby improve performance. In tests, the lattice network performed best: the system generated a variety of waveforms and approximated chaotic trajectories with precision.
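"Dimensionality of network dynamics" can be quantified with the participation ratio of the activity's covariance spectrum, a standard proxy. The sketch below applies that measure to two toy networks, a nearest-neighbour lattice and a dense random network; the topologies, noise drive, and sizes are assumptions for illustration and do not reproduce the paper's PDMS-defined cultures.

```python
import numpy as np

rng = np.random.default_rng(1)

def participation_ratio(X):
    """Effective dimensionality of activity X (time x units):
    (sum of covariance eigenvalues)^2 / sum of squared eigenvalues."""
    lam = np.clip(np.linalg.eigvalsh(np.cov(X.T)), 0, None)
    return lam.sum() ** 2 / (lam ** 2).sum()

def simulate(W, T=2000):
    """Noise-driven rate dynamics on coupling matrix W (assumed model)."""
    N = W.shape[0]
    x, X = np.zeros(N), np.empty((T, N))
    for t in range(T):
        x = np.tanh(W @ x + 0.5 * rng.normal(size=N))
        X[t] = x
    return X

# Toy lattice: each unit couples only to its 4 grid neighbours.
side = 10
N = side * side
W_lat = np.zeros((N, N))
for i in range(side):
    for j in range(side):
        u = i * side + j
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            W_lat[u, ((i + di) % side) * side + (j + dj) % side] = rng.normal()
# Dense random coupling for comparison.
W_rnd = rng.normal(size=(N, N))
for W in (W_lat, W_rnd):
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # match spectral radius

pr_lat = participation_ratio(simulate(W_lat))
pr_rnd = participation_ratio(simulate(W_rnd))
print(pr_lat, pr_rnd)
```

A richer, higher-dimensional reservoir gives the linear readout more independent signals to combine, which is the rationale for engineering the culture's topology.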
During the learning phase the system is highly accurate, with the correlation between predictions and target signals exceeding 0.8. The technology is not without its challenges, however. Errors grow once training stops, and latency in the feedback loop limits the system's ability to track rapidly changing waveforms.
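The reported figure is a Pearson correlation between generated and target waveforms. A small numerical example, with made-up signals and a hypothetical fixed phase lag standing in for the feedback-loop delay, shows why such a delay costs more correlation on faster waveforms:

```python
import numpy as np

def pearson(pred, target):
    """Pearson correlation between a generated and a target waveform."""
    return np.corrcoef(pred, target)[0, 1]

t = np.linspace(0, 4 * np.pi, 500)
# The 0.1-rad lag below is an assumed stand-in for loop latency.
slow = pearson(np.sin(t + 0.1), np.sin(t))            # slow wave, small lag
fast = pearson(np.sin(3 * (t + 0.1)), np.sin(3 * t))  # same lag, 3x faster
print(slow, fast)   # the same delay degrades the faster waveform more
```

A fixed loop delay corresponds to a larger phase error at higher frequencies, which is consistent with the difficulty the team reports in tracking rapidly changing signals.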
Looking ahead, the research team is committed to overcoming these limitations. They plan to reduce the loop delay by implementing dedicated hardware, which they believe will open expanded applications in fields such as brain-machine interfaces and neuroprosthetic devices. The research holds great promise for the future of AI and neurotechnology.
