Abstract
Deep learning's growing complexity demands advanced AI chips, increasing hardware costs. Time-division multiplexing (TDM) neural networks offer a promising way to simplify hardware integration. However, inherent device limitations make it difficult for current synapse transistors to physically implement TDM networks, hindering their practical deployment in modern systems. Here, a novel graphene/2D perovskite/carbon nanotube (CNT) synapse transistor with a sandwich structure is presented, enabling the realization of TDM neural networks at the hardware level. In this structure, the 2D perovskite layer, with its high ion concentration, serves as a neurotransmitter and thereby enhances synaptic transmission efficiency. In addition, the CNT field-effect transistors, with their large on/off ratio, enable a wider range of synaptic current modulation. The device mechanism is analyzed theoretically using molecular dynamics simulations. Furthermore, the impact of TDM on the scale, power, and latency of neural network hardware implementations is investigated, and a qualitative analysis elucidates the advantages of TDM for the hardware implementation of larger deep learning models. This study offers a new approach to reducing the integration complexity of neural network hardware and holds significant promise for the development of intelligent nanoelectronic devices.
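To make the TDM idea concrete, the following minimal NumPy sketch shows how a small physical synapse array can emulate a larger weight layer by reusing the same hardware across time slots. The array sizes, slot count, and tiling scheme are illustrative assumptions for this sketch, not the paper's device-level implementation.

```python
import numpy as np

# Assumed sizes for illustration: a 256x256 physical synapse array
# emulates a 1024x1024 logical weight layer via time multiplexing.
PHYS = 256               # physical crossbar dimension (assumption)
LOGICAL = 1024           # logical layer dimension (assumption)
TILES = LOGICAL // PHYS  # tiles per axis -> TILES**2 time slots

rng = np.random.default_rng(0)
W = rng.standard_normal((LOGICAL, LOGICAL))  # logical weight matrix
x = rng.standard_normal(LOGICAL)             # input activation vector

# Time-division multiplexing: in each time slot, one PHYS x PHYS tile
# of W is programmed into the shared synapse array and its partial
# matrix-vector product is accumulated into the output.
y = np.zeros(LOGICAL)
for i in range(TILES):       # output tile index
    for j in range(TILES):   # input tile index
        tile = W[i*PHYS:(i+1)*PHYS, j*PHYS:(j+1)*PHYS]
        y[i*PHYS:(i+1)*PHYS] += tile @ x[j*PHYS:(j+1)*PHYS]

# The multiplexed result matches the full matrix-vector product.
assert np.allclose(y, W @ x)
```

The sketch also makes the trade-off visible: multiplexing shrinks the physical synapse count (and hence scale and power) by a factor of TILES**2, at the cost of TILES**2 sequential time slots, which is the scale/power/latency trade-off analyzed in this work.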
| Original language | English |
| --- | --- |
| Journal | Advanced Materials |
| DOIs | |
| Publication status | Accepted/In press - 2025 |
| Externally published | Yes |
Keywords
- 2D perovskite
- carbon nanotubes
- deep neural networks
- graphene
- reconfigurable synapse
- time-division multiplexing architecture