Training Neural Networks for Analog AI Hardware (available)

Starting Date: June 2026
Prerequisites: Good knowledge of Python programming is essential; familiarity with deep learning and the PyTorch framework is also important.
Will results be assigned to University: No

Modern AI models achieve impressive performance but require enormous amounts of energy when trained and run on conventional GPU hardware. A promising alternative is analog in-memory computing, where neural network computations are performed directly inside memory devices such as resistive crossbar arrays. This approach can dramatically improve the energy efficiency of AI systems, but it also introduces new challenges, including noise, limited precision, and device variability.
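To make these challenges concrete, the toy sketch below simulates a single analog matrix-vector multiply, the core crossbar operation, in plain PyTorch. The noise level, number of conductance levels, and weight range (noise_std, n_levels, w_max) are illustrative assumptions for this sketch, not parameters of any real device.

```python
import torch

def analog_matvec(weight, x, noise_std=0.02, n_levels=256, w_max=1.0):
    """Toy model of y = W @ x on a resistive crossbar.

    Illustrative only: weights are clipped and quantized to a finite
    number of conductance levels (limited precision), and the analog
    read adds Gaussian noise (device variability).
    """
    # Limited precision: map weights onto n_levels discrete conductances.
    w = weight.clamp(-w_max, w_max)
    step = 2 * w_max / (n_levels - 1)
    w_q = torch.round(w / step) * step
    # Read noise: perturb each stored weight on every access.
    w_noisy = w_q + noise_std * torch.randn_like(w_q)
    return w_noisy @ x

w = torch.randn(4, 8) * 0.3
x = torch.randn(8)
print(analog_matvec(w, x))  # differs slightly from the exact w @ x
```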

In this project, you will explore how neural networks can be trained to operate reliably on such emerging hardware. Using simulation frameworks such as the IBM Analog Hardware Acceleration Kit (AIHWKit) and NeuroSim, you will train and evaluate models under realistic hardware constraints.
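As a starting point, mapping an ordinary PyTorch model onto simulated analog hardware with AIHWKit looks roughly like the sketch below. It follows the convert_to_analog and InferenceRPUConfig interfaces from the AIHWKit documentation, but exact names and defaults can differ between releases, so check it against the current docs.

```python
import torch
from torch import nn

# aihwkit: IBM Analog Hardware Acceleration Kit (pip install aihwkit)
from aihwkit.nn.conversion import convert_to_analog
from aihwkit.simulator.configs import InferenceRPUConfig

# An ordinary digital model...
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# ...mapped onto simulated analog tiles. The RPU config bundles the
# device model used in simulation (noise, bounds, converter resolution).
rpu_config = InferenceRPUConfig()
analog_model = convert_to_analog(model, rpu_config)

x = torch.randn(1, 784)
print(analog_model(x).shape)  # torch.Size([1, 10])
```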

The work will involve running experiments, analysing how hardware imperfections affect model performance, and exploring techniques that improve robustness, with a particular focus on recurrent neural networks (RNNs); one such analysis is sketched below.
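A common first experiment from the hardware-aware training literature (see Reading 1) is to measure how sensitive a trained network is to weight perturbations. The sketch below runs a Monte-Carlo evaluation of a small LSTM under Gaussian weight noise; TinyRNN, eval_under_weight_noise, and the noise scale rel_std are hypothetical choices made purely for illustration.

```python
import copy
import torch
from torch import nn

def eval_under_weight_noise(model, x, rel_std=0.05, n_trials=20):
    """Monte-Carlo evaluation: perturb every weight with Gaussian noise
    proportional to its per-tensor std (an illustrative, not
    device-accurate, noise model) and report the output spread."""
    outs = []
    for _ in range(n_trials):
        noisy = copy.deepcopy(model)
        with torch.no_grad():
            for p in noisy.parameters():
                p.add_(rel_std * p.std() * torch.randn_like(p))
        outs.append(noisy(x))
    outs = torch.stack(outs)
    return outs.mean(0), outs.std(0)

class TinyRNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)
        self.head = nn.Linear(32, 4)

    def forward(self, x):
        h, _ = self.rnn(x)          # h: (batch, time, hidden)
        return self.head(h[:, -1])  # predict from the last time step

model = TinyRNN()
x = torch.randn(8, 20, 16)          # batch of 8 sequences, 20 steps each
mean, std = eval_under_weight_noise(model, x)
print(std.mean())                   # output sensitivity to weight noise
```

Repeating this measurement before and after a robustness technique (for example, injecting the same kind of noise during training) shows how much tolerance the technique buys.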

This project offers an opportunity to work at the intersection of machine learning and next-generation AI hardware.

Skills: Python; basic machine learning; PyTorch.

Reading

  1. Rasch, M.J., Mackin, C., Le Gallo, M. et al. Hardware-aware training for large-scale and diverse deep learning inference workloads using in-memory computing-based accelerators. Nat Commun 14, 5282 (2023). https://doi.org/10.1038/s41467-023-40770-4
  2. Le Gallo, M. et al. Using the IBM analog in-memory hardware acceleration kit for neural network training and inference. APL Mach. Learn. 1(4) (2023).