Alexa Steinbrück / Better Images of AI / Licenced by CC-BY 4.0
In their paper Mixed-Precision Neural Networks: A Survey, Mariam Rakka, Mohammed E. Fouda, Pramod Khargonekar and Fadi Kurdahi have reviewed recent frameworks in the literature that address mixed-precision neural network training. Here, they tell us more about mixed-precision neural networks and the main findings from their survey.
Mixed-precision neural networks are neural networks whose precision (i.e., bitwidth allocation) varies across layers, kernels or weights. They are gaining momentum as the need for energy-efficient, high-throughput AI hardware grows. Binary neural networks are considered the most efficient to deploy on hardware; however, they exhibit a non-negligible drop in model accuracy compared to floating-point neural networks, which give the best accuracy but the worst energy and latency efficiency. Mixed-precision neural networks strike a balance between model accuracy and energy and latency efficiency (e.g., less resource-demanding hardware deployment).
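As a minimal illustration (a hypothetical PyTorch sketch, not code from the survey or any of the frameworks it reviews), the snippet below quantizes each layer's weights to a different, hand-picked bitwidth. The layer sizes and bitwidths are arbitrary placeholders; in practice, the per-layer (or per-kernel) bitwidths are chosen by the optimization or search techniques that the survey categorizes.

```python
import torch
import torch.nn as nn

def quantize_weights(w: torch.Tensor, bits: int) -> torch.Tensor:
    """Uniform symmetric (fake) quantization of a weight tensor to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1                 # e.g. 127 for 8 bits
    scale = w.abs().max() / qmax               # per-tensor scale factor
    return torch.round(w / scale).clamp(-qmax, qmax) * scale

# A small example network; the per-layer bitwidths below are illustrative only.
model = nn.Sequential(
    nn.Linear(784, 256),   # first layer: kept at higher precision
    nn.ReLU(),
    nn.Linear(256, 128),   # middle layer: quantized more aggressively
    nn.ReLU(),
    nn.Linear(128, 10),    # last layer: kept at higher precision
)
layer_bits = {0: 8, 2: 4, 4: 8}  # mixed precision: a different bitwidth per layer

with torch.no_grad():
    for idx, bits in layer_bits.items():
        layer = model[idx]
        layer.weight.copy_(quantize_weights(layer.weight, bits))
```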
We summarized most of the recent frameworks in the literature that address mixed-precision neural network training. In particular, we categorized these frameworks according to their optimization technique, thoroughly summarized the methods used and results reported by each, and juxtaposed the frameworks by listing their pros and cons. In addition, we provided recommendations for important future research directions in mixed-precision neural networks.
We conducted a thorough review of many of the recent mixed-precision neural network papers in the literature, analyzed these works, drew insights, and made comparisons. Unfortunately, the code for the algorithms in most of these works is not available. This prevented us from performing an experimental study under the same constraints to make an apples-to-apples comparison. We nevertheless tried to be as consistent as possible and report comparison tables under a similar setup to identify the best frameworks. In addition, we gave some recommendations based on these summary tables and provided comparisons against binary neural networks, which serve as the golden reference for energy and latency efficiency.
The main findings can be summarized in the following points:
Our recommendations focus on the needs of current systems when it comes to neural networks. We recommend that future work in mixed-precision neural networks focus on hardware awareness, the trade-off between model compression and accuracy, the run-time speed of the algorithms, and support for run-time reconfigurability to adapt to changing run-time system requirements.
Our background is more on the hardware side. Hence, we are currently working to better incorporate hardware models, architectures, and DNN model mapping into the optimization process to achieve better energy, latency, and area efficiency. In addition, we are considering the in-memory computing paradigm, which by default provides orders-of-magnitude efficiency gains compared to the von Neumann computing paradigm considered in most of the surveyed mixed-precision works.
Mixed-Precision Neural Networks: A Survey, Mariam Rakka, Mohammed E. Fouda, Pramod Khargonekar, Fadi Kurdahi.
Mariam Rakka received a BE degree (with high distinction) in Computer and Communications Engineering, with two minors in Mathematics and Business Administration, from the American University of Beirut in 2020, and an MEng degree in Electrical and Computer Engineering from the University of California, Irvine in 2022. She is currently a PhD student at the University of California, Irvine. Her research focuses on in-memory computing technologies, hardware accelerators, rare fail event estimation, and hardware-friendly neural networks. She was a DAC'21 Young Fellow. Mariam joined Arm for her summer 2022 internship.
Mohammed E. Fouda received a BSc degree (Hons.) in Electronics and Communications Engineering and an MSc degree in Engineering Mathematics from the Faculty of Engineering, Cairo University, Cairo, Egypt, in 2011 and 2014, respectively. He received a PhD degree from the University of California, Irvine, USA, in 2020. From April 2020 to March 2022, he worked as an assistant researcher at the University of California, Irvine. He is currently a senior research scientist at Rain Neuromorphics Inc. His research interests include analog AI hardware, neuromorphic circuits and systems, and brain-inspired computing.
Pramod Khargonekar received a BTech degree in electrical engineering from IIT Bombay in 1977, an MS degree in mathematics in 1980, and a PhD degree in electrical engineering in 1981, both from the University of Florida. His early career was spent at the University of Florida and the University of Minnesota. In June 2016, he assumed his current position as Vice Chancellor for Research and Distinguished Professor of Electrical Engineering and Computer Science at the University of California, Irvine. He is a Fellow of IEEE, IFAC, and AAAS. His research interests include control systems theory and applications, machine learning for control, and renewable integration in smart electric grids.
Fadi Kurdahi received a BE degree in electrical engineering from the American University of Beirut in 1981 and a PhD from the University of Southern California in 1987. Since then, he has been a faculty member in the Electrical Engineering and Computer Science Department, University of California, Irvine, and is currently the Director of the Center for Embedded & Cyber-physical Systems. He is a Fellow of the IEEE and the AAAS. He received the Distinguished Alumnus Award from his alma mater, the American University of Beirut, in 2008.