A survey of mixed-precision neural networks

09 November 2022




Image: nine schematic neural networks of different shapes, each with a human hand making a different gesture behind it. Alexa Steinbrück / Better Images of AI / Licensed under CC-BY 4.0.

In their paper Mixed-Precision Neural Networks: A Survey, Mariam Rakka, Mohammed E. Fouda, Pramod Khargonekar and Fadi Kurdahi have reviewed recent frameworks in the literature that address mixed-precision neural network training. Here, they tell us more about mixed-precision neural networks and the main findings from their survey.

Could you tell us about mixed-precision neural networks – what are they and why are they an interesting area for study?

Mixed-precision neural networks are neural networks whose precision (i.e., bitwidth allocation) varies across layers, kernels, or weights. They are gaining momentum as the need for energy-efficient, high-throughput AI hardware grows. Binary neural networks are considered the most efficient to deploy on hardware; however, they exhibit a non-negligible drop in model accuracy compared to floating-point neural networks, which give the best accuracy but the worst energy and latency efficiency. Mixed-precision neural networks strike a balance between model accuracy and energy and latency efficiency (e.g., less resource-demanding hardware deployment).
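To make the idea concrete, here is a minimal sketch of per-layer weight quantization in NumPy. The layer names and the bitwidth assignment are purely illustrative assumptions, not taken from the survey; the frameworks reviewed learn or search for such assignments rather than fixing them by hand.

    import numpy as np

    def quantize_symmetric(w: np.ndarray, bits: int) -> np.ndarray:
        """Fake-quantize a tensor onto a symmetric signed integer grid of `bits` bits."""
        qmax = 2 ** (bits - 1) - 1                      # e.g. 127 for 8 bits, 1 for 2 bits
        scale = max(float(np.max(np.abs(w))), 1e-12) / qmax
        return np.round(w / scale).clip(-qmax, qmax) * scale

    # Hypothetical model: sensitive layers keep more bits, robust layers get fewer.
    rng = np.random.default_rng(0)
    layers = {"conv1": rng.standard_normal((16, 3, 3, 3)),
              "conv2": rng.standard_normal((32, 16, 3, 3)),
              "fc":    rng.standard_normal((10, 512))}
    bitwidths = {"conv1": 8, "conv2": 4, "fc": 2}       # the mixed-precision policy

    for name, w in layers.items():
        wq = quantize_symmetric(w, bitwidths[name])
        print(f"{name}: {bitwidths[name]}-bit, quantization MSE {np.mean((w - wq) ** 2):.4f}")

Lower bitwidths shrink storage and arithmetic cost but raise the quantization error, which is exactly the accuracy-versus-efficiency tradeoff described above.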

What aspects of mixed-precision neural networks have you covered in your survey?

We summarized most of the recent frameworks in the literature that address mixed-precision neural network training. In particular, we categorized these frameworks according to their optimization techniques, thoroughly summarized the methods used and the results reported by each, and juxtaposed the frameworks by listing their pros and cons. In addition, we provided recommendations for important future research directions in mixed-precision neural networks.

Could you explain your methodology for conducting the survey?

We conducted a thorough review of many of the recent mixed-precision neural network papers, analyzed these works, drew insights, and made comparisons. Unfortunately, the code for the algorithms in most of these works is not available, which prevented us from performing an experimental study under the same constraints to obtain an apples-to-apples comparison. We nevertheless tried to be as consistent as possible and reported comparison tables under similar setups to identify the best frameworks. In addition, we gave some recommendations based on these summary tables and provided comparisons against binary neural networks, which are the golden reference for energy and latency efficiency.

What were your main findings?

The main findings can be summarized in the following points:

  • Mixed-precision neural networks are significant as the path to achieving energy-delay-area efficiency.
  • Finding the optimal mixed-precision deep neural network (DNN), the one that best balances the tradeoff between accuracy and hardware savings, is still an unsolved problem (a brute-force sketch of this search follows the list).
  • There is a growing need for more dynamic mixed-precision DNNs that can be reconfigured at run-time according to the changing requirements of the running application.
  • The overhead of the algorithm/technique used to assign the mixed precision matters too.
  • There is growing interest in joint optimization approaches that tackle not only the mixed-precision problem but also pruning, and even neural architecture search, which leads to the highest energy-delay-area efficiency.
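To illustrate why the second point is hard, here is a brute-force sketch (our illustration, not a method from the survey) that picks the lowest-error bitwidth assignment under a total bit budget. With three candidate bitwidths and three layers the search space is tiny, but it grows exponentially with depth, which is why the surveyed frameworks rely on learned, differentiable, or heuristic search instead.

    import itertools
    import numpy as np

    def quant_error(w: np.ndarray, bits: int) -> float:
        """Mean-squared error of symmetric uniform fake quantization."""
        qmax = 2 ** (bits - 1) - 1
        scale = max(float(np.max(np.abs(w))), 1e-12) / qmax
        wq = np.round(w / scale).clip(-qmax, qmax) * scale
        return float(np.mean((w - wq) ** 2))

    rng = np.random.default_rng(0)
    weights = [rng.standard_normal(s) for s in [(16, 27), (32, 144), (10, 256)]]
    sizes = [w.size for w in weights]
    budget = 5 * sum(sizes)                             # average of 5 bits per weight

    best = None
    for bits in itertools.product([2, 4, 8], repeat=len(weights)):
        cost = sum(b * n for b, n in zip(bits, sizes))
        if cost > budget:
            continue                                    # violates the hardware budget
        err = sum(quant_error(w, b) for w, b in zip(weights, bits))
        if best is None or err < best[0]:
            best = (err, bits)

    print("best assignment:", best[1], "proxy error:", round(best[0], 6))

A real framework would replace the quantization-error proxy with task accuracy and the bit budget with measured energy, delay, or area, which makes the problem far more expensive to evaluate.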

As part of your work you proposed some guidelines for future mixed-precision frameworks. Could you tell us a bit about those recommendations?

We focus on the needs of current systems when it comes to neural networks. We recommend that future work on mixed-precision neural networks focus on hardware awareness, the trade-off between model compression and accuracy, the run-time speed of the algorithms, and support for run-time reconfigurability to adapt to changing system requirements.
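As a toy illustration of that last recommendation (our sketch, not a framework from the survey), a layer can store one full-precision weight tensor and fake-quantize it to whichever supported bitwidth the system requests at run-time:

    import numpy as np

    def fake_quantize(w: np.ndarray, bits: int) -> np.ndarray:
        """Symmetric uniform fake quantization to a signed `bits`-bit grid."""
        qmax = 2 ** (bits - 1) - 1
        scale = max(float(np.max(np.abs(w))), 1e-12) / qmax
        return np.round(w / scale).clip(-qmax, qmax) * scale

    class SwitchableLinear:
        """Toy linear layer whose effective weight bitwidth can change at run-time."""
        def __init__(self, weight: np.ndarray, supported_bits=(2, 4, 8)):
            self.weight = weight                        # kept in full precision
            self.supported_bits = supported_bits
            self.bits = max(supported_bits)             # start at the highest precision

        def set_precision(self, bits: int) -> None:
            if bits not in self.supported_bits:
                raise ValueError(f"unsupported bitwidth: {bits}")
            self.bits = bits

        def __call__(self, x: np.ndarray) -> np.ndarray:
            return x @ fake_quantize(self.weight, self.bits).T

    # A power-aware controller could drop precision when the battery runs low.
    rng = np.random.default_rng(0)
    layer = SwitchableLinear(rng.standard_normal((10, 64)))
    x = rng.standard_normal((1, 64))
    y_hi = layer(x)                                     # 8-bit weights
    layer.set_precision(2)
    y_lo = layer(x)                                     # 2-bit weights, cheaper but noisier
    print("output MSE between 8-bit and 2-bit modes:", float(np.mean((y_hi - y_lo) ** 2)))

Storing full-precision weights costs memory but avoids retraining when the precision changes; more sophisticated schemes in the literature train a single set of weights to perform well at several bitwidths simultaneously.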

What are your plans for future work in this area?

Our background is more towards hardware. Hence, we are currently working on incorporating better hardware models, architectures, and DNN model mappings into the optimization process to achieve better energy, latency, and area efficiency. In addition, we are considering the in-memory computing paradigm, which by design offers orders-of-magnitude efficiency gains over the von Neumann computing paradigm assumed in most of the surveyed mixed-precision works.

Read the work in full

Mixed-Precision Neural Networks: A Survey, Mariam Rakka, Mohammed E. Fouda, Pramod Khargonekar, Fadi Kurdahi.

About the authors

Mariam Rakka received a BE degree (with high distinction) in Computer and Communications Engineering, with two minors in Mathematics and Business Administration, from the American University of Beirut in 2020, and an MEng degree in Electrical and Computer Engineering from UC Irvine in 2022. She is currently a PhD student at UC Irvine. Her research focuses on in-memory computing technologies, hardware accelerators, rare fail event estimation, and hardware-friendly neural networks. She was a DAC'21 Young Fellow and joined Arm for her summer 2022 internship.

Mohammed E. Fouda received a BSc degree (Hons.) in Electronics and Communications Engineering and an MSc degree in Engineering Mathematics from the Faculty of Engineering, Cairo University, Cairo, Egypt, in 2011 and 2014, respectively, and a PhD degree from the University of California, Irvine, USA, in 2020. From April 2020 to March 2022, he worked as an assistant researcher at the University of California, Irvine. He is currently a senior research scientist at Rain Neuromorphics Inc. His research interests include analog AI hardware, neuromorphic circuits and systems, and brain-inspired computing.

Pramod Khargonekar received a BTech degree in Electrical Engineering from IIT Bombay in 1977, and an MS degree in Mathematics in 1980 and a PhD degree in Electrical Engineering in 1981, both from the University of Florida. His early career was spent at the University of Florida and the University of Minnesota. In June 2016, he assumed his current position as Vice Chancellor for Research and Distinguished Professor of Electrical Engineering and Computer Science at the University of California, Irvine. He is a Fellow of IEEE, IFAC, and AAAS. His research interests include control systems theory and applications, machine learning for control, and renewable integration in smart electric grids.

Fadi Kurdahi received a BE degree in Electrical Engineering from the American University of Beirut in 1981 and a PhD from the University of Southern California in 1987. Since then, he has been a faculty member in the Electrical Engineering and Computer Science Department at the University of California, Irvine, where he is currently the Director of the Center for Embedded & Cyber-physical Systems. He is a Fellow of the IEEE and the AAAS. He received the Distinguished Alumnus Award from his alma mater, the American University of Beirut, in 2008.



