AIhub.org
 

A survey of mixed-precision neural networks


09 November 2022




Image: nine small schematic representations of differently shaped neural networks, with a human hand making a different gesture behind each network. Alexa Steinbrück / Better Images of AI / Licenced by CC-BY 4.0

In their paper Mixed-Precision Neural Networks: A Survey, Mariam Rakka, Mohammed E. Fouda, Pramod Khargonekar and Fadi Kurdahi have reviewed recent frameworks in the literature that address mixed-precision neural network training. Here, they tell us more about mixed-precision neural networks and the main findings from their survey.

Could you tell us about mixed-precision neural networks – what are they and why are they an interesting area for study?

Mixed-precision neural networks are neural networks with varying precision (i.e., bitwidth allocation) across layers, kernels, or weights. They are gaining momentum as the need for energy-efficient, high-throughput AI hardware grows. Binary neural networks are considered the most efficient to deploy on hardware; however, they exhibit a non-negligible drop in model accuracy compared to floating-point neural networks, which give the best accuracy but the worst energy and latency efficiency. Mixed-precision neural networks strike a balance between model accuracy and energy and latency efficiency (e.g., less resource-demanding hardware deployment).
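
As a concrete illustration (a minimal NumPy sketch, not code from the survey), the snippet below applies simple symmetric uniform quantization with a different bitwidth per layer; the layer names, shapes, and bitwidth assignments are purely illustrative.

```python
import numpy as np

def quantize(weights, bits):
    """Symmetric uniform quantization of a weight tensor to `bits` bits, then de-quantize."""
    levels = 2 ** (bits - 1) - 1                 # e.g. 127 positive levels for 8 bits
    scale = np.max(np.abs(weights)) / levels
    q = np.clip(np.round(weights / scale), -levels, levels)
    return q * scale                             # back to float to measure the error

# Hypothetical per-layer bitwidth allocation: sensitive layers keep more bits.
rng = np.random.default_rng(0)
layers = {"conv1": rng.standard_normal((16, 3, 3, 3)),
          "conv2": rng.standard_normal((32, 16, 3, 3)),
          "fc":    rng.standard_normal((10, 512))}
bitwidths = {"conv1": 8, "conv2": 4, "fc": 2}    # mixed precision across layers

for name, w in layers.items():
    w_q = quantize(w, bitwidths[name])
    print(f"{name}: {bitwidths[name]} bits, quantization MSE = {np.mean((w - w_q) ** 2):.5f}")
```

The quantization error grows as the bitwidth shrinks, which is exactly the accuracy-versus-efficiency trade-off that mixed-precision assignment tries to manage.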

What aspects of mixed-precision neural networks have you covered in your survey?

We summarized most of the recent frameworks in the literature that address mixed-precision neural network training. In particular, we categorized these frameworks according to their optimization technique, thoroughly summarized the methods used and the results reported by each, and juxtaposed the frameworks by listing their pros and cons. In addition, we provided recommendations for important future research directions in mixed-precision neural networks.

Could you explain your methodology for conducting the survey?

We conducted a thorough review of many of the recent mixed-precision neural network papers in the literature, analyzed these works, drew insights, and made comparisons. Unfortunately, the code for the algorithms in most of these works is not available, which prevented us from performing an experimental study under identical constraints to obtain an apples-to-apples comparison. Nevertheless, we tried to be as consistent as possible and reported comparison tables under similar setups to identify the best frameworks. In addition, we gave some recommendations based on these summary tables and provided comparisons against binary neural networks, which serve as the golden reference for energy and latency efficiency.

What were your main findings?

The main findings can be summarized in the following points:

  • Mixed-precision neural networks are significant as the path toward energy-delay-area efficiency.
  • Finding the optimal mixed-precision deep neural network (DNN), one that balances the trade-off between accuracy and hardware savings, is still an unsolved problem (see the sketch after this list).
  • There is a growing need for more dynamic mixed-precision DNNs that can be reconfigured at run-time according to the changing requirements of the running application.
  • The overhead of the algorithm/technique used to assign the mixed precision also matters.
  • There is growing interest in joint optimization approaches that tackle not only the mixed-precision problem but also pruning and even neural architecture search, which leads to the highest energy-delay-area efficiency.
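
To make the bitwidth-assignment problem concrete, here is a toy greedy heuristic (our own illustration, not a method from any surveyed framework): start every layer at 8 bits and repeatedly remove one bit from the layer whose quantization error grows the least, until a total bit budget is met. The layer names, shapes, and budget are hypothetical.

```python
import numpy as np

def quant_mse(w, bits):
    """Mean squared error introduced by symmetric uniform quantization to `bits` bits."""
    levels = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / levels
    w_q = np.clip(np.round(w / scale), -levels, levels) * scale
    return np.mean((w - w_q) ** 2)

rng = np.random.default_rng(1)
layers = {"conv1": rng.standard_normal((16, 27)),
          "conv2": rng.standard_normal((32, 144)),
          "fc":    rng.standard_normal((10, 512))}

bits = {name: 8 for name in layers}      # start at 8 bits everywhere
budget = 14                              # hypothetical total-bit budget across layers

while sum(bits.values()) > budget:
    # Greedily drop one bit from the layer whose error would grow the least.
    candidates = {n: quant_mse(w, bits[n] - 1) for n, w in layers.items() if bits[n] > 2}
    victim = min(candidates, key=candidates.get)
    bits[victim] -= 1

print("Assigned bitwidths:", bits)
```

Real frameworks replace this crude error proxy with task accuracy, hardware cost models, or learned sensitivity measures, and the cost of running that search is itself part of the overhead mentioned above.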

As part of your work you proposed some guidelines for future mixed-precision frameworks. Could you tell us a bit about those recommendations?

We focus on the needs of current systems when it comes to neural networks. We recommend that future works in mixed-precision neural networks focus on hardware-awareness, the trade-off between model compression and accuracy, run-time speed of the algorithms, and the support for run-time reconfigurability to adapt to changing run-time system requirements.
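
As an example of what run-time reconfigurability could look like (a toy sketch under our own assumptions, not a method from the paper), one option is to store weights once at a high precision and derive lower-precision views on demand by dropping least-significant bits, so a deployed model can trade accuracy for energy as system requirements change.

```python
import numpy as np

def quantize_to_int(w, bits=8):
    """Quantize float weights once to signed `bits`-bit integers plus a scale factor."""
    levels = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / levels
    return np.clip(np.round(w / scale), -levels, levels).astype(np.int32), scale

def view_at_precision(q8, bits):
    """Derive a lower-precision view by zeroing least-significant bits (no retraining)."""
    shift = 8 - bits
    return (q8 >> shift) << shift        # keep only the top `bits` bits

rng = np.random.default_rng(2)
w = rng.standard_normal((64, 64))
q8, scale = quantize_to_int(w, bits=8)   # stored once at 8 bits

for mode_bits in (8, 4, 2):              # e.g. switch precision with the latency or energy budget
    w_view = view_at_precision(q8, mode_bits) * scale
    print(f"{mode_bits}-bit mode, reconstruction MSE = {np.mean((w - w_view) ** 2):.5f}")
```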

What are your plans for future work in this area?

Our background is more towards hardware. Hence, we are currently working to better integrate hardware models, architectures, and DNN model mapping into the optimization process, to achieve better energy, latency, and area efficiency. In addition, we are considering the in-memory computing paradigm, which by default provides orders-of-magnitude efficiency gains over the von Neumann computing paradigm assumed in most of the surveyed mixed-precision works.

Read the work in full

Mixed-Precision Neural Networks: A Survey, Mariam Rakka, Mohammed E. Fouda, Pramod Khargonekar, Fadi Kurdahi.

About the authors

Mariam Rakka received a BE degree (with high distinction) in Computer and Communications Engineering, with two minors in Mathematics and Business Administration, from the American University of Beirut in 2020, and an MEng degree in Electrical and Computer Engineering from UC Irvine in 2022. Currently, Mariam is a PhD student at UC Irvine. Her research focuses on in-memory computing technologies, hardware accelerators, rare fail event estimation, and hardware-friendly neural networks. She was a DAC’21 Young Fellow. Mariam joined Arm for her summer 2022 internship.

Mohammed E. Fouda received a BSc degree (Hons.) in Electronics and Communications Engineering and an MSc degree in Engineering Mathematics from the Faculty of Engineering, Cairo University, Cairo, Egypt, in 2011 and 2014, respectively. Fouda received a PhD degree from the University of California, Irvine, USA, in 2020. From April 2020 to March 2022, he worked as an assistant researcher at the University of California, Irvine. Currently, he is a senior research scientist at Rain Neuromorphics Inc. His research interests include analog AI hardware, neuromorphic circuits and systems, and brain-inspired computing.

Pramod Khargonekar received a BTech degree in electrical engineering in 1977 from IIT Bombay, an MS degree in mathematics in 1980, and a PhD degree in electrical engineering in 1981 from the University of Florida. His early career was at the University of Florida and the University of Minnesota. In June 2016, he assumed his current position as Vice Chancellor for Research and Distinguished Professor of Electrical Engineering and Computer Science at the University of California, Irvine. He is a Fellow of IEEE, IFAC, and AAAS. His research interests include control systems theory and applications, machine learning for control, and renewable integration in smart electric grids.

Fadi Kurdahi received a BE degree in electrical engineering from the American University of Beirut in 1981 and a PhD from the University of Southern California in 1987. Since then, he has been a faculty member in the Electrical Engineering and Computer Science Department at the University of California, Irvine, and is currently the Director of the Center for Embedded & Cyber-physical Systems. He is a Fellow of the IEEE and the AAAS. He received the Distinguished Alumnus Award from his alma mater, the American University of Beirut, in 2008.




AIhub is dedicated to free high-quality information about AI.



