Why ChatGPT struggles with math


07 November 2024




Have you ever tried to use an AI tool like ChatGPT to do some math and found it doesn’t always add up? It turns out there’s a reason for that.

As large language models (LLMs) like OpenAI’s ChatGPT become more ubiquitous, people increasingly rely on them for work and research assistance. Yuntian Deng, assistant professor at the David R. Cheriton School of Computer Science, discusses some of the challenges in LLMs’ reasoning capabilities, particularly in math, and explores the implications of using these models to aid problem-solving.

What flaw did you discover in ChatGPT’s ability to do math?

As I explained in a recent post on X, the latest reasoning variant of ChatGPT, o1, struggles with large-digit multiplication, especially when multiplying numbers beyond nine digits. That is a notable improvement over the previous GPT-4o model, which struggled even with four-digit multiplication, but it remains a major flaw.

What implications does this have regarding the tool’s ability to reason?

Large-digit multiplication is a useful test of reasoning because it requires a model to apply principles learned during training to new test cases. Humans can do this naturally. For instance, if you teach a high school student how to multiply nine-digit numbers, they can easily extend that understanding to handle ten-digit multiplication, demonstrating a grasp of the underlying principles rather than mere memorization.

In contrast, LLMs often struggle to generalize beyond the data they have been trained on. For example, if an LLM is trained on data involving multiplication of up to nine-digit numbers, it typically cannot generalize to ten-digit multiplication.
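To make that test concrete, here is a minimal sketch in Python of how one might probe digit-length generalization: generate random n-digit multiplication problems, use Python's exact big-integer arithmetic as ground truth, and measure how accuracy changes as n grows past the lengths seen in training. The query_model function is a hypothetical stand-in for whatever call returns the model's text reply; it is not part of any specific API.

import random

def make_probe(n_digits):
    # Draw two random n-digit numbers and build a prompt plus the exact answer.
    a = random.randint(10 ** (n_digits - 1), 10 ** n_digits - 1)
    b = random.randint(10 ** (n_digits - 1), 10 ** n_digits - 1)
    return f"What is {a} * {b}? Reply with only the number.", a * b

def accuracy(query_model, n_digits, trials=20):
    # Fraction of exact matches; Python's arbitrary-precision ints are the ground truth.
    correct = 0
    for _ in range(trials):
        prompt, answer = make_probe(n_digits)
        reply = query_model(prompt)  # hypothetical stand-in for an LLM API call
        digits = "".join(ch for ch in reply if ch.isdigit())
        correct += digits == str(answer)
    return correct / trials

Running this for, say, 4, 9, and 10 digits would reproduce the comparison described above: a model that has only picked up patterns covering the digit lengths in its training data should see accuracy drop sharply just past that range.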

As LLMs become more powerful, their impressive performance on challenging benchmarks can create the perception that they can “think” at advanced levels. It’s tempting to rely on them to solve novel problems or even make decisions. However, the fact that even o1 struggles with reliably solving large-digit multiplication problems indicates that LLMs still face challenges when asked to generalize to new tasks or unfamiliar domains.

Why is it important to study how these LLMs “think”?

Companies like OpenAI haven’t fully disclosed the details of how their models are trained or the data they use. Understanding how these AI models operate allows researchers to identify their strengths and limitations, which is essential for improving them. Moreover, knowing these limitations helps us understand which tasks are best suited for LLMs and where human expertise is still crucial.




University of Waterloo

