AIhub.org
 

Why ChatGPT struggles with math


07 November 2024




Have you ever tried to use an AI tool like ChatGPT to do some math and found it doesn’t always add up? It turns out there’s a reason for that.

As large language models (LLMs) like OpenAI's ChatGPT become more ubiquitous, people increasingly rely on them for work and research assistance. Yuntian Deng, assistant professor at the David R. Cheriton School of Computer Science at the University of Waterloo, discusses some of the challenges in LLMs' reasoning capabilities, particularly in math, and explores the implications of using these models to aid problem-solving.

What flaw did you discover in ChatGPT’s ability to do math?

As I explained in a recent post on X, ChatGPT's latest reasoning variant, o1, struggles with large-digit multiplication, especially when multiplying numbers beyond nine digits. o1 is a notable improvement over the previous GPT-4o model, which struggled even with four-digit multiplication, but large-digit multiplication remains a major flaw.

What implications does this have regarding the tool’s ability to reason?

Large-digit multiplication is a useful test of reasoning because it requires a model to apply principles learned during training to new test cases. Humans can do this naturally. For instance, if you teach a high school student how to multiply nine-digit numbers, they can easily extend that understanding to handle ten-digit multiplication, demonstrating a grasp of the underlying principles rather than mere memorization.

In contrast, LLMs often struggle to generalize beyond the data they have been trained on. For example, if an LLM is trained on data involving multiplication of up to nine-digit numbers, it typically cannot generalize to ten-digit multiplication.
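The generalization test described above can be sketched in a few lines: generate random n-digit multiplication problems, grade the model's free-text answers against exact arithmetic, and watch where accuracy collapses as operand size grows past the training distribution. This is a minimal sketch, not Deng's actual evaluation code; `ask_model` is a hypothetical stand-in for whatever LLM query function you use, and the prompt format is an assumption.

```python
import random

def make_problem(n_digits, rng):
    """Sample two uniformly random n-digit integers."""
    lo, hi = 10 ** (n_digits - 1), 10 ** n_digits - 1
    return rng.randint(lo, hi), rng.randint(lo, hi)

def grade(answer_text, a, b):
    """Check a model's free-text answer against exact arithmetic,
    tolerating commas/spaces the model may insert between digits."""
    cleaned = answer_text.strip().replace(",", "").replace(" ", "")
    try:
        return int(cleaned) == a * b
    except ValueError:
        return False

def accuracy_by_digits(ask_model, digit_range=range(2, 13), trials=20):
    """Measure accuracy of `ask_model(prompt) -> str` per operand size.
    A sharp drop just past the operand sizes seen in training suggests
    memorization rather than a learned multiplication procedure."""
    results = {}
    for n in digit_range:
        rng = random.Random(n)  # fixed seed per size, for reproducibility
        correct = sum(
            grade(ask_model(f"What is {a} * {b}?"), a, b)
            for a, b in (make_problem(n, rng) for _ in range(trials))
        )
        results[n] = correct / trials
    return results
```

Plotting the resulting accuracy against digit count makes the cutoff visible: a model that has truly learned the long-multiplication algorithm should degrade gracefully, while one relying on memorized patterns falls off a cliff at a particular operand size.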

As LLMs become more powerful, their impressive performance on challenging benchmarks can create the perception that they can “think” at advanced levels. It’s tempting to rely on them to solve novel problems or even make decisions. However, the fact that even o1 struggles with reliably solving large-digit multiplication problems indicates that LLMs still face challenges when asked to generalize to new tasks or unfamiliar domains.

Why is it important to study how these LLMs “think”?

Companies like OpenAI haven’t fully disclosed the details of how their models are trained or the data they use. Understanding how these AI models operate allows researchers to identify their strengths and limitations, which is essential for improving them. Moreover, knowing these limitations helps us understand which tasks are best suited for LLMs and where human expertise is still crucial.




University of Waterloo

            AIhub is supported by:



Subscribe to AIhub newsletter on substack















 















©2026.02 - Association for the Understanding of Artificial Intelligence