
Why ChatGPT struggles with math


07 November 2024




Have you ever tried to use an AI tool like ChatGPT to do some math and found it doesn’t always add up? It turns out there’s a reason for that.

As large language models (LLMs) like OpenAI’s ChatGPT become more ubiquitous, people increasingly rely on them for work and research assistance. Yuntian Deng, assistant professor at the University of Waterloo’s David R. Cheriton School of Computer Science, discusses some of the challenges in LLMs’ reasoning capabilities, particularly in math, and explores the implications of using these models to aid problem-solving.

What flaw did you discover in ChatGPT’s ability to do math?

As I explained in a recent post on X, o1, the latest reasoning variant of ChatGPT, struggles with large-digit multiplication, especially when multiplying numbers beyond nine digits. This is a notable improvement over the previous GPT-4o model, which struggled even with four-digit multiplication, but it’s still a major flaw.

What implications does this have regarding the tool’s ability to reason?

Large-digit multiplication is a useful test of reasoning because it requires a model to apply principles learned during training to new test cases. Humans can do this naturally. For instance, if you teach a high school student how to multiply nine-digit numbers, they can easily extend that understanding to handle ten-digit multiplication, demonstrating a grasp of the underlying principles rather than mere memorization.

In contrast, LLMs often struggle to generalize beyond the data they have been trained on. For example, if an LLM is trained on data involving multiplication of up to nine-digit numbers, it typically cannot generalize to ten-digit multiplication.
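To make this concrete, here is a minimal sketch of the kind of evaluation described above: sample pairs of n-digit numbers, ask a model for their product, and score the replies against Python’s exact big-integer arithmetic. The `ask_model` function is a hypothetical placeholder for whatever LLM API you use; everything else is standard Python.

```python
import random
import re

def ask_model(prompt: str) -> str:
    """Hypothetical placeholder: send `prompt` to an LLM of your
    choice and return its raw text reply."""
    raise NotImplementedError("wire this up to your LLM API")

def sample_operand(n_digits: int) -> int:
    """Uniformly sample an integer with exactly n_digits digits."""
    return random.randint(10 ** (n_digits - 1), 10 ** n_digits - 1)

def extract_last_integer(text: str) -> int | None:
    """Pull the final integer out of a free-form model reply."""
    matches = re.findall(r"-?\d[\d,]*", text)
    return int(matches[-1].replace(",", "")) if matches else None

def multiplication_accuracy(n_digits: int, trials: int = 20) -> float:
    """Fraction of trials where the model's product is exactly right."""
    correct = 0
    for _ in range(trials):
        a, b = sample_operand(n_digits), sample_operand(n_digits)
        reply = ask_model(f"What is {a} * {b}? Reply with just the number.")
        if extract_last_integer(reply) == a * b:  # Python ints are exact
            correct += 1
    return correct / trials

# e.g.: for n in range(2, 12): print(n, multiplication_accuracy(n))
```

Because Python integers have arbitrary precision, the ground truth `a * b` is exact at any digit length. That is what makes this a clean probe of generalization: the checker never gets harder as n grows, only the model’s job does.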

As LLMs become more powerful, their impressive performance on challenging benchmarks can create the perception that they can “think” at advanced levels. It’s tempting to rely on them to solve novel problems or even make decisions. However, the fact that even o1 struggles with reliably solving large-digit multiplication problems indicates that LLMs still face challenges when asked to generalize to new tasks or unfamiliar domains.

Why is it important to study how these LLMs “think”?

Companies like OpenAI haven’t fully disclosed the details of how their models are trained or the data they use. Understanding how these AI models operate allows researchers to identify their strengths and limitations, which is essential for improving them. Moreover, knowing these limitations helps us understand which tasks are best suited for LLMs and where human expertise is still crucial.




University of Waterloo



