October 6, 2024

A.I. Struggles with Math, but Can Summarize Books.


By: Yvonne Liu

OpenAI’s ChatGPT and other chatbots can summarize long texts, answer questions, and even write poetry, but they stumble over math problems. Why?


Modern A.I. is built to be flexible, generating responses based on patterns learned from data rather than following a fixed set of rules. That flexibility works well for language (poetry, for example), but it doesn’t work well for math.

These chatbots can do a lot, but they have limitations. They struggle with simple arithmetic and with math word problems that require multiple steps. A.I. is getting better, but it clearly remains flawed.


“The A.I. chatbots have difficulty with math because they were never designed to do it,” said Kristian Hammond, a computer science professor and artificial intelligence researcher at Northwestern University. A.I. chatbots are fine-tuned to determine probabilities, not to follow set rules. That is a far cry from computers of the past, which could fairly be described as number-crunching math whizzes: machines programmed to follow step-by-step rules. That approach was powerful but brittle, and it never worked as a path to artificial intelligence. The breakthrough came more than a decade ago with a different approach, the neural network, which is loosely modeled on the human brain.
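To make Professor Hammond’s distinction concrete, here is a toy Python sketch. The probabilities are invented, and no real chatbot works from a lookup table like this; the point is only the contrast between a rule-following calculation, which gets 7 × 8 right every time, and a model that samples its answer from learned probabilities, which only usually does.

```python
import random

# Toy illustration (not how any real model is implemented): a chatbot
# picks its next word by sampling from learned probabilities instead of
# computing an answer. These probabilities are made up for the example.
next_token_probs = {"56": 0.90, "54": 0.06, "48": 0.04}  # for the prompt "7 x 8 ="

def sample_next_token(probs):
    # Weighted random choice: the "answer" is a likely guess, not a calculation.
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(7 * 8)                                # rule-following: 56, every time
print(sample_next_token(next_token_probs))  # probabilistic: usually "56"
```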


Khan Academy, an online education nonprofit, is experimenting with an A.I. chatbot as a teaching assistant. To help students with math, the chatbot sends the problem to a calculator program behind the scenes; students see “Doing math” on the screen, and when the calculator finishes, the answer comes back. Rather than let the chatbot get things wrong, Khan Academy hands math off to tools actually built for it. ChatGPT uses a similar workaround: for certain problems, such as large-number multiplication and division, it gets help from a separate calculator program.
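The workaround amounts to routing: detect that a request is arithmetic, and hand it to a deterministic tool instead of the language model. Here is a minimal Python sketch of that idea; the routing check and the function names are this article’s own illustration, not Khan Academy’s or OpenAI’s actual code.

```python
import re

# A minimal sketch of the calculator workaround. The routing rule and
# function names are hypothetical, not any company's real implementation.

def looks_like_arithmetic(question: str) -> bool:
    # Crude check: the text contains only digits, whitespace, and operators.
    return re.fullmatch(r"[\d\s+\-*/().]+", question) is not None

def calculator(expression: str) -> str:
    # Deterministic tool. (A production system would use a proper math
    # parser; eval() is used here only because the input was vetted above.)
    return str(eval(expression))

def chatbot_reply(question: str) -> str:
    # Stand-in for a call to the language model.
    return "...free-form generated text..."

def answer(question: str) -> str:
    # Route math to the calculator; everything else goes to the chatbot.
    if looks_like_arithmetic(question):
        return calculator(question)
    return chatbot_reply(question)

print(answer("123456789 * 987654321"))  # handled by the calculator, not the model
```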


Mr. Schneider, a high school math teacher in New York, believes that even if schools try to ban A.I., students will still use it. In his class, these slip-ups have become a learning opportunity: students compare their answers to the bot’s. Who’s right? Who’s wrong? How? And why? “It teaches them to look at things with a critical eye and sharpens critical thinking,” he said. “It’s similar to asking another human: they might be right and they might be wrong.”


These A.I. chatbots excel on material they have consumed in large quantities as training data, like textbooks and standardized tests; in many cases they have already analyzed something very similar, or even the exact same question. A.I.’s uneven performance in math has ignited a debate in the A.I. community about how to move the field forward. Broadly, there are two camps. One believes that advanced neural networks, known as large language models, will power steady progress toward A.G.I. (artificial general intelligence), a computer able to do anything the human brain can do. This is the dominant view in Silicon Valley.


The other camp believes that large language models have little common sense or capacity for logical reasoning. A prominent figure on this side is Dr. Yann LeCun, who has insisted on a different approach called world modeling: in essence, a system that lets the bots learn about the world much the way humans do. That approach, however, may take a decade or so to achieve.


In the meantime, don’t put too much trust in A.I. programs, and don’t believe everything they tell you.

Image credit: Patrick Gamelkoorn
