By: Leaya Chen
When artificial intelligence was first pursued in the 1940s, computers were programmed to follow step-by-step instructions and retrieve information from orderly databases. The results weren't very smart, and early efforts at A.I. stalled.
But about ten years ago, a different approach surfaced and began to produce astonishing gains. The underlying technology is loosely modeled on the human brain. This kind of A.I. is not programmed with explicit rules; instead, it learns by analyzing vast amounts of data. It generates words by predicting, based on everything it has already absorbed, which words are likely to come next.
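The next-word idea can be illustrated with a toy sketch. This is only a simplified stand-in for the real thing: actual chatbots use neural networks trained on enormous amounts of text, while this example just counts which word follows which in a tiny sample and samples from those counts.

```python
import random
from collections import defaultdict

# Toy illustration of next-word prediction: count which word follows
# which in a tiny training text, then generate by sampling those counts.
# Real chatbots use neural networks and vastly more data; this shows
# only the core idea of guessing a likely next word.

text = "the cat sat on the mat the cat ate the fish"
words = text.split()

# Build a table: word -> list of words observed to follow it
following = defaultdict(list)
for current, nxt in zip(words, words[1:]):
    following[current].append(nxt)

def generate(start, length=5, seed=0):
    """Generate words by repeatedly picking an observed next word."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))
```

Every word the sketch produces follows a word it actually saw in training, which is also why such a system can sound fluent while having no notion of whether an arithmetic claim is true.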
In the past few years, artificial intelligence has become familiar to almost everyone. But although A.I. is talented with language, it struggles with math.
Kristian Hammond is a computer science professor and artificial intelligence researcher at Northwestern University.
"The A.I. chatbots have difficulty with math because they were never designed to do it," Hammond said. "This technology does brilliant things, but it doesn't do everything. Everybody wants the answer to A.I. to be one thing. That's foolish."
Unlike Hammond, some people think A.I. has to be right. One of them is Kirk Schneider, a high school math teacher in New York.
"They're usually fine, but usually isn't good enough in math. It's got to be accurate," Schneider said. "It's got to be right."
These wrong answers have become a teaching opportunity. Schneider encourages his students to work out a problem themselves and then compare their answers to the chatbot's.
“It teaches them to look at things with a critical eye and sharpens critical thinking,” Schneider said. “It’s similar to asking another human — they might be right and they might be wrong.”
It was a life lesson for his students: Don’t believe everything an A.I. program tells you.
Image credit: Tara Winstead