A new study from Apple's AI researchers has exposed significant limitations in the reasoning capabilities of large language models (LLMs).
In a newly released paper titled "GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models," the researchers argued that LLMs, despite their impressive language skills, demonstrate a troubling degree of inconsistency when solving mathematical problems.
The study found that these models falter on even simple mathematical problems when the wording of a query is changed only slightly.
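The perturbation idea behind such benchmarks can be sketched in a few lines. The snippet below is an illustration only, not the paper's actual code: it generates variants of one word problem by swapping surface details (names and quantities) while the underlying arithmetic, and therefore the ground-truth answer, stays trivially computable.

```python
import random

# Illustrative sketch (hypothetical, not the GSM-Symbolic implementation):
# build many variants of one word problem from a symbolic template,
# changing only surface details while the math stays the same.
TEMPLATE = ("{name} picks {a} apples on Monday and {b} apples on Tuesday. "
            "How many apples does {name} have in total?")

def make_variant(rng: random.Random) -> tuple[str, int]:
    """Return one reworded problem and its ground-truth answer."""
    name = rng.choice(["Liam", "Sophia", "Noah", "Ava"])
    a, b = rng.randint(2, 20), rng.randint(2, 20)
    return TEMPLATE.format(name=name, a=a, b=b), a + b

rng = random.Random(0)
variants = [make_variant(rng) for _ in range(3)]
for question, answer in variants:
    print(question, "->", answer)
```

A consistent reasoner should score the same across all such variants; the study's finding is that LLM accuracy instead shifts with these superficial rewordings.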
Read the full article on Computing.