
Apple study reveals major AI flaw in OpenAI, Google, and Meta LLMs [Video]

Large Language Models (LLMs) may not be as smart as they seem, according to a study from Apple researchers.

LLMs from OpenAI, Google, Meta, and others have been touted for their impressive reasoning skills. But research suggests their purported intelligence may be closer to “sophisticated pattern matching” than “true logical reasoning.” Yep, even OpenAI’s o1 advanced reasoning model.

The most common benchmark for reasoning skills is a test called GSM8K, but since it’s so popular, there’s a risk of data contamination. That means LLMs might know the answers to the test because they were trained on those answers, not because of their inherent intelligence.

To test this, the researchers developed a new benchmark called GSM-Symbolic, which keeps the essence of the reasoning problems but changes variables such as names and numbers, adjusts complexity, and adds irrelevant information. What they discovered was surprising “fragility” in LLM performance. The study tested over 20 models including OpenAI’s …
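To make the idea concrete, here is a minimal sketch of what such a symbolic template could look like. This is not the Apple researchers’ actual implementation; the problem, names, and distractor sentences are invented for illustration. It shows how one GSM8K-style question can be resampled into many variants whose surface details change while the underlying arithmetic stays the same, so a model that merely memorized the original question would stumble.

```python
import random

# Hypothetical sketch of a "symbolic" word-problem template: names and numbers
# are resampled, and an irrelevant clause is sometimes appended, while the
# ground-truth answer is recomputed from the sampled values.

TEMPLATE = (
    "{name} picks {a} apples on Monday and {b} apples on Tuesday. "
    "{distractor}How many apples does {name} have in total?"
)

NAMES = ["Sophie", "Liam", "Ava", "Noah"]
DISTRACTORS = [
    "",  # no irrelevant information
    "Five of the apples are slightly smaller than average. ",  # irrelevant detail
]

def make_variant(seed: int) -> tuple[str, int]:
    """Generate one variant of the problem and its correct answer."""
    rng = random.Random(seed)
    a, b = rng.randint(2, 50), rng.randint(2, 50)
    question = TEMPLATE.format(
        name=rng.choice(NAMES),
        a=a,
        b=b,
        distractor=rng.choice(DISTRACTORS),
    )
    return question, a + b  # the answer does not depend on the surface changes

if __name__ == "__main__":
    for seed in range(3):
        q, answer = make_variant(seed)
        print(f"Q: {q}\nExpected answer: {answer}\n")
```

A model that truly reasons should answer every variant correctly; a model that pattern-matches on memorized phrasing, or is distracted by the irrelevant clause, will show exactly the performance drops the study describes.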
