Generative AI Lacks Understanding, Say Researchers at MIT
Steve Jones, November 7, 2024
MIT's news platform news.mit.edu published an article by Adam Zewe on November 5, 2024, reporting that even the best-performing large language models don't form a true model of the world and its rules, and can thus fail unexpectedly on similar tasks. It's relevant here because, even though LLMs are making a lot of money and providing tools for solving hard problems, they're unrelated to sentience, which is what NeuroSynthetica seeks to understand.
Their takeaway? "These results show that [LLM] transformers can perform surprisingly well at certain tasks without understanding the rules. If scientists want to build LLMs that can capture accurate world models, they need to take a different approach," the researchers say.
The research was funded in part by the Harvard Data Science Initiative, a National Science Foundation Graduate Research Fellowship, a Vannevar Bush Faculty Fellowship, a Simons Collaboration grant, and a grant from the MacArthur Foundation.
Steve Jones
Founder & CTO
Exploring synthetic sentience and building the robotics used to demonstrate it.