The large language models (LLMs) that power today's chatbots have become so astoundingly capable that AI researchers are hard pressed to evaluate those capabilities: it seems that no sooner does a new test appear than the AI systems ace it. But what does that performance really mean? Do these models genuinely understand our world? Or are they merely a triumph of data and computation that simulates true understanding?
To hash out these questions, IEEE Spectrum partnered with the Computer History Museum in Mountain View, Calif., to bring two opinionated experts to the stage. I moderated the event, which took place on 25 March. It was a fiery (but respectful) debate, well worth watching in full.
Emily M. Bender is a University of Washington professor and director of its computational linguistics laboratory, and she has emerged over the past decade as one of the fiercest critics of today's leading AI companies and their approach to AI. She is known as one of the coauthors of the seminal 2021 paper "On the Dangers of Stochastic Parrots," which laid out the potential risks of LLMs (and led Google to fire coauthor Timnit Gebru). Bender, unsurprisingly, took the "no" position.
Taking the "yes" position was Sébastien Bubeck, who recently moved to OpenAI from Microsoft, where he was VP of AI. During his time at Microsoft he coauthored the influential preprint "Sparks of Artificial General Intelligence," which described his early experiments with OpenAI's GPT-4 while it was still under development. In that paper, he described advances over prior LLMs that made him feel the model had reached a new level of comprehension.
With no further ado, we bring you the matchup I call "Parrots vs. Sparks."