Tuesday, February 14, 2023

ChatGPT Doesn't Have A Model Of The World, It Has A Model Of What People Say...,

FT  |   Much has changed since 1986, when the Princeton philosopher Harry Frankfurt published an essay in an obscure journal, Raritan, titled “On Bullshit”. Yet the essay, later republished as a slim bestseller, remains unnervingly relevant. 

Frankfurt’s brilliant insight was that bullshit lies outside the realm of truth and lies. A liar cares about the truth and wishes to obscure it. A bullshitter is indifferent to whether his statements are true: “He just picks them out, or makes them up, to suit his purpose.” Typically for a 20th-century writer, Frankfurt described the bullshitter as “he” rather than “she” or “they”. But now that it’s 2023, we may have to refer to the bullshitter as “it” — because a new generation of chatbots is poised to generate bullshit on an undreamt-of scale. 

Consider what happened when David Smerdon, an economist at the University of Queensland, asked the leading chatbot ChatGPT: “What is the most cited economics paper of all time?” ChatGPT said that it was “A Theory of Economic History” by Douglass North and Robert Thomas, published in the Journal of Economic History in 1969 and cited more than 30,000 times since. It added that the article is “considered a classic in the field of economic history”. A good answer, in some ways. In other ways, not a good answer, because the paper does not exist. 

Why did ChatGPT invent this article? Smerdon speculates as follows: the most cited economics papers often have “theory” and “economic” in them; if an article starts “a theory of economic . . . ” then “ . . . history” is a likely continuation. Douglass North, Nobel laureate, is a heavily cited economic historian, and he wrote a book with Robert Thomas. In other words, the citation is magnificently plausible. What ChatGPT deals in is not truth; it is plausibility. 

And how could it be otherwise? ChatGPT doesn’t have a model of the world. Instead, it has a model of the kinds of things that people tend to write. This explains why it sounds so astonishingly believable. It also explains why the chatbot can find it challenging to deliver true answers to some fairly straightforward questions. 
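The mechanism Smerdon describes can be sketched in miniature. Below is a toy n-gram model — invented corpus, invented counts, nothing to do with ChatGPT’s actual architecture or training data — that simply picks the statistically likeliest next word. It has no notion of whether the resulting phrase names a real paper; it only knows what tends to follow what.

```python
from collections import Counter, defaultdict

# A toy corpus of paper-title fragments, standing in for the vast text a
# chatbot is trained on. (Illustrative data only -- not real titles/counts.)
corpus = [
    "a theory of economic growth",
    "a theory of economic history",
    "a theory of economic history",
    "a theory of justice",
]

# Count which word follows each two-word context (a trigram model in miniature).
follows = defaultdict(Counter)
for title in corpus:
    words = title.split()
    for i in range(len(words) - 2):
        follows[(words[i], words[i + 1])][words[i + 2]] += 1

def most_plausible(w1, w2):
    """Return the statistically likeliest continuation -- plausible, not verified."""
    return follows[(w1, w2)].most_common(1)[0][0]

print(most_plausible("of", "economic"))  # -> "history"
```

Given “. . . of economic”, the model outputs “history” because that continuation is most frequent in its corpus — exactly the kind of plausibility-driven completion that can assemble a citation for a paper that does not exist.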

It’s not just ChatGPT. Meta’s short-lived “Galactica” bot was infamous for inventing citations. And it’s not just economics papers. I recently heard from the author Julie Lythcott-Haims, newly elected to Palo Alto’s city council. ChatGPT wrote a story about her victory. “It got so much right and was well written,” she told me. But Lythcott-Haims is black, and ChatGPT gushed about how she was the first black woman to be elected to the city council. Perfectly plausible, completely untrue. 

Gary Marcus, author of Rebooting AI, explained on Ezra Klein’s podcast: “Everything it produces sounds plausible because it’s all derived from things that humans have said. But it doesn’t always know the connections between the things that it’s putting together.” Which prompted Klein’s question: “What does it mean to drive the cost of bullshit to zero?” 

Experts disagree over how serious the confabulation problem is. ChatGPT has made remarkable progress in a very short space of time. Perhaps the next generation, in a year or two, will not suffer from the problem. Marcus thinks otherwise. He argues that the pseudo-facts won’t go away without a fundamental rethink of the way these artificial intelligence systems are built. 

 I’m not qualified to speculate on that question, but one thing is clear enough: there is plenty of demand for bullshit in the world and, if it’s cheap enough, it will be supplied in enormous quantities. Think about how assiduously we now need to defend ourselves against spam, noise and empty virality. And think about how much harder it will be when the online world is filled with interesting text that nobody ever wrote, or fascinating photographs of people and places that do not exist. 

 Consider the famous “fake news” problem, which originally referred to a group of Macedonian teenagers who made up sensational stories for the clicks and thus the advertising revenue. Deception was not their goal; their goal was attention. The Macedonian teens and ChatGPT demonstrate the same point. It’s a lot easier to generate interesting stories if you’re unconstrained by respect for the truth.

I wrote about the bullshit problem in early 2016, before the Brexit referendum and the election of Donald Trump. It was bad then; it’s worse now. After Trump was challenged on Fox News about retweeting some false claim, he replied, “Hey, Bill, Bill, am I gonna check every statistic?” ChatGPT might say the same. 

If you care about being right, then yes, you should check. But if you care about being noticed or being admired or being believed, then truth is incidental. ChatGPT says a lot of true things, but it says them only as a byproduct of learning to seem believable.


When Zakharova Talks Men Of Culture Listen...,

mid.ru  |   White House spokesman John Kirby’s statement, made in Washington shortly after the attack, raised eyebrows even at home, not ...