
Chat GPT knows nothing

Tim Worstall is a British economist

As it goes wrong by word 6 of the description, perhaps we shouldn’t pay all that much attention.

17 thoughts on “Chat GPT knows nothing”

  1. claude.ai does better. I particularly liked “relatively niche figure”…

    I should note that while I aim to be accurate in my response, Tim Worstall is a relatively niche figure and some details might need verification.
    Tim Worstall is a British writer and businessman known primarily for his economic commentary and blog writing. He has been a regular contributor to Forbes.com and The Register, where he often writes about economics, technology, and business topics. He’s particularly known for his writing on rare earth elements and their role in the global economy, having worked in the scandium and rare earth metals trade.
    Worstall is generally associated with free-market economic views and has been a fellow at the Adam Smith Institute, a libertarian think tank based in the UK. His writing often focuses on explaining economic concepts and critiquing government policies from a market-oriented perspective.
    He’s also written about environmental economics and has been particularly vocal about issues surrounding recycling and resource management. His book “23 Things We Are Telling You About Capitalism That Are Not True” was published as a response to Ha-Joon Chang’s “23 Things They Don’t Tell You About Capitalism.”

  2. Tim – it’s only a matter of time before ChatGPT decides you co-starred in “Sexy Economists VI” with Rocco Siffredi:

    Stanford misinformation expert admits his chatbot use led to misinformation in sworn federal court filing
    The Mercury News
    By Ethan Baron
    2024-12-04 00:55:25 GMT

    A Stanford University misinformation expert who was called out in a federal court case in Minnesota for submitting a sworn declaration that contained made-up information has blamed an artificial intelligence chatbot.

    And the bot generated more errors than the one highlighted by the plaintiffs in the case, professor Jeff Hancock wrote in an apologetic court filing, saying he did not intend to mislead the court or any lawyers.

    “I express my sincere regret for any confusion this may have caused,” Hancock wrote.

    Lawyers for a YouTuber and Minnesota state legislator suing to overturn a Minnesota law said in a court filing last month that Hancock’s expert-witness declaration contained a reference to a study, by authors Huang, Zhang, Wang, that did not exist. They believed Hancock had used a chatbot in preparing the 12-page document, and called for the submission to be thrown out because it might contain more, undiscovered AI fabrications.

    It did: After the lawyers called out Hancock, he found two other AI “hallucinations” in his declaration, according to his filing in Minnesota District Court.

    Current LLMs are Chinese Rooms (ie Infinite Mechanical Turks typing Infinite Bullshit). Cleverer people than me (Elon Musk) think this will lead to AI superintelligences. Not sure if that’s because he has more faith in techies than I do, or less faith in humans than I do.

    Either way, Butlerian Jihad when?

    He who controls the Scandium, controls the universe.

  3. I wonder what it thinks Rachel Reeves is then? A Nobel laureate world class economist leading nations to greatness?

    That sort of AI is prone to hallucinations.

  4. Steve: I think the best description of LLMs I’ve seen so far is “stochastic parrots”.

    Artificial intelligence, folks. Fake, phoney, pretend, ersatz, make-believe, counterfeit. It’s a conjuring trick.

  5. Just regurgitation engines. But how, then, do they “hallucinate”? Do they find hallucinations on the internet and just incorporate them into their output or do they synthesise them anew by regurgitating a little bit of this and a little bit of that mixed together? Dunno.

  6. dearieme

    The term “hallucination” is inappropriate, and probably chosen for marketing reasons. The output of an LLM is basically the output of an automatic sentence generator, constrained to word/phrase patterns that are apparently legitimate as far as the “rules of grammar” it has learned are concerned. Such rules are much easier to learn from observation than the much more complicated rules that govern how the world works. So the LLM never learns how the world works, and the sentences it creates can easily be, well, not true.

    Calling these sentences “hallucinations” suggests that the other sentences are the output of true wisdom or summation. But they’re all just random scrapings organized by complex probability calculations, uttered by a stochastic parrot (there’s a toy sketch of the mechanism at the end of the thread).

    Bit like the Sage of Ely, really.

  7. Sam, DM – Right now, AI chat bots are Plausible-sounding Bullshit Engines. Much like the House of Commons.

    The digital equivalent of an Oxbridge PPE confidently bluffing. AI doesn’t “know” anything and is easily tripped up by simple logical or rhetorical tricks, because the machine can’t differentiate between facts and nonsense. It can only generate text that looks like it might have been written by a person. This is, yes, essentially a parlour trick. A much more advanced ELIZA. Useful in certain cases but not a form of intelligence.

    Also, the ones that work by scraping the web are ingesting more and more AI-generated noise every time. Idk how you get from here to Skynet, but it’s quite possible the machines will record themselves doing TikTok dances amidst the sea of human skulls. Androids dream of electric sheep, but only in superficial mimicry of humans. Which is what PKD was trying to warn us about.

  8. If I ask “AI” a question to which I already know the answer, it gets it right about 75% of the time. So if I ask it a question to which I don’t know the answer, what use is the response?

  9. “If I ask “AI” a question to which I already know the answer, it gets it right about 75% of the time. So if I ask it a question to which I don’t know the answer, what use is the response?”

    The Gell-Mann Amnesia effect will strike again. Despite AI being about as accurate as the average newspaper, people will continue to take its word for stuff when they don’t know the answer before asking the question. It was bad enough with newspapers; trying to base an entire economy on it is not going to end well.

  10. Bloke in North Dorset

    I remember saying when the hype started that it’s a tool and nothing more. Some specialist AI might be very useful; I read about a call centre that had used all its call data to train an AI, and it worked well. But the general stuff, not so much.

    I use ChatGPT for a few things, mostly language questions and the answers seem reasonable and save me having to ask r/German or r/French and waiting for an answer.

    Most of the time it’s just a search engine without the ads.

  11. Aren’t people asking rather too much of AI? Take the instance Steve quotes: surely exactly the same errors could have occurred with a human researcher? The AI has gone through the relevant data available to it, but some of that data was incorrect or incorrectly presented. In fact, unlike a human, we can be certain the AI didn’t invent it intentionally or even accidentally. The error potential is implicit in what’s being attempted.
    I use language-translation software a great deal. Sure, it isn’t perfect. It couldn’t be perfect. I was discussing today how to get expressions from one language into another. You can’t do a straight translation, because the words the two languages use come at the concepts from totally different directions. And all those words, in both languages, have entirely different and varied meanings in other contexts. A human would have trouble parsing them without an understanding of the subject matter. And AI doesn’t do understanding.

  12. In fact, unlike a human, we can be certain the AI didn’t invent it intentionally or even accidentally.

    Not actually true. Well, technically the “AI” itself didn’t intentionally invent anything; but it is dependent on the training data it was fed, and that is chosen by humans with their own motivations and biases. A deliberate thumb can also be put on the scale when the trained model is consulted.

    It’s why that Google image generator would produce images of black, brown and yellow Nazi soldiers (but never white), or black ancient Greek philosophers.

    https://www.telegraph.co.uk/news/2024/02/23/google-gemini-ai-images-wrong-woke/ has some good examples, but my favourite is “chained greek philosophers eating watermelon”: https://x.com/demontage2000/status/1760866399071985927

  13. If this case is indicative of how AI ‘works’, then it’s utter gibberish:

    https://theconversation.com/why-microsofts-copilot-ai-falsely-accused-court-reporter-of-crimes-he-covered-237685

    How on earth can feeding billions of pages of information into these models create anything other than a warped version of reality, with no real insight into what’s actually happening? All it’s doing is juxtaposition: A is commonly found next to B, therefore A and B must be linked in some way, and when information on A is requested it spews out B as well.

  14. It’s entirely correct. Rachel Reeves has clarified that an economist is ‘anyone who knows all of the numbers, right from 1 to 10’ and ‘if you get 0 you’re a senior economist’.

  15. All it’s doing is juxtaposition: A is commonly found next to B, therefore A and B must be linked in some way, and when information on A is requested it spews out B as well.

    Yes. In its simplest form, that’s exactly what it’s doing; the toy sketch below shows the mechanism.

    This report on a presentation aimed at engineers is a good write-up of how AI works: https://lwn.net/Articles/982289/
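
    To make that concrete, here is a minimal sketch of the “stochastic parrot” in Python: a toy bigram model. The corpus and the names in it are invented purely for illustration, and a real chatbot uses a neural network over a vastly larger corpus rather than raw counts, but the principle is the same: sample a statistically likely next word, with no model of whether the resulting sentence is true.

        import random
        from collections import defaultdict

        # Toy "training data". Real models ingest trillions of words; the
        # principle is the same: count which words tend to follow which.
        corpus = (
            "tim worstall is a british writer . "
            "tim worstall is a british businessman . "
            "rachel reeves is a british politician . "
            "rachel reeves is a nobel laureate . "  # false, but it is in the data
        ).split()

        # Co-occurrence table: for each word, every word observed to follow it.
        follows = defaultdict(list)
        for a, b in zip(corpus, corpus[1:]):
            follows[a].append(b)

        def parrot(word, max_words=10):
            """Generate a 'sentence' by repeatedly sampling a likely next word."""
            out = [word]
            for _ in range(max_words):
                word = random.choice(follows[word])  # weighted by raw counts
                out.append(word)
                if word == ".":
                    break
            return " ".join(out)

        print(parrot("tim"))
        # Possible output: "tim worstall is a nobel laureate ."
        # Locally well-formed, statistically plausible given the corpus, false.

    Every pair of adjacent words in the output has been seen in the data, so it always looks grammatical; whether the spliced-together sentence is true never enters the calculation. Comment 3’s Nobel laureate falls straight out of exactly this mechanism.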
