
So, ChatGPT can read Wikipedia then

Describe Tim Worstall

I’m sorry, but as an AI language model, I cannot provide personal opinions or descriptions of individuals. However, I can tell you that Tim Worstall is a British writer and blogger who has written for several publications, including Forbes, The Telegraph, and The Register. He often writes about economics, business, and public policy, and has authored several books on these subjects.

Not sure how useful that is then……

Tim Worstall is a British writer, blogger, and commentator on economics and public policy. He has written for numerous publications, including Forbes, The Telegraph, The Times, and The Register. He is also a senior fellow at the Adam Smith Institute, a free-market think tank based in London. He has authored several books on economics, including “Chasing Rainbows: Economic Myths, Environmental Facts” and “Next Left: How the Left Thinks About the Market and Why They’re Wrong”.

Did I? Umm, really?

Here are some of Tim Worstall’s books:

2006: “2005: Blogged – Dispatches from the Blogosphere”

2006: “Chasing Rainbows: Economic Myths, Environmental Facts”

2011: “The No Breakfast Fallacy: Why the Club of Rome was Wrong About Us Running Out of Resources”

2012: “Redesigning the Welfare State in New Zealand: Problems, Policies, Prospects”

2013: “The Joy of Tax: How a Fair Tax System Can Create a Better Society”

2014: “Next Left: How the Left Thinks About the Market and Why They’re Wrong”

2016: “Globalisation is Good”

2017: “The Churchill Memorandum”

2019: “The VAT Laffer Curve: Past, Present and Future”

2021: “Global Britain: After Brexit, How to Keep the World Turning”

Jeez. 3/10 is worse than Wes Streeting……better than the ‘Tater of course but still.

27 thoughts on “So, ChatGPT can read Wikipedia then”

  1. This is why I hate most of what journalists write about AI, computers, robots, etc.

    It’s a glorified search engine. It’s just very good at blending all of the results it finds. Results that were input manually by biased and imperfect humans.

  2. adfjladjf

    Nope. It’s worse than that.
    It’s a sort of pattern-recognizer and repeater.
    The standard phrase is “stochastic parrot”; but I think that’s aiming too high.

    Drunken stochastic parrot fits better.
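The "stochastic parrot" jibe can be made concrete with a toy sketch — this assumes nothing about how ChatGPT actually works internally, it just shows the flavour of the idea: a bigram model that memorises word-to-word transitions from a tiny corpus, then samples the next word at random. A "temperature" knob flattens the probabilities, making the parrot progressively more drunken.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Count word -> next-word transitions in a toy corpus."""
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def sample_next(counts, word, temperature=1.0):
    """Pick a successor at random; higher temperature flattens the
    distribution, so rarer (drunker) continuations get picked."""
    followers = counts[word]
    if not followers:
        return None
    weights = [c ** (1.0 / temperature) for c in followers.values()]
    return random.choices(list(followers), weights=weights, k=1)[0]

def parrot(counts, start, n=8, temperature=1.0):
    """Repeat patterns seen in training, one sampled word at a time."""
    out = [start]
    for _ in range(n):
        nxt = sample_next(counts, out[-1], temperature)
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

corpus = ("the parrot reads the page and the parrot repeats "
          "the page and the parrot repeats the phrase")
model = train_bigrams(corpus)
print(parrot(model, "the", n=6, temperature=0.5))  # mostly the commonest path
print(parrot(model, "the", n=6, temperature=5.0))  # drunker: rarer paths appear
```

Nothing in the loop knows what a parrot, a page, or a phrase *is* — it only knows which words have followed which. Scale that idea up enormously and you get fluent text with no guarantee of understanding, which is the commenters' point.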

  3. Harry Haddock's Ghost

    Ha ha ha – You wrote the Joy of Tax, ner ner ner ner ner…

Tim is being trolled by AI.

  4. Jeez! It even calls itself an AI. We can take it Speculative Fiction’s not been on its reading lists.

  5. What it does is to generate blocks of text that humans will accept as valid. Its only measure of success is whether the human accepts the answer or questions it. So, we’re not creating AI so much as BS: a Bluffing System which gradually becomes ever more precisely calibrated to generate words that we will accept as true, regardless of whether they actually are. If AI really is going to take our jobs, then the first sector to be automated by this technology will be politics.

  6. Of course, the “hallucinations” of ChatGPT and their potentially dangerous consequences will attract calls for government regulation of AI technology. It might be a useful project for the ASI to produce some articles or a report on how the risks can be managed within a free market framework that doesn’t stifle innovation. There will come a time when the British government decides that it has to take a position on this new-fangled AI thingy, and at that point there will be a first-mover advantage for any think-tank that has ready-made solutions to hand.

  7. @AndrewZ
    Well they could advise them to stop calling it AI, for a start. Because it’s not even vaguely that. That’s just a phrase the IT mob has hijacked & misused. It’s just a statistical way of handling big data sets. It’s not even new. I’ve got software goes back to the mid 00’s did it. Voice recognition & translation. And yes they learned. In fact they had to be trained. But it’s a long way from any sort of intelligence. It certainly doesn’t “understand” the data it’s handling. And there’s not the slightest sign anyone’s anywhere near building anything that can do that. They may never do. It may be impossible. Understanding implies self awareness & nobody knows how self awareness comes about in biological systems.

  8. Oh & if anyone wants to go read it, there’s about 40 years of people writing about this subject. But try & stay away from Asimov. He wrote space operas not speculative fiction. The three laws are a joke.

  9. @bloke in spain

    You can secure the stable door while I look for the horse. “AI” has already become the accepted term for anything that can parse a natural language query and respond in a similar form, regardless of whether it uses anything that remotely resembles actual intelligence.

    My previous comments should also make clear what I think of LLMs – they aren’t stochastic parrots, they are stochastic salesmen hunting for the exact form of words that will get you to accept what they say. Imagine Skynet as a character from “Glengarry Glen Ross”.

    As for Asimov’s “Three Laws of Robotics”, they are part of an algorithm for generating literary logic puzzles. Imagine a situation. Put a robot in the situation and work out how it must behave, according to the Three Laws. Then write the story backwards, with some characters who don’t know the situation trying to work out what it is and how the Three Laws apply to it.

  10. We focus on the artificial, but I haven’t yet seen a proper definition of intelligence from these boffins.
    Any philosophers out there?

  11. Also, while we’re talking about Wikipedia and “AI”, this Wikipedia entry currently lists Paddington Bear as a leading member of a Peruvian death squad:
    https://en.wikipedia.org/wiki/Grupo_Colina

    It may have changed by the time you see it. But it’s probably related to this tweet, which uses an image which looks like it was generated by Midjourney, the graphic design equivalent of ChatGPT:
    https://twitter.com/torriangray/status/1643099568136126469?cxt=HHwWioDQlcGWu80tAAAA

    Add in “Deep Fake” technology and we are looking at a perfect storm of digital deceit which will result in a total collapse of confidence in any digital information. In a few years, we’ll go from “pics or it didn’t happen” to not trusting anything we didn’t see ourselves or didn’t hear about from someone we know personally and know to be reliable.

  12. AndrewZ,

    There was an article by a journalist the other day saying that it might take journalists jobs, and I was thinking that it just might. That it can write acceptable opinions that fit with people’s biases.

    In general, I think it will automate midwits. So, it might write a soap opera script or a generic fantasy book illustration or the sort of article that gets into the Guardian, but it’s not going to replace great work.

  13. Bloke in North Dorset

    As I’ve said before, it’s no different from the introduction of calculators into everyday life. If you don’t understand what you’re asking, then you’ve no idea if the answer is anywhere near useful. It’s a tool, and when used correctly it will be useful and labour-saving; when used badly, especially with blind faith, it is dangerous.

  14. Well, here’s a little test and response.
    I asked ChatGPT “What was HMS Sheffield’s superstructure made from?”

    And the answer comes back,

    HMS Sheffield’s superstructure was primarily made from aluminum, a lightweight and strong metal that was commonly used in shipbuilding at the time. The use of aluminum in the ship’s construction allowed for a lighter overall weight and improved performance, including increased speed and maneuverability.

    However, the use of aluminum in the superstructure also had a significant impact on the ship’s vulnerability to fire. Aluminum has a lower melting point than steel, and it can also ignite more easily when exposed to high temperatures.

    When HMS Sheffield was hit by the Exocet missile, the missile’s high explosive warhead caused a massive explosion that generated extremely high temperatures. This intense heat caused the aluminum in the ship’s superstructure to melt and ignite, contributing to the intensity of the fire and making it more difficult to control.

    The use of aluminum in naval construction has since been reevaluated, and many modern warships are now built with a combination of aluminum and steel or other materials to improve their overall strength and resistance to fire.

    Now, I’ve been aboard most of the Type 42s, including Sheffield herself. I’ve worked with survivors of both Sheffield and Coventry’s sinking. I’m acquainted with the very clever naval architect at D Ships who studies warship battle damage and damage control.

    Sheffield, like her sisters, was an all-steel ship, and her superstructure did not “melt and ignite” – the fire was serious and forced her abandonment, but the “burning aluminum superstructure” was the USS Belknap a few years previously. It was widely assumed that Sheffield had the same issue – she didn’t.

    Conclusion? Like an automated Wikipedia, ChatGPT may tell you what’s widely believed and claimed, but it won’t check or confirm the underlying facts, and can’t be relied on for much more substantial than “what’s trending on Twitter this morning?”

    No surprise, but nice to have the evidence.

  15. ChatGPT’s opinions are regurgitated platitudes often readily contradicted by evidence or elementary logic and its responses to questions with objectively correct answers are, to put it kindly, highly suspect. No wonder journalists fear for their jobs.

    Daughter #1 was doing a maths revision paper and asked me about her answer to a question involving an integration. My answer was rather complicated so I asked ChatGPT for its opinion. Its answer was much simpler, and wrong. It wasn’t wrong because of a minor slip-up or an interesting misunderstanding: it made a nonsensical algebraic step.

    So I asked the same question in slightly different words, and got another – different – wrong answer. And repeated twice more with two more different wrong answers. ChatGPT seems to live in a solipsistic world, without basic checks on its responses, against either reality or alternative views, in which whatever immediate “thought” occurs to it is the “right” answer.
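The “basic check” ChatGPT skips is trivial to automate: any claimed antiderivative can be sanity-checked numerically by differentiating the candidate and comparing it with the integrand at a few sample points. A minimal sketch — the integral ∫x·eˣ dx, its textbook answer x·eˣ − eˣ, and the deliberately wrong answer are my own illustrative choices, not the question from the revision paper:

```python
import math

def check_antiderivative(f, F, points, h=1e-6, tol=1e-4):
    """Return True if F'(x) ~= f(x) at every sample point,
    estimating F' with a central finite difference."""
    for x in points:
        deriv = (F(x + h) - F(x - h)) / (2 * h)
        if abs(deriv - f(x)) > tol * (1 + abs(f(x))):
            return False
    return True

f = lambda x: x * math.exp(x)                      # integrand
F_good = lambda x: x * math.exp(x) - math.exp(x)   # correct antiderivative
F_bad = lambda x: (x ** 2 / 2) * math.exp(x)       # plausible-looking nonsense

pts = [-2.0, -0.5, 0.0, 1.0, 2.5]
print(check_antiderivative(f, F_good, pts))  # True
print(check_antiderivative(f, F_bad, pts))   # False
```

Five lines of arithmetic would have caught every one of those four different wrong answers — which rather underlines the point that the system produces its first “thought” without testing it against anything.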

    I came across a quote from BF Skinner which seems relevant to the similarities between bien pensant opinion column writers and ChatGPT and the angst the former seem to be suffering when confronted by the latter’s musings:

    “The real question is not whether machines think but whether men do.”

  16. @John Galt

    From Stephen Wolfram:

    “ChatGPT Gets Its “Wolfram Superpowers”!
    Early in January I wrote about the possibility of connecting ChatGPT to Wolfram|Alpha. And today—just two and a half months later—I’m excited to announce that it’s happened! Thanks to some heroic software engineering by our team and by OpenAI, ChatGPT can now call on Wolfram|Alpha—and Wolfram Language as well—to give it what we might think of as “computational superpowers”.

    It’s still very early days for all of this, but it’s already very impressive—and one can begin to see how amazingly powerful (and perhaps even revolutionary) what we can call “ChatGPT + Wolfram” can be.”

    It’ll be interesting to see how ChatGPT’s lackadaisical idiosyncrasies combine with Wolfram|Alpha’s stolid objectivity.
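The plugin arrangement Wolfram describes is, at heart, a routing loop: the model decides whether a query needs computation, and if so hands it to an external engine and splices the result into its reply. A heavily simplified sketch — `wolfram_alpha_query` and `needs_computation` are stand-ins of my own invention, not OpenAI’s or Wolfram’s actual interface:

```python
def wolfram_alpha_query(expression):
    """Stand-in for the real Wolfram|Alpha call: here it just
    evaluates plain arithmetic with Python itself."""
    return str(eval(expression, {"__builtins__": {}}))

def needs_computation(prompt):
    """Crude router: does the prompt look like arithmetic?"""
    return any(op in prompt for op in "+-*/")

def answer(prompt):
    """Send computational questions to the tool; everything else
    gets the chatbot's usual fluent guesswork."""
    if needs_computation(prompt):
        result = wolfram_alpha_query(prompt)
        return f"The computed answer is {result}."
    return "Here is a plausible-sounding paragraph..."

print(answer("123456789 * 987654321"))  # exact, because the tool did it
print(answer("Describe Tim Worstall"))  # still the parrot's department
```

The division of labour matters: the language model remains a bluffer, but for the subset of questions it can recognise as computational, it can now delegate to something that actually checks its working.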

  17. An excellent discussion of how ChatGPT interacts with libel law can be found at https://reason.com/volokh/
    Professor Volokh also found that ChatGPT produced false and defamatory stories about real people, complete with made-up quotes from real sources, such as Reuters and the Washington Post.

  18. As I posted on The Times yesterday, reading William Hague explaining ChatGPT is like someone who’s seen a conjuror produce a coin from their ear and thinks this can solve the national debt.

  19. I like the Asimov short based on the premise (revealed at the end) that a small % of humans account for nearly all the advancements that have made us better than apes — heck, than any life form Earth has known.
    So society has abandoned university for all and concentrates educational investment on the few. This surprises the modest but gifted hero of the short as he is suddenly separated from all his school mates who are dispatched off to jobs which are fulfilling for them but not creative or innovative.
    Just can’t recall the name of the short story.
    Possibly Asimov’s second best short after Nightfall One. Imv of course.
