
John Naughton, sigh

One symptom of their anxiety is the way they have been throwing unconscionable amounts of money at the 70-odd generative AI startups that have mushroomed since it became clear that AI was going to be the new new thing. Microsoft reportedly put $13bn (about £10.4bn) into OpenAI, for example, but it was also the lead investor in a $1.3bn funding round for Inflection, DeepMind co-founder Mustafa Suleyman’s startup. Amazon put $4bn into Anthropic, the startup founded by refugees from OpenAI. Google invested $500m in the same outfit, with a promise of $1.5bn more, and unspecified sums in AI21 Labs and Hugging Face. (Yeah, I know the names make no sense.) Microsoft has also invested in Mistral, the French AI startup. And so on. In 2023, of the $27bn that was invested in AI startups, only $9bn came from venture capital firms – which until recently had been by far the biggest funders of new tech enterprises in Silicon Valley.

This is used as evidence of imminent monopoly in AI.

Sigh.

16 thoughts on “John Naughton, sigh”

  1. Martin Near The M25

    Government regulation of AI? Laughable. Let’s set up OFHAL maybe? “I’m sorry I can’t do that Dave.” “I insist, I’m a civil servant and I’m telling you to.”

    There isn’t going to be an AI monopoly because nobody wants to be shut out of it. I think it’s been way overhyped and a lot of this money is going to be wasted. Same with any technology. Once it gets past the “throw it at everything” stage we’ll have a clearer idea of how much use it’s going to be.

  2. “I think it’s been way overhyped and a lot of this money is going to be wasted.”

    As far as I can see, the whole AI ‘thing’ consists of massive amounts of processing power being fed huge amounts of data, and using that data to create patterns and extrapolations such that it looks like it ‘knows’ everything. Which is all well and good, except that the data is not in any way screened for veracity; it’s just scraped off the fount of all truth and reason that is the web. And as such, what it produces will be about as useful…

  3. Beware of thinking that, because most of the reporting of AI is done by would-be creative types in journalism, AI will only be used to create pictures and stories. The chatbots that stand in for customer service seem to be marginally better than minimum-wage staff in Bangladesh and the Scottish Highlands. I understand that financial institutions using it for derivatives’ trading are finding it productive. Other uses will be found that journalists will be unable to comprehend. For example, how about intelligent dosage of fertiliser and weed killer using data derived from sensors analysing actual plants?

  4. “I understand that financial institutions using it for derivatives’ trading are finding it productive. Other uses will be found that journalists will be unable to comprehend. For example, how about intelligent dosage of fertiliser and weed killer using data derived from sensors analysing actual plants?”

    But that’s not AI really, is it? It’s just a complicated program. Data in, process it through the algorithm, spew out decisions/recommendations. You could have done that on a BBC Micro.

  5. Cool Diogenes. Perhaps I can get it to work out the correct dose of Roundup for the junk growing in my back ‘garden’.

  6. “But that’s not AI really, is it? It’s just a complicated program. Data in, process it through the algorithm, spew out decisions/recommendations. You could have done that on a BBC Micro.”

    Depends.

    If the program is “If sensor A < level X use [this much] fertiliser; if sensor B > level Y double it” etc, then yes it could be as complicated as you have time to keep adding conditionals.

    If on the other hand the “program” is a machine-generated map from a sensor array of n inputs to m output values, trained up on a (hopefully) validated set of known data, then that could possibly be considered AI.
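
    To make the contrast concrete, here is a minimal Python sketch (mine, not anyone’s production code). The sensor names, calibration readings and dose numbers are invented purely for illustration; the point is just the difference between hand-written conditionals and a mapping fitted from data, here an ordinary least-squares fit via NumPy.

        import numpy as np

        # Hand-written rules: every case is a conditional somebody typed in.
        def dose_by_rules(moisture, nitrogen):
            dose = 1.0
            if moisture < 0.3:    # dry soil: use more fertiliser
                dose *= 1.5
            if nitrogen > 0.8:    # already nitrogen-rich: halve it
                dose *= 0.5
            return dose

        # Machine-generated map: fit n sensor inputs to m dose outputs from
        # known (ideally validated) data. All numbers below are made up.
        X = np.array([[0.2, 0.9], [0.5, 0.4], [0.8, 0.1], [0.3, 0.6]])  # n = 2 sensor readings
        Y = np.array([[0.7, 0.1], [1.0, 0.3], [1.3, 0.6], [0.9, 0.2]])  # m = 2 dose values
        W, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], Y, rcond=None)

        def dose_by_model(moisture, nitrogen):
            # Behaviour comes from the fitted weights, not from rules anyone wrote.
            return np.array([moisture, nitrogen, 1.0]) @ W

    The first function only ever knows what someone thought to write down; the second gets its behaviour from the training data, which is the (fairly weak) sense in which it might earn the “AI” label.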

  7. Jim: AI comes in many flavours: the overhyped imaginings of journalists et al., the patient creation of advanced computing and algorithms, the current fad for LLMs. They are not all the same.

    M n/M25: “I’m sorry Dave, I can’t do that.” Pedantry, yes, but word order and punctuation do affect tone and therefore meaning.

  8. “Perhaps I can get it to work out the correct dose of Roundup for the junk growing in my back ‘garden’.”

    No Bogey, it’s gonna tell you not to use it, based on some dodgy input from some green terrorist outfit. It’ll probably call the cops on you too.

  9. I have some AI running on my BBC Micro. It’s a program that’s been generating a chart of my weight for a couple of decades. If the chart goes off the top (or the bottom), it resizes the chart and – here’s the AI bit – remembers the new scale for next time.

  10. Regurgitation engines with a bit of pattern recognition thrown in. Yippee!

    I repeat what a pioneer of Machine Intelligence (as it used to be called) told me long ago.

    (i) We don’t know how to emulate human decision-making.

    (ii) But even if we did we have no reason to think it a good way to do it.

    It took only a couple of beers to get that out of him.

  11. Out of curiosity I asked an AI tool for suggestions on planning a 3,000 km road trip. It suggested that I leave one location and drive to another, arriving in about 6.5 hours, which was surprising as they are over 1,600 km apart.
    Large Language Models are interesting, but they are not intelligence, though they do a good job of seeming intelligent. Not surprisingly, graduate journalists also seem to forget that intelligence and sentience are different things.

  12. @jgh: what happened in 2010? I’m guessing a serious illness that caused a sudden loss of weight, or a romantic entanglement that precipitated a crash diet to look more trim. If the latter it obviously didn’t last!

  13. To a journo or politician, LLMs look like a magician who appears to produce a coin from inside your ear – they think they’ve found the solution to paying off the national debt. As I was personally interested in AI, I took a free Stanford MOOC on the subject. Most of it was about finding efficient solutions to the travelling salesman problem. “AI” is a useful term that can be stuck on any old bit of code in the hope of attracting more funding.

    The first program I ever wrote – in the sixth form in 1969, written in (Ferranti) Sirius Autocode – played dominoes to a decent standard. Little did I know that I was producing “AI”!

  14. Jim: I temporarily moved to a village in the Aberdeenshire countryside where the nearest shops were a six-mile cycle ride away. I then passed my driving test. 😉

  15. Chris Miller,

    ““AI” is a useful term that can be stuck on any old bit of code in the hope of attracting more funding.”

    There’s a bubble right now. Companies are producing “AI” solutions. Many of them look like magic if you don’t know what’s being done. There’s a vague sense of “yeah, it’s kinda bad now, but look how much better it’s getting”. All in the hope that investors can get some suckers to buy it and cash out.

    Personally, I think a lot of the machine learning value was extracted years ago. People are talking about AI now because they’re seeing ChatGPT and Midjourney, but spam filtering, credit card fraud detection, ANPR cameras, eBay categories and Netflix recommendations are all based on the same thing: train a model, then use it to score probabilities on new data (a sketch of that pattern follows below).

    And I’m sure as processing gets cheaper, more things will fit with ML, but it isn’t some magic bullet. I’m just wondering if I should pull all my money out of US stocks, because when it bursts, I reckon the S&P 500 is going to fall by 10% or more.
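
    As a sketch of that train-then-score pattern mentioned above, here is a minimal spam-filter example in Python using scikit-learn. The tiny training set and the test message are invented, and a real filter would be trained on vastly more data, but the shape of the loop is the same.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB

        # Tiny, made-up training set: 1 = spam, 0 = not spam.
        messages = ["win a free prize now", "cheap pills online offer",
                    "meeting moved to 3pm", "invoice attached as discussed"]
        labels = [1, 1, 0, 0]

        # Train a model on the labelled examples...
        vectoriser = CountVectorizer()
        model = MultinomialNB().fit(vectoriser.fit_transform(messages), labels)

        # ...then use it to score the probability that new data is spam.
        new_message = vectoriser.transform(["claim your free prize"])
        print(model.predict_proba(new_message)[0, 1])  # estimated spam probability

    Spam filtering, card-fraud scoring and recommendation engines are all elaborations of that loop; the recent LLM wave changes the scale and the model class rather than the basic train-then-score shape.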
