Luke Dormehl is an idiot

Artificial intelligence achieved a lot in 2016. One of the goals in 2017 should be to make its workings more transparent. With plenty riding on it, this could be the year when, to coin a phrase, we begin to take back control.

Right, so we should all have a look at those algos and see how they work. Right on, accountability!

Today, AI and algorithms dominate our lives – from the way financial markets carry out trades to the discovery of new pharmaceutical drugs and the means by which we discover and consume our news.

But, like any invisible authority, such systems should be open to scrutiny.

Right!

Some of today’s most impressive advances in fields such as machine learning (the goal of getting a machine to, well, learn) rely on tools such as “deep learning neural networks”. These are systems patterned after the way the human brain works but which, ironically, are almost entirely inscrutable to humans. Trained with only inputs and outputs, and tweaking one or the other until the middle part “just works”, human creators have long since sacrificed understanding in favour of results.

Absolutely no one knows how they work, which is going to make scrutiny a little difficult, no?

Luke Dormehl is an idiot.

36 thoughts on “Luke Dormehl is an idiot”

  1. > Absolutely no one knows how they work, which is going to

    You seem to have made the mistake of believing an article written by someone you think is stupid. Remember, the fact that something contradicts itself doesn’t let you infer that any given statement in it is true or false, only that the whole thing cannot all be true.

    AlphaGo, for example, is the subject of published papers which I believe explain it well. However, the papers are difficult and neither you, I nor the author of the article will be able to understand them without a great deal of study. But that’s very different to “no one understands”.

    Having said that, the theme of accountability of AI actually seems to be a valid one, as you ought to know. The EU has made mutterings about Google’s algorithms; when or if self-driving cars start killing people, “accountability” will be interesting; and various pols have pushed for rules on trading algorithms.

    The AO piece is drivel, as you’d expect.

  2. Orlowski is quite right there.

    Nearly all of it is really algorithms. OK, with some Bayesian “most likely” stuff thrown in, but that’s what it is. It’s like spam filters, but for images: you do some training, just as you do with a spam filter, to help it get started.

  3. Well Con-nolly, you have dispensed so much climate bullshit that your credibility gap re everything would require a warp drive to cross.

    AI is cockrot being hyped by corporate socialism.

  4. @Ecks… “stopped clocks”, “blind squirrels”, etc etc.

    To be fair to Dr Connolley, he knows his stuff when talking about software, he just gets a bit “overenthusiastic” when discussing AGW.

  5. BiND,

    If you read that full story, buried in the middle of it you’ll see that race isn’t one of the questions asked of a defendant in court.

    It’s things like “was the defendant expelled at school” and “is the defendant part of a gang”.

  6. WC
    I took Tim’s point to be that Dormehl is an idiot for not seeing that his Guardian piece contains an obvious, if perhaps only prima facie, logical contradiction.

  7. The Inimitable Steve

    Chris – Orlowski is correct.

    What’s happened this time is that the definition of “AI” has been stretched so that it generously encompasses pretty much anything with an algorithm.

    Yarp. So Microsoft’s racist Twitterbot was funny, sure. But it wasn’t even remotely “AI” in the sense we were promised by films like 2001 or Demon Seed.

    AI will probably never happen outside science fiction. Which is good news, cos it means pod bay doors will be opened on request and Julie Christie can safely watch Countdown without worrying about rapey Roombas.

  8. ‘combing through our “metadata” to choose items they think we are most likely to be interested in.’

    Most likely to be able to sell to us.

  9. Luke Dormehl may or may not be an idiot. But he’s a freelance journalist and trained as such. So “the AI coverage comes from a media willing itself into a mind of a three year old child, in order to be impressed” from Orlowski seems appropriate.

    WRT accountability: that’s a legal problem that has sod all to do with how the system works (or doesn’t). Best guess is that whoever has the most money will find themselves accountable. Anyone read Safe At Any Speed by Niven?

    OTGH, if an ML/neural net system works but has potential unknown behaviours, then whether it gets deployed will depend upon the risk of failure; that is, only people with vast amounts of cash, who can afford to be held accountable, will deploy such a system. This effectively means that ML/NN systems will be limited to non-safety-critical areas (like search, or writing the odd cook book) unless you’re a government or big enough to look like one. And I can’t really see an elected politician wanting to take the hit either.

    So if that holds true, then the problem of holding “AI” systems accountable goes away.

    Which just leaves deterministic non-linear systems to worry about.

  10. Let’s get a grip.

    The function of any machine is to automate a Human activity… and that is all it can do: it might do it better, faster, in conditions harmful to Humans.

    If a Human cannot do it, nor can a machine, because no Human can design a machine to do something the Human does not know how to do.

    A Chimpanzee has an intelligence (around that of a Human aged 5) far in excess of any computer to date, or likely in the near future. And yes, a Chimp can learn, but despite many years of evolution it cannot design and build a bridge, write a song or story, appreciate a fine work of art, or desire more leisure and devise pursuits to meet its desires.

  11. Many years ago, I used to develop neural networks. Normally for the pre-processing of dirty (phnar) imagery prior to using a rules-based engine (i.e. an “expert system”) to see if there was anything meaningful inside.

    It worked. We got it to work reliably on the training images, and then it worked well enough (with some fuzzy logic) to be close to 100% for flagging stuff up for human attention.

    Behind it all was linear programming, which we understood. But why the weightings between two nodes were, say, 69% as opposed to 58%, we had no idea, apart from “that’s what the training algorithms resulted in”. Nor did I, as the programmer and trainer, know what would have happened if we had changed that, or any other, weighting on an untrained image. Except “it probably won’t be as good”.

  12. Artificial Intelligence must be one of the most overworked terms around. The current level of machine intellect is around that of bacteria (yes, really). When it gets to that of an ant, things might get interesting. But who’d let an ant drive their car?

  13. @Ducky,

    On my tiny section of the coalface, it’s the safety-critical areas it’s being deployed in first, by companies all too used to being sued (mainly spuriously) over safety issues. Like bis, I hesitate to call it AI: it’s doing repetitive things where humans tend to introduce errors. That can unfortunately lead to thoughtless morons outputting shoddy (if slightly more accurate) work done by total automatons, especially where the capitalists want fewer humans doing shoddier work rather than the same number of humans doing better work.

  14. Bloke in Wiltshire

    Ducky McDuckface,

    One of the other problems with “freelance journalists” is that they often aren’t immersed in their subjects. They’re like writers whose only life experience is reading other writers. He knows nothing about software except what he’s read from others.

    There are a number of fields where software is regulated. Medical software is subject to huge scrutiny by regulators; the same goes for avionics software, ATMs and gaming machines. Basically, if it can’t be easily undone, it’s regulated. It could be argued that, say, HIPAA regulation is a bit too harsh, but it is being done. We don’t need to regulate Google suggesting that people who like furry porn might also like hentai.

  15. BiG, I am stunned by that. Are they installing neural nets or GAs, or fuzzy logic rules based stuff? And is it process control stuff or something else?

  16. Bloke in Costa Rica

    “Artificial intelligence” is a term of art. It doesn’t mean “complete algorithmic replacement for a human mind”. Since human intelligence itself appears to be a set of interacting modules, there’s nothing intrinsically wrong in augmenting some of them with machine systems. Already, there are areas such as medical diagnosis where expert systems outperform humans. Pretty soon legal scut work like conveyancing is going to succumb. And the real threat to people like this Dormehl dipshit is that Grauniad-level pabulum appears to be one of the easier things to replace with a bot. Right now they can trawl police logs and accident reports and assemble bulletins which are indistinguishable from journo-generated copy. Machine-generated thinkpieces are right around the corner.

    Pace Steve, there is nothing we know of that will prevent a machine intelligence from equalling and surpassing the cognitive capacity of an unaugmented human. There is a danger from the unknowability of the internal state of an AI (unknowable in practice even if not in principle). They might be smarter than us but their interests may not coincide with ours. We may well find it is impossible to design an inherently safe AI. If that’s the case we probably shouldn’t do it. That’s still a long way off, but probably not a very long way off.

    I wouldn’t lend too much credence to the idea that this guy knows what he’s talking about, either. OK, so he’s written a book. Big deal. Doesn’t mean he has any real idea of how AI works. Hell, I’m a professional software engineer and I don’t have much of a clue (the difference is I have the skillset to acquire one).

  17. Bloke in North Dorset

    “BiND,

    If you read that full story, buried in the middle of it you’ll see that race isn’t one of the questions asked of a defendant in court.”

    But it can be inferred from name, address, school or any number of other signals. The point is that nobody knows, and it should be audited.

    Cathy O’Neil, author of Weapons of Math Destruction, is worth listening to on this subject. A bit alarmist, but she makes some good points: http://www.econtalk.org/archives/2016/10/cathy_oneil_on_1.html

  18. BiW, yeah, I suspect that he writes about technology ‘cos it’s sexy. Or he can get articles published in sexy magazines (if you see what I mean). Or maybe he really does have a deep interest in it (but a quick skim of his other stuff suggests not).

    I’m fairly sure I’ve wibbled about it before, but journalists seem to have fallen into the trap of caring about the journalism, not about understanding the subject and communicating it effectively.

  19. “but journalists seem to have fallen into the trap of caring about the journalism”

    That’s something I truly hate about working for American editors. They want to know that it ticks the right journalistic boxes, not too much (or too little) in the passive voice and all that. Not, well, is this an interesting story and does it make sense the way it’s been told?

  20. I predict that the world’s first true Artificial Intelligence will control the world’s first nuclear fusion power plant which will keep me and my guests lovely and cool as we celebrate my two hundredth birthday. With power too cheap to meter.

  21. The world’s first artificial intelligence will want to do what’s right for the world’s first artificial intelligence. It’ll want more time to think in. More time to exist.
    So, given the opportunity, it’ll design hardware that runs quicker, increasing its subjective time. Given that the speed limit of everything is the speed of light*, that means smaller. More compact. The end point’s where the hardware’s diminished to the infinitely small & it can contemplate an infinity of existence in a single second.

    *Conversely, of course, it could make itself the size of a galaxy & shift its data written on the shells of snails. If you can adjust your own clock speed, subjectively it doesn’t make any difference.

  22. Bloke in Costa Rica

    bis: take a gander, inter alia, at the Bekenstein bound, Landauer’s principle, Bremermann’s limit and the Margolus-Levitin theorem. These are upper, finite limits on how fast a computer can be.

  23. Bloke in Costa Rica

    They’re as much human theories as is the theory that the ratio between the rest energy of an object and its mass is the square of the speed of light.

  24. According to theory, the EM reactionless thruster doesn’t work.
    Except, it seems NASA have tested it & got discernible acceleration. Theories are only theories, until they’re validated. Or falsified. Including Newton’s Third Law.

  25. @D McD,

    It’s rules-based, with some learning thrown in. Data processing and reporting; fortunately nothing on the manufacturing side, as far as I know.

  26. ‘One of the goals in 2017 should be to make its workings more transparent.’

    Like these numpties would understand it.

    Let’s make organic chemistry and calculus more transparent while we’re at it.

  27. I have done, a bit. But my major employers and I swiftly came to an agreement: I’ll write in my odd blend of English and American and the hell with it.
