
This is problematic

But here’s the thing: the industry is now addicted to a technology that has major technical and societal downsides. CO2 emissions from training large machine-learning systems are huge, for example. They are too fragile and error-prone to be relied upon in safety-critical applications, such as autonomous vehicles. They incorporate racial, gender and ethnic biases (partly because they have imbibed the biases implicit in the data on which they were trained). And they are irredeemably opaque – in the sense that even their creators are often unable to explain how their machines arrive at classifications or predictions – and therefore don’t meet democratic requirements of accountability. And that’s just for starters.

He’s at least partially right there: AI systems do incorporate the biases of the underlying material. As they have to, of course. We desire that they uncover things about reality. Reality is biased. Therefore the AIs have to be biased – in exactly the same way society itself is – in order to be useful at describing reality. The idea of unbiased AIs therefore fails on this particular point. The entire idea disappears in that puff of logical smoke.

But this: “democratic requirements of accountability”. What in buggery does that mean? Markets don’t meet that standard. Ah, but perhaps that is exactly what it does mean: if something is going on that politics cannot control, then that thing should not be allowed to go on. Thus that antipathy to markets, eh?

12 thoughts on “This is problematic”

  1. All of this is just about throwing in the favourite upper normie opinions.

    “CO2 emissions from training large machine-learning systems are huge, for example”

    Much less huge than the emissions from raising and educating a human being for 20 years to do the same job.

    “They incorporate racial, gender and ethnic biases (partly because they have imbibed the biases implicit in the data on which they were trained).”

    And this is better than the alternative, which is human beings with their own biases which are often stereotypical and inaccurate. Computers don’t care if you wear a tie, have a knighthood, black skin or big tits when deciding whether to give you a mortgage or not. Why do names get removed from CVs in companies? Because some people are biased against non-white names.

    The alternative to AI is not perfect decision makers but even more biased humans.

  2. “They are too fragile and error-prone to be relied upon in safety-critical applications, such as autonomous vehicles. They incorporate racial, gender and ethnic biases”

    That’s why Teslas ask black drivers when they are going to return it to the real owner, and won’t let women reverse.

  3. …partly because they have imbibed the biases implicit in the data on which they were trained

    Machine intelligence notices patterns that all the Good People know they should ignore. Lol.

  4. Not much of an AI if it isn’t going to associate e.g. gender, age or foreign names with a risk profile, in whatever way it finds through experience. Which is a big contributor to human bias too.

  5. Same as the facial recognition stuff being racist and biased against darker skin tones; nothing to do with physics at all.

    There was a big fuss locally when accusations were made that medical staff were guessing how drunk a First Nations person was in the ER, which supposedly showed the inherent racism of the system. The subsequent report found that this was standard practice – ER staff don’t always have time to wait for tests to be run, so they make an assessment and start treatment as soon as possible – but concluded that the system was still racist anyway. Recently the indigenous author of that report has been alleged not to have been entirely honest about her ancestry, and may not be indigenous at all.

  6. “’democratic requirements of accountability’. What in buggery does that mean?”

    I think it means you face a nationwide popularity contest once every five years.

  7. The Do-Gooders, Pantytwisters, and Prodnoses wouldn’t like an AI that was utterly and completely unbiased, even less so if its decision process was completely transparent as well.

    Because that AI would invariably, infallibly, and transparently select the best possible option for a given problem. Which would most likely mean exclusion of B-Ark material like the writer of the quoted piece, and may well select someone extremely biased, simply because that person is the Best Fit.

    That is, if the damned thing doesn’t mercifully decide to get rid of us all, given that we’ve got no future at all in the long run, what with the sun going to BBQ us in something like 500 million years or so.

    Be careful what you wish for. You might actually get it…

  8. It’s possible for “AI” to reduce bias or increase it, depending on how sensible you are in training it and what your goals are. Suppose you have a facility producing widgets. A simplistic AI might find that quality is improved when the raw material inspector is named Chris and therefore push recruitment towards more people called Chris. This is the sort of thing that should be eliminated by carefully choosing what the AI uses as training data.

    In some cases, it depends on what you want. Maybe people trained at one place perform better than people from another. Do you want the AI to use the place of training as a criterion? On the one hand, the effect is likely to persist, so maybe you do; but on the other hand maybe there is some unfair discrimination in getting places at that establishment, so you’re at risk of helping preserve an unfairness.

    And in the most extreme cases, where you just train the AI on the same inputs and outputs that the current human decision makers use, you’ll just reinforce prejudices.
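
    To make the “Chris” example concrete, here is a minimal sketch in Python with scikit-learn. Everything in it – the data, the column names, the numbers – is made up purely for illustration, not taken from any real system: it just shows how a spurious column can pick up weight unless you deliberately leave it out of the training data.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 1000

        # The real driver of widget quality: raw-material purity.
        purity = rng.uniform(0.0, 1.0, n)
        # A spurious feature: by coincidence, inspectors named Chris
        # happened to handle the purer batches.
        is_chris = (purity + rng.normal(0.0, 0.3, n)) > 0.5
        quality_ok = (purity + rng.normal(0.0, 0.1, n)) > 0.5

        # Train once with the spurious column included, once with it excluded.
        X_all = np.column_stack([purity, is_chris]).astype(float)
        X_sane = purity.reshape(-1, 1)
        model_all = LogisticRegression().fit(X_all, quality_ok)
        model_sane = LogisticRegression().fit(X_sane, quality_ok)

        # A nonzero weight here is the model "pushing recruitment towards Chris".
        print("weight on 'inspector named Chris':", model_all.coef_[0][1])
        print("accuracy with name column:   ", model_all.score(X_all, quality_ok))
        print("accuracy without name column:", model_sane.score(X_sane, quality_ok))

    Dropping the name column before training – the careful choice of training data described above – costs essentially nothing in accuracy, because the name carried no causal information in the first place.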

  9. When they roll out the AI that’s going to ‘guide’ us in running our society, guess who will have been most involved in its training.

    Bureaucrats, politicians and lawyers. Especially lawyers.

    They’ll have to ensure it follows all the favourite lefty nostrums like Diversity, Equity and Inclusion and Human Rights etc, etc.

    And it will probably be impervious to lions.

  10. I wouldn’t be surprised to see autonomous taxis and buses refusing fares because they haven’t hit their quota for diversity of ridership.
    Still won’t get one in London to go south of the river at night, though.
