So, it’s bad, right, don’t sweat the details

The research, which went viral this week, used a sample of online dating photos, limited only to white users, to demonstrate that an algorithm could correctly distinguish between gay and straight men 81% of the time and 74% for women, suggesting machines can potentially have much better “gaydar” than humans.

The Human Rights Campaign (HRC) and Glaad, two of the most prominent LGBTQ organizations in the US, slammed the study on Friday as “dangerous and flawed … junk science” that could be used to out gay people across the globe and put them at risk. The advocates also criticized the study for excluding people of color and bisexual and transgender people and claimed the research made overly broad and inaccurate assumptions about gender and sexuality.

Note there’s no actual comment about whether it works or not.

22 thoughts on “So, it’s bad, right, don’t sweat the details”

  1. If it works, it isn’t junk science.

    If it can be used to “out gay people across the globe”, it isn’t junk science. You might as well call nuclear physics junk science because of the risk of nukes.

    But this study is probably junk science. Men posing for a photo to look appealing to men are going to do it differently than men posing for a photo to look appealing to women. This is not the same as being able to tell them apart walking down the street.

    Also, 81% is not that spectacular a hit rate. Suppose gays are 2% of the population. This algorithm labels almost 20.24% of the population as gay (19% of the 98% plus 81% of the 2%), so only about one in twelve labeled “gay” actually is (1.62% truly gay out of 20.24% labeled).
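
    A quick back-of-envelope version of that arithmetic in Python, assuming (as above) that the 81% figure applies symmetrically to both classes:

        # Bayes on the base rate: of everyone the algorithm labels
        # "gay", what fraction actually is?
        base_rate = 0.02   # assumed share of gay men in the population
        accuracy = 0.81    # per-photo hit rate claimed by the study

        true_pos = base_rate * accuracy               # 1.62% of everyone
        false_pos = (1 - base_rate) * (1 - accuracy)  # 18.62% of everyone
        labeled_gay = true_pos + false_pos            # 20.24% of everyone

        print(f"labeled gay: {labeled_gay:.2%}")                          # 20.24%
        print(f"actually gay among them: {true_pos / labeled_gay:.2%}")   # ~8%, about 1 in 12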

    Statistics

  2. How could such an “al-gore-ism” work?

    Take photos of known homos and look for common facial characteristics? Like having a nose, eyes, etc.?

    And then a general photo scan to see which males in the general population have the same characteristics?

    You know I blame Star Trek/Star Wars and the like. Such SF has raised people’s expectations of “science” to such a degree that you can pour out any old bollocks and people will give it credence.

    I once saw a classified ad in the old “Science Digest” magazine. “Antigravity is a Reality”, it said. Followed up by a comment for the Ages: “Scientific”, it said, “PhD ratings”. Which sums it all up.

  3. Synp,

    I’m probably being a bit slow here, but simply following the “statistics” line, and using their wording “an algorithm could correctly distinguish between gay and straight men 81% of the time” – if it had predicted that all were straight, would it have been able to claim 98% accuracy?

    If I’m reading it correctly, it sort of suggests it “has a bias”?

  4. It is definitely plausible; I’ve got pretty good gaydar (good enough that I’ve outed people to my wife who were in committed heterosexual relationships at the time, and who are now in committed homosexual ones). There is definitely something about features and mannerisms that can be picked up through pattern recognition, which is what AI does best.

  5. > Men posing for a photo to look appealing to men are going to do it differently than men posing for a photo to look appealing to women.

    Yep. From the composite example shown, it looks like heterosexual dating photos have the head tilted back, to make the jaw look bigger and thus make them look tougher & more aggressive; whereas homosexual dating photos have the head tilted forward, which makes the jaw appear smaller and less threatening.

  6. “Also, 81% is not that spectacular a hit rate. Suppose gays are 2% of the population. This algorithm labels almost 20.24% of the population as gay”

    No. What they did was to pull out 35,326 pictures evenly divided between men/women and straight/gay, so you need to compare the result with a 50% baseline. Incidentally, the system got 81% with a single photo, but 91% with 5 photos of each person. Humans manage about 61%.

    But they also agree that it would perform far less well against a more representative population – they tried it on a thousand randomly selected photos with a more usual (they claim) 15:1 ratio, asked it to pick out the top 100 most likely gay, and only got 47 right.

    https://www.economist.com/news/science-and-technology/21728614-machines-read-faces-are-coming-advances-ai-are-used-spot-signs


    I thought the interesting bit was where they said they could possibly use it to identify IQ or political views. So Google/Facebook/etc. in future will be able to spot racists and sexists by their faces, and label them in all online photos. Local councils and police forces will be able to use their street cameras to identify and track them. The smart phone App can’t be far behind. And a 91% accuracy is pretty good for that, right? Better than having no idea at all whether the person you’re stood next to in the bus queue is a secret racist, sexist Trump-supporting UKIPer, right?

    In fact, I think there might be a big market for such an App! Even if the assignments it made were no better than random, lots of people would still believe in and pay for it. After all, they have astrology Apps, right?

    You could even build it into a monster-hunting version of Pokemon Go, yes? Gotta Catch ‘Em All!

  7. The same article says a machine can pick out Yank social security numbers from photographs and online info. Unless that means there is such a photo/info link in a databank somewhere, that is a crazy statement.

    As for your crap, NiV, it is no surprise to find you on here defending junk science. Presumably the last part of your witty-as-ever bull-fest is intended as a joke.

    In the same spirit I point out that if there should be no way to escape the eye of tyranny in future, there would be no reason to delay a kill-as-many-as-you-can-before-they-get-you rampage.

    Perhaps your stunningly successful verbal techniques of jihadi-“talkdown and tame” might help the evil state tho’.

    Also, were I PM, not only would all such UK research be switched into ways that ordinary people can beat the technology of tyranny, but there would be a worldwide MI6 kill-list of scientific pork engaged in just such work on behalf of state oppression.

  8. NiV: they tried it on a thousand randomly selected photos with a more usual (they claim) 15:1 ratio, asked it to pick out the top 100 most likely gay, and only got 47 right.

    That’s reassuringly useless.

  9. “As for your crap NiV it is no surprise to find you on here defending junk science.”

    I wasn’t defending it.

    And your reason for thinking it’s junk science is…?

    “That’s reassuringly useless.”

    Supposedly there were 70 known gays in the sample and it got 47 of them. If it were truly clueless and picked faces at random, it should have got about 7 by chance.
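
    A sanity check on that chance baseline in Python, using the figures quoted above:

        # Expected true positives if the top-100 picks were random
        # draws from the 1,000-photo sample containing 70 gay men.
        gay_in_sample = 70
        sample_size = 1000
        picks = 100

        expected_by_chance = picks * gay_in_sample / sample_size
        print(expected_by_chance)   # 7.0 -- versus the 47 it actually found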

    But this is, of course, only the first iteration. Technology tends to get better over time.

    However, my main point wasn’t about whether it works or not. It was about the social implications of people *believing* that it does, and what they’ll use it for.

    That’s why the LGBT set are so against it. You have to be careful of the precedents you set. Never give the authorities or society generally a tool of social control that you wouldn’t be happy to see in the hands of your enemies. Because one day, it will be.

  10. It seems like a Bayesian problem to me.
    At a guess, the number of false positives and false negatives would be big enough on both sides that no court would convict and no employer would dare to discriminate.

    “At a guess, the number of false positives and false negatives would be big enough on both sides that no court would convict and no employer would dare to discriminate.”

    To the first, I say “campus rape trials”, and to the second, there’s no law forbidding discrimination against racists and sexists (nor against simply looking like a racist/sexist). The only protected categories are age, race, sex, sexual orientation, religion, marital status, and disability.

    And even if it were illegal and the identification unreliable, that wouldn’t stop people doing it, along with more subtle forms of the same thing.

  12. A good 40% of the males on any dating site who express a preference for a female companion will be gay (in but accept, or in and deny): some are looking for a beard for cover, some are in denial and are using their (misogynistic) conquests to demonstrate how hetero they are.

    Less than 10% of females who express a preference for a male companion are gay; ‘straight’ females (in and deny) will not date at all, and ‘straight’ ones (in but accept) have less requirement for a beard.

    James NZ,

    ‘pattern recognition, which is what AI does best.’

    AI is shit at best, and pattern recognition is something that humans have evolved to do, as you point out in your previous paragraph; there’s nowt as queer as a bloke pretending.

  13. @Synp
    > Also, 81% is not that spectacular a hit rate.

    It would be interesting to see if the algorithm has a % confidence measure of some kind. It sounds like it is getting reasonable results when it has to make a call every time. If it were set up to trigger only at over 95% confidence in homosexuality across 5+ photos, that seems like it would be a good basis for monitoring of potential benders in sensitive jobs by security police.
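
    A sketch of what such a triggering rule might look like, in Python. The per-photo scores, the 0.95 bar and the 5-photo minimum are all hypothetical; nothing in the study specifies this interface:

        # Hypothetical high-confidence trigger: average the classifier's
        # per-photo probabilities for one person and flag only when the
        # mean clears a high bar -- trading recall for precision.
        def flag(photo_scores, threshold=0.95, min_photos=5):
            if len(photo_scores) < min_photos:
                return False   # not enough evidence to make a call
            return sum(photo_scores) / len(photo_scores) > threshold

        print(flag([0.97, 0.99, 0.96, 0.98, 0.97]))  # True
        print(flag([0.97, 0.99, 0.96, 0.98, 0.55]))  # False: one weak photo drags the mean down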

  14. Paul Rain,

    ‘that seems like it would be a good basis for monitoring of potential x in sensitive jobs by security police.’

    You can no more identify a criminal from a mugshot than you can a trainspotter or a homosexual from a ‘self-described’ photo on a website.
    I have no doubt that some people are persuaded by the sort of pseudo-scientific crap that is spun by paid propagandists in order to ensure next year’s grant cheques, but the idea of implementing any system based on their ‘results’ is beyond parody; it is sick.

    ThoughtCrime, it’s in your head, better turn yourself in and hope you go easy on yourself.

  15. @PF:
    “I’m probably being a bit slow here, but simply following the “statistics” line, and using their wording “an algorithm could correctly distinguish between gay and straight men 81% of the time” – if it had predicted that all were straight, would it have been able to claim 98% accuracy?”

    Yes, predicting everyone is straight is right about 95% of the time (bisexuals not only exist, they outnumber gays). The same goes for any way of identifying potential terrorists (they’re an even smaller part of the population).
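
    That baseline in a couple of lines of Python (the 5% base rate here is an assumption; the 2% figure used upthread would give the 98% PF mentions):

        # Accuracy of the trivial "everyone is straight" classifier
        # is simply 1 minus the base rate of non-straight users.
        base_rate = 0.05
        print(f"{1 - base_rate:.0%}")   # 95%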

    @Mr Ecks
    “How could such an “al-gore-ism” work?”

    Generally they take a bunch of pictures, in this case 35,000 pictures from dating websites, where each picture is labeled “gay” or “straight”. This is called the training set. They feed them to a computer running a machine learning algorithm (logistic regression, say; in this case they used a neural network) and it optimizes a function that yields a “gay” or “straight” label in the most accurate way possible.
    Step 2: they take another bunch of labeled pictures. This is called the validation set. They use the same function they developed on the training set to guess the sexuality of the people in the validation set. Its accuracy there is the number that gets published.
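
    A minimal sketch of that two-step procedure, assuming scikit-learn and feature vectors already extracted from the photos. The random data, the 128-dimensional features and the logistic-regression stand-in are all placeholders, not the study’s actual pipeline:

        # Step 1: fit a classifier on labeled training pictures.
        # Step 2: report its accuracy on held-out validation pictures.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 128))    # stand-in face-feature vectors
        y = rng.integers(0, 2, size=1000)   # stand-in labels: 1 = "gay", 0 = "straight"

        X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2)
        model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

        # The published figure corresponds to this held-out score.
        print(model.score(X_val, y_val))    # ~0.5 here, since these labels are pure noise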

    @NiV
    “What they did was to pull out 35,326 pictures evenly divided between men/women and straight/gay.”

    Yes, that’s for the training and validation sets. But we need to assess the efficacy of the tool as the LGBT activists fear it would be used: to take a picture of a member of the general population and label them “gay” or “straight”. That would fail spectacularly. Note that the 47-out-of-100 figure is also for pictures from dating sites. Accurate gaydar this ain’t.

    @BogRocket
    “A good 40% of the males on any dating site who express a preference for a female companion will be gay”

    Source?

  16. “A good 40% of the males on any dating site who express a preference for a female companion will be gay”

    Yes, without attribution I’m calling bollocks on that one.

  17. BiCR: Yeah, just the usual projection that one expects from these people. Just like when Kinsey came up with his ridiculous claim that 10% of people were homosexual on the basis of his surveys of prisoners and interviews with pedophiles.

  18. Synp: on what basis is the machine deciding who is what? Have they programmed it with what to look for? Or is the gadget deciding for itself what it is looking for? On the basis of (obviously) some kind of physiognomy, in the manner of Bertillon or ancient Chinese face-reading?

    Still sounds like crackpot science to me.

  19. When I was working in a ‘new’ Uni (old Poly) in the early 90s they proposed giving the teaching staff ID cards to more easily distinguish them from the students.

    This was strongly opposed by the female faculty members as it would also allow people to identify them as women!

    I wonder how well AI would work in this case?
