Short answer

‘Time is running out’: can a future of undetectable deepfakes be avoided?

No.

Long answer: Hell No.

We’ll just have to adapt to it.

13 thoughts on “Short answer”

  1. And not for the first time, Michael Crichton got there years earlier with “Rising Sun”.

  2. I can’t help thinking that it’ll be preferable to the detectable shallow fakes who run things at the moment.

  3. I really can’t see what the problem is. “Fakes” have been with us since the first artist put brush to canvas. A photographer once told me he used shaving foam for photographing food products for adverts because it’s more realistic than cream. There’s a simple safeguard that works for anything you’re told or shown. First ask yourself, “Why am I being presented with this information?”

  4. Not really seeing that much of an issue. The reason’s partly in the article:

    “The Coalition for Content Provenance and Authenticity, which includes among its membership the BBC, Google, Microsoft and Sony, has produced standards for watermarking and labelling”

    Provenance will become important – just like antiques, for the big bucks, anyway.

    On the generation side, there’s sticking yer “standard” provenance info in the file header, just like your phone camera will do (date/time, GPS co-ordinates), and then there’s cryptography – watermarking via steganography in the bits of the image itself, or whatever the NFT nerks get up to (a toy sketch at the end of this comment). There’s also a third-party approach, a la SSL certificates. The information stored with the image can easily include the account ID of the user.

    On the distribution side, there’s reading that information, and either allowing/disallowing its inclusion in the post, or automatically generating captions/mouseovers.

    Then, there’s the cost side – who can afford to execute the model, or how long are they willing to wait for it to run?

    And, the regulatory side – licensing for those running or using models, or distributing the images, or some other mechanism. Those executing the models can always keep the logs of the prompts.

    There’s always terminal stupidity on the part of the user/viewer, but I think the cheaper, quicker models will tend towards producing imagery verging on the cartoonish, or obviously fake. Stuff coming out of other models – those intended to escape provenance/regulatory safeguards – will just add vast amounts of noise to the platforms it gets posted on.
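
    A toy illustration of the steganographic-watermark idea mentioned above – a minimal Python/Pillow sketch, not any real scheme (C2PA, for instance, works with signed manifests rather than pixel tricks). File names and the message are placeholders, and the mark is trivially destroyed by re-encoding or resizing:

        # Toy least-significant-bit watermark: hides a short provenance
        # string in the red channel. Lossless output only - JPEG
        # recompression would wipe the bits.
        from PIL import Image

        def embed(in_path: str, out_path: str, message: str) -> None:
            img = Image.open(in_path).convert("RGB")
            data = message.encode("utf-8") + b"\x00"   # NUL terminator
            bits = [(byte >> i) & 1 for byte in data for i in range(8)]
            w, h = img.size
            if len(bits) > w * h:
                raise ValueError("image too small for message")
            px = img.load()
            for idx, bit in enumerate(bits):
                x, y = idx % w, idx // w
                r, g, b = px[x, y]
                px[x, y] = ((r & ~1) | bit, g, b)      # overwrite LSB of red
            img.save(out_path, "PNG")

        def extract(path: str) -> str:
            img = Image.open(path).convert("RGB")
            px = img.load()
            w, h = img.size
            out, byte, nbits = bytearray(), 0, 0
            for idx in range(w * h):
                x, y = idx % w, idx // w
                byte |= (px[x, y][0] & 1) << nbits
                nbits += 1
                if nbits == 8:
                    if byte == 0:                      # hit the terminator
                        break
                    out.append(byte)
                    byte, nbits = 0, 0
            return out.decode("utf-8")

        embed("cat.png", "cat_marked.png", "account:12345 taken:2024-05-01")
        print(extract("cat_marked.png"))   # -> account:12345 taken:2024-05-01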

  5. @bis: “A photographer once told me he used shaving foam for photographing food products for adverts because it’s more realistic than cream”

    The nice brown skin on “roast” chicken is apparently creosote, again because it looks better in photos.

  6. Once upon a time, a colleague’s brother was a food photographer, with regular gigs for various cookery magazines and the likes of M&S and Waitrose; there are all sorts of tricks.

    At the same time, there was a woman who turned out to be a hand model; lots of shoots of rings and bracelets and skin care creams and whatnot.

  7. The point, DMcD, is that anyone with any sense knows the photos aren’t reality. They’re an illusion to sell the product. So why not treat “deep fake” the same way? As I replied to the geezer who constantly shares his latest interweb conspiracy theories with me: “Why would anyone who had access to such sensitive & confidential information share it with an insignificant tosser like you?” If you see a video close-up of some celebrity totty giving oral to an overendowed black man, it’s pretty well guaranteed to be fake. An endless list of questions. Why would she perform for the camera? Why would whoever worked the camera be sharing it? But most importantly, why would they be sharing it with you?

  8. Worked for a food company, and it turns out food doesn’t photograph well, so there are exemptions in the advertising standards for not using the actual product.

  9. Did strike me that a lot of measures to ensure authenticity are potentially very hostile to privacy. E.g., can media orgs verify some rando’s phone vid of a major event (a train crash, for example), or is the guy just hawking some AI output to make a quick buck? How do you know it’s even from his phone and he hasn’t effectively stolen it off a mate? If the answer is “he’s in the clear since his phone’s clever digital watermarking verifies that particular device took that image at that particular time and place”, then the privacy implications are pretty lousy.

  10. “can media orgs verify some rando’s phone vid of a major event (train crash for example)”

    Yes, in that time/date and location are usually put into the header information within the file – a short sketch of reading those fields follows at the end of this comment. That said, on many devices you can turn that off with a bit of fiddling. Or, with software, you can remove or alter the data in the header.

    For many events tho’, the media organisation would prefer to have more than a single set of pictures or eyewitnesses.
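
    A short sketch of reading those header fields, assuming Python with Pillow and a placeholder file name. The absence or presence of any field proves nothing, for exactly the reasons given above:

        # Dump the EXIF tags a phone camera typically writes: date/time
        # and make/model in the main IFD, co-ordinates in the GPS sub-IFD.
        from PIL import Image, ExifTags

        img = Image.open("photo.jpg")
        exif = img.getexif()

        for tag_id, value in exif.items():
            name = ExifTags.TAGS.get(tag_id, hex(tag_id))
            print(f"{name}: {value}")          # e.g. DateTime, Make, Model

        gps = exif.get_ifd(0x8825)             # 0x8825 = GPS IFD pointer
        for tag_id, value in gps.items():
            name = ExifTags.GPSTAGS.get(tag_id, hex(tag_id))
            print(f"{name}: {value}")          # e.g. GPSLatitude, GPSLongitude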

  11. BiS:

    “The point, DMcD, is that anyone with any sense knows the photos aren’t reality.”

    There’s always a set of users that require the repeated application of the clue bat.

  12. DMcD, indeed yes.

    Freeware called ExifTool allows one to alter media file metadata – a couple of illustrative invocations below. Mostly it can be done through the Windows “Properties” or “Photos” app easily enuff.
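
    For illustration, a couple of standard ExifTool invocations, driven from Python here for consistency with the sketches above – the file name and values are placeholders:

        # ExifTool rewrites metadata in place (keeping a *_original backup
        # by default), which is the commenter's point: header data is
        # evidence of nothing.
        import subprocess

        # Strip every piece of metadata:
        subprocess.run(["exiftool", "-all=", "photo.jpg"], check=True)

        # Or rewrite the capture time and drop the GPS fix:
        subprocess.run(
            ["exiftool",
             "-DateTimeOriginal=2019:06:01 12:00:00",
             "-GPSLatitude=", "-GPSLongitude=",
             "photo.jpg"],
            check=True,
        )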
