• Tuukka R@piefed.ee
    1 day ago

    If the part of the image that reveals it was made by an AI is obvious enough, why contact a specialist? Of course, reporters should absolutely be trained to spot such things with their bare eyes, without something telling them specifically where to look. But still, once the reporter can already see what’s ridiculously wrong with the image, it would be a waste of the specialist’s time to call them in to look at it.

      • azertyfun@sh.itjust.works
        1 day ago

        My guess is it’s the same thing as “critics say [x]”. The journalist has an obvious opinion but isn’t allowed by their editor-in-chief to put it in, so to maintain the illusion of NeutTraLITy™©® they find a strawman to hold that opinion for them.

        I guess now they don’t even need to find a tweet with 3 likes to present as a convenient quote from “critics” or “the public” or “internet commenters” or “sources”; they can just ask ChatGPT to generate it for them. Either way, any newsroom where that kind of shit flies is not doing serious journalism.

      • Tuukka R@piefed.ee
        1 day ago

        The article implies that the chatbot was able to point out details in the image that the reporter either could not immediately recognize without some kind of outside help or did not bother looking for.

        So what the chatbot added was letting the reporter notice, in a few seconds, something in the photo that would otherwise have taken several minutes to spot without the aid of technology.