Programs like Midjourney and DeepAI create photorealistic images with AI and are causing an avalanche of fakes on the internet. Check out DW's tips for identifying whether a photo is real or fake.

It has never been easier to create deceptively real fake images: all you need is an internet connection and a tool that works with artificial intelligence. In seconds, photorealistic images are created that many of us perceive as real. That is why they spread so quickly on social media and are often used deliberately for disinformation. Just a few recent examples that have gone viral: Vladimir Putin allegedly arrested, Donald Trump also allegedly arrested, or Elon Musk allegedly holding hands with Mary Barra, CEO of General Motors. All of them are images created by artificial intelligence (AI) showing things that never happened.

Earthquakes That Never Happened

Events such as alleged spectacular car chases or arrests of celebrities like Putin or Trump can be verified fairly quickly by checking reputable media sources. More problematic are images of people who are not as well known, AI expert Henry Ajder tells DW. "And it's not just fake images of people that can spread misinformation," explains Ajder. "We've seen people create events that never happened, like earthquakes."

That is what happened with a supposed major earthquake said to have shaken the Pacific Northwest in 2001. The earthquake never happened; the images about it shared on Reddit were generated by artificial intelligence. Faced with such images, it is becoming increasingly difficult to determine what really happened.

But just as it is human to err, so too does AI make mistakes, at least for now, as AI tools are developing rapidly. Currently (April 2023), programs like Midjourney, DALL-E and DeepAI have their problems, especially with images that show people. Here are six tips from DW's fact-checking team on spotting manipulations:

1. Zoom in and examine closely

Many AI-generated images look real at first glance. The programs used can create photorealistic images that often only turn out to be fake on closer inspection. So the first tip is to look closely. To do this, look for the highest-resolution version of the image you can find and zoom in on details. Magnification can reveal inconsistencies, errors or image clones that are not detectable at first glance. For technically minded readers, this kind of close-up check can even be scripted; see the sketch after tip 3.

2. Look up the source of the image

If you are not sure whether an image is real or generated by AI, try to find out more about its source. Sometimes other users share their findings in the comments, pointing to the source or the first post of the photo. A reverse image search can also be useful: upload the photo to tools like Google, TinEye or Yandex, which often leads to more information and may even reveal its origin. If the search results already include fact checks by reputable media outlets, they often provide clarity and context.

3. Pay attention to body proportions

Are the body proportions of the people portrayed correct? It is not uncommon for AI-generated images to show discrepancies: the hands may be too small or the fingers too long, or the head and feet may not match the rest of the body. This is the case in the fake image that supposedly shows Putin kneeling before Xi Jinping. The rear shoe of the kneeling person, supposedly Putin, is disproportionately large and wide. The calf of the same leg appears elongated, and the person's half-covered head is very large and does not match the rest of the body.
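As promised in tip 1, the close-up check can be scripted. What follows is a minimal sketch, assuming the Pillow imaging library is installed; the file name suspect.jpg and the crop coordinates are hypothetical placeholders chosen purely for illustration.

```python
# A minimal sketch, assuming Pillow is installed (pip install pillow).
from PIL import Image

# "suspect.jpg" is a hypothetical local copy of the image to inspect.
img = Image.open("suspect.jpg")

# Crop a detail worth checking, e.g. a hand. The coordinates are
# placeholders: (left, upper, right, lower) in pixels.
region = img.crop((400, 300, 600, 500))

# Enlarge the detail 4x with a high-quality resampling filter and save it,
# making anomalies like extra fingers or warped ears easier to spot.
zoomed = region.resize((region.width * 4, region.height * 4), Image.LANCZOS)
zoomed.save("suspect_detail.png")
```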
4. Beware of typical AI errors

These days, hands are the main source of errors in AI imaging programs such as Midjourney and DALL-E. People often have a sixth finger, like the policeman to Putin's left in the photo mentioned above. Or take the viral images supposedly showing Pope Francis in a designer coat: the pope in the right-hand photo appears to have only four fingers, and his fingers in the left-hand photo are unusually long. The photos are fake.

Other common errors in AI images are too many teeth, oddly misshapen eyeglass frames or unrealistically shaped ears, as in the fake image of Xi and Putin mentioned above. Reflective surfaces such as helmet visors also cause problems for AI programs; sometimes they seem to dissolve, as in the image of Putin's alleged arrest.

But AI expert Henry Ajder warns: "In the current version, Midjourney still makes mistakes like with the pope image, but it is much better at generating hands than in previous versions. The direction is clear: we cannot expect programs to keep making such mistakes for much longer."

5. Does the image look artificial and idealized?

Midjourney in particular creates many images that seem idealized, that is, too good to be true. In such cases, trust your instinct: can such a perfect, aesthetic image of impeccable people really be real? "The faces are too pure. The fabrics shown are also very harmonious," explains Andreas Dengel, executive director of the German Research Center for Artificial Intelligence (DFKI). People's skin in these images is often smooth and free of any imperfections, and their hair and teeth are flawless, which is rarely the case in reality. Many photos also have an artistic look that even professional photographers can hardly achieve in the studio, even with extensive image editing. AI programs apparently tend to create idealized images that look perfect and are meant to please as many people as possible. This is precisely a weakness of these programs, because it makes some fakes recognizable.

6. Examine the background of the image

Sometimes the background of an image reveals the manipulation. It, too, can contain deformed objects, such as lampposts. In some cases, AI programs clone people and objects and use them twice. And it is not uncommon for the background of AI images to be out of focus. But even this blur can contain errors, for example when the background is not simply soft but artificially smeared, as in a fake image that supposedly shows an irate Will Smith at the Oscars.

Conclusion

Many AI-generated images can still be identified as fake with a little research. But the technology is getting better, and such errors are likely to become rarer in the future. Can AI detectors, such as those hosted on platforms like Hugging Face, help uncover manipulations? From what we have found, the detectors provide clues, but nothing more; one way to query such a detector is shown in the sketch below. The experts we interviewed tend to advise against relying on them, saying the tools are not yet fully developed: even genuine photos are declared fake, and vice versa. The question of what is real and what is not cannot always be reliably answered by detector applications.

It is a "technology race" with artificial intelligence, says Antonio Krüger, an AI researcher at Saarland University. "I think we have to get used to the fact that you can't really trust any image on the internet."
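For readers who want to try such a detector themselves, here is a minimal sketch of querying one via the Hugging Face transformers library. The model name umm-maybe/AI-image-detector is one publicly listed community model used here as an illustrative assumption, not a tool vetted by DW, and suspect.jpg is a hypothetical placeholder file; as noted above, treat the scores as clues, not proof.

```python
# A minimal sketch, assuming the transformers library and an image backend
# are installed (pip install transformers torch pillow) and that the
# community model "umm-maybe/AI-image-detector" is still available.
from transformers import pipeline

# Load an image-classification pipeline with an AI-image-detector model.
detector = pipeline("image-classification", model="umm-maybe/AI-image-detector")

# "suspect.jpg" is a hypothetical local copy of the image to check.
# The pipeline returns a list of dicts with a label and a confidence score.
for result in detector("suspect.jpg"):
    print(f"{result['label']}: {result['score']:.2%}")
```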
Note: For legal and journalistic reasons, DW does not publish images created with generative AI programs. The exception: we show AI images when they are themselves the subject of news reports, for example when we verify fake news or report on the capabilities of artificial intelligence. In such cases, we clearly indicate that they are AI images.

Authors: Kathrin Wesolowski, Thomas Sparrow, Joscha Weber
