• Generative AIs have become powerful enough to produce highly realistic fake photos
  • But one part of the human body still gives them trouble: hands
  • The likely cause is the training data, since hands are rarely clearly visible in real photos

Generative artificial intelligences, such as ChatGPT, are invading our daily lives, and it is becoming important to know how to distinguish AI-generated content from content created by humans. For text, companies are already working on the problem, and there are online services that claim to detect fake photos created by generative AIs. But before turning to detection algorithms, a simple look at the hands can sometimes be enough.

Make no mistake: generative image AIs such as DALL-E or Midjourney are very sophisticated today. For example, one person used one of these artificial intelligences to create a fake life on Instagram, and he managed to fool everyone, even his relatives.

Hands look weird in these pictures

However, there still seems to be one part of the human body that these generative AIs struggle to draw well: the hands. So if you have any doubts about the authenticity of a photo, the first step (before using detection software, for example) is to look at people’s hands, if they are visible. The recent fake photos of Pope Francis that went viral are a case in point.

These images look very realistic. But if you look closely, you will spot details that don’t hold up. For example, in the right-hand image of the tweet below, there is clearly a problem with the right hand.

And this is no exception. Generative AIs, for the moment, really struggle with hands, a problem that has already inspired many memes on Twitter.

Why do AIs have a problem with hands?

If generative AIs still struggle to create realistic hands, the likely culprit is the data they were trained on. In January, BuzzFeed asked Stability AI, the company behind Stable Diffusion, about this. A representative of the company replied: “It is generally understood that in AI datasets, human images show hands less visible than faces. Hands also tend to be much smaller in source images, as they are relatively rarely seen in large format.”

In other words, these generative artificial intelligences would need more images of hands, and of better quality, to correct the problem. In the meantime, this flaw can help you quickly judge whether an image is genuine without resorting to advanced techniques.

But that’s not all. BuzzFeed also quotes Amelia Winger-Bearskin, an art and AI professor at the University of Florida. She explains that in the images AIs are trained on, hands are often incomplete even when they are visible: they may be holding something, shaking another hand, and so on.

In any case, AI-generated content is starting to cause concern. In Europe, lawmakers are considering a text that would require a warning on any content (text or image) generated by artificial intelligence, as European Commissioner Thierry Breton recently announced in an interview with France Info.
