Intel created FakeCatcher to detect deepfake videos by analyzing facial blood flow


In collaboration with the State University of New York at Binghamton, Intel has created a technology called FakeCatcher that can identify whether a video is fake with 96% accuracy.

In general, deepfakes are synthetic videos, images, or audio clips in which the action or person is not real. Deepfake videos rely on artificial intelligence and deep learning techniques that modify a video's content to present as real something that never actually happened. Often this means replicating a person's voice and face to make them appear to say or do things they never said or did.

There are also "good" deepfakes, such as those clearly identifiable as satire or used to change one's identity in contexts such as metaverse avatars. But deepfake videos can also prove to be a political weapon of enormous power when the victims are institutional figures who can convey any kind of message and appear completely credible.

Intel looks at human blood flow to determine whether a video is real

Intel’s FakeCatcher relies on machine learning, but unlike other similar systems that examine a video’s raw data to find deepfakes, it analyzes the video as a human would see it, looking for signals that escape the human eye.

That signal is the blood flow in the person’s face, detected through photoplethysmography (PPG), which can be read from the video’s pixels. As the heart pumps blood, the skin subtly changes color. These blood flow signals are collected from across the face, and algorithms translate them into space-time maps.
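To give a rough sense of the idea (this is not Intel's actual code), the sketch below averages the green-channel intensity of a few facial patches over time and stacks the per-region signals into a space-time map. The function name, the region boxes, and the normalization are assumptions made for the example.

```python
import numpy as np

def ppg_spacetime_map(frames, face_regions):
    """frames: list of HxWx3 RGB video frames (numpy arrays).
    face_regions: list of (y0, y1, x0, x1) boxes covering patches of the face.
    Returns a (num_regions, num_frames) map of normalized intensity signals."""
    signals = []
    for (y0, y1, x0, x1) in face_regions:
        # Mean green-channel intensity per frame; the subtle color changes
        # caused by blood flow appear as a weak periodic component here.
        series = np.array([f[y0:y1, x0:x1, 1].mean() for f in frames])
        # Remove the slow baseline so only the pulsatile variation remains.
        series = series - series.mean()
        std = series.std()
        signals.append(series / std if std > 0 else series)
    # Rows = facial regions (space), columns = frames (time).
    return np.stack(signals)
```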

Then, using deep learning, FakeCatcher determines whether the video is real or fake, because a synthesized face shows no plausible blood flow signal.
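A minimal sketch, assuming PyTorch, of the kind of small convolutional classifier that could label such space-time maps as real or fake. The architecture is purely illustrative; the article does not describe FakeCatcher's actual network.

```python
import torch
import torch.nn as nn

class SpaceTimeMapClassifier(nn.Module):
    """Toy classifier over (regions x frames) blood-flow maps."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)  # logits: real vs. fake

    def forward(self, x):
        # x: (batch, 1, num_regions, num_frames) space-time maps
        h = self.features(x).flatten(1)
        return self.classifier(h)
```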

We immediately wondered whether a low-resolution video, or one with a filter like those used on social media, could escape FakeCatcher’s watchful eye.

The Director of FakeCatcher at Intel, Ilke Demir, answered us indirectly via a Twitter Spaces session on her account: the team created a dedicated model that also accounts for low-resolution, filtered, or blurred videos, and even then FakeCatcher recognized fake videos with only slightly lower accuracy: 91%.
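One plausible way to build that kind of robustness, offered here only as an assumption since the article does not say how Intel trained its model, is to degrade training frames with downscaling and blur before extracting the blood flow signals, as in this OpenCV sketch (the function and its parameters are hypothetical).

```python
import cv2  # OpenCV

def degrade_frame(frame, scale=0.5, blur_ksize=5):
    """Simulate a low-quality, filtered social-media clip from a clean frame."""
    h, w = frame.shape[:2]
    small = cv2.resize(frame, (int(w * scale), int(h * scale)))
    restored = cv2.resize(small, (w, h))  # lose high-frequency detail
    return cv2.GaussianBlur(restored, (blur_ksize, blur_ksize), 0)
```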

In an interview with VentureBeat, Demir said that for now FakeCatcher cannot be fooled by deepfake creators who might try to game the system by replicating a convincing blood flow.

The reason is that the frames extracted by FakeCatcher cannot be transferred to train a generative adversarial network that creates deepfakes. And approximating the photoplethysmography signal would require huge datasets of facial blood flow that do not currently exist: at most there is a dataset covering about 40 people, which is not a reliable sample.
