The arrival of deepfakes generated mixed reactions: on one hand, positive surprise at the remarkable quality of the fake videos and images that can be produced; on the other, concern about their potential use in fraud and other ethically questionable practices.
This phenomenon posed a new challenge for research in this field, particularly work focused on security. Along these lines, a recent project has proposed a method to strengthen the AI-based detection mechanisms used to identify fake images and videos.
Depth turned out to be another factor worth analyzing in deepfake research.
A research group from the University of Rome La Sapienza in Italy has proposed a new method to detect digitally manipulated faces by identifying depth inconsistencies.
The RGB and depth properties shown in the image accompanying this note contain semantic information that is easier to interpret and robust to heavy compression operations.
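As a rough sketch of how RGB and depth information can be combined for a detector (this is an illustration with placeholder arrays, not the authors' implementation), an estimated per-pixel depth map can simply be stacked as a fourth channel alongside the color channels:

```python
import numpy as np

# Illustrative sketch (not the researchers' code): combine an RGB face
# crop with a per-pixel depth map into one 4-channel input, the kind of
# representation an RGB-D detector could consume. Shapes are assumptions.
rgb = np.random.rand(224, 224, 3)   # placeholder RGB face crop, values in [0, 1]
depth = np.random.rand(224, 224)    # placeholder monocular depth estimate

# Stack depth as a fourth channel next to R, G, B.
rgbd = np.concatenate([rgb, depth[..., np.newaxis]], axis=-1)
print(rgbd.shape)  # (224, 224, 4)
```

A classifier fed this joint representation can then look for places where the color channels and the depth channel disagree, which is the kind of inconsistency a manipulated face tends to introduce.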
This analysis can be done manually with image-editing programs that offer advanced tools, such as Photoshop or GIMP. However, detecting these irregularities can also be delegated to artificial intelligence, since machine learning systems have already been developed for this task.
The La Sapienza team presented DepthFake, a newly proposed method that builds on the depth analysis described above. The distinctive feature of this approach is that it extracts the depth of a face with a monocular estimation method and chains that depth map to the RGB image. In simple terms, the color image used to detect anomalies is reduced to grayscale to speed up the search, exploiting the fact that the relevant image features are not lost in the conversion.
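The grayscale step mentioned above can be sketched as follows (a hypothetical illustration, not DepthFake's actual code): the three color channels are collapsed into a single luminance channel using the standard ITU-R BT.601 weights, cutting the input size while preserving the structural detail a detector relies on.

```python
import numpy as np

def to_grayscale(rgb: np.ndarray) -> np.ndarray:
    """Collapse an H x W x 3 RGB image to a single luminance channel.

    Uses the standard ITU-R BT.601 weights; an assumed stand-in for the
    grayscale reduction described in the article, not the paper's code.
    """
    weights = np.array([0.299, 0.587, 0.114])  # R, G, B luminance weights
    return rgb @ weights

rgb = np.random.rand(224, 224, 3)  # placeholder face crop
gray = to_grayscale(rgb)
print(gray.shape)  # (224, 224)
```

Because the weights sum to 1, a uniformly white image stays white after conversion, and edges and textures (the cues a detector inspects) carry over into the single channel.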
The article presenting the main technical aspects of this method, along with the good results obtained in testing it, is encouraging proof of the responsive nature of security-tool development.
Faced with such a powerful technology, efforts like this one bring us closer to ensuring that deepfakes remain a source of engaging audiovisual experiences rather than of fraudulent and harmful content.