Best Practices

The challenge of authenticating media in the age of AI-generated CSAM

The rise of AI-generated child sexual abuse material (CSAM) has dramatically complicated the job of law enforcement officers. AI-generated CSAM can be hyper-realistic, making it increasingly difficult to identify and classify with traditional detection methods. This evolving threat raises significant legal, ethical, and technological concerns, requiring agencies to adapt quickly.

These drastic improvements in AI challenge investigators' ability to determine the authenticity of material. Worse, each new AI-generated image or video makes it harder to identify real victims: time spent analyzing synthetic media delays the identification of genuine individuals, underscoring the need to authenticate material quickly.

In addition, the law varies significantly from one jurisdiction to the next and is constantly evolving. What has not been adjudicated in one region may be punishable by law in another. As the legal system works to catch up with technology, the need to discern and prove what is real versus what is synthetic becomes crucially important.

Unfortunately, there is a significant amount of misinformation on this topic. Some people assume that “deepfake detectors” can easily solve the problem, that metadata always provides the information needed to identify AI-generated material, or that results from digital forensics tools don’t need careful analysis and interpretation. None of these claims is true. Successfully identifying authentic material is challenging work, particularly when investigators need hard evidence that is admissible in court.

This aspect of the digital landscape is fast becoming a battle between trained investigators and criminals using increasingly sophisticated methods to commit crimes and hide evidence. Fortunately, a new set of digital forensic solutions is now available to help identify AI-generated content. Equipped with the right tools, investigators can concentrate on combating CSAM and separating authentic material from AI-generated content.

The difficulty of identifying AI-synthesized deepfakes

The increasing quantity and quality of CSAM deepfakes have created an urgent need for effective detection methods. While many companies are eager to establish authority in the realm of AI, when it comes to successfully identifying AI-generated media, promises don’t always match performance.

As artificial intelligence models and deepfake generation techniques continue to improve, the detection algorithms trained to recognize characteristics of synthetic media become less reliable. Detection algorithms need constant retraining and updates to remain effective. Failure to acknowledge this fact can mislead people about the long-term viability of a detection solution. What worked yesterday may not work today.

Furthermore, a detection algorithm that performs well on controlled datasets may perform poorly when faced with the diverse, unpredictable, and ever-changing content encountered in the real world, creating a false sense of security about the value of a given result. Some solutions can also only analyze still images rather than video, which limits their usefulness.

More data means more problems

The growing volume of information on modern devices has created one of the most serious problems for digital forensic investigators: complex, large-scale data environments they must now navigate. Computers, laptops, smartphones, and other mobile devices can provide terabytes of data to be examined, including messages, photos, videos, emails, app data, and location histories, all of which may be relevant to an investigation.

This exponential increase in data means investigators face significant time and resource constraints when trying to determine reliably which files are authentic and which were generated by AI. Reviewing such vast amounts of information manually is effectively impossible. As a result, tools that can assist investigators are becoming indispensable.
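
One long-standing way tooling cuts this volume is known-file filtering: every file is hashed, and anything whose digest appears in a curated reference set (such as NIST’s National Software Reference Library for known system files) is set aside automatically. Below is a minimal sketch of the idea in Python; the directory and hash-list names are hypothetical, and production workflows rely on validated forensic suites rather than ad hoc scripts.

```python
# A minimal sketch of known-file filtering, a standard triage technique.
# "evidence_export" and "known_hashes.txt" are hypothetical names.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute a file's SHA-256 digest, reading in 1 MiB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical reference set: one known hex digest per line.
known = set(Path("known_hashes.txt").read_text().split())

# Keep only the files that still need human review, i.e. those whose
# digests do not appear in the reference set.
needs_review = [p for p in Path("evidence_export").rglob("*")
                if p.is_file() and sha256_of(p) not in known]
print(f"{len(needs_review)} files remain after known-file filtering")
```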

Meanwhile, encryption, cloud storage, and data spread across multiple devices make it difficult to collect information in a unified way. Encrypted files, data stored on remote servers, and content held by social media platforms add legal and technical barriers that further complicate investigations, requiring constant updates to forensic tools and investigator training. At the same time, analysts must carefully manage this data to avoid compromising evidence integrity and to ensure chain of custody procedures are followed.
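
Hashing also underpins evidence integrity itself: a digest recorded at acquisition lets an examiner later demonstrate that the data has not changed. A minimal sketch, with a placeholder digest and a hypothetical file name standing in for the real chain-of-custody record:

```python
# Recompute a file's SHA-256 and compare it to the digest recorded at
# acquisition. The value below is a placeholder; in practice it comes
# from the chain-of-custody record.
import hashlib

RECORDED = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

digest = hashlib.sha256()
with open("video_0001.mp4", "rb") as f:  # hypothetical file name
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)

if digest.hexdigest() != RECORDED:
    print("WARNING: file no longer matches its acquisition hash")
```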

This combination of media volume and technical and legal obstacles can significantly slow investigators who lack tools and training designed for these challenges. Fortunately, a new generation of digital forensic tools can analyze and organize massive amounts of data, playing an indispensable role in assisting investigators.

From terabytes to trials

Investigators face a special challenge in differentiating between authentic and deepfake media when it comes to legal processes. To successfully protect victims of CSAM in court, evidence must meet demanding standards of admissibility, including relevance, authenticity, and reliability.

While various solutions claim to have algorithms that detect deepfakes, they often have limitations, particularly with new and sophisticated fakes. Many tools rely on identifying inconsistencies or artifacts that may not be present in advanced deepfakes. Complicating matters, when a file is uploaded to a social media platform such as YouTube, Facebook, Instagram, or Snapchat, a brand new file is created. Once that happens, the ability to interact with the original pixels and the original file metadata and structure is lost. The manipulation process may also involve multiple layers of editing and processing, making it difficult to trace a file back to its original source with demonstrable accuracy or legal admissibility.
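
The metadata loss is easy to demonstrate. The following sketch, using the Pillow imaging library and hypothetical file names, re-saves a photo roughly the way a platform’s transcoder might and then counts the EXIF tags that survive:

```python
# A simplified illustration of metadata loss on re-encoding.
# Requires Pillow (pip install Pillow); file names are hypothetical.
from PIL import Image
from PIL.ExifTags import TAGS

def exif_tags(path):
    """Return a dict of human-readable EXIF tags, empty if none survive."""
    with Image.open(path) as img:
        return {TAGS.get(tag, tag): value
                for tag, value in img.getexif().items()}

original = "camera_photo.jpg"     # hypothetical original capture
reencoded = "reencoded_copy.jpg"  # stands in for a platform's re-encode

# Re-saving without passing exif= discards the original metadata,
# roughly analogous to what many platforms do while transcoding uploads.
with Image.open(original) as img:
    img.save(reencoded, "JPEG", quality=85)

print("original EXIF tags:  ", len(exif_tags(original)))
print("re-encoded EXIF tags:", len(exif_tags(reencoded)))
```

A camera-original JPEG typically carries dozens of tags; the re-encoded copy carries none, because Pillow, like many transcoding pipelines, drops metadata unless explicitly told to preserve it.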

At this point, investigators are analyzing a file created by a social media platform, not the original file. And because detection algorithms may not produce consistently reliable results, the output of some tools that claim to identify deepfakes may not be admissible at trial.

Conversely, videos captured on mobile devices can be authenticated through file system and file format forensics, meeting evidentiary requirements. While video editing is easier than ever on mobile devices, the artifacts that editing leaves behind within files can be detected by the latest forensic tools. Just as importantly, originality and authenticity can be established using approaches such as file structure analysis.
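
To make file structure analysis concrete: MP4 and MOV files are built from a sequence of typed boxes (atoms), and the presence, order, and sizes of those boxes often differ between camera-original recordings and files re-encoded by an editor or platform. The sketch below, written against the ISO base media file format using only the Python standard library and a hypothetical file name, lists a file’s top-level boxes; real forensic tools examine this structure in far greater depth:

```python
# List the top-level boxes (atoms) of an MP4/MOV container.
# "evidence.mp4" is a hypothetical file name used for illustration.
import struct

def list_top_level_boxes(path):
    """Yield (box_type, size) for each top-level box in an ISO BMFF file."""
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break  # end of file (or truncated trailing bytes)
            size, raw_type = struct.unpack(">I4s", header)
            name = raw_type.decode("latin-1")
            if size == 1:
                # A 64-bit "largesize" field follows the 8-byte header.
                size = struct.unpack(">Q", f.read(8))[0]
                body = size - 16
            elif size == 0:
                yield name, 0  # box extends to the end of the file
                break
            else:
                body = size - 8
            yield name, size
            f.seek(body, 1)  # skip the box body to reach the next header

for box_type, size in list_top_level_boxes("evidence.mp4"):
    print(f"{box_type}\t{size} bytes")
```

A camera-original file might show a characteristic sequence such as ftyp, moov, mdat along with vendor-specific boxes, while a re-encoded copy often shows a different layout, which is one of the signals file structure analysis relies on.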

Identification, please

Distinguishing authentic footage from deepfake videos will continue to present significant challenges, particularly when it comes to legal admissibility. Deepfake detectors may or may not work, but if their results cannot legally be used in a criminal investigation, the question is moot.

On the other hand, tools that verify the authenticity of material through a formal, validated process and produce evidence admissible in court are becoming critically important. Trained investigators using the right software can combat CSAM without being distracted by AI-generated fakes.

Magnet Forensics already has a wide array of powerful tools to support investigations into AI-generated CSAM. In particular, Magnet Verify’s video authentication capabilities help investigators determine the trustworthiness of digital evidence, establish when a video has been edited or modified, and distinguish original camera video from synthetically produced media.

Magnet Verify generates detailed, legally compliant reports documenting the analysis performed, ensuring evidence can be used in court. Verify also works seamlessly with Magnet Axiom and other forensic tools to streamline investigations and ensure evidence integrity. It has become an indispensable aid in identifying deepfake and synthetic media, highlighting critical evidence, and supporting investigators in analyzing case data. With Verify, investigators have a powerful new tool in their fight against the scourge of CSAM.

Learn more about Magnet Verify and its ability to confidently authenticate images and video, and counter manipulated media, or contact us at sales@magnetforensics.com for more information.
