Facial recognition injection attacks, also known as video injection attacks, are becoming increasingly common. This article showcases different types of attacks and prevention methods.
ENISA describes in their publication, ID Proofing Good Practices, that facial recognition injection attacks occur five times more frequently than presentation attacks. In particular, video injection attacks leveraging deepfake technologies have gained prominence among criminals and nation-state actors.
This article provides a high-level overview of how facial recognition injection attacks operate, alongside recommended practices to secure system interfaces against these sophisticated threats. It was developed in collaboration with Mobai, a Norwegian start-up specializing in biometric facial recognition technology, where the author completed an internship.
Facial recognition is widely utilized as an authentication factor by verifying identity through biometric attributes—often termed something you are. Attacks on facial recognition systems generally fall into two categories: presentation attacks and injection attacks. Presentation attacks involve presenting deceptive materials, known as Presentation Attack Instruments (PAIs), such as photographs or 3D masks, to fool biometric sensors.
In contrast, facial recognition injection attacks, or video injection attacks, involve inserting manipulated videos or images directly into the camera stream using Injection Attack Instruments (IAIs). Attackers intercept or alter the camera feed, replacing it with fabricated content like images, pre-recorded videos, or deepfakes.
Deepfakes are not a focus of this article, but Andrea Hauser has written several great articles about them.
ENISA uses the following illustration to show where a fraudulent video is injected when attacking a facial recognition system: the camera feed is intercepted and a tampered video is inserted in its place.
The simplest attack form is using a virtual camera on the device. A virtual camera, as shown later, primarily serves as a method to inject manipulated videos into video streams. Such attacks can be executed on both desktop computers and mobile devices. Popular software solutions providing virtual camera capabilities include OBS and ManyCam. These tools are extensively used in live streaming or video conferencing to overlay content onto video feeds. Executing this type of attack is as straightforward as selecting the virtual camera input in applications such as Zoom and then choosing the desired manipulated media. A practical demonstration using OBS can be viewed in the following video by GuideRealm.
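The core weakness this attack exploits can be sketched in a few lines: an application typically trusts whatever source the operating system presents as "the camera". The following simulation uses purely hypothetical names (there is no real camera API here); real tools such as OBS or ManyCam register the fake device at the driver level, so every application sees it as a legitimate camera.

```python
# Simplified simulation of a virtual-camera injection. All class and
# function names are illustrative, not a real capture API.

class HardwareCamera:
    """Stands in for a genuine sensor-backed device."""
    def read(self):
        return "genuine_sensor_frame"

class VirtualCamera:
    """Replays pre-recorded (possibly deepfaked) frames in a loop."""
    def __init__(self, frames):
        self.frames = list(frames)
        self.pos = 0

    def read(self):
        frame = self.frames[self.pos % len(self.frames)]
        self.pos += 1
        return frame

def authenticate(camera):
    """A naive application: blindly trusts whatever the selected
    camera device returns, with no integrity or liveness checks."""
    return camera.read()
```

From the application's perspective the two sources are indistinguishable, which is precisely why detection has to happen below the application layer (device and driver integrity checks, discussed later in this article).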
Specialized real-time face-swapping and deepfake software solutions exist and are used in combination with virtual camera software as mentioned above. Notable open-source projects providing such capabilities are Deep-Live-Cam, DeepFaceLive and Avatarify Python, although DeepFaceLive was archived on November 13th 2024. Several in-depth video demonstrations are available online; a recommendation is the German-language video by AlexiBexi explaining Deep-Live-Cam in great detail.
Hardware-based video injections represent another approach, similar in concept to virtual camera injections but utilizing physical hardware devices to inject tampered video or image streams. Typically, this involves using external adapters such as video capture cards, which accept input via HDMI and interface with client devices as camera inputs. These adapters commonly connect through USB-C ports, although direct camera input adapters are also viable but require specialized connections.

Implementing hardware injections, especially on mobile devices, often involves physically removing or replacing internal cameras. Guides provided by resources like iFixit offer detailed teardown instructions for various devices. Mobile devices frequently utilize interfaces like the MIPI Camera Serial Interface (CSI-2) connected via flat flex cable (FPC) connectors. While CSI-to-HDMI adapters, particularly those designed for Raspberry Pi systems, exist, compatibility is not universal due to variations in pin layouts, connector types, and voltage levels.

A documented proof-of-concept was achieved and shown by FaceTec in their NIST FRVT-PAD Commentary of 2022, although the referenced demonstration channel has since been removed from YouTube. Rooting mobile devices may be necessary to facilitate the required modifications. Alternatively, a Raspberry Pi running LineageOS, an open-source Android distribution, can serve as an effective platform for testing facial recognition applications and attacks, particularly due to its native support for CSI-to-HDMI adapters.
Device emulation attacks leverage virtual environments to intercept and manipulate camera-related system calls. Rather than interfacing with physical hardware, these calls are redirected to virtual cameras and sensors provided by the emulator. The virtual operating system executes these system calls, allowing the injection of fabricated video streams without modifying physical hardware.
Function hooking is a sophisticated method that directly intercepts and modifies the facial recognition application's system calls on the actual device. Unlike device emulation, function hooking alters function calls in real time during video capture. Attackers achieve this by hooking the function that captures the video stream and substituting the tampered video stream as its return value.
Implementing function hooking requires advanced technical capabilities, including root access, reverse-engineering (decompilation), detailed knowledge of system call mechanisms, and the ability to craft malicious functions. Due to its complexity and dependency on specific technologies and applications, function hooking attacks are highly specialized and customized for each targeted scenario.
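The mechanics can be illustrated with a minimal Python analogue: monkey-patching a capture function so callers transparently receive attacker-chosen frames. In real attacks this is done against native code at runtime with instrumentation frameworks such as Frida; the class and frame names below are purely hypothetical.

```python
class CameraAPI:
    """Hypothetical stand-in for a device's native capture function."""
    def capture_frame(self):
        return "live_sensor_frame"

def install_hook(api, injected_frames):
    """Redirect capture_frame so callers receive attacker-chosen
    frames, falling back to the original function once the injected
    feed is exhausted."""
    original = api.capture_frame   # keep a reference to the real call
    feed = iter(injected_frames)

    def hooked():
        try:
            return next(feed)      # serve tampered content first
        except StopIteration:
            return original()      # then behave like the real sensor
    api.capture_frame = hooked
```

The calling application never observes the swap: the function name, signature, and return type are unchanged, which is why detecting this class of attack requires verifying the integrity of the code itself rather than its outputs.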
In contrast to Presentation Attack Instruments (PAIs), Injection Attack Instruments (IAIs) cannot be fully detected using traditional methods such as liveness detection. Typically, liveness detection mechanisms rely on various factors, including motion analysis, 3D depth sensing, and the evaluation of light reflections and shadows. Detecting or preventing IAIs poses significant challenges because these attacks involve compromised devices under an attacker's control.
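A toy sketch shows why motion analysis, one of the liveness factors mentioned above, catches a looped still image but not an injected video. Frames are modeled as flat lists of pixel intensities and the threshold is illustrative, not taken from any real product.

```python
def mean_abs_diff(a, b):
    """Average per-pixel intensity change between two frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def passes_motion_check(frames, threshold=1.0):
    """Reject streams with no frame-to-frame motion, such as a
    replayed still image. Note the blind spot: an injected *video*
    contains natural motion and sails through this check."""
    diffs = [mean_abs_diff(frames[i], frames[i + 1])
             for i in range(len(frames) - 1)]
    return max(diffs) >= threshold
```

This is exactly the gap IAIs exploit: a deepfake video satisfies motion, depth, and reflection heuristics because it is, pixel-wise, a plausible video of a face.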
In live video scenarios, such as online job interviews, IAIs become particularly difficult to identify, as detection heavily relies on the awareness of the interviewers or other participants. This vulnerability has been notably exploited in live-deepfake incidents attributed to North Korean APT groups. One well-documented example is described in the article KnowBe4 Interviews a Fake North Korean Employee.
Effective detection strategies against IAIs extend beyond user awareness. These methods include integrity checks at the device, camera, and video stream levels. Integrity verification examines hardware authenticity, software legitimacy, and driver conformity against established baselines and known secure configurations. Furthermore, anti-spoofing software and advanced algorithms play a critical role in facial recognition systems to mitigate such attacks.
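A minimal sketch of the baseline comparison described above, assuming components (driver binaries, capture libraries) can be read as bytes; the component names are hypothetical.

```python
import hashlib

def fingerprint(blob: bytes) -> str:
    """SHA-256 digest used as an integrity baseline for a component."""
    return hashlib.sha256(blob).hexdigest()

def verify_integrity(current: dict, baseline: dict) -> list:
    """Return the names of components whose current content no longer
    matches the known-good baseline (missing components also fail)."""
    return sorted(name for name, digest in baseline.items()
                  if fingerprint(current.get(name, b"")) != digest)
```

In practice the baseline would be anchored in tamper-resistant storage (e.g. a secure enclave) and checked remotely via attestation, since an attacker with root access can otherwise patch the verifier itself.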
Finally, from an organizational and technical perspective, the standard CEN/TS 18099:2025 provides comprehensive guidelines and insights into managing biometric security threats. Additionally, the International Organization for Standardization (ISO) is developing a standard under the working title NP 25456, Information technology — Biometrics — Biometric data injection attack detection, which aims to establish standardized practices for identifying and mitigating biometric injection attacks effectively.
Facial recognition systems are becoming crucial for security and authentication. However, the sophistication and maturity of attacks against them are increasing steadily. Facial recognition injection attacks pose significant threats by bypassing traditional security methods. These attacks use tools like deepfakes, virtual cameras, and hardware devices to manipulate video streams, making detection difficult.
To mitigate these threats, a multi-layered defense approach is necessary. Integrating device integrity checks, anti-spoofing software, and secure system interfaces is essential. Awareness of attack methods, like virtual camera manipulation and deepfake injection, is critical for protecting biometric systems.

Yann Santschi