The proliferation of advanced tools for manipulating video has led to an arms race, pitting those who wish to sow disinformation against those who want to detect and expose it. Unfortunately, time favors the ill-intentioned in this race, with fake videos growing increasingly difficult to distinguish from real ones. At the root of this trend is a fundamental advantage held by those manipulating media: equal access to a distribution of what we consider authentic (i.e., "natural") video.
We show how coding very subtle, noise-like modulations into the illumination of a scene can help combat this advantage by creating an information asymmetry that favors verification. Our approach effectively adds a temporal watermark to any video recorded under coded illumination. However, rather than encoding a specific message, this watermark encodes an image of the unmanipulated scene as it would appear lit only by the coded illumination. We show that even when an adversary knows that our technique is being used, creating plausible coded fake video amounts to solving a second, more difficult version of the original adversarial content creation problem at an information disadvantage.
Our work targets high-stakes settings like public events and interviews, where the content on display is a likely target for manipulation, and while the illumination can be controlled, the cameras capturing video cannot.
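To make this concrete: because the verifier knows the code signal and the code is (approximately) zero-mean, each pixel's intensity over time can be correlated against it to estimate how strongly the coded light reaches that pixel, producing the recovered code image. The C sketch below illustrates this matched-filter style of recovery under simplified assumptions (grayscale frames, a single coded light, one code value per frame); the function name and array layout are hypothetical, and this is not our full pipeline.

/*
 * Illustrative sketch of the recovery idea, not the full pipeline:
 * each pixel's time series is correlated against the known zero-mean
 * code, giving a least-squares estimate of that pixel's response to
 * the coded light.  The result approximates an image of the scene as
 * lit only by the coded illumination.
 */
#include <stddef.h>

/* frames: num_frames * num_pixels grayscale samples, frame-major.
 * code:   one code value per frame, assumed zero-mean over the window.
 * out:    num_pixels recovered code-image values.                    */
void recover_code_image(const float *frames, const float *code,
                        size_t num_frames, size_t num_pixels, float *out)
{
    float code_energy = 0.0f;
    for (size_t t = 0; t < num_frames; ++t)
        code_energy += code[t] * code[t];

    for (size_t p = 0; p < num_pixels; ++p) {
        float acc = 0.0f;
        for (size_t t = 0; t < num_frames; ++t)
            acc += frames[t * num_pixels + p] * code[t];
        out[p] = acc / code_energy;  /* projection of pixel onto code */
    }
}

An adversary who edits the video without knowing the code cannot keep this recovered image consistent with the rest of the footage, which is the information asymmetry described above.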
As with many lights, the stage light we purchased uses the ANSI 0-10V dimming standard, which gives us a channel for injecting our time-varying illumination code. However, we found that the dimming system has a low-pass filter with a sub-1Hz cutoff, which prevented us from directly injecting our 12Hz code signal. We fixed this by adjusting the filter components and bypassing the light's internal pulse-width-modulation (PWM) section, a simple modification for manufacturers to incorporate that gives us a bandwidth of over 100Hz. We use an ESP32 microcontroller running our compiled C code to modulate the light with the code signal, with interactive control of the average brightness and signal amplitude.
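As a rough illustration of what this firmware can look like, the C sketch below (written against ESP-IDF) updates a subtle pseudorandom code around a mean brightness and writes it to the ESP32's built-in 8-bit DAC. It is illustrative rather than our production firmware: it assumes external circuitry scales the 0-3.3V DAC output to the 0-10V dimming input, and the pin choice, code rate, and brightness/amplitude constants are placeholders.

/*
 * Illustrative ESP-IDF sketch: drive the modified 0-10V dimming input
 * from the ESP32's 8-bit DAC on GPIO25, assuming an external amplifier
 * scales the 0-3.3V DAC output up to 0-10V.
 */
#include <stdint.h>
#include <stdlib.h>
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "driver/dac.h"

#define CODE_RATE_HZ    12    /* rate at which the code value changes */
#define MEAN_LEVEL      128   /* average brightness, in DAC counts    */
#define CODE_AMPLITUDE  6     /* peak code deviation, kept subtle     */

/* Placeholder +/-1 noise code; a real deployment would use a
 * reproducible pseudorandom sequence shared with the verifier. */
static int next_code_chip(void)
{
    return (rand() & 1) ? 1 : -1;
}

void app_main(void)
{
    dac_output_enable(DAC_CHANNEL_1);   /* DAC channel 1 is GPIO25 */

    const TickType_t period = pdMS_TO_TICKS(1000 / CODE_RATE_HZ);
    for (;;) {
        int level = MEAN_LEVEL + CODE_AMPLITUDE * next_code_chip();
        if (level < 0)   level = 0;
        if (level > 255) level = 255;
        dac_output_voltage(DAC_CHANNEL_1, (uint8_t)level);
        vTaskDelay(period);
    }
}

In practice, the average brightness and code amplitude are the two quantities exposed for interactive control, as described above.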
We were privileged to receive quite a bit of media attention on this project, including a TV interview! Coverage includes the Cornell Chronicle, Quantum Zeitgeist, Interesting Engineering, New Atlas, inkl, WebProNews, TechEBlog, The Hindu, and Mid-Day.

We thank all those who volunteered to be subjects in our test scenes. This work was supported in part by an NDSEG fellowship to P.M., and by the Pioneer Centre for AI, DNRF grant number P1.
@article{noisecodedillumination,
author = {Michael, Peter and Hao, Zekun and Belongie, Serge and Davis, Abe},
title = {Noise-Coded Illumination for Forensic and Photometric Video Analysis},
year = {2025},
issue_date = {October 2025},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {44},
number = {5},
issn = {0730-0301},
url = {https://doi.org/10.1145/3742892},
doi = {10.1145/3742892},
journal = {ACM Trans. Graph.},
month = jun,
articleno = {165},
numpages = {16},
keywords = {Video forensics, video manipulation, forgery detection, computational illumination}
}