
Facia.AI
Liveness Detection, Deepfake Detection & The AWS Story
Context
When I joined as a Product Analyst, Facia already had a mature biometric security product with liveness detection defending against 53+ spoofing attack types. My mandate was distinct: I was not asked to build features. I was asked to break the product. Think like a fraudster, find every way the system could fail, and document vulnerabilities before clients or real attackers discovered them.
The AWS Story
This is the defining story. During adversarial testing, I escalated through three phases of attack complexity: physical presentation attacks (printed photos, screen replays, 3D masks), generative AI attacks (deepfakes, face swaps, face morphs using 40+ tools), and injection attacks (virtual camera injection, API-level synthetic image submission).
One of the earliest and most consequential discoveries was also the simplest. A $2 nylon stocking mask successfully spoofed the liveness detection. The mask preserved enough facial geometry to pass depth analysis while defeating texture detection. This wasn't just a Facia vulnerability. I tested the same attack against Amazon Rekognition, one of the most widely deployed facial recognition services in the world. It worked. I also spoofed BioID's liveness detection.
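To illustrate why a mesh mask is hard for texture cues alone, here is a minimal, hypothetical sketch (not Facia's or AWS's actual implementation) of a classic texture-liveness signal: the entropy of a local binary pattern (LBP) histogram. Real skin micro-texture produces a rich LBP distribution while flat prints and screens flatten it, but a nylon weave contributes its own high-frequency texture, which is one reason simple texture analysis can be fooled.

```python
import numpy as np

def lbp_texture_score(gray: np.ndarray) -> float:
    """Crude texture-richness proxy: entropy of 8-neighbour LBP codes.

    High entropy ~ varied micro-texture (skin, or a nylon weave);
    low entropy ~ smooth spoof surfaces like glossy prints or screens.
    Illustrative only; production PAD systems combine many signals.
    """
    c = gray[1:-1, 1:-1]
    neighbours = [gray[0:-2, 0:-2], gray[0:-2, 1:-1], gray[0:-2, 2:],
                  gray[1:-1, 2:], gray[2:, 2:], gray[2:, 1:-1],
                  gray[2:, 0:-2], gray[1:-1, 0:-2]]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        # Set bit if the neighbour is at least as bright as the centre pixel
        code |= (n >= c).astype(np.uint8) << bit
    hist = np.bincount(code.ravel(), minlength=256) / code.size
    nz = hist[hist > 0]
    # Shannon entropy of the LBP code histogram
    return float(-(nz * np.log2(nz)).sum())
```

A perfectly flat patch scores 0.0, while noisy, highly textured input scores close to 8 bits; a threshold in between is what a naive texture check would apply.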
I documented the AWS finding publicly on LinkedIn. The point wasn't to embarrass a vendor. It was to demonstrate an industry-wide gap. If a $2 mask can defeat a billion-dollar cloud provider's liveness check, the entire industry needs to rethink its approach to presentation attack detection.
What Was Built (Through Breaking)
Comprehensive Adversarial Dataset
Thousands of deepfake images, face swaps, face morphs, AI-generated faces, and manipulated videos produced with 40+ generative AI tools. Each successful spoof was documented with its exact conditions. This dataset became the foundation for two new product lines: Deepfake Detection (100% accuracy on Meta's Deepfake Detection Challenge Dataset of 124,000 videos) and AI Image Detection (flags AI-generated images via colour inconsistencies, lighting anomalies, and metadata analysis).
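As a concrete example of the metadata signal, here is a small, hypothetical sketch (not Facia's detector) that scans a PNG's text chunks for provenance keywords: several generative tools write a "parameters" or "prompt" text chunk into their output. Missing or suspicious metadata is a cheap signal among many, never sufficient on its own, since metadata is trivially stripped.

```python
import struct

def png_text_keywords(data: bytes) -> list:
    """Return the keywords of all tEXt/iTXt/zTXt chunks in a PNG byte stream."""
    assert data[:8] == b'\x89PNG\r\n\x1a\n', "not a PNG"
    keywords, pos = [], 8
    while pos < len(data):
        length, ctype = struct.unpack('>I4s', data[pos:pos + 8])
        if ctype in (b'tEXt', b'iTXt', b'zTXt'):
            chunk = data[pos + 8:pos + 8 + length]
            # Keyword is the null-terminated prefix of the chunk body
            keywords.append(chunk.split(b'\x00', 1)[0].decode('latin-1'))
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
    return keywords

# Keywords some generative pipelines are known to embed (illustrative list)
SUSPECT_KEYWORDS = {'parameters', 'prompt', 'Software'}

def has_generator_metadata(data: bytes) -> bool:
    return any(k in SUSPECT_KEYWORDS for k in png_text_keywords(data))
```

Colour and lighting inconsistencies require model-based analysis; this chunk walk only covers the metadata prong.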
Age Estimation Improvement
Rigorous testing identified systematic inaccuracies across demographics. I led the creation of an in-house, consented dataset spanning diverse age ranges and ethnicities. The result was an 80% improvement in estimation accuracy for Challenge 21 and Challenge 25 compliance.
Impact
- $2 mask spoofed AWS Rekognition, publicly documented
- 80% improvement in age estimation accuracy
- 60% reduction in manual fraud reviews
- Adversarial dataset enabled the Deepfake Detection and AI Image Detection product lines
- Python automation cut verification testing time by 50%
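The automation above can be pictured as a small replay harness: it runs a labelled spoof corpus through a verification call and tallies APCER and BPCER, the two standard ISO/IEC 30107-3 presentation attack detection error rates. This is a hypothetical sketch; `verify` stands in for whatever SDK or API call the real pipeline used.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Sample:
    path: str
    is_attack: bool  # ground truth: True for spoof/deepfake samples

def run_suite(samples: Iterable[Sample], verify: Callable[[str], bool]) -> dict:
    """Replay a labelled corpus against a liveness check and tally
    APCER (attacks wrongly accepted) and BPCER (bona fide wrongly rejected)."""
    counts = {"attacks": 0, "attacks_passed": 0,
              "bonafide": 0, "bonafide_rejected": 0}
    for s in samples:
        accepted = verify(s.path)
        if s.is_attack:
            counts["attacks"] += 1
            counts["attacks_passed"] += accepted
        else:
            counts["bonafide"] += 1
            counts["bonafide_rejected"] += (not accepted)
    counts["apcer"] = counts["attacks_passed"] / max(counts["attacks"], 1)
    counts["bpcer"] = counts["bonafide_rejected"] / max(counts["bonafide"], 1)
    return counts
```

Running the full attack catalogue unattended and diffing the error rates between releases is what turns one-off manual spoof attempts into a repeatable regression suite.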
Reflection
This period crystallised my approach. Most PMs focus on building; I learned to break first. The adversarial mindset of systematically escalating attacks, documenting failure conditions, and strengthening the product against each one became my lens for every product decision. When I later built AML Watcher's risk scoring or Barie's hallucination minimisation, I was applying the same principle: find the failure before the user does.