Uses Google MediaPipe Face Landmarker (WASM + model from Google CDN) — no signup, no API key.
Active sequence: blink ×2 → turn head → smile (hold) → open mouth (hold) → hold still 5s → auto capture.
This is practice / dev only — not certified PAD.
Camera: Browsers require a secure context — https:// or http://localhost (not http://192.168…).
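A quick way to pre-check this before enabling the “Start camera” button is a small predicate over the page's protocol and hostname. This is a sketch; the function name and thresholds are illustrative, and in a real page you can simply check `window.isSecureContext`:

```javascript
// Sketch: will getUserMedia be available in this context?
// Browsers expose navigator.mediaDevices only in secure contexts:
// https:// anywhere, or plain http:// on localhost / 127.0.0.1.
function isCameraContextOk(protocol, hostname) {
  if (protocol === "https:") return true;
  // localhost loopback is treated as secure even over http
  return protocol === "http:" &&
    (hostname === "localhost" || hostname === "127.0.0.1");
}

// In the page: isCameraContextOk(location.protocol, location.hostname)
// — or just: window.isSecureContext && !!navigator.mediaDevices
```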
Embed mode (?embed=1 in an iframe / WebView): when capture finishes, the parent window receives a
postMessage containing the selfie JPEG (abis-liveness-selfie). See
doc/laravel/11-sdk-movil-webview.md in the repo.
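On the parent side, the handler might look like the sketch below. The exact payload shape (a `type` field plus a data-URL `jpeg` string) is an assumption here; the real contract is in doc/laravel/11-sdk-movil-webview.md:

```javascript
// Sketch: validate an incoming message and extract the selfie data URL.
// The { type, jpeg } shape is assumed, not confirmed by this page.
function parseSelfieMessage(data) {
  if (!data || data.type !== "abis-liveness-selfie") return null;
  return typeof data.jpeg === "string" ? data.jpeg : null;
}

// window.addEventListener("message", (event) => {
//   // Always verify event.origin against the iframe's expected origin.
//   const jpeg = parseSelfieMessage(event.data);
//   if (jpeg) showCapturedSelfie(jpeg); // hypothetical handler
// });
```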
Center your face in the oval — chin near the bottom of the ellipse, eyes in the upper third.
MediaPipe loads on “Start camera” (first time may take a few seconds).
Follow the status line. If your face leaves the frame, progress on smile/mouth may reset. Use Reset to restart.
Press “Start camera”, then “Begin challenges”.
If the browser never speaks, follow the “Voice” line instead; each liveness step also plays a different beep pattern (Web Audio, not TTS).
If speech fails in the browser (common on Chromium/Linux), you still get beeps and the text under “Voice”.
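The per-step beep patterns can be sketched as a pure scheduling function plus a Web Audio playback loop. The patterns and timings below are illustrative assumptions, not the demo's actual mapping:

```javascript
// Sketch: build a beep schedule for a step (e.g. blink → 1 beep,
// turn → 2, smile → 3). Tone/gap lengths are assumed values.
function beepSchedule(beepCount, toneMs = 120, gapMs = 120) {
  const events = [];
  for (let i = 0; i < beepCount; i++) {
    events.push({ startMs: i * (toneMs + gapMs), durMs: toneMs });
  }
  return events;
}

// Playback sketch (browser only, OscillatorNode):
// const ctx = new AudioContext();
// for (const e of beepSchedule(3)) {
//   const osc = ctx.createOscillator();
//   osc.frequency.value = 880;
//   osc.connect(ctx.destination);
//   osc.start(ctx.currentTime + e.startMs / 1000);
//   osc.stop(ctx.currentTime + (e.startMs + e.durMs) / 1000);
// }
```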
Natural voice uses edge-tts (neural, needs internet) via the API; install with
pip install edge-tts (already in requirements.txt). If edge-tts is unavailable, the API falls back to
espeak-ng (robotic, offline): sudo apt install espeak-ng, then restart uvicorn.
You can choose the Server engine below, rely on the Voice line + beeps,
or use Firefox for browser TTS.
(saved in this browser)
(ignored for neural voice)
When all four steps pass, you’ll hear “hold still”, then stay steady — the blue bar tracks accumulated time with acceptable light and sharpness (not a raw 5-second countdown). If it barely moves, improve front lighting; the final phase uses slightly relaxed checks so capture can still complete. The saved JPEG is cropped to the tracked face outline (not the full camera view).
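The quality-gated bar above can be sketched as an accumulator that counts frame time only when light and sharpness pass. The frame shape and thresholds here are assumptions for illustration:

```javascript
// Sketch: accumulate "hold still" time across frames, counting only
// frames that pass brightness/sharpness gates (thresholds assumed).
function accumulateHold(frames, requiredMs = 5000) {
  let heldMs = 0;
  for (const f of frames) {
    // f: { dtMs, brightness in 0..1, sharpness in 0..1 }
    if (f.brightness >= 0.3 && f.sharpness >= 0.4) heldMs += f.dtMs;
    if (heldMs >= requiredMs) return { done: true, heldMs };
  }
  return { done: false, heldMs };
}
```

This explains why the bar stalls in poor light: dark or blurry frames contribute nothing, so the accumulated time (not wall-clock time) drives the capture.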
Download JPEG