ABIS Dev — API smoke test

Browser demos (same host, no API key): Active liveness — MediaPipe (blink, head turn, smile, mouth); Document scan — ID / passport guide, optional OpenCV edges; JS SDK — createAbisClient("", { apiKey }) plus template / gallery / doc / TTS; Liveness + document face match — selfie then ID, /v1/compare.

Client UI v2 (theme + 3 layouts + technical mode; configured per ?client=<slug> via Laravel GET /demo/api/client-ui-config/{slug}): Liveness v2 · Document scan v2 (after php artisan migrate + db:seed with ClientUiConfigSeeder).

Try each client template — same HTML, different client (theme/brand/layout from the DB): default (liveness, document) · hastaelfinal (liveness, document) · a slug with no DB row → API fallback (liveness). Three layout skins (?layout= override, no DB changes needed) — document: guided, compact, minimal · liveness: guided, compact, minimal.

The API runs on the same host as this page; OpenAPI docs are at /docs. Set ABIS_USE_CPU=1 if you have no NVIDIA GPU.

This page calls the REST API so you can try each flow without writing code. Each section below matches one endpoint: read the short What it does / When to use it blurbs, pick files, then inspect the JSON response. If the server was started with ABIS_SAVE_EMBEDDINGS=1, embeddings may also be written under data/embedding_dumps/ for offline analysis.

Webcam capture

Face (selfie): use the front camera, fill the frame with your face, avoid backlight. Document: use the back camera, place the ID flat, show all corners, reduce reflections.

1:1 — compare two photos

What it does: Detects a face in both images, turns each face into a numeric vector (embedding), and measures how similar they are (cosine distance vs a threshold). Returns verified, distance, threshold, and derived fields.

Reading the JSON: similarity_approx_pct is exp(−k × (distance/threshold)²) × 100 (with k = 1; 0% when distance ≥ threshold) — a smoother presentation score, not a literal “same person %”. similarity_approx_lineal is (1 − distance/threshold) × 100 (margin within the acceptance band). For a stricter match, pass a lower threshold (e.g. 0.25). Use threshold_margin_cosine and distance_as_fraction_of_threshold for operational checks.
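The two derived scores can be reproduced directly from distance and threshold. This is a sketch of the formulas as stated above, not the server's actual code:

```python
import math

def similarity_scores(distance: float, threshold: float, k: float = 1.0):
    """Presentation scores derived from a cosine distance and its threshold.

    similarity_approx_pct: exp(-k * (distance/threshold)^2) * 100,
    clamped to 0 when distance >= threshold (no match).
    similarity_approx_lineal: linear margin within the acceptance band.
    """
    ratio = distance / threshold
    approx_pct = 0.0 if distance >= threshold else math.exp(-k * ratio ** 2) * 100
    lineal_pct = (1 - ratio) * 100
    return approx_pct, lineal_pct

# Example: distance 0.2 against a 0.3 threshold
pct, lineal = similarity_scores(0.2, 0.3)
```

Note that lineal_pct goes negative past the threshold, which is why the exponential variant is the one clamped to 0% for display.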

When to use it: You have two photos and only need to know if they are likely the same person—for example two selfies, or a live capture vs an ID scan. Nothing is stored on the server unless you enabled embedding dumps.

If the API says no face was detected in image A or B, enable relaxed detection below (uses the whole frame; less reliable) or try clearer face photos.

With ABIS_SAVE_EMBEDDINGS=1 (or ABIS_EMBEDDING_DUMP_DIR), the response includes embedding_dump_urls — paths such as /demo/embedding-dumps/… that open the saved biometric JSON in the browser (links also appear below the response).

Webcam (start above) →


Template — enroll (JSON)

What it does: Extracts one face embedding from your image and returns a full biometric template JSON: vector, model name, detector, face box, optional subject_id / doc_type metadata, and file hash fields.

When to use it: You want a portable file to verify against later (mobile offline, air-gapped checks, or sending a reference to another system). Save the JSON from the response; it is the enrollment record for that face.
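A template JSON of the kind described above might look roughly like this. The exact field names and values here are illustrative assumptions, not the API's literal schema:

```json
{
  "template": {
    "model_name": "Facenet512",
    "detector": "retinaface",
    "embedding": [0.0123, -0.0456, "..."],
    "face_box": { "x": 120, "y": 80, "w": 240, "h": 240 },
    "subject_id": "user-001",
    "doc_type": "id_card",
    "file_sha256": "..."
  }
}
```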



Template — verify

What it does: Compares a new photo (probe) to a previously saved template JSON. It recomputes the probe embedding and measures distance to the stored vector—same 1:1 logic as compare, but one side is the file you upload as “template”.

When to use it: Onboarding or login: the user already has an enrolled template; you only upload a fresh selfie to confirm identity without sending both reference and probe images from scratch.

Upload the raw enroll JSON, a template_enroll dump (wrapped under template), or a compare / gallery_enroll embedding dump (top-level model_name + embedding, no pipeline). Save as .json; double-encoded JSON and string template fields are normalized.
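The normalization rules above (unwrapping a template key, tolerating double-encoded JSON and string template fields) can be sketched like this. It is an assumed reconstruction of the behaviour, not the server's code:

```python
import json

def load_template(raw: str) -> dict:
    """Accept raw enroll JSON, a dump wrapped under "template",
    double-encoded JSON, or a "template" field stored as a string."""
    data = json.loads(raw)
    if isinstance(data, str):           # whole payload was double-encoded
        data = json.loads(data)
    tpl = data.get("template", data)    # unwrap template_enroll dumps
    if isinstance(tpl, str):            # "template" value was itself a JSON string
        tpl = json.loads(tpl)
    return tpl

# A template_enroll-style dump whose "template" field is a JSON string
wrapped = json.dumps({"template": json.dumps(
    {"model_name": "Facenet512", "embedding": [0.1, 0.2]})})
tpl = load_template(wrapped)
```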



Gallery 1:N — enroll

What it does: Registers a face under a subject_id in the local SQLite gallery: the embedding is stored server-side for later search.

When to use it: Building a small watchlist or demo database—“who are the people we know?”—so you can later run 1:N search against that set. Re-enrolling the same subject_id overwrites the stored vector.
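The "re-enrolling overwrites" semantics amount to an upsert keyed on subject_id. A throwaway SQLite sketch (table and column names are assumptions, not the server's actual schema):

```python
import json
import sqlite3

# In-memory stand-in for the local SQLite gallery
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE gallery (subject_id TEXT PRIMARY KEY, embedding TEXT)")

def enroll(subject_id: str, embedding: list) -> None:
    # INSERT OR REPLACE on the primary key: a second enroll for the same
    # subject_id replaces the stored vector instead of adding a row.
    con.execute("INSERT OR REPLACE INTO gallery VALUES (?, ?)",
                (subject_id, json.dumps(embedding)))

enroll("alice", [0.1, 0.2])
enroll("alice", [0.3, 0.4])   # overwrites the first vector
row = con.execute(
    "SELECT embedding FROM gallery WHERE subject_id = 'alice'").fetchone()
```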



Gallery 1:N — search

What it does: Encodes the probe image to an embedding, then finds the closest enrolled subjects in the gallery (by cosine distance). Returns matches under max_distance; optional include_ranked lists the best distances even if they fail the threshold (for debugging).

When to use it: “Who in my gallery looks like this face?”—e.g. duplicate account detection or matching a probe to a small set of enrolled IDs.

Counts as a match only if cosine distance ≤ max_distance (default 0.3 with Facenet512). License vs passport photos often land above 0.3 even for the same person: raise the threshold (e.g. 0.45) or use include_ranked to see the best distance.
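The search logic above (rank by cosine distance, keep matches under max_distance, optionally expose the full ranking like include_ranked) can be sketched in a few lines. A minimal reconstruction, not the server's implementation:

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1 - dot / (na * nb)

def search(probe, gallery, max_distance=0.3):
    """Rank enrolled subjects by distance to the probe; return both the
    under-threshold matches and the full ranking (like include_ranked)."""
    ranked = sorted((cosine_distance(probe, emb), sid)
                    for sid, emb in gallery.items())
    matches = [(d, sid) for d, sid in ranked if d <= max_distance]
    return matches, ranked

gallery = {"alice": [1.0, 0.0], "bob": [0.0, 1.0]}
matches, ranked = search([0.9, 0.1], gallery)
```

Even when matches is empty, ranked still shows the best distance, which is what makes it useful for tuning max_distance on license-vs-passport pairs.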

Quality (face only)

What it does: Detects one face and runs quality checks only (face size, optional blur): it does not return a match score or store anything. Useful to reject tiny or blurry crops before enrollment.

When to use it: Pre-flight validation—e.g. “is this document crop good enough before we call enroll or compare?”—without doing a full verification.
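A pre-flight gate of this kind boils down to simple geometry checks before calling the heavier endpoints. A hypothetical sketch; the threshold values and field names are illustrative, not the API's actual ones:

```python
def quality_check(face_w: int, face_h: int, img_w: int, img_h: int,
                  min_face_px: int = 80, min_area_ratio: float = 0.05):
    """Reject faces that are too small, either in absolute pixels or
    relative to the whole frame. Returns an ok flag plus the reasons."""
    issues = []
    if min(face_w, face_h) < min_face_px:
        issues.append("face_too_small")
    if (face_w * face_h) / (img_w * img_h) < min_area_ratio:
        issues.append("face_too_small_relative")
    return {"ok": not issues, "issues": issues}

# A 60x60 face crop in a 1920x1080 frame fails both checks
result = quality_check(face_w=60, face_h=60, img_w=1920, img_h=1080)
```

A blur check (e.g. variance of a Laplacian over the crop) would slot in as one more entry in issues, but needs image data rather than just box geometry.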