A modular JavaScript detection library for face, QR, and future detection types with a clean, unified API.

- Pluggable Detectors: start with face detection and extend easily with QR, object detection, and more (currently only face detection is implemented).
- Unified Interface: every detector implements the same async `detect(input)` method.
- Factory Pattern: use `createDetector({ type })` to initialize the correct implementation.
- Works in the Browser: supports `<video>`, `<canvas>`, and `<img>` HTML elements.
It is a modular JavaScript face detection library built on top of MediaPipe Face Detection. It provides an easy-to-use interface for detecting faces from various input sources such as video, image, or canvas, with built-in initialization, error handling, and bounding box extraction. Currently, only the face detection part is implemented.
Internally, the library uses `@mediapipe/face_detection` to detect faces. The workflow is:

- Load the detector (loads the MediaPipe model).

```js
import { createDetector } from 'detection-lib';
const detector = await createDetector({ type: 'face' });
```

- Initialize it (it sends a dummy static image so that all required files are loaded).

```js
await detector.initialize();
```

- Run detection on any HTML media element (the input can be an HTMLVideoElement, HTMLCanvasElement, or HTMLImageElement).

```js
const result = await detector.detect(input);
```

- Receive results as an object with:
  - `status`: numeric status code
  - `message`: human-readable string
  - `boxes`: array of bounding boxes with `x`, `y`, `w`, `h`, and `score`
This library currently runs in the browser using ES modules.
```bash
npm install detection-lib
```
```js
import { useEffect, useRef, useState } from 'react';
import { createDetector } from 'detection-lib';

// Create references and state
const detectorRef = useRef(null);
const [detectorCreated, setDetectorCreated] = useState(false);
const [detectorReady, setDetectorReady] = useState(false);

// Step 1: Create the detector when the component mounts
useEffect(() => {
  const create = async () => {
    const detector = await createDetector({ type: 'face' });
    detectorRef.current = detector;
    setDetectorCreated(true);
  };
  create();
  // Cleanup on unmount
  return () => {
    setDetectorCreated(false);
    detectorRef.current = null;
  };
}, []);

// Step 2: Initialize the detector once created
useEffect(() => {
  const initialize = async () => {
    if (detectorCreated && detectorRef.current) {
      await detectorRef.current.initialize();
      setDetectorReady(true);
    }
  };
  initialize();
  // Cleanup
  return () => setDetectorReady(false);
}, [detectorCreated]);

// Step 3: Run detection (call this function after the detector is ready)
const runDetection = async (input) => {
  if (detectorReady && detectorRef.current) {
    const result = await detectorRef.current.detect(input);
    if (result.type === 'face' && result.boxes) {
      result.boxes.forEach((box) => {
        // Example: draw the box or process its coordinates
        console.log('Detected Face Box:', box);
      });
    }
  }
};
```
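If you render the results yourself, the returned boxes can be drawn onto an overlay canvas. The sketch below is illustrative: `toPixelBox` is a hypothetical helper that assumes the coordinates come back normalized to the 0–1 range; if your build already returns pixel values, draw the boxes directly.

```js
// Hypothetical helper: convert a normalized box (0–1 range) into pixel
// coordinates for a canvas of the given size. Skip this step if the
// library already returns pixel coordinates.
function toPixelBox(box, width, height) {
  return {
    x: box.x * width,
    y: box.y * height,
    w: box.w * width,
    h: box.h * height,
  };
}

// Draw every detected box as a rectangle outline on an overlay canvas.
function drawBoxes(canvas, boxes) {
  const ctx = canvas.getContext('2d');
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.strokeStyle = 'lime';
  ctx.lineWidth = 2;
  for (const box of boxes) {
    const { x, y, w, h } = toPixelBox(box, canvas.width, canvas.height);
    ctx.strokeRect(x, y, w, h);
  }
}
```

For example, you could call `drawBoxes(overlayCanvas, result.boxes)` inside `runDetection` after a successful detection.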
```js
{
  status: 2000,     // numeric status code (see the table below)
  message: 'OK',    // or a descriptive error message
  boxes: [
    {
      x: Number,    // top-left x
      y: Number,    // top-left y
      w: Number,    // width
      h: Number,    // height
      score: Number // confidence score (optional)
    },
    ...
  ]
}
```
| Code | Message | Description |
|---|---|---|
| 2000 | Single face detected | OK |
| 2004 | No face detected | No face was found in the frame |
| 2002 | Multiple faces detected | More than one face was detected |
| 4010 | FaceDetector not initialized | You must call `.initialize()` first |
| 5000 | FaceDetector model error | An error occurred while running the model |
| 4015 | Image load error | An error occurred while loading the image |
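In application code it is often convenient to collapse these codes into a coarser outcome. The codes below come from the table above, but the grouping into `'ok'` / `'retry'` / `'fatal'` is an illustrative policy, not part of the library:

```js
// Map the library's numeric status codes to an app-level outcome.
// The grouping is an example policy: transient detection outcomes can be
// retried on the next frame, while setup/model errors are treated as fatal.
function classifyStatus(status) {
  switch (status) {
    case 2000: // single face detected
      return 'ok';
    case 2004: // no face detected
    case 2002: // multiple faces detected
      return 'retry';
    case 4010: // FaceDetector not initialized
    case 5000: // FaceDetector model error
    case 4015: // image load error
      return 'fatal';
    default:
      return 'unknown';
  }
}
```

A caller could then branch on `classifyStatus(result.status)` instead of checking each raw code.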
All detectors implement:
```ts
async detect(input: HTMLVideoElement | HTMLImageElement | HTMLCanvasElement): Promise<DetectionResult>
```
- Uses `@mediapipe/face_detection` under the hood
- Loads model assets from the jsDelivr CDN
- Initialization uses a built-in static image to "warm up" the model
- Implements result caching to optimize repeated calls
To add a new detector:
1. Create a new file in `src/detectors/` (e.g., `myDetector.js`).
2. Implement a class with a `detect()` method.
3. Register it in the factory in `src/DetectorFactory.js`.
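As a sketch of what such a class could look like: the `QrDetector` name, the fixed result below, and the reuse of the face detector's status codes are all illustrative assumptions; mirror the conventions of the existing face detector when implementing a real one.

```js
// Hypothetical sketch for src/detectors/qrDetector.js.
// The decoding logic is stubbed out; only the detector "shape" is shown.
class QrDetector {
  constructor() {
    this.ready = false;
  }

  async initialize() {
    // Load any model assets here, then mark the detector as ready.
    this.ready = true;
  }

  async detect(input) {
    if (!this.ready) {
      // Mirrors the face detector's "not initialized" code (4010).
      return { type: 'qr', status: 4010, message: 'QrDetector not initialized', boxes: [] };
    }
    // Run the actual QR decoding against `input` here and fill `boxes`.
    return { type: 'qr', status: 2000, message: 'OK', boxes: [] };
  }
}
```

The factory would then return `new QrDetector()` when called with `createDetector({ type: 'qr' })`.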