A lightweight, type-safe, dependency-free JavaScript/TypeScript library for PaddleOCR, supporting both Node.js and browser environments.
- Cross-platform: Works in Node.js, Bun, and browser environments.
- Type-safe: Written in TypeScript with full type definitions.
- No dependencies: Minimal footprint, no heavy image processing libraries included.
- Flexible model loading: Accepts model files as `ArrayBuffer`, allowing custom loading strategies (e.g., `fetch`, `fs.readFileSync`).
- ONNX Runtime support: Compatible with both `onnxruntime-web` and `onnxruntime-node`.
- Customizable dictionary: Pass your own character dictionary for recognition.
- Modern API: Simple, promise-based API for easy integration.
```bash
npm install paddleocr
# or
yarn add paddleocr
# or
pnpm add paddleocr
```
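Since you pass an ONNX Runtime instance (`ort`) into the service yourself, you will also want the runtime package that matches your environment installed as a separate dependency (shown here with npm; use your package manager of choice):

```bash
# Browser
npm install onnxruntime-web

# Node.js or Bun
npm install onnxruntime-node
```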
- In the browser:
  ```ts
  import * as ort from "onnxruntime-web";
  ```
- In Node.js or Bun:
  ```ts
  import * as ort from "onnxruntime-node";
  ```
You can use `fetch`, `fs.readFileSync`, or any other method to load your ONNX model files as `ArrayBuffer`s and your character dictionary as a string array.
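As a minimal sketch for Node.js or Bun (the paths assume the sample files shipped in this repository's `assets/` directory, the `loadAsArrayBuffer` helper is just illustrative, and the dictionary is assumed to list one character per line; in the browser you could `fetch` the same files and call `arrayBuffer()` on the responses):

```ts
import { readFile } from "node:fs/promises";

// Illustrative helper: read a file and return the exact ArrayBuffer range backing it.
async function loadAsArrayBuffer(path: string) {
  const file = await readFile(path);
  return file.buffer.slice(file.byteOffset, file.byteOffset + file.byteLength);
}

const detectOnnx = await loadAsArrayBuffer("assets/PP-OCRv5_mobile_det_infer.onnx");
const recOnnx = await loadAsArrayBuffer("assets/PP-OCRv5_mobile_rec_infer.onnx");

// Assumption: the dictionary file contains one character per line.
const dict = (await readFile("assets/ppocrv5_dict.txt", "utf-8")).split("\n");
```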
```ts
import { PaddleOcrService } from "paddleocr";

const paddleOcrService = await PaddleOcrService.createInstance({
  ort,
  detection: {
    modelBuffer: detectOnnx,
  },
  recognition: {
    modelBuffer: recOnnx,
    charactersDictionary: dict,
  },
});
```
The `recognize` method expects an object with `width`, `height`, and `data` (a `Uint8Array` of RGB(A) pixel values). Use your preferred image decoding library (e.g., `fast-png`, `image-js`).
```ts
import { readFile } from "node:fs/promises";
import { decode } from "fast-png";

const imageFile = await readFile("tests/image.png");
// Extract the exact ArrayBuffer range backing the Buffer before decoding.
const buffer = imageFile.buffer.slice(
  imageFile.byteOffset,
  imageFile.byteOffset + imageFile.byteLength,
);
const image = decode(buffer);

const input = {
  data: image.data,
  width: image.width,
  height: image.height,
};

const result = await paddleOcrService.recognize(input);
console.log(result);
```
You can find sample models in the `assets/` directory:
- `PP-OCRv5_mobile_det_infer.onnx`
- `PP-OCRv5_mobile_rec_infer.onnx`
- `ppocrv5_dict.txt`
See the `examples/` directory for usage samples.
For browser usage with Vite, check out paddleocr-vite-example.
Contributions are welcome! Feel free to submit a PR or open an issue.
MIT