> [!WARNING]
> We are no longer allowing new customers to onboard to Twilio Video. Effective December 5th, 2026, Twilio Video will reach End of Life (EOL) and will cease to function for all customers. Customers may transition to any video provider they choose; however, we recommend that customers migrate to the Zoom Video SDK, and we have prepared a Migration Guide. Additional information on this EOL is available in our Help Center.
Twilio Video Processors is a collection of video processing tools that can be used with the Twilio Video JavaScript SDK to apply transformations and filters to a VideoTrack.
The following Video Processors are provided to apply transformations and filters to a person's background. You can also use them as a reference for creating your own Video Processors that can be used with the Twilio Video JavaScript SDK. To use this library, you need:
- Twilio Video JavaScript SDK (v2.15+)
- Node.js (v14+)
- NPM (v6+, comes installed with newer Node versions)
The Node.js and NPM requirements apply only if you want to check out the source code, build the artifacts, and/or run tests. They do not apply if you are using this library as a dependency of your project.
You can install directly from npm:

```bash
npm install @twilio/video-processors --save
```

Using this method, you can import `twilio-video-processors` like so:

```ts
import * as VideoProcessors from '@twilio/video-processors';
```
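For example, once the package is installed, a background blur processor can be created and attached to an existing LocalVideoTrack. The sketch below follows the published typings but should be treated as a starting point, not a drop-in implementation; the `assetsPath` value is a hypothetical path to wherever you host the model and binaries (see the assets notes below).

```ts
import { GaussianBlurBackgroundProcessor } from '@twilio/video-processors';
import type { LocalVideoTrack } from 'twilio-video';

// A minimal sketch: blur the background of an existing LocalVideoTrack.
async function applyBlur(track: LocalVideoTrack): Promise<void> {
  const processor = new GaussianBlurBackgroundProcessor({
    // Hypothetical path; must point at the hosted files from dist/build.
    assetsPath: '/virtualbackground',
  });
  await processor.loadModel(); // fetch and initialize the segmentation model
  track.addProcessor(processor);
}
```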
You can also copy `twilio-video-processors.js` from the `dist/build` folder and include it directly in your web app using a `<script>` tag:

```html
<script src="https://my-server-path/twilio-video-processors.js"></script>
```
Using this method, `twilio-video-processors.js` will set a browser global:

```ts
const VideoProcessors = Twilio.VideoProcessors;
```
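Under this setup, usage mirrors the npm import. As a sketch, you might guard on browser support before constructing a processor; `isSupported` is exported by the library, but treat the snippet below as illustrative rather than definitive.

```ts
const { GaussianBlurBackgroundProcessor, isSupported } = Twilio.VideoProcessors;

if (isSupported) {
  // Same constructor options as with the npm import; assetsPath is a
  // hypothetical URL where you host the model and binaries.
  const processor = new GaussianBlurBackgroundProcessor({
    assetsPath: 'https://my-server-path/assets',
  });
}
```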
In order to achieve the best performance, the VideoProcessors use WebAssembly to run TensorFlow Lite for person segmentation. You need to serve the tflite model and binaries so they can be loaded properly. These files can be downloaded from the `dist/build` folder. Check the API docs for details and the examples folder for reference.
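One common way to serve these files is to copy them from the installed package into your app's public assets directory as a build step. The sketch below uses `fs.cpSync`, which requires Node 16.7+ (on older Node versions, use a copy utility instead); the destination path is a hypothetical choice and should match the `assetsPath` you pass to a processor.

```ts
// Hypothetical build step: copy the tflite model and wasm binaries from the
// installed package into a folder served by your web app.
import { cpSync } from 'node:fs';

cpSync(
  'node_modules/@twilio/video-processors/dist/build',
  'public/virtualbackground', // hypothetical public assets directory
  { recursive: true },
);
```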
These processors run TensorFlow Lite using the MediaPipe Selfie Segmentation Landscape Model and require WebAssembly SIMD support in order to achieve the best performance. We recommend that, when calling Video.createLocalVideoTrack, the video capture constraints be set to a 24 fps frame rate with 640x480 capture dimensions. Higher resolutions can still be used for increased accuracy, but they may degrade performance, resulting in a lower output frame rate on low-powered devices.
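A sketch of the recommended capture settings, assuming the `twilio-video` package is used to create the track:

```ts
import { createLocalVideoTrack } from 'twilio-video';

// Recommended capture constraints: 640x480 at 24 fps. Higher values can
// improve segmentation accuracy at the cost of output frame rate on
// low-powered devices.
const track = await createLocalVideoTrack({
  width: 640,
  height: 480,
  frameRate: 24,
});
```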
Please check out the following pages for best practices.