@pexip/media-processor

A package for media data processing using Web APIs.

Install

npm install @pexip/media-processor

Usage

Use Analyzer to get data from the MediaStream

const stream = await navigator.mediaDevices.getUserMedia({audio:true});

const fftSize = 64;
// Set up an Audio Graph with `source` -> `analyzer`
const source = createStreamSourceGraphNode(stream);
const analyzer = createAnalyzerGraphNode({fftSize});
const audioGraph = createAudioGraph([[source, analyzer]]);

// Grab the current time domain data in floating point representation
const buffer = new Float32Array(fftSize);
analyzer.node?.getFloatTimeDomainData(buffer);
// Do some work with the buffer
buffer.forEach((sample) => {/* ... */});

// Get the current volume, [0, 1]
const volume = analyzer.getAverageVolume(buffer);

// Release the resources when you are done with the analyzer
await audioGraph.release();
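
For example, a minimal sketch of a volume meter that polls the analyzer on every animation frame, reusing `analyzer` and `buffer` from the snippet above (before the final `release()` call); the <meter> element is illustrative:

const meterEl = document.querySelector('meter');
let rafId = 0;

const renderMeter = () => {
  // Refresh the buffer with the latest time domain data
  analyzer.node?.getFloatTimeDomainData(buffer);
  // Map the average volume, [0, 1], onto the meter element
  if (meterEl) {
    meterEl.value = analyzer.getAverageVolume(buffer);
  }
  rafId = requestAnimationFrame(renderMeter);
};
rafId = requestAnimationFrame(renderMeter);

// Stop polling before releasing the graph
cancelAnimationFrame(rafId);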

Use AudioGraph to control the audio gain for mute/unmute

const stream = await navigator.mediaDevices.getUserMedia({
  audio: true,
  video: true,
});

const mute = !stream.getAudioTracks()[0]?.enabled;

// Set up an Audio Graph with `source` -> `gain` -> `destination`
const source = createStreamSourceGraphNode(stream);
const gain = createGainGraphNode(mute);
const destination = createStreamDestinationGraphNode();

const audioGraph = createAudioGraph([[source, gain, destination]]);

// Use the output MediaStream for the altered AudioTrack
const alteredStream = new MediaStream([
  ...stream.getVideoTracks(),
  ...destination.stream.getAudioTracks(),
]);

// Mute the audio
if (gain.node) {
  gain.node.mute = true;
}

// Check if the audio is muted
gain.node?.mute; // returns `true`, since we have just set the gain to 0

// Release the resources when you are done with the audio graph
await audioGraph.release();
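
As a usage sketch, `alteredStream` behaves like any other MediaStream, so it can be handed to a standard sink such as an audio element or an RTCPeerConnection:

// Play the altered stream locally
const audioEl = new Audio();
audioEl.srcObject = alteredStream;
await audioEl.play();

// Or send its tracks over a peer connection
const pc = new RTCPeerConnection();
alteredStream.getTracks().forEach((track) => pc.addTrack(track, alteredStream));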

Use noise suppression

const stream = await navigator.mediaDevices.getUserMedia({
  audio: true,
  video: true,
});

// Fetch denoise WebAssembly
const response = await fetch(
  new URL('@pexip/denoise/denoise_bg.wasm', import.meta.url).href,
);
const wasmBuffer = await response.arrayBuffer();

// Set up an Audio Graph with `source` -> `destination`
const source = createStreamSourceGraphNode(stream);
const destination = createStreamDestinationGraphNode();

const audioGraph = createAudioGraph([[source, destination]]);
// Add the worklet module
await audioGraph.addWorklet(
  new URL(
    '@pexip/media-processor/dist/worklets/denoise.worklet',
    import.meta.url,
  ).href,
);

const denoise = createDenoiseWorkletGraphNode(wasmBuffer);
// Route the source through the denoise node
audioGraph.connect([source, denoise, destination]);
audioGraph.disconnect([source, destination]);

// Release the resources when you are done with the audio graph
await audioGraph.release();
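
Building on this, a sketch of toggling noise suppression at runtime; it assumes `connect` and `disconnect` accept the same node-path arrays as shown above:

let denoiseEnabled = true;

const setDenoise = (enabled: boolean) => {
  if (enabled === denoiseEnabled) {
    return;
  }
  if (enabled) {
    // Route the audio through the denoise worklet
    audioGraph.connect([source, denoise, destination]);
    audioGraph.disconnect([source, destination]);
  } else {
    // Bypass the denoise worklet
    audioGraph.connect([source, destination]);
    audioGraph.disconnect([source, denoise, destination]);
  }
  denoiseEnabled = enabled;
};

setDenoise(false); // back to the raw microphone audio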

Use background blur

import {
  createSegmenter,
  createCanvasTransform,
  createVideoTrackProcessor,
  createVideoTrackProcessorWithFallback,
} from '@pexip/media-processor';

// Grab the user's camera stream
const stream = await navigator.mediaDevices.getUserMedia({video: true});

// Set the base path for the `@mediapipe/tasks-vision` assets
// It is passed directly to
// [FilesetResolver.forVisionTasks()](https://ai.google.dev/edge/api/mediapipe/js/tasks-vision.filesetresolver#filesetresolverforvisiontasks)
const tasksVisionBasePath =
  'A base path to specify the directory the Wasm files should be loaded from';

const modelAsset = {
  /**
   * Path to mediapipe selfie segmentation model asset
   */
  path: 'A path to selfie segmentation model',
  modelName: 'selfie' as const,
};

const segmenter = createSegmenter(tasksVisionBasePath, {modelAsset});
// Create a processing transformer and set the effects to `blur`
const transformer = createCanvasTransform(segmenter, {effects: 'blur'});
const processor = createVideoTrackProcessor([transformer]);

// Start the processor
await processor.open();

// Pass the raw MediaStream to apply the effects
// Then use the output stream for whatever purpose
const processedStream = await processor.process(stream);
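
To preview the result, attach the processed stream to a video element like any other MediaStream:

const videoEl = document.createElement('video');
videoEl.autoplay = true;
videoEl.srcObject = processedStream;
document.body.append(videoEl);

Judging by its name, the imported createVideoTrackProcessorWithFallback is presumably a drop-in alternative for browsers without insertable streams support; treat that as an assumption rather than documented behaviour.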

Profiling Web Audio

You can do this with Chrome, e.g. via its built-in chrome://tracing tool.

How AudioWorkletNode and the AudioWorkletProcessor work together

┌─────────────────────────┐             ┌──────────────────────────┐
│                         │             │                          │
│    Main Global Scope    │             │  AudioWorkletGlobalScope │
│                         │             │                          │
│  ┌───────────────────┐  │             │  ┌────────────────────┐  │
│  │                   │  │ MessagePort │  │                    │  │
│  │   AudioWorklet    │◄─┼─────────────┼─►│    AudioWorklet    │  │
│  │       Node        │  │             │  │      Processor     │  │
│  │                   │  │             │  │                    │  │
│  └───────────────────┘  │             │  └────────────────────┘  │
│                         │             │                          │
└─────────────────────────┘             └──────────────────────────┘
       Main Thread                          WebAudio Render Thread
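
The two ends communicate over the standard MessagePort API shown in the diagram. A minimal sketch using only built-in Web Audio APIs (the 'my-processor' module name is hypothetical):

// main.js (Main Global Scope)
const audioContext = new AudioContext();
await audioContext.audioWorklet.addModule('my-processor.js');
const node = new AudioWorkletNode(audioContext, 'my-processor');
node.port.onmessage = (event) => console.log('from processor:', event.data);
node.port.postMessage({enabled: true});

// my-processor.js (AudioWorkletGlobalScope)
class MyProcessor extends AudioWorkletProcessor {
  constructor() {
    super();
    this.port.onmessage = (event) => {
      // React to messages from the AudioWorkletNode and acknowledge
      this.port.postMessage({ack: event.data});
    };
  }

  process() {
    return true;
  }
}

registerProcessor('my-processor', MyProcessor);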

Constraints when using the AudioWorklet

  • Each BaseAudioContext possesses exactly one AudioWorklet
  • Audio is processed in fixed blocks of 128 sample-frames (see the sketch after this list)
  • No fetch API in the AudioWorkletGlobalScope
  • No TextEncoder/TextDecoder APIs in the AudioWorkletGlobalScope
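
As an illustration of the 128 sample-frame constraint, here is a minimal, hypothetical pass-through processor; each call to process() receives one render quantum of 128 samples per channel:

class PassThroughProcessor extends AudioWorkletProcessor {
  process(inputs, outputs) {
    const input = inputs[0];
    const output = outputs[0];
    input.forEach((channelData, channel) => {
      // channelData is a Float32Array of exactly 128 samples
      output[channel].set(channelData);
    });
    // Return true to keep the processor alive
    return true;
  }
}

registerProcessor('pass-through', PassThroughProcessor);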
