This library provides a way to work with audio worklets and streams using modern web technologies. It allows for the manual writing of audio frames to a buffer and supports various buffer writing strategies.
This library was created for use in my project fbdplay_wasm, which uses only a very limited set of WebAudio functionality, so the library may lack features needed for general use.
- Manual Buffer Writing: Provides the ability to manually write audio frames to a buffer.
- Multiple Buffer Writing Strategies: Includes support for manual, timer-based, and worker-based buffer writing.
- Worker-Based Stability: Utilizes Workers to ensure stable and consistent audio playback, reducing the impact of UI thread throttling.
- Vite Integration: Leverages Vite for easy worker loading and configuration without complex setup.
- Audio Worklet Integration: Seamlessly integrates with the Web Audio API's Audio Worklet for real-time audio processing.
- Optimized Performance: Designed for efficient real-time audio processing with batch frame handling.
This library uses modern Web APIs and is designed for contemporary browsers only. Tested and confirmed working on:
Feature | Chrome | Chrome (Android) | Firefox | Safari (macOS) | Safari (iOS) | Edge | Opera |
---|---|---|---|---|---|---|---|
Basic Support | ✅ | ✅ | ✅ | ✅ | ❓ | ✅ | ❓ |
Manual Buffer Writing | ✅ | ✅ | ✅ | ✅ | ❓ | ✅ | ❓ |
Timer-Based Buffer Writing | ✅ | ✅ | ✅ | 🔺 | ❓ | ✅ | ❓ |
Worker-Based Stability | ✅ | ✅ | ✅ | ✅ | ❓ | ✅ | ❓ |
Legend:
- ✅: Confirmed and working without issues
- 🔺: Confirmed with limitations (e.g., unstable in background operation)
- ❓: Not yet confirmed
Note: Compatibility with Safari on iOS has not been confirmed due to lack of test devices.
To check compatibility in your environment, please run the demo in the example directory.
- Node.js and npm: Make sure you have Node.js (version 20 or higher) and npm installed. This library hasn't been tested on versions below 20.
- Vite: This library uses Vite as the bundler for its simplicity in loading and configuring workers.
To install the library, run:
npm install @ain1084/audio-worklet-stream
You need to add `@ain1084/audio-worklet-stream` to the `optimizeDeps.exclude` section in `vite.config.ts`. Furthermore, include the necessary COOP (Cross-Origin Opener Policy) and COEP (Cross-Origin Embedder Policy) settings to enable the use of `SharedArrayBuffer`.
import { defineConfig } from 'vite'
export default defineConfig({
optimizeDeps: {
exclude: ['@ain1084/audio-worklet-stream']
},
plugins: [
{
name: 'configure-response-headers',
configureServer: (server) => {
server.middlewares.use((_req, res, next) => {
res.setHeader('Cross-Origin-Embedder-Policy', 'require-corp')
res.setHeader('Cross-Origin-Opener-Policy', 'same-origin')
next()
})
},
},
],
})
If you are using Nuxt3, add it under `vite` in `nuxt.config.ts`.
export default defineNuxtConfig({
vite: {
optimizeDeps: {
exclude: ['@ain1084/audio-worklet-stream']
},
plugins: [
{
name: 'configure-response-headers',
configureServer: (server) => {
server.middlewares.use((_req, res, next) => {
res.setHeader('Cross-Origin-Embedder-Policy', 'require-corp')
res.setHeader('Cross-Origin-Opener-Policy', 'same-origin')
next()
})
},
},
],
},
nitro: {
rollupConfig: {
external: '@ain1084/audio-worklet-stream',
},
routeRules: {
'/**': {
headers: {
'Cross-Origin-Embedder-Policy': 'require-corp',
'Cross-Origin-Opener-Policy': 'same-origin',
},
},
},
},
})
This library continuously plays audio sample frames using AudioWorkletNode. The audio sample frames need to be supplied externally via a ring buffer. The library provides functionality to retrieve the number of written and read (played) frames and allows stopping playback at a specified frame.
The output-only AudioNode is implemented by the OutputStreamNode class, which inherits from AudioWorkletNode and adds functionality for starting and stopping stream playback and retrieving the playback position.
Instances of OutputStreamNode cannot be constructed directly. First, create a StreamNodeFactory instance by calling its static create method with a BaseAudioContext as an argument; this method internally loads the necessary modules. The returned factory can then be used to construct OutputStreamNode instances.
The library does not handle the construction or destruction of AudioContext. When constructing AudioContext, be sure to do so in response to a user interaction, such as a UI event (e.g., button press).
Example:
import { StreamNodeFactory, type OutputStreamNode } from '@ain1084/audio-worklet-stream'
let audioContext: AudioContext | null = null
let factory: StreamNodeFactory | null = null
const clicked = async () => {
if (!audioContext) {
audioContext = new AudioContext()
factory = await StreamNodeFactory.create(audioContext)
}
// Create manual buffer stream
const channelCount = 1
const [node, writer] = await factory.createManualBufferNode({
channelCount,
frameCount: 4096,
})
// Write frames
writer.write((segment) => {
for (let frame = 0; frame < segment.frameCount; frame++) {
for (let channel = 0; channel < segment.channels; ++channel) {
segment.set(frame, channel, 0 /* TODO: replace 0 with the actual sample value */)
}
}
// Return the count of written frames
return segment.frameCount
})
// Start playback
node.start()
node.connect(audioContext.destination)
}
As outlined in the overview, OutputStreamNode requires external audio samples. These samples must be written to a ring buffer, and there are several methods to achieve this.
Note: The diagrams are simplified for ease of understanding and may differ from the actual implementation.
- This method involves manually writing to the ring buffer. Use the `OutputStreamFactory.createManualBufferNode` method, specifying the number of channels and frames, to create an `OutputStreamNode`. The `FrameBufferWriter` used for writing to the ring buffer is also returned by this method along with the `OutputStreamNode`.
- When the `OutputStreamNode` is first constructed, the ring buffer is empty. You must write to the buffer before starting playback to avoid audio gaps. While the node is playing, you must continue writing to the ring buffer to prevent audio frame depletion (which would cause silence).
- If the audio frames run out, stream playback continues and the node outputs silence.
- To stop stream playback, call the `stop()` method of `OutputStreamNode`. You can specify the frame at which to stop playback. For example, calling `stop()` with a frame count stops playback at that exact frame. If you want to play all the written frames, specify the total number of written frames, which can be obtained via the `FrameBufferWriter`.
- This method writes to the ring buffer using a timer started on the UI thread. Create it using the `OutputStreamFactory.createTimedBufferNode()` method, specifying the number of channels, the timer interval, and the `FrameBufferFiller` that supplies samples to the buffer.
- Writing to the ring buffer is handled by the `FrameBufferFiller`. The timer periodically calls its `fill` method, which supplies audio frames via the `FrameBufferWriter`.
- If the audio frames run out, stream playback continues and the node outputs silence.
- If the `fill` method of the `FrameBufferFiller` returns `false`, no more audio frames are available. Once the `OutputStreamNode` has output all the written frames, the stream automatically stops and disconnects.
- As with the Manual method, you can also stop playback at any time using the `stop()` method.
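To make the filler pattern concrete, here is a sine-wave filler written against minimal stand-ins for the writer interfaces. The `Segment` and `FrameBufferWriter` shapes below are simplified assumptions for illustration, not the library's actual type definitions:

```typescript
// Minimal stand-ins for the library's interfaces (assumed shapes, illustration only).
interface Segment {
  readonly frameCount: number
  readonly channels: number
  set(frame: number, channel: number, value: number): void
}

interface FrameBufferWriter {
  write(fn: (segment: Segment) => number): number
}

// A filler that writes a continuous sine wave. Returning false from fill()
// would signal "no more frames" and let the stream stop automatically.
class SineWaveFiller {
  private phase = 0

  constructor(private frequency: number, private sampleRate: number) {}

  fill(writer: FrameBufferWriter): boolean {
    writer.write((segment) => {
      const step = (2 * Math.PI * this.frequency) / this.sampleRate
      for (let frame = 0; frame < segment.frameCount; frame++) {
        const value = Math.sin(this.phase)
        // Write the same sample to every channel of this frame.
        for (let channel = 0; channel < segment.channels; ++channel) {
          segment.set(frame, channel, value)
        }
        this.phase += step
      }
      // Report how many frames were written.
      return segment.frameCount
    })
    return true // keep streaming
  }
}
```

In the Timed strategy, the library's timer would invoke `fill` every `fillInterval` milliseconds; the class above only shows the buffer-writing side.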
- Similar to the Timed method, this method uses a timer to write to the ring buffer, but the timer runs inside a Worker. This reduces the impact of UI thread throttling, providing more stable playback.
- Create it using the `OutputStreamFactory.createWorkerBufferNode()` method.
- Writing to the ring buffer occurs within the Worker.
- Buffer writing is still managed by a `FrameBufferFiller`, but the instance must be created and used within the Worker.
- You need to create a custom Worker. However, helper implementations are available to simplify this process; essentially, you only need to specify the `FrameBufferFiller` implementation class within the Worker.
- Depending on how you implement the `FrameBufferFiller` class, you can reuse the same implementation as the Timed method.
Note: Any data passed from the UI thread to the Worker (such as fillerParams in the WorkerBufferNodeParams) must be serializable (e.g., primitives, arrays, objects). Non-serializable values like functions or DOM elements cannot be passed.
- When the buffer becomes empty, silent audio is output instead of throwing an error.
- The AudioNode continues to operate and consume CPU resources even during silent output.
- Normal audio output resumes automatically when new audio data is supplied.
- An UnderrunEvent is emitted upon recovery from an underrun, reporting the duration of silence (note: this is a post-event notification).
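The underrun behavior described above can be illustrated with a small mock of the read side. This is not the library's implementation (the real node does this inside the audio worklet); it only demonstrates the silence-on-empty and report-on-recovery semantics:

```typescript
// Mock of the worklet's read side: when the buffer is empty it emits silence
// and counts the silent frames, reporting the total once data resumes.
class MockReader {
  private silentFrames = 0
  readonly underrunReports: number[] = []

  // Pops one sample; returns 0 (silence) when the queue is empty.
  read(queue: number[]): number {
    if (queue.length === 0) {
      this.silentFrames++ // keep "playing", but output silence
      return 0
    }
    if (this.silentFrames > 0) {
      // Post-event notification: the underrun duration is reported on recovery.
      this.underrunReports.push(this.silentFrames)
      this.silentFrames = 0
    }
    return queue.shift()!
  }
}
```

Note how the duration is only known after recovery, which is why the real `UnderrunEvent` is a post-event notification.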
Understanding these parameters is crucial for optimal audio performance:
- `channelCount`
  - Specifies the number of audio channels (e.g., 2 for stereo).
  - Determines the sample composition within each frame.
- `frameCount`
  - Defines the size of the ring buffer in frames.
  - For continuous streaming, new frames must be supplied before the buffer is depleted.
  - Larger sizes are less susceptible to interruptions but increase latency.
  - Smaller sizes require more frequent writes, potentially increasing CPU load.
  - Manual strategy: must be specified.
  - Timed/Worker strategies: calculated internally from `fillInterval`.
- `fillInterval`
  - Specifies the interval of the buffer refill timer.
  - Default: 20 ms.
- Chunk count
  - Specifies the number of chunks making up the total buffer size.
  - Default: 5.
Example calculation: For 48kHz sample rate and 20ms fillInterval:
- One chunk size = 48000 * 0.02 = 960 frames
- Total buffer size with default 5 chunks = 960 * 5 = 4800 frames
The actual values may slightly differ from the above because they are rounded up to 128 frame units.
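The calculation above, including rounding up to 128-frame units (the Web Audio render quantum), can be sketched as follows. The function name and the default of 5 chunks are taken from the text for illustration; this is not the library's API:

```typescript
const RENDER_QUANTUM = 128 // Web Audio processes audio in 128-frame blocks

// Illustrative calculation of the ring buffer size for the Timed/Worker strategies.
function bufferFrames(sampleRate: number, fillIntervalMs: number, chunks = 5): number {
  const rawChunk = sampleRate * (fillIntervalMs / 1000) // e.g. 48000 * 0.02 = 960
  // Round each chunk up to a whole number of 128-frame render quanta.
  const chunk = Math.ceil(rawChunk / RENDER_QUANTUM) * RENDER_QUANTUM
  return chunk * chunks
}
```

For 48 kHz and 20 ms this gives a 1024-frame chunk (960 rounded up to the next 128-frame boundary) and a 5120-frame total buffer, slightly larger than the unrounded 4800 frames.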
You can find the full API documentation here.
The example can be found in the example directory.
The provided example demonstrates how to use the library to manually write audio frames to a buffer. It includes:
- Main Application (example/src/main.ts): Sets up and starts the audio stream using different buffer writing strategies.
- Sine Wave Filler (example/src/sine-wave-frame-buffer-filler.ts): Implements a frame buffer filler that generates a sine wave.
- Sine Wave Generator (example/src/sine-wave-generator.ts): Generates sine wave values for the buffer filler.
- Worker (example/src/worker.ts): Sets up a worker to handle buffer filling tasks.
- HTML Entry Point (example/index.html): Provides the HTML structure and buttons to control the audio stream.
For more details, refer to the example/README.md.
This guide provides tips and best practices for optimizing the performance of your audio application using the Audio Worklet Stream Library.
Our library is optimized for processing audio frames in large batches. To maximize performance:
- When implementing your own audio generation or processing logic, work with larger chunks of audio data whenever possible.
- Utilize the library's ability to handle multiple frames at once in your buffer filling strategies.
- Larger buffer sizes can improve performance but may increase latency.
- Experiment with different buffer sizes to find the optimal balance between performance and latency for your specific use case.
For the most stable playback, especially in scenarios where the main thread might be busy:
- Use the Worker-based buffer writing strategy.
- Implement computationally intensive tasks within the Worker to avoid impacting the main thread.
- When working with multiple audio streams, be mindful of memory usage.
- For scenarios requiring numerous concurrent audio streams, pay attention to the creation and resource management of each stream. Consider properly releasing unused streams when necessary.
- Creating new AudioContext instances is expensive. Reuse existing contexts when possible.
- If your application requires multiple audio streams, try to use a single AudioContext for all of them.
- Use browser developer tools to profile your application and identify performance bottlenecks.
- Monitor CPU usage and audio underruns to ensure smooth playback.
By following these guidelines, you can ensure that your audio application runs efficiently and provides a smooth user experience.
This guide covers advanced usage scenarios and techniques for the Audio Worklet Stream Library.
For complex audio processing tasks:
- Create a custom Worker that extends `BufferFillWorker`.
- Implement advanced audio processing algorithms within the Worker.
- Use `SharedArrayBuffer` for efficient data sharing between the main thread and the Worker.
When working with multiple audio streams:
- Create separate `OutputStreamNode` instances for each stream.
- Manage their lifecycle and synchronization carefully.
- Consider implementing a mixer if you need to combine multiple streams.
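A mixer as suggested above can be as simple as summing per-stream samples and clamping the result. This sketch is independent of the library's types; the function name is illustrative:

```typescript
// Mixes N mono sample buffers into one, clamping to the [-1, 1] sample range.
function mixStreams(streams: Float32Array[], frameCount: number): Float32Array {
  const out = new Float32Array(frameCount)
  for (const stream of streams) {
    for (let i = 0; i < frameCount; i++) {
      out[i] += stream[i] ?? 0 // treat missing frames as silence
    }
  }
  for (let i = 0; i < frameCount; i++) {
    out[i] = Math.min(1, Math.max(-1, out[i])) // clamp to avoid clipping artifacts
  }
  return out
}
```

Note that the Web Audio graph already sums multiple inputs connected to the same node, so an explicit mixer like this is only needed when combining sample data before it reaches an `OutputStreamNode`.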
You can combine this library with other Web Audio API features:
- Connect the `OutputStreamNode` to other AudioNodes for additional processing.
- Use AnalyserNode for visualizations.
- Implement spatial audio using PannerNode.
For robust applications:
- Implement comprehensive error handling, especially for Worker-based strategies.
- Use the UnderrunEvent to detect and handle buffer underruns.
- Implement logging or metrics collection for performance monitoring.
These advanced techniques will help you leverage the full power of the Audio Worklet Stream Library in complex audio applications.
This guide helps you troubleshoot common issues when using the Audio Worklet Stream Library.
- Check if your AudioContext is in the 'running' state. It may need to be resumed after user interaction.
- Ensure your OutputStreamNode is properly connected to the AudioContext destination.
- Verify that you're writing audio data to the buffer correctly.
- Increase your buffer size to reduce the chance of underruns.
- If using the Timed strategy, consider switching to the Worker strategy for more stable playback.
- Check your system's CPU usage. High CPU usage can cause audio glitches.
If you're experiencing frequent UnderrunEvents:
- Increase your buffer size.
- Optimize your audio data generation/loading process.
- If using the Manual strategy, ensure you're writing to the buffer frequently enough.
- Ensure your Worker file is in the correct location and properly bundled.
- Check for any console errors related to Worker initialization.
- Verify that your FrameBufferFiller implementation in the Worker is correct.
- Check the message passing between the main thread and the Worker.
- Refer to the Browser Compatibility table in the README.
- Ensure you're using the latest version of the browser.
- Check if the required features (like SharedArrayBuffer) are enabled in the browser.
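A quick runtime check for the requirements above might look like this (a hypothetical helper, not part of the library):

```typescript
// Returns true when the environment can use SharedArrayBuffer for audio streaming.
// In browsers, SharedArrayBuffer is only usable when the page is cross-origin
// isolated (COOP/COEP headers set), surfaced via the `crossOriginIsolated` global.
function canUseSharedArrayBuffer(): boolean {
  const hasSab = typeof SharedArrayBuffer !== 'undefined'
  const isolated =
    (globalThis as { crossOriginIsolated?: boolean }).crossOriginIsolated === true
  return hasSab && isolated
}
```

If this returns `false` in a browser, revisit the COOP/COEP header configuration described in the installation section.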
- Ensure `@ain1084/audio-worklet-stream` is properly installed.
- Check your bundler configuration, especially the `optimizeDeps.exclude` setting in Vite.
- Verify that you've set the correct COOP (Cross-Origin Opener Policy) and COEP (Cross-Origin Embedder Policy) headers as described in the installation instructions.
- If using a development server, ensure it's configured to send the required headers.
If you're experiencing poor performance:
- Profile your application using browser developer tools.
- Consider using the Worker strategy for computationally intensive tasks.
- Optimize your audio processing algorithms.
If you're still facing issues after trying these solutions, please open an issue on our GitHub repository with a detailed description of the problem and steps to reproduce it.
When using `@ain1084/audio-worklet-stream` in a Nuxt 3 project, you may encounter issues during SSR (Server-Side Rendering) or when importing the package as an ESM module. This can result in errors like:
[nuxt] [request error] [unhandled] [500] Cannot find module '/path/to/node_modules/@ain1084/audio-worklet-stream/dist/esm/events' imported from '/path/to/node_modules/@ain1084/audio-worklet-stream/dist/esm/index.js'
1. Disable SSR for the component
   You can disable SSR for the component that uses the package by wrapping it in `<client-only>`:
   `<client-only> <MyComponent /> </client-only>`
2. Use `ssr: false` in `nuxt.config.ts`
   You can disable SSR for the entire project in `nuxt.config.ts`:
   `export default defineNuxtConfig({ ssr: false, /* other configurations */ })`
3. Use `import.meta.server` and `import.meta.client`
   For more granular control, you can use `import.meta.server` and `import.meta.client` to conditionally import the module only on the client side. Note that this method is more complex than options 1 and 2:
   `if (import.meta.client) { const { StreamNodeFactory } = await import('@ain1084/audio-worklet-stream'); /* Use StreamNodeFactory */ }`
To ensure proper operation, it is essential to use `ssr: false` or `<client-only>` for components, and to exclude `@ain1084/audio-worklet-stream` from Vite's optimization in your `nuxt.config.ts`:
export default defineNuxtConfig({
ssr: false, // or use <client-only> for specific components
vite: {
optimizeDeps: {
exclude: ['@ain1084/audio-worklet-stream']
},
plugins: [
{
name: 'configure-response-headers',
configureServer: (server) => {
server.middlewares.use((_req, res, next) => {
res.setHeader('Cross-Origin-Embedder-Policy', 'require-corp')
res.setHeader('Cross-Origin-Opener-Policy', 'same-origin')
next()
})
},
},
],
},
nitro: {
rollupConfig: {
external: '@ain1084/audio-worklet-stream',
},
// Ensure COEP and COOP settings for SharedArrayBuffer
routeRules: {
'/**': {
headers: {
'Cross-Origin-Embedder-Policy': 'require-corp',
'Cross-Origin-Opener-Policy': 'same-origin',
},
},
},
},
})
We are considering potential enhancements for future releases, including:
- Buffer management optimization: We are considering ways to improve memory efficiency and initialization time, especially when dealing with multiple audio streams.
Please note that these are just considerations and may or may not be implemented in future versions. We always aim to balance new features with maintaining the library's stability and simplicity.
- Vite as a Bundler: This library utilizes Vite to enable the loading and placement of workers without complex configuration. It may not work out of the box with webpack due to differences in how bundlers handle workers. While similar methods may exist for webpack, this library currently supports only Vite. A bundler-independent approach was initially considered, but no suitable method was found.
- Security Requirements: Since this library uses `SharedArrayBuffer`, ensuring browser compatibility requires meeting specific security requirements. For more details, refer to the MDN Web Docs on SharedArrayBuffer security requirements.
Contributions are welcome! Please open an issue or submit a pull request on GitHub.
This project is licensed under multiple licenses:
You can choose either license depending on your project needs.
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.