# Bema

## Installation

```
npm add bema
```

## About
Bema is a framework for writing benchmarks. It focuses on your workflow of writing and maintaining benchmarks over time. Under the hood it uses Benchmark.js as its engine but layers many features on top. Conceptually, you can roughly think of it as being to benchmarks what `jest` is to tests. It was initially developed at Prisma for internal benchmarking needs and continues to be used today. Its features and roadmap are driven firstly by Prisma's needs, however community contributions are generally welcome too!
## Features
- Define groups to organize your benchmarks
- Fluent API maximally leveraging TypeScript for fantastic autocompletion and type safety
- Easy matrix definition (on par with GitHub Actions)
- Benchmark parameterization (global or group level)
- Context system for easily sharing data (or anything else you want) across benchmarks
- Provider system (think Express-middleware-ish) for reusing logic across benchmarks
- CLI
  - A statistically insignificant quick mode to try out benchmarks while developing
  - Select benchmarks to run
  - Select benchmarks to skip
  - Target benchmarks by parameter and/or group matches
  - Pretty terminal reporting
- GitHub Actions integration
  - Download artifacts
  - Plug GitHub matrixes into filtered benchmark runs (not actually an integrated feature at all, just works really well)
  - Report file merging (e.g. useful when you have many separate report files from a matrix of CI job runs)
- Reports
  - Full access to the detailed sample stats and overall stats provided by Benchmark.js
  - Metadata (benchmark names, parameter values each had, group each was in, etc.)
  - Information organized by matrixes if used
  - Multiple formats
    - JSON
    - CSV
    - Markdown
- Integrated Next.js webapp for visualizing benchmarks (work in progress)
- Able to run TypeScript without you having to do anything (e.g. imagine if `jest` had `ts-jest` built in)
- Hook onto parameter change events
- Benchmark result sanity check system (verify benchmark runs are doing the work you expect them to)

  Useful for complex benchmarks. For example, imagine you are testing a set of ORMs with completely different APIs, but you want to ensure that the queries they run against the database always return the exact same set of data; otherwise your benchmarks aren't actually comparing apples-to-apples. Bema helps you build confidence around this use-case with an integrated sanity check step you can opt into (see the sketch below for the kind of check involved).
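
To make the apples-to-apples concern concrete, here is a minimal conceptual sketch. This is not Bema's actual sanity-check API; the `User` shape and the two ORM stand-ins (`queryUsersWithOrmA`, `queryUsersWithOrmB`) are made up purely for illustration of the kind of verification involved.

```ts
import assert from 'node:assert'

type User = { id: number; email: string }

// Hypothetical stand-ins for two ORMs with completely different APIs.
const queryUsersWithOrmA = async (): Promise<User[]> => [{ id: 1, email: 'a@example.com' }]
const queryUsersWithOrmB = async (): Promise<User[]> => [{ id: 1, email: 'a@example.com' }]

// The idea behind the sanity check step: before trusting timings, verify that
// every implementation under test returns the exact same data.
const verifySameWork = async (): Promise<void> => {
  const [a, b] = await Promise.all([queryUsersWithOrmA(), queryUsersWithOrmB()])
  assert.deepStrictEqual(a, b, 'ORMs returned different data; timings are not comparable')
}

void verifySameWork()
```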
## Guide (work in progress)
The following gives a taste of Bema, but there are many other features and more advanced topics not yet covered in this guide.

```ts
// benchmarks/simple.bench.ts

// Bema exports a singleton so you can get to work quickly.
import bema from 'bema'

// Save a reference to the created+configured group so that you can
// define multiple benchmarks later down in the module.
const simple = bema
  // Create groups of benchmarks. This allows you to share configuration across multiple benchmarks
  // and affects their default presentation in downstream reporting.
  .group('Simple')
  // Define custom parameters. Benchmarks are named by their accumulated parameters.
  .parameter('name')
  // Let's add two to show it off down below.
  .parameter('thing')
  // A middleware system. You get access to upstream context and can augment it
  // however you want for downstream parts! Also, your additions here will be statically visible
  // downstream thanks to TypeScript!
  .use((ctx) => ({
    ...ctx,
    newThing: true,
  }))
  // Sugar over the middleware system to quickly attach data to the context.
  .useData({ text: 'bar' })

simple
  // Create a new benchmark
  .case({
    name: 'just-text',
    thing: true,
  })
  // Add a provider only for this benchmark (doesn't affect the group).
  // `foobar` is assumed to be a provider defined elsewhere; see the sketch below.
  .use(foobar)
  // Your actual benchmark implementation. All code in here will be timed in a
  // statistically significant way (via Benchmark.js)
  .run((ctx) => {
    console.log(ctx.text)
  })

simple
  // Create another benchmark
  .case({
    name: 'interpolated-text',
    thing: false,
  })
  .run(async (ctx) => {
    console.log(`%s`, ctx.text)
  })
```
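
The `foobar` provider used above is not defined in this snippet. Going only by the inline `.use((ctx) => ...)` middleware shown in the group definition, a provider can be sketched as a plain context-transforming function; the module path and the `foo` field below are made up for illustration.

```ts
// benchmarks/providers.ts (hypothetical module)

// A provider sketched as a context-transforming function, mirroring the inline
// `.use((ctx) => ...)` example above. Whatever it returns becomes the context
// seen by downstream parts of the benchmark.
export const foobar = <C extends object>(ctx: C) => ({
  ...ctx,
  foo: 'bar',
})
```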
Then run your benchmarks with the CLI:

```
npx bema
```
## CLI
## Reference Docs

On Paka (work in progress)