# @smcloudstore/backblaze-b2
This package is a provider for SMCloudStore, for Backblaze B2. SMCloudStore is a lightweight Node.js module that offers a simple API to interact with the object storage services of multiple cloud providers.
Please refer to the main package for the SMCloudStore documentation and instructions on how to use it.
## System requirements
The Backblaze B2 provider requires Node.js version 10 or higher.
## Provider-specific considerations
There are a few provider-specific considerations for the BackblazeB2 provider.
### Connection argument
When initializing the BackblazeB2 provider, the `connection` argument is an object with:

- `connection.accountId`: string containing the account ID (the "public key")
- `connection.applicationKey`: string containing the application key (the "secret key"). Since this provider uses version 2 of the Backblaze B2 APIs, the key can be either the "master application key" or a "normal application key"; for more information, please refer to the B2 documentation.
Example:

```js
// Require the package
const SMCloudStore = require('smcloudstore')

// Complete with the connection options for Backblaze B2
const connection = {
    accountId: 'ACCOUNT_ID_HERE',
    applicationKey: 'APPLICATION_KEY_HERE'
}

// Return an instance of the BackblazeB2Provider class
const storage = SMCloudStore.create('backblaze-b2', connection)
```
### Creating a container
When using the `storage.createContainer(container, [options])` and the `storage.ensureContainer(container, [options])` methods, the `options` argument can be used to define some options for the container (see the example below):

- `options.access` (optional): string determining the permission for all objects inside the container; possible values are:
    - `'public'`
    - `'private'`
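
For example, a minimal sketch that ensures a private container exists, assuming the `storage` object created in the connection example above (the container name is a placeholder):

```js
async function setup() {
    // Create the container 'my-container' if it doesn't exist yet,
    // making all objects inside it private
    await storage.ensureContainer('my-container', {access: 'private'})
}
```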
### Uploading an object
The Backblaze B2 APIs have relatively poor support for streams, as they require the size of the data to be sent at the beginning of the upload request. Because of that, the Backblaze B2 provider can operate in two separate modes when uploading files:
- If the length of the data can be known before the upload starts, the provider makes a single upload call. This applies to all situations when `data` is a Buffer or a string, and when `data` is a Stream and either the `options.length` argument is specified (see below) or `data.byteLength` is defined (in this case, all data is loaded in memory before being sent to the server).
- When `data` is a Stream and the length can't be known beforehand, if the data is longer than `B2Upload.chunkSize` (default: 9MB; minimum: 5MB), the method uses B2's large file APIs. With those, it's possible to split the file into many chunks and upload them separately, so it's not necessary to load the entire Stream in memory. However, this way of uploading files requires many more network calls and could be significantly slower. B2 supports up to 10,000 chunks per object, so using 9MB chunks (the default value for `B2Upload.chunkSize`), the maximum file size is 90GB.
Note: there is currently an issue with an upstream package that limits `B2Upload.chunkSize` to no more than 10MB.
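
As a sketch of the first mode, uploading a Buffer (whose length is always known) results in a single upload call; the container name and object path below are placeholders, and the metadata key follows SMCloudStore's standard `options.metadata`:

```js
async function uploadText() {
    // data is a Buffer, so its length is known and the upload happens in one call
    const data = Buffer.from('Hello, world!', 'utf8')
    await storage.putObject('my-container', 'hello.txt', data, {
        metadata: {'Content-Type': 'text/plain'}
    })
}
```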
With the Backblaze B2 provider, the `storage.putObject(container, path, data, [options])` method comes with an extra key for the `options` dictionary, in addition to the standard `options.metadata` key (see the example after this list):

- `options.length` (optional): as described above, when `data` is a Stream whose length can be known in advance, setting `options.length` to the byte size (or setting the `byteLength` property on the `data` object directly) allows the BackblazeB2 provider to upload the object with a single call. This option is ignored if `data` is a string or a Buffer.
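
For example, a sketch of streaming a file from disk while passing its size in `options.length`, assuming the `storage` object from above (the file path and object path are placeholders):

```js
const fs = require('fs')

async function uploadStream() {
    const file = '/path/to/photo.jpg'
    // Get the file size in advance so the provider can upload in a single call,
    // without buffering the stream or falling back to the large file APIs
    const {size} = fs.statSync(file)
    const stream = fs.createReadStream(file)
    await storage.putObject('my-container', 'photos/photo.jpg', stream, {
        length: size,
        metadata: {'Content-Type': 'image/jpeg'}
    })
}
```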
### Using pre-signed URLs

Backblaze B2 does not offer pre-signed URLs in its APIs. Because of that, the `presignedGetUrl` and `presignedPutUrl` methods always throw an exception when called on the BackblazeB2 provider.
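
A minimal sketch of guarding against this in code that targets multiple providers (container and object names are placeholders):

```js
async function getUrl() {
    try {
        return await storage.presignedGetUrl('my-container', 'hello.txt')
    }
    catch (err) {
        // The BackblazeB2 provider always ends up here
        console.error('Pre-signed URLs are not supported by Backblaze B2')
    }
}
```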
### Accessing the Backblaze B2 library
The Backblaze B2 provider is built on top of the backblaze-b2 package, which is exposed by calling `storage.client()`.

You can use the object returned by this method to perform low-level operations using the backblaze-b2 module.
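
For example, a sketch using the underlying client directly; the `authorize` and `listBuckets` methods and the shape of the response are assumptions about the backblaze-b2 module's API, so check that package's documentation before relying on them:

```js
async function listBuckets() {
    // Get the underlying backblaze-b2 client
    const client = storage.client()
    // Low-level calls require authorizing with B2 first
    await client.authorize()
    const response = await client.listBuckets()
    // With recent versions of backblaze-b2, responses are axios-style objects
    console.log(response.data.buckets)
}
```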