@beesley/bags-of-cache
This package contains in-memory cache tools. A generic in-memory cache class is provided for use by other tools.
Cached Client
This is the base class used by the other classes in this package. It provides the TTL cache implementation and some basic utilities. In general, this class can be used whenever repeated calls are made to the same expensive function and an in-memory cache with a TTL would help.
Caveats
To ensure items in the in-memory cache are immutable, they are stored in a serialised format: each item is serialised when it is saved and deserialised every time it is read. The overhead for this is small (v8's serialisation methods are used), but bear it in mind when weighing up whether caching will improve performance.
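As a rough illustration of why cached values are immutable, here is a minimal sketch; the client handles the serialisation internally, using Node's v8 module, so the last two lines are conceptual only.
import { serialize, deserialize } from 'node:v8';
import { CacheingClient } from '@beesley/bags-of-cache';
const client = new CacheingClient({ ttl: 60e3 });
const original = { colour: 'blue' };
client.set('config', original);
// mutating the original object after caching has no effect on the cached
// copy, because the client stored a serialised snapshot when set was called
original.colour = 'red';
const cached = client.get('config'); // { colour: 'blue' }
// conceptually, each set/get does something like this under the hood:
const snapshot = serialize(original); // Buffer written once on set
const copy = deserialize(snapshot); // fresh object produced on every get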
Usage
import { CacheingClient } from '@beesley/bags-of-cache';
const client = new CacheingClient({
ttl: 60e3, // cache responses for 60 seconds
});
// cache something
const fooValue = 'foo';
client.set('foo', fooValue);
// read from the cache
client.get('foo');
// clear the cache
client.stop();
// memoise an arbitrary async function, leveraging the in-memory cache
const fn = async (arg, other) => `arg was ${arg}, other was ${other}`;
const memoised = client.memoise(fn);
const first = await memoised('one', 'two');
const second = await memoised('one', 'two'); // this will be returned from the cache
Client API
CacheingClient
Base client used to create clients with in-memory caches
get
Gets an item from the cache
Parameters
key (string) The cache key
Returns any
memoise
Memoises an arbitrary function using the cache
Parameters
fn (T) Any async function
Returns T The memoised version of the function
set
Sets an item in the cache
Parameters
key (string) The cache key
value The value to cache
Returns void
stop
Empties the cache
Returns void
createCacheKey
Utility to serialise arbitrary arguments into a string for use as a cache key
Parameters
args (...Array<any>) The arguments to serialise
Returns string
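A small usage sketch, assuming createCacheKey is exposed on the client instance as the listing above suggests; the exact format of the returned string is an implementation detail.
import { CacheingClient } from '@beesley/bags-of-cache';
const client = new CacheingClient({ ttl: 60e3 });
// build a deterministic key from several values, then use it with set/get
const key = client.createCacheKey('track', 'de', { language: 'default' });
client.set(key, { id: 123 });
const hit = client.get(key);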
Cached Dynamo Client
Where repeated calls are made to dynamo for the same content, this caching client can be used to reduce the number of requests sent by caching the responses. With some access patterns, the same query is sent again and again even though its response changes very infrequently. This hammers dynamo with identical requests and adds avoidable latency. Under those conditions this module can significantly improve performance by caching dynamo responses for a provided TTL.
When sending a query or get command, the client first checks its in-memory cache for a result before calling dynamo. If the response is in the cache, it is served from there and dynamo is not called. If it is not, the client calls dynamo and stores the response in the cache for the provided TTL.
It is also possible to limit the concurrency of query and get commands individually. This can be used to throttle the rate at which dynamo requests are made in general, but it is particularly useful when your application makes the same request repeatedly: with queryConcurrency set to 1, the first of those queries goes to dynamo and any further requests for the same query are paused until that first response comes back, at which point each paused query resolves with the cached value from the initial request (essentially dogpile protection).
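For example, a sketch of that dogpile protection; the table name, region, and query values here are illustrative.
import { DynamoCachedClient } from '@beesley/bags-of-cache/dynamo';
const client = new DynamoCachedClient({
ttl: 60e3,
dynamoTable: 'my-config-dev',
awsRegion: 'eu-central-1',
queryConcurrency: 1, // only one query command in flight at a time
});
// fire the same query five times in parallel: only the first call reaches
// dynamo; the other four wait for it and then resolve from the cache
const input = {
KeyConditionExpression: '#type = :type',
ExpressionAttributeNames: { '#type': 'type' },
ExpressionAttributeValues: { ':type': 'track' },
};
const results = await Promise.all(
Array.from({ length: 5 }, () => client.query(input)),
);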
Caveats
For situations where the same request for config is made repeatedly, this module should dramatically improve performance. However, to ensure items in the in-memory cache are immutable, they are stored in a serialised format: each item is serialised when it is saved and deserialised every time it is read. The overhead for this is small (v8's serialisation methods are used), but if your access pattern is such that the same queries are unlikely to be repeated, the only impact of using this library will be increased memory use and slower responses (due to serialising each item that gets cached). So consider your access pattern and decide for yourself whether this is the right solution.
Usage
import { DynamoCachedClient } from '@beesley/bags-of-cache/dynamo';
const client = new DynamoCachedClient({
ttl: 60e3, // cache responses for 60 seconds
dynamoTable: 'my-config-dev',
awsRegion: 'eu-central-1',
queryConcurrency: 1, // only allow 1 concurrent dynamo query command - optional
getConcurrency: 1, // only allow 1 concurrent dynamo getItem command - optional
});
// send an arbitrary query, leveraging the in memory cache
const configItems = await client.query({
KeyConditionExpression: '#type = :type',
FilterExpression: '#country = :country AND #language = :language',
ExpressionAttributeNames: {
'#type': 'type',
'#country': 'country',
'#language': 'language',
},
ExpressionAttributeValues: {
':type': 'track',
':country': 'de',
':language': 'default',
},
});
// get an arbitrary item, leveraging the in memory cache
const item = await client.getItem({
country: 'de',
pk: 'foo',
});
Dynamo API
emptyResponseCacheTime
When a query returns no items, the same query will not be re-sent to dynamo within this time (ms); see the configuration sketch after this list
Type: number
queryConcurrency
Limit concurrently sent queries to this value
Type: number
getConcurrency
Limit concurrent getItem calls to this value
Type: number
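A configuration sketch showing the three options above together; this assumes, as the listing suggests, that they are all passed to the constructor, and the values are illustrative.
import { DynamoCachedClient } from '@beesley/bags-of-cache/dynamo';
const client = new DynamoCachedClient({
ttl: 60e3,
dynamoTable: 'my-config-dev',
awsRegion: 'eu-central-1',
emptyResponseCacheTime: 10e3, // don't re-send a query that returned no items for 10s
queryConcurrency: 2, // at most 2 query commands in flight
getConcurrency: 4, // at most 4 getItem commands in flight
});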
DynamoCachedClient
Extends CacheingClient
Client for dynamo tables. Makes dynamo requests and caches the results.
getItem
Gets a single item from the table by key and caches the result
Parameters
key (Record<string, any>) The dynamo key object
Returns Promise<T | undefined>
query
Sends an arbitrary dynamo query, caching the results
Parameters
input (Partial<QueryCommandInput>) The dynamo query command input
Returns Promise<T[]>