Let's say you have a static site generator and it generates Open Graph images. But this is a costly operation and the images do not change often, so you want to cache them between runs.
The simplest solution is to cache them on the file system, for example like this: ImageCache + shorthash + deterministicString (from deterministic-object-hash). But this approach lacks the following features:
- auto-cleanup: if you did some experimentation and generated images that will never be used, they will never be cleaned up unless you delete the whole cache. One way to solve this is LRU (Least Recently Used) eviction
- expiration: what if, instead of generating images locally, I want to download them from a remote destination and re-download a fresh version from time to time? One way to solve this is a TTL (Time To Live), i.e. an expiration date for items (both policies are sketched right after this list)
- efficiency: the file system is not the most efficient storage for a cache
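To pin down those two policies: a TTL attaches an expiry timestamp to each entry, and LRU evicts the entry that was touched least recently. Here is a minimal in-memory sketch — illustrative only, the class and its names are mine, not from any of the packages mentioned:

```ts
// Minimal illustration of TTL + LRU semantics (in-memory, hypothetical).
type Entry<V> = { value: V; expiresAt: number };

class TtlLruCache<V> {
  // Map preserves insertion order, so the first key is the least recently used.
  private map = new Map<string, Entry<V>>();
  constructor(private maxItems: number, private ttlMs: number) {}

  get(key: string): V | undefined {
    const entry = this.map.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) { // TTL: the entry expired, drop it
      this.map.delete(key);
      return undefined;
    }
    this.map.delete(key); // LRU: re-insert to mark as most recently used
    this.map.set(key, entry);
    return entry.value;
  }

  set(key: string, value: V): void {
    this.map.delete(key);
    this.map.set(key, { value, expiresAt: Date.now() + this.ttlMs });
    if (this.map.size > this.maxItems) {
      // LRU: evict the least recently used entry (auto-cleanup)
      this.map.delete(this.map.keys().next().value as string);
    }
  }
}
```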
I'm looking for something more efficient than the file system. And this is exactly what SQLite is for:
> Think of SQLite not as a replacement for Oracle but as a replacement for `fopen()`.
>
> — About SQLite
But why not something else? I didn't find anything better that fits the following criteria:
- embedded. This disqualifies Redis, Memcache, and similar
- synchronous. This disqualifies RocksDB, LevelDB (at least their Node bindings are asynchronous), and similar
- persistent. This disqualifies lru-native2, flru, and similar
So here we are... I didn't do any benchmarks, though.
This is a slightly modified version of bun-sqlite-cache, which is itself a modified version of cache-sqlite-lru-ttl, so most of the code was written by the authors of the original packages. Thank you.
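To make the idea concrete, here is a rough sketch of such an SQLite-backed cache core, assuming better-sqlite3 and a hypothetical schema (the table and column names are mine; the real packages differ in details):

```ts
import Database from 'better-sqlite3';
import { serialize, deserialize } from 'node:v8';

// Hypothetical schema: expires_at drives TTL, last_accessed drives LRU.
const db = new Database('cache.sqlite');
db.exec(`CREATE TABLE IF NOT EXISTS cache (
  key TEXT PRIMARY KEY,
  value BLOB,
  expires_at INTEGER,    -- TTL: unix ms after which the entry is stale
  last_accessed INTEGER  -- LRU: updated on every read
)`);

export function set(key: string, value: unknown, ttlMs: number): void {
  db.prepare(
    `INSERT OR REPLACE INTO cache (key, value, expires_at, last_accessed)
     VALUES (?, ?, ?, ?)`,
  ).run(key, serialize(value), Date.now() + ttlMs, Date.now());
}

export function get(key: string): unknown {
  const row = db
    .prepare(`SELECT value, expires_at FROM cache WHERE key = ?`)
    .get(key) as { value: Buffer; expires_at: number } | undefined;
  if (!row || row.expires_at < Date.now()) return undefined; // TTL check
  db.prepare(`UPDATE cache SET last_accessed = ? WHERE key = ?`) // LRU touch
    .run(Date.now(), key);
  return deserialize(row.value);
}

// Auto-cleanup: drop expired rows, then trim to the most recently used rows.
export function cleanup(maxItems: number): void {
  db.prepare(`DELETE FROM cache WHERE expires_at < ?`).run(Date.now());
  db.prepare(
    `DELETE FROM cache WHERE key NOT IN (
       SELECT key FROM cache ORDER BY last_accessed DESC LIMIT ?)`,
  ).run(maxItems);
}
```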
Here are some ideas to experiment with (they need proper benchmarks first):
- is it better to use raw keys or to hash them into a shorter (and maybe integer) version? For example with `@node-rs/xxhash` or `cyrb53` (see the first sketch after this list)
- serialization is done with `v8.serialize` by default. How does it compare to the alternatives?
- there is an option to enable compression. By default it will use `node:zlib`, because it doesn't require additional dependencies (see the second sketch after this list). On the other hand, there are more interesting ways to do it:
  - instead of compressing each value separately, we could compress the whole database
  - there are more modern compression algorithms
  - there is no need to change the default, because it can be configured with the `compress`/`decompress` options
- Maybe support Bun and Deno, like in great.db
- Maybe support caching promises (see the third sketch after this list):
  - until the promise resolves, the cache would return the same promise from `flru`
  - as soon as the promise resolves, the value would go to the main cache
  - if the process terminates before the promise resolves, it won't be stored in the main (persistent) cache
- the cache is supposed to reset when any of these change: `compress`, `decompress`, `serialize`, `deserialize`
  - or store them as part of the key, so one can use several versions at the same time (see the last sketch after this list)
- test all new options: `compress`, `decompress`, `serialize`, `deserialize`, `readonly`
- maybe store `created_at` for items
- maybe drop `withMeta`?
- write the "usage" section of the documentation
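First sketch — key hashing. This uses the public-domain cyrb53 function mentioned above; whether hashed (and possibly integer) keys actually beat raw TEXT keys is exactly what needs benchmarking:

```ts
// cyrb53 (by bryc, public domain): a fast 53-bit string hash.
function cyrb53(str: string, seed = 0): number {
  let h1 = 0xdeadbeef ^ seed;
  let h2 = 0x41c6ce57 ^ seed;
  for (let i = 0; i < str.length; i++) {
    const ch = str.charCodeAt(i);
    h1 = Math.imul(h1 ^ ch, 2654435761);
    h2 = Math.imul(h2 ^ ch, 1597334677);
  }
  h1 = Math.imul(h1 ^ (h1 >>> 16), 2246822507) ^ Math.imul(h2 ^ (h2 >>> 13), 3266489909);
  h2 = Math.imul(h2 ^ (h2 >>> 16), 2246822507) ^ Math.imul(h1 ^ (h1 >>> 13), 3266489909);
  return 4294967296 * (2097151 & h2) + (h1 >>> 0);
}

// The 53-bit result fits into an SQLite INTEGER column, which may index
// faster than long TEXT keys (a hypothesis, not a measured result).
const key = cyrb53(JSON.stringify({ title: 'Hello', width: 1200 }));
```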
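Second sketch — the serialization and compression pipeline. The option names mirror the `compress`/`decompress`/`serialize`/`deserialize` options mentioned above, but the shapes and signatures here are my assumption, not the package's exact API:

```ts
import { serialize, deserialize } from 'node:v8';
import { gzipSync, gunzipSync } from 'node:zlib';

// Assumed option shape; the package's real signatures may differ.
type Codec = {
  serialize: (value: unknown) => Buffer;
  deserialize: (data: Buffer) => unknown;
  compress: (data: Buffer) => Buffer;
  decompress: (data: Buffer) => Buffer;
};

const defaults: Codec = {
  serialize: (value) => serialize(value), // handles Buffers, Dates, Maps, …
  deserialize: (data) => deserialize(data),
  compress: (data) => gzipSync(data),     // node:zlib — no extra dependency
  decompress: (data) => gunzipSync(data),
};

// value -> BLOB on write, BLOB -> value on read.
const encode = (value: unknown, c: Codec = defaults): Buffer =>
  c.compress(c.serialize(value));
const decode = (blob: Buffer, c: Codec = defaults): unknown =>
  c.deserialize(c.decompress(blob));

// A different codec (for instance Brotli, also built into node:zlib) can be
// swapped in through compress/decompress without changing the default.
```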
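Third sketch — caching promises. A plain Map stands in for `flru` here, and the wiring is my assumption of how the idea could work, not something the package implements:

```ts
// Pending promises live only in memory; resolved values graduate to the
// main (persistent) cache. A Map stands in for flru in this sketch.
const pending = new Map<string, Promise<unknown>>();

type PersistentCache = {
  get(key: string): unknown;
  set(key: string, value: unknown): void;
};

function getOrCompute(
  key: string,
  compute: () => Promise<unknown>,
  cache: PersistentCache,
): Promise<unknown> {
  const cached = cache.get(key);
  if (cached !== undefined) return Promise.resolve(cached);

  // Until the promise resolves, every caller gets the same in-flight promise.
  const inFlight = pending.get(key);
  if (inFlight) return inFlight;

  const promise = compute()
    .then((value) => {
      cache.set(key, value); // resolved value goes to the main cache
      return value;
    })
    .finally(() => pending.delete(key));
  pending.set(key, promise);
  // If the process terminates before resolution, nothing reaches the
  // persistent cache — exactly the behavior described above.
  return promise;
}
```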
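Last sketch — resetting (or namespacing) the cache when `compress`/`decompress`/`serialize`/`deserialize` change. Fingerprinting the functions' source with `Function.prototype.toString` is my assumption of one possible approach:

```ts
import { createHash } from 'node:crypto';

// Hypothetical: fingerprint the configured functions so the cache can detect
// that compress/decompress/serialize/deserialize changed between runs.
function configVersion(
  fns: Record<string, ((...args: any[]) => any) | undefined>,
): string {
  const src = Object.entries(fns)
    .map(([name, fn]) => `${name}=${fn ? fn.toString() : 'default'}`)
    .sort()
    .join(';');
  return createHash('sha256').update(src).digest('hex').slice(0, 12);
}

// Either clear the table when the stored tag differs from the current one,
// or prefix every key with the tag so several versions can coexist.
const version = configVersion({ compress: undefined, serialize: undefined });
const namespacedKey = (key: string) => `${version}:${key}`;
```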