Carpenter is a remote-client data orchestration layer based on these principles:
- Data consumers should get the data for a given request profile no matter who initiated it
- Multiple requests to the same profile are merged into a single stream
- Recent requests for the same data are mirrored directly off the local cache unless explicitly re-polled
- Update actions to a collection cause all cached streams on that collection to be re-polled
This system leans on the observer pattern. The relevant technologies are RxDB, RxJS, and fetch.
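To make the observer-pattern foundation concrete, here is a minimal sketch of a profile stream that multiple watchers can subscribe to. It stands in for the RxJS machinery the real system uses; the names (`ProfileStream`, `watch`, `next`) are illustrative, not Carpenter's actual API.

```typescript
// A stand-in for an RxJS-style stream: watchers subscribe to a profile's
// stream and are notified as status messages arrive.
type Message<T> = { status: 'request' | 'success' | 'error'; data?: T; error?: unknown };
type Watcher<T> = (m: Message<T>) => void;

class ProfileStream<T> {
  private watchers = new Set<Watcher<T>>();

  // Subscribe; returns an unsubscribe handle, like an RxJS Subscription.
  watch(fn: Watcher<T>): { unsubscribe: () => void } {
    this.watchers.add(fn);
    return { unsubscribe: () => this.watchers.delete(fn) };
  }

  // Push a status message to every active watcher.
  next(m: Message<T>): void {
    for (const fn of this.watchers) fn(m);
  }
}

// Two consumers of the same stream both see each message until they unsubscribe.
const stream = new ProfileStream<string[]>();
const seen: string[] = [];
const sub1 = stream.watch((m) => seen.push(`a:${m.status}`));
stream.watch((m) => seen.push(`b:${m.status}`));
stream.next({ status: 'request' });
stream.next({ status: 'success', data: ['alice'] });
sub1.unsubscribe();
stream.next({ status: 'success', data: ['bob'] }); // only watcher b sees this
```

Everything downstream (caching, merging, re-polling) is built on this subscribe/notify shape.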
The goals are:
- fast use of cached data
- global updates as data is fetched from the network
- minimal redundant fetches
- easily tracked retrieval state
- automatic synergy between input and output streams
A request profile defines a desire for data from a remote system. In practical terms it is a union of:
- url
- method
- params
This means "Get users from the user base who are on my friends list" or "Get my shopping cart" or "Get brands that have been tagged 'on sale'". It implies:
- a given data source (url)
- a given retrieval technique (fetch usually)
- possibly a list of params (body)
- potentially other specifics (headers)
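A profile like the one above can be modeled as a plain value with a stable key derived from its identifying fields; two requests with the same url, method, and params produce the same key, which is what lets the system merge them and share cache entries. The shape and key function below are illustrative assumptions, not Carpenter's exact API.

```typescript
// A request profile as a plain value.
type RequestProfile = {
  url: string;                       // a given data source
  method: string;                    // a given retrieval technique ('GET', 'POST', ...)
  params?: Record<string, unknown>;  // possibly a list of params (body)
  headers?: Record<string, string>;  // potentially other specifics
};

// Derive a stable key; sort param keys so property order does not change it.
function profileKey(p: RequestProfile): string {
  const params = Object.entries(p.params ?? {}).sort(([a], [b]) => a.localeCompare(b));
  return JSON.stringify([p.method.toUpperCase(), p.url, params]);
}

// "Get users from the user base who are on my friends list" as a profile.
const friends = (uid: string): RequestProfile => ({
  url: '/users/friends',
  method: 'GET',
  params: { uid },
});
```

Deriving the key from content rather than identity is what allows two unrelated components to land on the same cached stream.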
Input profiles write into the system; their results are not cached, and their watchers terminate upon response.
Output profiles (generally but not always, "get" methods) have a cache lifespan and update automatically when the collection gets input.
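The input/output split can be sketched as follows: watching an output profile registers a live stream with the collection, and any input action against that collection re-polls every registered stream. The names and shapes here (`Collection`, `watchOutput`, `input`) are assumptions for illustration only.

```typescript
type Repoll = () => void;

class Collection {
  private streams = new Set<Repoll>();
  constructor(private fetcher: () => number) {}

  // Watching an output profile registers a re-poll hook with the collection.
  watchOutput(onData: (n: number) => void): { unsubscribe: () => void } {
    const repoll = () => onData(this.fetcher());
    this.streams.add(repoll);
    repoll(); // initial retrieval
    return { unsubscribe: () => this.streams.delete(repoll) };
  }

  // An input profile writes to the collection, then re-polls all cached
  // streams; input results themselves are not cached and leave no watcher.
  input(write: () => void): void {
    write();
    for (const repoll of this.streams) repoll();
  }
}

// Example: a write pushes a fresh value to the existing watcher automatically.
let serverValue = 1;
const users = new Collection(() => serverValue);
const seen: number[] = [];
users.watchOutput((n) => seen.push(n));   // sees 1 immediately
users.input(() => { serverValue = 2; });  // watcher re-polled, sees 2
```

This is the "automatic synergy" between input and output streams: output watchers never ask to be refreshed; inputs refresh them.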
When you start watching an output profile, you either piggyback on existing cached data or trigger a retrieval action. That action streams through a transactional meta-table, and you get notifications as the fetching function retrieves data and puts it into a collection.
Stored data is returned immediately if it is inside its maxCache lifetime, and even if it is not: you get the cached data immediately, and a fresh dataset is pulled to ensure it has not been updated on the server.
Watches with refresh active will automatically refresh the data set while the subscription is active.
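The cache read path described above can be sketched as a small function: cached data is always returned immediately, and a network refresh is triggered only when the entry has outlived maxCache (or nothing is cached at all). The field names here are illustrative, not Carpenter's real schema.

```typescript
type CacheEntry<T> = { data: T; fetchedAt: number };

function readThroughCache<T>(
  entry: CacheEntry<T> | undefined,
  maxCacheMs: number,
  now: number,
  refetch: () => void,
): T | undefined {
  if (!entry) {
    refetch();              // nothing cached: a retrieval action is required
    return undefined;
  }
  const fresh = now - entry.fetchedAt < maxCacheMs;
  if (!fresh) refetch();    // stale: serve cached data, revalidate in background
  return entry.data;        // cached data is always returned immediately
}

// A fresh read serves the cache silently; a stale read serves it and refetches.
let fetches = 0;
const refetch = () => { fetches += 1; };
const entry = { data: ['alice', 'bob'], fetchedAt: 0 };
const freshRead = readThroughCache(entry, 60_000, 30_000, refetch); // within maxCache
const staleRead = readThroughCache(entry, 60_000, 90_000, refetch); // past maxCache
```

A watch with refresh active would simply call `refetch` on an interval for as long as the subscription lives.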
The advantage of this system is that only the data-streaming model concerns itself with asynchronous behavior. You define a stream to watch and receive updates as it executes; fresh data in the cache is provided immediately.
const [friends, setFriends] = useState([]);
const [showSpinner, setShowSpinner] = useState(false);
useEffect(() => {
  const ctrl = new AbortController();
  const sub = dataManager.watch('users', 'friends', (message) => {
    if (!message) return;
    const { status, error, data } = message;
    if (ctrl.signal.aborted) {
      sub.then((s) => s.unsubscribe());
      return;
    }
    setShowSpinner(status === STATUS_REQUEST);
    if (status === STATUS_SUCCESS) {
      setFriends(data);
    } else if (status === STATUS_ERROR) {
      toast('error retrieving Friends', {..});
    }
  }, { uid: loggedInUser.uid });
  return () => {
    ctrl.abort();
    sub.then((s) => s.unsubscribe());
  };
}, [loggedInUser]);
Note that there is no need for direct async code in the effect; all async and stream effects happen inside the data-management cycle. The exception is the value that watch returns, which is a promise that resolves to an RxJS subscription.
The data manager controls data reflected back from the database. New data is kept in the application until it is transported back to the server. If you wish, you can create discrete collections for new data to keep it in a shared space with localized fetchers, then transport it into remote requests when it is ready; otherwise, keep temporary records in transient state such as Formik or hooks until they are ready for export.
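A discrete collection for not-yet-saved records might look like the following sketch: drafts live in a local collection and are drained into input requests when they are ready for transport. The shapes here (`Draft`, `transport`) are assumptions, not Carpenter's API.

```typescript
type Draft = { id: string; body: Record<string, unknown> };
type OutboundRequest = { url: string; method: 'POST'; params: Record<string, unknown> };

class DraftCollection {
  private drafts = new Map<string, Draft>();

  // Create or edit a draft record locally; nothing is sent yet.
  upsert(d: Draft): void {
    this.drafts.set(d.id, d);
  }

  // Drain all drafts into remote input requests when ready for transport.
  transport(url: string): OutboundRequest[] {
    const out = [...this.drafts.values()].map((d) => ({
      url,
      method: 'POST' as const,
      params: d.body,
    }));
    this.drafts.clear();
    return out;
  }
}

// Edits accumulate in place; transport emits one request per surviving draft.
const drafts = new DraftCollection();
drafts.upsert({ id: 'd1', body: { name: 'alice' } });
drafts.upsert({ id: 'd1', body: { name: 'alice smith' } }); // edit, same record
const requests = drafts.transport('/users');
```

The same collection machinery that backs output profiles can host these drafts, which is what keeps them visible to localized fetchers before they ever reach the network.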