The Content Service is a layer between a source of data, such as a CMS, and the places where the data is consumed. It both adds functionality to a CMS, such as WordPress, and drastically increases performance. It is designed to be CMS agnostic.
Most CMSes out there, headless or conventional, need to accommodate lots of features. However those features are implemented, the end result is always a compromise. In some cases a CMS might be super fast but less flexible, or very modular and developer friendly but at the cost of a good editor experience.
The Content Service was originally created to add functionality to WordPress, such as having a draft and a live version without the need for double installations. It also serves all content from memory, making it much, much faster than traditional CMSes.
- 2024-12-18: 3.0.0 - Added meta column.
- 2024-10-09: 2.2.10 - Added a new query where you can do a shallow check if certain keys/permalinks exist without returning the whole resource.
- 2023-05-16: 2.1.0 - Reworked cache to use Redis instead of in memory. To run the tests, a local Redis server is required for now.
- 2023-05-16: 2.0.18 - Added option to disable useSaveTree as a boolean so we can deactivate the whole hierarchy.
The Content Service normally has two kinds of clients:
- A source that pushes data into it, such as a CMS
- One or more consumers of the data
The source needs to be configurable somehow to push data to the Content Service. In the case of WordPress, we use hooks to push data every time a post, page, or anything else has been saved in WordPress. We push the data from the source to a target in the Content Service. A normal setup has a `draft` and a `live` target. `draft` is always (or should be) in sync with the source and `live` is only in sync with `draft` when someone wants to publish something to the public site.
A consumer is typically an API or a web site. It fetches the data from the Content Service and renders it as it needs. A consumer will normally never access the data directly from the source; it will always get the data from the Content Service.
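As a rough sketch, a push from the source and a fetch from a consumer could look something like this (the service only exposes GraphQL). The endpoint URL, the `saveResource` mutation, the `resource` query and their fields are assumptions for illustration, not the actual schema:

```typescript
// Illustrative only: endpoint, operation names and fields are hypothetical.
const CONTENT_SERVICE_URL = 'http://localhost:4000/graphql'; // assumed endpoint

// The source (e.g. a WordPress save hook) pushes a resource into the "draft" target.
async function pushToDraft(resource: Record<string, unknown>): Promise<void> {
  await fetch(CONTENT_SERVICE_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      query: `mutation Save($target: String!, $resource: JSON!) {
        saveResource(target: $target, resource: $resource) { key }
      }`,
      variables: { target: 'draft', resource },
    }),
  });
}

// A consumer (an API or a web site) reads the published version from the "live" target.
async function fetchLiveByKey(key: string): Promise<unknown> {
  const response = await fetch(CONTENT_SERVICE_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      query: `query Get($target: String!, $key: String!) {
        resource(target: $target, key: $key) { key externalId content }
      }`,
      variables: { target: 'live', key },
    }),
  });
  const { data } = await response.json();
  return data.resource;
}
```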
The Content Service follows these ideas:
- It's NOT developed for a single specific project. On the contrary, it can be used by many different projects.
- It's NOT aware of what data it holds. This means that no feature can be written specifically for some content (for example, nothing should exist that assumes the data in a resource contains `vc_content` or a property called `blocks`).
- Everything is loaded in memory and kept there. If something changes, the cache will adapt (currently we have one place where we read from the DB, but that will be removed soon).
- It is quite stupid. It saves each `resource` as one row in the DB, and fetches that resource based on its `key` or `externalId`. It has a couple of helper functions, but not much.
THIS IS NOT TRUE YET: In its current state, the Content Service can only be used as an NPM module. No docker image exists.
Get it from Docker Hub at `24hrservice/rawb-content-service-next:latest`, or install it as an NPM module:

```
npm install @24hr/rawb-content-service --save
```
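How the module is started inside a host application depends on its actual exports; the sketch below is purely hypothetical. The `createContentService` factory and its options are invented names, only meant to show the idea of one instance serving several targets:

```typescript
// Hypothetical usage sketch; `createContentService` and its options are invented names,
// not the real API of @24hr/rawb-content-service.
import { createContentService } from '@24hr/rawb-content-service';

async function main(): Promise<void> {
  const service = await createContentService({
    database: process.env.DATABASE_URL,  // where resources are persisted, one row per resource
    redis: process.env.REDIS_URL,        // used for caching and event emitting
    targets: ['draft', 'live'],          // as many targets as needed
  });

  await service.listen(4000);            // exposes the GraphQL endpoint
}

main().catch(console.error);
```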
- It only uses GraphQL.
- It doesn't need two instances, one for `draft` and one for `live`.
- It's an NPM module.
- You can have as many targets as needed (in 1.0 there was no concept of "targets"; we had `draft` and `live` as hardcoded targets).
- The code is cleaner. Many old things have been removed, such as a fallback language, huge files and dependencies on things that should be something else.
- No direct connection to an Elasticsearch service.
- No filters that are aware of the data each resource holds.
Every entry that is pushed to the Content Service is called a `resource`. It consists of the following parameters:

- a `key` (required), mostly used for the slug, so that a site can quickly fetch the data for a page.
- an `externalId` (required), referencing the id of the resource, so that the key of a resource can be changed.
- a `parentId` (optional), a reference to an externalId so we can create a hierarchy.
- a `type` (required), used to categorize content.
- a `date` (optional), used to sort by date. Its value is dictated by whatever the `source` sees as the correct date to be used.
- a `host` (optional), to save where the data comes from (used in rare cases to access the data from the source).
- `userInfo` (required), for audit logs.
- `content` (required), the content itself, the main body. This is the content of the page; everything else is metadata.
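As an illustration, a resource pushed from a source could look roughly like this. The values are made up, and the shape of `content` and `userInfo` is whatever the source decides to send:

```typescript
// Example resource; values are made up, and the shape of `content` and `userInfo`
// is up to the source.
const resource = {
  key: '/blog/hello-world',         // the slug a site uses to fetch the page
  externalId: 'wp-post-123',        // stable id from the source, survives key changes
  parentId: 'wp-page-7',            // optional, builds the hierarchy
  type: 'post',                     // categorizes the content
  date: '2024-12-18T09:00:00Z',     // optional, whatever the source considers the right date
  host: 'https://cms.example.com',  // optional, where the data came from
  userInfo: { id: 42, name: 'editor' },
  content: { title: 'Hello world', body: 'The main body of the page.' },
};
```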
To begin development, do the following:
```
cd ./dev
docker-compose up
```

This will start a database (MariaDB), a Redis instance (for event emitting) and a Node.js project that runs the NPM module.
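The Redis instance is there for event emitting. As a rough illustration of that pattern (the channel name and message shape are made up, not what the service actually uses), a save could be broadcast over Redis pub/sub like this:

```typescript
// Illustration of Redis pub/sub for event emitting; channel name and payload are made up.
import Redis from 'ioredis';

const redisUrl = process.env.REDIS_URL ?? 'redis://localhost:6379';
const publisher = new Redis(redisUrl);
const subscriber = new Redis(redisUrl);

// The service could publish an event whenever a resource is saved to a target.
async function emitResourceSaved(target: string, key: string): Promise<void> {
  await publisher.publish(
    'content-service:events',
    JSON.stringify({ event: 'saved', target, key }),
  );
}

// A consumer could listen for those events, e.g. to invalidate its own cache.
async function listenForChanges(): Promise<void> {
  await subscriber.subscribe('content-service:events');
  subscriber.on('message', (_channel, message) => {
    console.log('resource changed:', JSON.parse(message));
  });
}
```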
- Create a new or an updated WordPress plugin to interact with the Content Service. The current WP plugin is NOT compatible with this.
These are some of the features we plan to add:
- Feature flags - A list of flags with options that can be used to filter out objects in the content of a resource.
- Protected Resources - A way of making a resource not deletable.
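The following PostgreSQL snippet adds the `meta` column (see the 3.0.0 entry in the history above) to every table in the `public` schema except `contexts`: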
```sql
DO $$
DECLARE
    tbl_name text;
BEGIN
    FOR tbl_name IN
        SELECT table_name
        FROM information_schema.tables
        WHERE table_schema = 'public'   -- Adjust schema if needed
          AND table_name != 'contexts'  -- Exclude the "contexts" table
    LOOP
        EXECUTE format('ALTER TABLE %I ADD COLUMN IF NOT EXISTS meta jsonb;', tbl_name);
    END LOOP;
END $$;
```