Heavily influenced by Laravel and Express.
This framework handles at least double the concurrency of Express, and up to 5 times more connections, thanks to the underlying native uWebSockets HTTP module. Implicitly, this also means more speed.
Note: The performance cost of using dependency injection is huge: up to 2/3 of the time per request is spent on DI for simple requests. On heavier requests, the relative overhead should shrink.
Important when upgrading to version >= 0.6.0: the Express backend supports a smaller set of functionality.
Important when upgrading to version >= 0.8.0: the Express backend has no support for streaming.
Important when upgrading to version >= 0.9.0: the Express backend has been removed. Use node/uws as the backend framework.
- Node v20 or newer.
- Run `yarn` to install dependencies.
- Run `yarn download` to download the dependent binaries for your platform.
- Run `yarn dev` to start serving.
If a binary is missing, download it manually from here: https://github.com/uNetworking/uWebSockets.js/tree/binaries
Example: wget https://github.com/uNetworking/uWebSockets.js/blob/binaries/uws_linux_x64_83.node?raw=true
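The files in that branch follow the pattern uws_&lt;platform&gt;_&lt;arch&gt;_&lt;node-ABI&gt;.node (in the example above, 83 is the Node ABI version). A small sketch to derive the right filename for the current machine:

```javascript
// Derive the uWS binary name for this machine, following the naming
// convention used in the uNetworking binaries branch:
// uws_<platform>_<arch>_<node ABI>.node
const file = `uws_${process.platform}_${process.arch}_${process.versions.modules}.node`;
const url = `https://github.com/uNetworking/uWebSockets.js/blob/binaries/${file}?raw=true`;

console.log(file); // e.g. uws_linux_x64_115.node, depending on your Node version
console.log(url);
```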
- Modular (Everything written as modules)
- Built-in WAF (DOS, DDOS, CSRF, IP blacklist and more)
- Super fast high performance node framework (aims to be as fast as nginx or faster)
- Middlewares (supports Express middlewares)
- Services (from Laravel)
- Dependency injection (Awilix)
- Controllers (domain controllers for you to quickly setup new endpoints)
- Views (Swig templates)
- Built-in rate limiter
- Built-in router (static and regex matching)
- Built-in session management
- Supports PHP as a runtime (.php files can be served through php-fpm)
- Supports both Express and uWS as HTTP server (the Express backend has been removed as of 0.9.0)
ORM: https://vincit.github.io/objection.js/guide/getting-started.html
Register your routes in config/routes.js
- For parameterized routes, use regex and double backslashes (escape all backslashes like '\' => '\\')
- Static routes are always matched before regex routes (except for the catch-all).
// config/routes.js (the require paths below are illustrative)
const TestController = require('../controllers/TestController');
const BooksController = require('../controllers/BooksController');
const FileController = require('../controllers/FileController');
const ExternalRuntimeController = require('../controllers/ExternalRuntimeController');

module.exports = {
  'GET /test/dbquery': {
    controller: TestController,
    method: 'dbQuery',
  },
  'GET /api/v1/books/:id': {
    controller: BooksController,
    method: 'show',
    parameters: { id: '\\d+' },
  },
  'GET /:directory/:file': {
    controller: FileController,
    method: 'download',
    parameters: { directory: ['\/*\\w*\/', '*'], file: '\\w+\\.\\w+' },
  },
  'GET /:phpFile': {
    controller: ExternalRuntimeController,
    method: 'execPhp',
    parameters: { phpFile: ['.*.php'] },
  },
};
Test it with this:
$ curl -X POST -H "Content-Type: application/x-www-form-urlencoded" -d "formKey=valueform" --cookie "USER_TOKEN=Yes" http://localhost:3005/index.php?test=form
Services are singleton classes. They can be dependency-injected into all controller methods by their registered name/key.
- Register your service in config/services.js
- Inside the controllers you don't have a lexical scope; there is no 'this'.
To get the controller, dependency-inject it with a destructuring parameter by typing:
{ controller }
or just controller
in your controller method parameters.
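As a rough illustration of that pattern (the names below are made up, not the framework's real internals), a controller method receives one object holding all registered dependencies and picks out what it needs via destructuring:

```javascript
// A service is a singleton: one instance, shared across requests.
class GreetingService {
  greet(name) { return `Hello, ${name}!`; }
}

// Stand-in for the DI container's registry (hypothetical shape).
const services = { greetingService: new GreetingService() };

// A controller method has no `this`; dependencies arrive as a single
// object that you pick apart with destructuring.
const HomeController = {
  index({ greetingService }) {
    return greetingService.greet('world');
  },
};

// The framework would resolve the dependencies and invoke the method:
console.log(HomeController.index(services)); // Hello, world!
```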
- Test PUT, POST, DELETE
- Test POST Json
- Test GET multipart form
- Test POST File
- Test WAF
- Test Rate limiter
- Streaming responses
- Error responses must be serialized according to the Accept header.
- Do performance tweaking again, because it's 200k vs 44k RPS when the request is returned early, before reaching the kernel.
- As far as I know, scoped DI accounts for a lot of that.
- The router probably accounts for some as well.
- Do profiling
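A minimal sketch of what the Accept-header item above could mean in practice (formatError is a hypothetical helper, not part of the framework):

```javascript
// Pick an error body format based on the request's Accept header.
function formatError(acceptHeader, err) {
  const accept = (acceptHeader || '').toLowerCase();
  if (accept.includes('application/json')) {
    return { contentType: 'application/json', body: JSON.stringify({ error: err.message }) };
  }
  if (accept.includes('text/html')) {
    return { contentType: 'text/html', body: `<h1>Error</h1><p>${err.message}</p>` };
  }
  // Fall back to plain text for everything else.
  return { contentType: 'text/plain', body: err.message };
}

console.log(formatError('application/json', new Error('Not found')).body);
// {"error":"Not found"}
```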
Managed to reach half the speed of Nginx serving the same content. But it is much faster to use this Node server directly than to go through Nginx first. Using this as a gateway for php-fpm is 2x slower than Nginx -> fpm. There are some screenshots of the benchmarks; add them to this repo.
Update 31.03.2023:
After testing this on my 16 core AMD 7950X I got some interesting results.
node-works (0.5.4, router with regex, physical, auto workers a 32 threads)
Bombarding http://platonpc6:8000 with 10000000 request(s) using 100 connection(s)
10000000 / 10000000 [==================================================================================================================================================================================================] 100.00% 241416/s 41s
Done!
Statistics Avg Stdev Max
Reqs/sec 241909.04 11952.28 272448.09
Latency 411.13us 135.26us 36.87ms
HTTP codes:
1xx - 0, 2xx - 10000000, 3xx - 0, 4xx - 0, 5xx - 0
others - 0
Throughput: 103.11MB/s
The interesting part is that the load distribution between node workers goes from 80% down to 15%, quite evenly spread.
It seems like almost no load for them. That indicates that my test machine (Ryzen 3 3100) couldn't create enough load with bombardier.
Oh, now that I think about it, bombardier does saturate the CPU at around 450%, but when I look at the throughput here,
I think I need to do a speed test between the machines.
Nginx (nginx version: nginx/1.22.1)
robert@platonpc5:~/go/bin$ ./bombardier -c 100 -n 10000000 http://platonpc6:80
Bombarding http://platonpc6:80 with 10000000 request(s) using 100 connection(s)
10000000 / 10000000 [==================================================================================================================================================================================================] 100.00% 208203/s 48s
Done!
Statistics Avg Stdev Max
Reqs/sec 208228.69 8590.20 214424.95
Latency 477.54us 172.77us 32.53ms
HTTP codes:
1xx - 0, 2xx - 10000000, 3xx - 0, 4xx - 0, 5xx - 0
others - 0
Throughput: 111.40MB/s
node-works (with express as backend, same settings as the other node-works)
./bombardier -c 100 -n 10000000 http://platonpc6:8000
Bombarding http://platonpc6:8000 with 10000000 request(s) using 100 connection(s)
10000000 / 10000000 [================================================================================================================================================================================================] 100.00% 122761/s 1m21s
Done!
Statistics Avg Stdev Max
Reqs/sec 123025.80 15415.93 167395.90
Latency 810.69us 1.00ms 117.40ms
HTTP codes:
1xx - 0, 2xx - 10000000, 3xx - 0, 4xx - 0, 5xx - 0
others - 0
Throughput: 65.82MB/s
So NOW I have done another test, with less data in the output: actually only a single dot ".".
node-works (uws, auto, 32 threads)
Bombarding http://platonpc6:8000/loadtest with 10000000 request(s) using 100 connection(s)
10000000 / 10000000 [==================================================================================================================================================================================================] 100.00% 247334/s 40s
Done!
Statistics Avg Stdev Max
Reqs/sec 248071.48 12875.44 267839.63
Latency 400.93us 123.37us 33.66ms
HTTP codes:
1xx - 0, 2xx - 10000000, 3xx - 0, 4xx - 0, 5xx - 0
others - 0
Throughput: 44.47MB/s
But it still doesn't generate much load on the server. The bottleneck must still be the bombardier tool.
I'm going to start two bombardiers (./bombardier -c 100 -n 10000000 http://platonpc6:8000/loadtest) at the same time.
That is weird. I got the same result on both: approximately the same total, split 50% - 50%. I am struggling to find the bottleneck here.
I'm going to test from another computer at the same time. It is about 30% slower than this one, so I will benchmark it first.
Okay, so this new computer (platonpc4) is terribly slow, OR the network in between slows it down. It only generates around 11k more RPS.
It helped anyway, because I ran the test from the Ryzen 3 3100 at the same time as well, and it got the same result.
So that means it's not the server, not node-works, that is the bottleneck. It actually looks like it is the network, but between the Ryzen 3 3100 and the server there is only one TP-Link Gbit switch.
Why 100 connections? Above 30, the number of connections doesn't seem to matter; throughput caps at the same value.
Now I tested with both platonpc4 (i5 3570K) and platonpc5 (Ryzen 3):
the i5 generated roughly 95k and the Ryzen approximately 202k, so roughly sub-300k in total against nginx.
Going to test again against node-works; nginx has no trouble delivering this, creating almost no load at all.
Okay, so now this is interesting. Testing against nginx from platonpc4 alone gives me 180k RPS on 1000 connections.
Nah, that was wrong. It didn't respond with the real response (the "."); it responded with an error: "Too many open files".
It is still peaking at around 200k.
Testing against node-works now:
platonpc4 comes out with around 224k on 500 connections
platonpc5 comes out with around 243k on 500 connections
Together they should give roughly 500k. They don't. It ends at 153k on both while running together.
node-works doesn't saturate, but there is a lot of softirq activity.
robert@platonpc5:~/go/bin$ ./bombardier -c 500 -n 10000000 http://192.168.2.192:8000/loadtest
Bombarding http://192.168.2.192:8000/loadtest with 10000000 request(s) using 500 connection(s)
10000000 / 10000000 [=================================================================================================================================================================================================] 100.00% 154207/s 1m4s
Done!
Statistics Avg Stdev Max
Reqs/sec 154403.33 23670.92 266861.49
Latency 3.23ms 9.44ms 1.48s
HTTP codes:
1xx - 0, 2xx - 10000000, 3xx - 0, 4xx - 0, 5xx - 0
others - 0
Throughput: 28.28MB/s
robert@PlatonPC4 ~/tmp $ ./bombardier-linux-amd64 -c 500 -n 10000000 http://192.168.2.192:8000/loadtest
Bombarding http://192.168.2.192:8000/loadtest with 10000000 request(s) using 500 connection(s)
10000000 / 10000000 [=================================================================================================================================================================================================] 100.00% 152257/s 1m5s
Done!
Statistics Avg Stdev Max
Reqs/sec 152507.64 22731.04 247298.64
Latency 3.27ms 9.14ms 1.69s
HTTP codes:
1xx - 0, 2xx - 10000000, 3xx - 0, 4xx - 0, 5xx - 0
others - 0
Throughput: 27.92MB/s
bwm-ng shows 97MB/s; I think it's being network-capped somewhere, and that means around 300k RPS.
Looking at the CPU load, it looks like it should be possible to get node-works up to 1 million RPS, if the network, kernel limits etc. play ball!
But currently it's not CPU-limited, at least not on the 7950X.