Unicorns in Node, or: When does the cluster module matter?

devops modcloth node.js ruby taskrabbit 
2012-02-01


At work the other day, an engineer most familiar with Ruby asked, "What is the equivalent of Unicorn in Node?" Unicorn is a great pre-forking server for Ruby apps (Sinatra, Rails, etc.) which implements a parent-child cluster of single-threaded workers to share requests. However, as Node can handle more than one request concurrently within a single process, the metaphor gets a little strange.


This question becomes important when deploying a production app: it's most cost-effective to use 100% of your resources. That means, in both Node and Unicorn, you want a "child" for each CPU you have, assuming you don't run out of RAM. However, you might want even more if your application spends significant time waiting on another server (perhaps a database). For example, say you have a website which spends half of its time doing CPU-bound tasks (like rendering HTML) and the other half fetching information from the database. If you have a 4-core server, you would want ~8 workers to take maximum advantage of the processors you have (1). In Ruby/Rails, this would mean that at most you could handle 8 simultaneous requests, but what about Node?
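
To make that arithmetic concrete, here is a back-of-the-envelope sketch (my own illustration, not from the original post); the cpuFraction value is an assumption about your workload:

var os = require("os");

var cores = os.cpus().length; // e.g. 4 on the server described above
var cpuFraction = 0.5;        // assumption: half of each request is CPU-bound

// While one worker waits on the database, another can use the CPU,
// so you can oversubscribe by roughly 1 / cpuFraction.
var desiredWorkers = Math.round(cores / cpuFraction); // 4 / 0.5 = 8

console.log("suggested workers: " + desiredWorkers);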

Once again, the question gets a little strange. We know that our Ruby app is single-threaded and handles a single request at a time. This means that no matter which ORM or framework we use, only one request can be in progress per worker at a time (2). In an equivalent application, Node will still take half the time per request to render those HTML templates, but we have the opportunity to stack requests while the CPU would otherwise sit idle waiting for the database. We can have any number of requests "pending" while waiting for the database (3). This, theoretically, can greatly increase your application's throughput.

Unicorn, when running, handles a request like this: port -> master -> child. The request is always received on the same port or socket, and the master process routes it to a child. The master process can spawn or kill children, reboot them, and otherwise manage them. We can build something like this easily in Node with the cluster module; however, only some types of applications, mainly those that are CPU-bound (or that rely on an external service which is CPU-bound), will benefit from a cluster in practice.

Take the simple example webserver from the nodejs.org website:

var http = require("http");

http
  .createServer(function (req, res) {
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end("Hello World\n");
  })
  .listen(1337, "127.0.0.1");

console.log("Server running at http://127.0.0.1:1337/");

This server isn’t doing much, and can probably handle thousands of connections at a time.

Lets "simulate" a slower request that takes 1 second:

var http = require("http");

var handleRequest = function (req, res) {
  setTimeout(function () {
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end("Hello World\n");
  }, 1000);
};

http
  .createServer(function (req, res) {
    handleRequest(req, res);
  })
  .listen(1337, "127.0.0.1");

console.log("Server running at http://127.0.0.1:1337/");

Now, even though a client will see the response after 1 second, the server still isn't doing much. Pending requests are held in memory, so more RAM will be used, but there isn't any real computation happening. I'll bet this server can still handle thousands of simultaneous requests. You can test this out: make 10 requests with curl and time them (time curl localhost:1337) as fast as you can, and you will notice that every one of them takes only about 1 second to complete.
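
If you'd rather script that test than juggle terminals, here's a minimal Node client (a sketch of my own, assuming the server above is running on port 1337) which fires 10 concurrent requests and reports the total elapsed time:

var http = require("http");

var total = 10;
var completed = 0;
var start = Date.now();

for (var i = 0; i < total; i++) {
  http.get("http://127.0.0.1:1337/", function (res) {
    res.resume(); // drain the response body
    res.on("end", function () {
      completed++;
      if (completed === total) {
        console.log(total + " requests finished in " + (Date.now() - start) + " ms");
      }
    });
  });
}

Against the setTimeout server, all 10 requests should finish in roughly 1 second total; that changes dramatically with the blocking version below.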

What we need to do now is simulate a "blocking" sleep, which means we need to occupy the CPU the Node process is using and block its event loop. Keep in mind this is a terrible idea and should never be done in real life:

var http = require("http");

var handleRequest = function (req, res) {
  var startTime = new Date().getTime();
  var sleepDuration = 1000;
  while (startTime + sleepDuration > new Date().getTime()) {}
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("Hello World\n");
};

http
  .createServer(function (req, res) {
    handleRequest(req, res);
  })
  .listen(1337, "127.0.0.1");

console.log("Server running at http://127.0.0.1:1337/");

Note how we use a loop which doesn't exit until enough time has passed, to "block" the CPU. Now we know for a fact that our application will only handle one request per second (and use a whole CPU core to do it). If you now make 10 requests with curl and time them, you will notice that the requests stack and take longer. The first request will take 1 second as before, but if you start the requests all at the same time, the second request will take 2 seconds, the third 3, and so on. Now we have an application which will benefit from cluster! If we can launch 10 parallel instances of this server at once, we can go back to handling 10 requests in 1 second (4).

var http = require("http");
var cluster = require("cluster");
var desiredWorkers = 10;

var log = function (message) {
  console.log("[" + process.pid + "] " + message);
};

var handleRequest = function (req, res) {
  var startTime = new Date().getTime();
  var sleepDuration = 1000;
  while (startTime + sleepDuration > new Date().getTime()) {}
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("Hello World\n");
  log("sent message in " + (new Date().getTime() - startTime) + " ms");
};

var masterSetup = function () {
  for (var i = 0; i < desiredWorkers; i++) {
    cluster.fork();
  }
  log("cluster booted!");
};

var childSetup = function () {
  http
    .createServer(function (req, res) {
      handleRequest(req, res);
    })
    .listen(1337, "127.0.0.1");

  log("Server running at http://127.0.0.1:1337/");
};

if (cluster.isMaster) {
  masterSetup();
} else {
  childSetup();
}

Note how we added a logger which includes the pid of the process printing each message; with it we can see that there are now 11 distinct processes running: 10 children and one parent.
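
One thing the example above doesn't handle is a worker dying. A minimal self-healing sketch, assuming the cluster module's "exit" event (some older Node releases from around this era called it "death"), could be added to masterSetup():

// Restart any worker that dies, so the pool stays at full strength.
cluster.on("exit", function (worker) {
  log("worker " + worker.process.pid + " exited; forking a replacement");
  cluster.fork();
});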

As a bonus, you can see how to instrument a cluster implementation like this with signals, so you can tell the master process to add or remove children, reboot them, and so on, by looking at the actionhero cluster module.
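
As a rough idea of what that signal handling might look like, here is my own sketch, borrowing Unicorn's TTIN/TTOU convention (actionhero's actual implementation may differ); this would run in the master process:

// Grow or shrink the worker pool on demand via POSIX signals.
process.on("SIGTTIN", function () {
  log("SIGTTIN: adding a worker");
  cluster.fork();
});

process.on("SIGTTOU", function () {
  var ids = Object.keys(cluster.workers);
  if (ids.length > 0) {
    log("SIGTTOU: removing worker " + ids[0]);
    cluster.workers[ids[0]].disconnect();
  }
});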

There are always limits to how many requests an application can handle at a time, even with the simplest example here. However, the benefits of parallelism increase substantially with CPU-bound workloads.

Footnotes:

  1. In reality you probably want 7 workers on a 4-core system, to leave some headroom
  2. Sinatra and Rails follow this; EventMachine is the exception among Ruby frameworks
  3. It’s very possible to overload a database with too many simultaneous requests, like any server. This is why most database adapters (and some ORMs) maintain a connection pool which limits the number of queries made at a time; subsequent requests are queued in-app.
  4. This really only works if you have a computer with 10+ CPUs, but you get the idea.