Figure 2: The Apache Prefork Multi-processing Module.
Otherwise, as asynchronous tasks execute on separate worker threads, the event loop thread continues serving other pending requests, invoking callbacks, or spawning additional asynchronous tasks on worker threads as needed. When an asynchronous task completes, control returns to the application developer via a callback, which is placed at the end of the event queue as a new event and waits its turn to be handled by the Node.js event loop.
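As a minimal sketch of this flow (the file path and port are arbitrary illustrations, not part of the benchmark application described here), a Node.js request handler can hand a file read to the worker pool and return immediately; the supplied callback runs on the event loop thread only once the read has completed and its event reaches the front of the queue:

const http = require('http');
const fs = require('fs');

http.createServer((req, res) => {
  // fs.readFile is asynchronous: the read is performed off the event
  // loop thread, and the handler returns to the event loop immediately.
  fs.readFile('/tmp/example.html', (err, data) => {
    // This callback is queued as a new event once the read finishes.
    if (err) {
      res.writeHead(500);
      res.end('read failed');
      return;
    }
    res.writeHead(200, { 'Content-Type': 'text/html' });
    res.end(data);
  });
  // Control has already returned here; the event loop is free to
  // accept and dispatch other pending requests.
}).listen(8080);

Because nothing in the handler blocks, a single event loop thread can keep many such requests in flight concurrently.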
For performance reasons, it is critical that developers avoid introducing computationally heavy code, or other synchronous tasks, into request handlers or callbacks executed on the event loop thread, as these block execution of the event loop. Indeed, programmers must explicitly opt in to the synchronous version of a given operation, as any synchronous implementation will block the event loop until its task completes (Tilkov and Vinoski, 2010).
However, provided the asynchronous functionality packaged with Node.js is employed and the event loop remains unburdened by synchronous operations, the work of processing each individual item on the event queue remains relatively light, consisting chiefly of spawning an asynchronous task on a worker thread.
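To make the distinction concrete (again a sketch, with an arbitrary file path), Node.js exposes both variants of most I/O operations side by side, and the blocking one must be selected explicitly by name:

const fs = require('fs');

// Synchronous variant: chosen explicitly by name, it blocks the event
// loop thread until the entire file has been read.
const blocking = fs.readFileSync('/tmp/example.html');

// Asynchronous variant: the read is farmed out to a worker thread and
// the event loop continues; the callback is queued when the read ends.
fs.readFile('/tmp/example.html', (err, nonBlocking) => {
  if (!err) {
    console.log(nonBlocking.length);
  }
});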
As Figure 2 demonstrates, Apache employs a
configurable multi-process, multi-threaded solution
to concurrency. This modern Apache architecture
represents a departure from the original Apache
engine, which was designed at a time when
concurrency was not a severely limiting factor for
most use cases. The contemporary Apache maintains
a thread pool (in varying patterns by version,
configuration, and host operating system) for
handling request service operations in parallel
(Menasce, 2003). However, as mentioned above, while these request-handling operations are carried out in true parallel fashion on multiple threads, and possibly even on multiple processors, each individual thread blocks in its own right while waiting on outstanding operations that can take long stretches of time, such as database or file system access. Moreover, the construction, tear-down, and context switching of these threads carry a cost of their own. We aim to mitigate the computational overhead of Apache/PHP's management of many parallel threads, each of which may individually block while handling a single user request. We compare against the aforementioned Node.js event queue model, in which a single thread services multiple requests and their responses through the same queue, farming subtasks out to a pool of worker threads.
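As an illustration of this configurability (a sketch only; the values below are illustrative, not the settings used in our experiments), Apache's worker MPM sizes its process and thread pools through directives such as the following:

# Worker MPM: a fixed pool of child processes, each running a
# configurable number of threads that each serve one request at a time.
<IfModule mpm_worker_module>
    StartServers             3
    MinSpareThreads         25
    MaxSpareThreads         75
    ThreadsPerChild         25
    MaxRequestWorkers      150
    MaxConnectionsPerChild   0
</IfModule>

The prefork MPM shown in Figure 2 offers analogous directives, but dedicates one single-threaded child process to each connection instead.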
The first efforts to provide web benchmarks
include SPECweb and SPECjbb from the Standard
Performance Evaluation Corporation. Although
SPECjbb, specific to Java servers and JVMs,
continues to be supported with a 2015 release
(SPECjbb, 2015), SPECweb's collection of server-side applications implemented in JSP, PHP, and ASPX was retired in 2012 (SPECweb, 2009).
Although these benchmarks have been used to
compare web server implementations before,
including power characteristics (Economou et al.,
2006) and maximum concurrent users (Nahum,
Barzilai and Kandlur, 2002), the underlying
architecture of these servers has changed in recent
years, and the applications are no longer representative of the feature set or the asynchronous APIs provided by modern web services. Studies using WebBench or other traffic generators to load-test Apache and other web servers have measured
server performance when accessed by tens of
simultaneous clients (Haddad, 2001), rather than the
hundreds or more expected on contemporary
services.
Prior work has also investigated the performance of JavaScript virtual machines on different mobile platforms (Charland and Leroux, 2011), or compared the benchmarks offered by JavaScript engines with the execution of JavaScript on the websites of popular web applications (Ratanaworabhan, 2010). Both of these studies limit performance analysis of JavaScript to client-side execution, measured either coarsely over the application's duration or through fine-grained events recorded by instrumented client browsers.
Although these studies compare different browsers
and/or different client hardware, they do not
demonstrate the scaling advantages of JavaScript
when executed on the server side.
One recent study in particular measures the
server-side execution of Node.js in comparison to
Apache/PHP and Nginx, another open source web
server competitor (Chaniotis, Kyriakou and Tselikas,