Each JavaScript framework takes quite a different
approach to fulfilling its goals. This has an impact on
several relevant issues (R. Gómez, 2013), including
the overall performance of the application.
Nevertheless, performance is seldom taken into account
when deciding which framework to use. Many
studies have shown that performance has a profound
impact on end users: an application that takes a long
time to load or a browser that apparently freezes are
visible effects of performance problems.
A recent study (S. Casteleyn, I. Garrigo,
J. Mazón, 2014) showed the need for more research
on several relevant aspects of RIAs, such as security,
offline functionality, and performance. This last
aspect is becoming more and more important with
the rapid increase in the use of mobile devices. The
reason is that mobile devices are, in general, less
powerful than a desktop or a laptop computer and, of
course, the performance of an application depends
on the machine on which it runs.
We developed a test environment that facilitates
the application of performance tests to any RIA or
SPA, and we used it to conduct a wide range of tests
on different implementations of the same single page
application. Each implementation was a version of
the application built using a different JavaScript
framework. This strategy allowed us, on the one hand,
to validate our test environment in a real
scenario and, on the other hand, to learn more about
the performance behavior of the most popular
frameworks.
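As a simple illustration of the kind of measurement involved (a minimal sketch, not the actual harness described in Section 3), the browser's standard Navigation Timing API can capture load and render times:

    // Minimal sketch: capture page timings with the Navigation Timing API.
    window.addEventListener('load', function () {
      var t = performance.timing;
      var domReady = t.domContentLoadedEventStart - t.navigationStart;
      var fullLoad = t.loadEventStart - t.navigationStart;
      console.log('DOM ready: ' + domReady + ' ms, full load: ' + fullLoad + ' ms');
    });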
The rest of the paper is organized as follows.
Section 2 puts our research in perspective by
reviewing the relevant related work. Section 3
describes our test environment and the tools used in
the research. Section 4 presents the results obtained
after using our test tools to measure the relative
performance of the most popular JavaScript
frameworks. Finally, Section 5 provides the
conclusions of this work.
2 RELATED WORK
In Gizas et al. (A.B. Gizas, S.P. Christodoulou and
T.S. Papatheodorou, 2012), the relevance of carefully
choosing a JavaScript framework is highlighted. The
research evaluates the quality, validation, and
performance of different JavaScript
libraries/frameworks (ExtJS, Dojo, jQuery,
MooTools, Prototype and YUI). Quality is
expressed in terms of size, complexity, and
maintainability metrics. The performance tests
measure the execution time of each framework with
the SlickSpeed Selectors test framework. The tests
are designed to evaluate the internals of the libraries
themselves and do not involve the application built
on top of them. Additionally, none of the libraries
evaluated in Gizas' work provides an architectural
context for developing an application; they only help
to access the DOM and to communicate through
AJAX calls.
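For illustration, a SlickSpeed-style micro-benchmark essentially reduces to timing repeated selector queries. The following simplified sketch shows the idea; the engine and selector arguments are placeholders, not part of the actual SlickSpeed code:

    // Simplified SlickSpeed-style timing of a library's selector engine.
    // `engine` is a placeholder for jQuery, MooTools' $$, etc.
    function timeSelector(engine, selector, iterations) {
      var start = Date.now();
      for (var i = 0; i < iterations; i++) {
        engine(selector); // resolve the CSS selector against the DOM
      }
      return Date.now() - start; // total elapsed time in ms
    }
    // Hypothetical usage: timeSelector(jQuery, 'ul li:nth-child(odd)', 1000);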
Graziotin et al. (D. Graziotin and P.
Abrahamsson, 2013) extend Gizas' work by proposing
a design for a comparative analysis framework
of JavaScript MV* frameworks to help practitioners
select a suitable one. The authors interviewed
front-end developers in order to get first-hand
opinions on the aspects most relevant to easing
their work.
Vicencio et al. (S. Vicencio, J. Navon, 2014)
carried out more recent research on the
relative performance of client-side frameworks.
They focused on the time it takes the application to
load and render the page in the browser, and the
time it takes the application to execute a given action
on the user interface. The results compared several
well-known frameworks (Backbone, Ember,
Angular, Knockout) using the TodoMVC
application as a basis. They did not build or
implement any test tools; instead, they used the
existing tools Webpagetest (P. Meenan, 2014) and
PhantomJS (A. Hidayat, 2014).
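To give an idea of the kind of measurement PhantomJS enables, the following minimal script (the URL is illustrative, not from the cited work) times how long a page takes to load in the headless browser:

    // Minimal PhantomJS sketch: time a page load (URL is illustrative).
    var page = require('webpage').create();
    var start = Date.now();
    page.open('http://localhost/todomvc/', function (status) {
      if (status === 'success') {
        console.log('Page loaded in ' + (Date.now() - start) + ' ms');
      } else {
        console.log('Failed to load page');
      }
      phantom.exit();
    });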
Petersson (J. Petersson, 2012) compares a tiny
framework called MinimaJS to Backbone and
Ember in a similar way as Vicencio (S. Vicencio, J.
Navon, 2014). In his work, Runeberg (J. Runeberg,
2013) performs a comparative study of
Backbone and Angular. One of the aspects reviewed
in the study covers performance tests with
PhantomJS for page automation.
There are a few comparative studies on
frameworks for the mobile web as well. Heitkötter
et al. (H. Heitkötter, T. A. Majchrzak, B. Ruland, T.
Webber, 2013) elaborate a set of evaluation criteria
for converting web applications into apps for the
different mobile operating systems. This is a future
step in the investigation of performance, since it
may be a good way to reduce the code that mobile
devices constantly download.
Nolen (D. Nolen, 2013), on the other hand,
created a library named Om, which takes a different
approach when it comes to data handling. He
implemented the same TodoMVC application using
this library and showed some benchmarks
comparing this implementation with the TodoMVC
Backbone.js one (A. Osmani, 2013). The test
includes creating, toggling, and deleting 200 to-do
entries. The differences in the time it takes each