Metrics to Support IT Service Maturity Models
A Systematic Mapping Study
Bianca Trinkenreich¹, Gleison Santos¹ and Monalessa Perini Barcellos²
¹Department of Computing, Universidade Federal do Estado do Rio de Janeiro (UNIRIO), Rio de Janeiro, Brazil
²Department of Computing, Universidade Federal do Espírito Santo (UFES), Vitória, Brazil
Keywords: Measurement, Key Performance Indicator, IT Service Quality, Maturity Models, Systematic Mapping Study.
Abstract: Background: Maturity models for IT services, such as CMMI-SVC and MR-MPS-SV, require the identification of critical business processes and the definition of relevant metrics to support decision-making, but there is no clear direction or strict suggestion about which those processes and metrics should be. Aims: We aim to identify adequate metrics to be used by organizations deploying IT service maturity models and the relationship between those metrics and the processes of IT service maturity models or standards. The research questions are: (i) Which metrics are being suggested for IT service quality improvement projects? (ii) How do they relate to the processes of IT service maturity models? Method: We defined and executed a systematic mapping review protocol. A specialist on systematic mapping reviews and IT service maturity models evaluated the protocol and its results. Results: Of 114 relevant studies, 13 addressed the research questions. All of them presented quality metrics, but none presented tools or techniques for metrics identification. Conclusions: We identified 133 metrics, 80 of them related to specific process areas of service maturity models. Although this is a broad result, not all aspects of the models were considered in this study.
1 INTRODUCTION
Service is about delivering value to customers by facilitating the results they want to achieve without them taking ownership of costs and risks (Davenport, 2013). IT service management is a set of specialized organizational capabilities for providing value to customers through services. Its practice has grown through the adoption of a service-oriented approach to IT management to support applications, infrastructure and processes (TSO, 2011).
Guidance on how to develop and improve IT service maturity practices is a key factor to improve service performance and customer satisfaction (Forrester et al., 2010). The CMMI-SVC (Capability Maturity Model Integration for Services) (Forrester et al., 2010) and MR-MPS-SV (Reference Model for IT Services Improvement) (Softex, 2012a) models were created to address this need. These models require appropriate metrics to be identified in order to monitor the various processes executed to deliver services to customers. Thus, the selection of processes to be measured must be aligned with organizational goals so that the measurement results can deliver relevant information for decision-making and business support. However, there is no clear direction or strict suggestion about which business processes and metrics should be considered.
This paper describes the main results of a study carried out to identify, in the literature, metrics used to monitor IT service quality that are appropriate to meet the requirements of IT service maturity models. The paper is structured as follows: background on quality and IT service measurement, and similar studies (Section 2); systematic mapping planning (Section 3); systematic mapping results (Section 4); and final considerations (Section 5).
2 BACKGROUND
Service quality is an abstract concept due to the nature of the term "service", which is intangible and non-homogeneous, and whose consumption and production are inseparable (Parasuraman et al., 1985). In order to be able to offer quality, the supplier
supplier must continually assess the way service is
being provided and what the customer expects in the
future. A customer will be unsatisfied with IT service providers who occasionally exceed expectations but at other times disappoint. Providing consistent quality is important, but it is also one of the most difficult aspects of the service industry (ISO/IEC, 2011).
As production and consumption of many services
are inseparable, quality is delivered during service
delivery, making customer reporting of high
relevance for quality evaluation (Parasuraman et al.,
1985). Quality assessments are not based only on service outputs; they also involve evaluation of the service delivery process (Parasuraman et al., 1985).
Although methods such as GQM (Solingen and Berghout, 1999) support measurement planning, it is still necessary to properly select the metrics to be collected and analysed for decision-making. However, such methods do not suggest what to measure; they only reinforce that metrics should be aligned with the organization's goals. Thus, we must define which metrics and indicators are suitable for monitoring service quality and customer satisfaction.
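As a minimal illustration of the kind of alignment GQM asks for (the goal, questions and metric names below are hypothetical examples, not taken from the reviewed papers), a measurement goal can be decomposed into questions and candidate metrics:

```python
# Minimal GQM-style sketch: a hypothetical measurement goal decomposed into
# questions and candidate metrics (all names are illustrative only).
gqm_plan = {
    "goal": "Improve incident resolution from the service desk viewpoint",
    "questions": {
        "Are incidents resolved within the agreed time?": [
            "Percentage of incidents resolved within SLA",
            "Mean time to restore system",
        ],
        "Is the incident workload under control?": [
            "Amount of incidents opened per month",
            "Rate of recurrent incidents",
        ],
    },
}

# GQM derives metrics from goals, but it does not say which metrics to pick;
# the mapping study aims to provide such candidates.
for question, metrics in gqm_plan["questions"].items():
    print(question)
    for metric in metrics:
        print("  -", metric)
```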
2.1 IT Service Maturity Models
Maturity models focus on improving organizations' processes, based on the assumption that product or system quality is highly influenced by the quality of the process used to develop and maintain it. Through essential elements of effective processes and an evolutionary path for improvement, maturity models provide guidelines on how to design processes, as an application of principles to meet the endless cycle of process improvement (Forrester et al., 2010).
CMMI-SVC (Forrester et al., 2010) is a maturity model based on CMMI concepts and practices and on other standards and service models such as ITIL (TSO, 2011), ISO/IEC 20000 (ISO/IEC, 2011), COBIT (ISACA, 2012) and ITSCMM (Niessink et al., 2005). CMMI-SVC was created for service providers and covers the steps necessary to create, deliver and manage services. Maturity levels are used to describe a recommended evolutionary path for organizations that aim to improve service delivery processes. Table 1 shows the 5 maturity levels, starting from level 1, where processes are ad hoc or chaotic. The initial levels consider the creation and description of processes and work plans, whereas the higher levels address processes that are quantitatively and continuously controlled and improved. Of the 24 CMMI-SVC process areas, only 7 are specific to CMMI-SVC (in italics in Table 1).
The MPS.BR Program (Kalinowski et al., 2014) is an initiative funded by the Brazilian government that seeks to make it possible for micro, small and medium-sized Brazilian companies to invest in process improvement and software quality. Since 2004, more than 600 companies have been assessed against the reference model for software process improvement, MR-MPS-SW (Softex, 2012b) (source: www.softex.br/mps.br). In 2012, the reference model for IT services improvement (MR-MPS-SV) (Softex, 2012a) was created to provide a maturity model more suitable for micro, small and medium-sized Brazilian companies, while remaining compatible with internationally accepted quality standards and taking advantage of existing expertise in other standards and maturity models.
Table 1: CMMI-SVC process areas by maturity level (Forrester et al., 2010).

Level 5: Organizational Performance Management (OPM), Causal Analysis & Resolution (CAR)
Level 4: Organizational Process Performance (OPP), Quantitative Work Management (QWM)
Level 3: Organizational Process Focus (OPF), Organizational Process Definition (OPD), Organizational Training (OT), Integrated Work Management (IWM), Decision Analysis & Resolution (DAR), Risk Management (RSKM), Incident Resolution & Prevention (IRP), Strategic Service Management (STSM), Service System Transition (SST), Capacity & Availability Management (CAM), Service System Development (SSD), Service Continuity (SCON)
Level 2: Requirements Management (REQM), Work Planning (WP), Work Monitoring & Control (WMC), Supplier Agreement Management (SAM), Measurement & Analysis (MA), Process & Product Quality Assurance (PPQA), Service Delivery (SD), Configuration Management (CM)
Table 2: MR-MPS-SV processes (Softex, 2012a).

Level A: (no new processes are added)
Level B: (no new processes are added)
Level C: Decision Management (GDE), Risk Management (GRI), Capacity Management (GCA), Service Continuity and Availability (GCD), Release Management (GLI), Information Security Management (GSI), Service Reports (RLS)
Level D: Service System Development (DSS), Budget and Accounting Services (OCS)
Level E: Process Establishment (DFP), Process Assessment and Improvement (AMP), Change Management (GMU), Human Resources Management (GRH)
Level F: Measurement (MED), Acquisition (AQU), Configuration Management (GCO), Quality Assurance (GQA), Work Portfolio Management (GPT), Problem Management (GPL)
Level G: Requirements Management (GRE), Work Management (GTR), Service Delivery (ETS), Incident Management (GIN), Service Level Management (GNS)
Table 2 depicts the 7 maturity levels of MR-MPS-SV (Softex, 2012a), from G to A (the highest), comprising 24 processes, of which 12 are based on the ISO/IEC 20000 service quality standard and therefore have no equivalent in MR-MPS-SW (shown in italics in Table 2).
The initial levels of both maturity models use measurement in a traditional way. At this point, metrics are generally collected and analysed by comparing planned and executed values, which allows corrective actions to be taken in future executions. At the highest maturity levels (CMMI-SVC levels 4 and 5, MR-MPS-SV levels A and B), in order to meet quantitative management requirements, measurement is associated with statistical process control techniques (Forrester et al., 2010; Softex, 2012a).
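To illustrate what associating measurement with statistical process control can mean in practice, the sketch below computes control limits for an individuals (XmR) chart over a hypothetical series of monthly incident counts; the data are invented and this is only one common SPC technique, not one prescribed by the models.

```python
# Sketch: XmR (individuals) control chart limits for a hypothetical
# monthly incident count series (illustrative data only).
monthly_incidents = [42, 38, 45, 40, 51, 39, 44, 47, 41, 43]

mean_x = sum(monthly_incidents) / len(monthly_incidents)
moving_ranges = [abs(a - b) for a, b in zip(monthly_incidents[1:], monthly_incidents)]
mean_mr = sum(moving_ranges) / len(moving_ranges)

# 2.66 is the usual XmR constant (3 / d2, with d2 = 1.128 for subgroups of size 2).
ucl = mean_x + 2.66 * mean_mr
lcl = max(0.0, mean_x - 2.66 * mean_mr)  # incident counts cannot be negative

print(f"Center line: {mean_x:.1f}, UCL: {ucl:.1f}, LCL: {lcl:.1f}")
for month, value in enumerate(monthly_incidents, start=1):
    status = "out of control" if value > ucl or value < lcl else "in control"
    print(f"Month {month:2d}: {value} ({status})")
```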
2.2 Similar and Related Studies
A challenge faced by organizations is selecting metrics that help them monitor quality aspects of the provided services, support service management improvement, and positively affect customers' quality perception. The identification of such metrics is not an easy task. Liu et al. (2011) present a case study about the IT service management framework of an ITIL-based company. The authors discuss key performance indicator (KPI) evaluation, service level agreements, improvements to this framework and IT service management processes (Liu et al., 2011). However, those KPIs are evaluated based only on the ITIL processes of a specific company. Lepmets et al. (2011) present a quality metrics framework for IT services derived from ITIL, ISO/IEC 20000 and SERVQUAL through studies conducted in industry. The framework was later extended through a systematic review (Lepmets et al., 2012; Lepmets et al., 2013), but no relationship between IT service quality metrics and the process areas of service maturity models is proposed.
3 SYSTEMATIC MAPPING PLANNING
Initially, we conducted an informal literature review about metrics and indicators for IT service quality and service maturity models in order to obtain knowledge about the IT service domain. We noticed the importance of having a comprehensive view of how to quantify and measure service quality in order to map metrics to be used by IT service maturity models and their process areas.
The goal of systematic mappings is to collect and evaluate evidence on a particular topic and also to obtain results that are less dependent on the researchers' own views, allowing research replication and results comparison (Kitchenham and Charters, 2007). The scope of the systematic mapping consists of identifying papers presenting metrics that could be used to assess IT service quality within the context of IT service maturity models. A protocol based on systematic reviews (Kitchenham and Charters, 2007) was used to guide the systematic mapping execution but, due to space limitations, it is not fully detailed here.
To assist the systematic mapping analysis, research questions were defined (see Table 3) and later used to generate the data collection form. Publication selection was done in three steps. First, execution of the search string and cataloguing of the results. Second, reading of titles and abstracts and application of the inclusion (IC) and exclusion (EC) criteria (Table 4) to the publications selected in the first step. Third, full-text reading of the publications selected in the second step and verification of whether they really meet the defined criteria.
Table 3: Research questions.

1. Which metrics are suitable for IT service quality improvement initiatives?
   1.1 What are the metrics' origins?
   1.2 Is there any evidence of practical applications for the metrics?
   1.3 What software tool for planning, collection, analysis and dissemination of metrics is described?
2. What is the relationship between the found metrics and IT service maturity model processes?
   2.1 Which quality improvement models for services are mentioned?
   2.2 Is there any technical reference used to identify metrics?
   2.3 Is there any Statistical Process Control application detailed for the metrics?
Table 4: Inclusion and exclusion criteria.

IC1: The publication's main contribution proposes or describes the use of indicators or metrics to assess quality and/or performance and/or maturity of IT services.
IC2: Techniques, methods, processes and/or tools related to the identification of metrics to assess quality and/or performance and/or maturity of IT services are addressed by the publication.
EC1: The publication is not derived from peer-reviewed conferences or journals.
EC2: The publication is a book chapter not subject to peer review (e.g., not originated from conference papers) or another non-scientific publication (such as a whitepaper).
The search string was: ("IT service" OR "IT services") AND (maturity OR quality OR performance OR qos) AND (itil OR cobit OR "ISO/IEC 20000" OR itsm OR cmmi-svc OR "CMMI for Services" OR mps-sv OR mr-mps-sv) AND (TITLE-ABS-KEY (measurement OR metric OR metrics OR measure OR measures OR measuring OR kpi OR "Key Performance Indicator")). The Scopus (www.scopus.com) search engine was selected due to its reliable and replicable results and because it indexes most of the control papers.
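For reproducibility, the search expression can be rebuilt from its term groups; the sketch below is only an assumption about how one might script this assembly and is not part of the original protocol.

```python
# Sketch: assembling the Scopus search expression from its term groups
# (the groups mirror the string reported above).
service_terms = ['"IT service"', '"IT services"']
quality_terms = ["maturity", "quality", "performance", "qos"]
framework_terms = ["itil", "cobit", '"ISO/IEC 20000"', "itsm", "cmmi-svc",
                   '"CMMI for Services"', "mps-sv", "mr-mps-sv"]
measurement_terms = ["measurement", "metric", "metrics", "measure", "measures",
                     "measuring", "kpi", '"Key Performance Indicator"']

def group(terms):
    return "(" + " OR ".join(terms) + ")"

search_string = " AND ".join([
    group(service_terms),
    group(quality_terms),
    group(framework_terms),
    "(TITLE-ABS-KEY " + group(measurement_terms) + ")",
])
print(search_string)
```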
An expert on maturity models, IT services and systematic mappings evaluated the protocol regarding: the ability of the search string to identify relevant papers, including the control papers; the scope and extent of the research questions in relation to the research objectives; the adequacy of the control papers; and the ability of the data collection form to identify the important aspects related to the systematic mapping objective.
4 SYSTEMATIC MAPPING EXECUTION
In August 2014, the search expression was executed on the Scopus search engine, returning 114 publications. The second step returned 45 publications. After the third step, 13 remained. We did not filter any venue while executing the search string. All returned publications are properly indexed by the Scopus engine.
Table 5 presents the selected publications; papers in bold are the control ones. It is worth noting the almost complete absence of venues related to Software Quality, Software Engineering or Experimental Software Engineering (except for PROFES, EuroSPI and QUATIC).
As a quality assurance procedure, a specialist analysed all selection steps and the application of the inclusion and exclusion criteria. Aiming to dispel doubts and avoid subjective judgments, the research questions (both primary and secondary) as well as the inclusion and exclusion criteria evolved during the study. The protocol described in Section 3 already reflects these decisions. We also evaluated the papers presented in Table 5 regarding their overall quality and soundness. Although this is partially accomplished through the application of both exclusion criteria (EC1 and EC2), which guarantee that all papers were peer-reviewed, we also critically read all the papers to assure that proper methodological aspects were applied.
Finally, the decision of whether or not to keep papers in the systematic mapping scope in each of the selection steps (Tables 3 and 4), as well as the data collected from the papers, was evaluated. After this step, the collected data was summarized. Metric similarities were analysed considering their name, description and formula. Most papers did not present descriptions and formulas; in those cases, we analysed only the metric names and defined a unique name to represent similar or identical metrics, consolidating them into a single metric.
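The consolidation of similarly named metrics can be approximated by simple string similarity over normalized names; the sketch below uses Python's standard difflib with an assumed similarity threshold and illustrative inputs, and is not the exact procedure applied in the study.

```python
# Sketch: grouping similarly named metrics under a single representative name
# (the 0.8 threshold and input names are assumptions for illustration).
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    return " ".join(name.lower().split())

def similar(a: str, b: str, threshold: float = 0.8) -> bool:
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold

def consolidate(metric_names):
    groups = []  # each group keeps the first-seen name as its representative
    for name in metric_names:
        for grp in groups:
            if similar(name, grp[0]):
                grp.append(name)
                break
        else:
            groups.append([name])
    return {grp[0]: grp for grp in groups}

raw_names = ["Mean time to restore system", "Mean Time To Restore  System",
             "Service availability", "Amount of IT service versions"]
for representative, members in consolidate(raw_names).items():
    print(representative, "<-", members)
```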
Table 5: Selected papers after complete reading.

#1: DSS Based IT Service Support Process Reengineering Using ITIL: A Case Study - Valverde, R., Malleswara, T. - Journal Intelligent Decision Technologies (2014)
#2: The Evaluation of the IT Service Quality Measurement Framework in Industry - Lepmets, M., Mesquida, A., Cater-Steel, A., Mas, A., Ras, E. - Global Institute of Flexible Systems Management (2014)
#3: Application Management Services Analytics - Li, W., Li, T., Liu, R., Yang, J., Lee, J. - Service Operations and Logistics, and Informatics International Conference
#4: Toward a model of effective monitoring of IT application development and maintenance suppliers in multisourced environments - Herz, T., Hamel, F., Uebernickel, F., Brenner, W. - International Journal of Accounting Information Systems (2013)
#5: Proposal of a new model for ITIL framework based on comparison with ISO/IEC 20000 standard - Tanovic, A., Orucevic, F. - World Scientific and Engineering Academy and Society (2012)
#6: Extending the IT Service Quality Measurement Framework through a Systematic Literature Review - Lepmets, M., Cater-Steel, A., Gacenga, F., Ras, E. - SRII Global Conference (2012)
#7: A Quality Measurement Framework for IT Services - Lepmets, M., Ras, E., Renault, A. - SRII Global Conference (2011)
#8: Measuring Service Solution Quality in Services Outsourcing Projects using Value Driver Tree Approach - Akkiraju, R., Zhou, R. - SRII Global Conference (2011)
#9: Case Study on IT Service Management Process Evaluation Framework Based on ITIL - Liu, M., Gao, A., Luo, W., Wan, J. - International Conference on Business Management and Electronic Information (2011)
#10: SLA Perspective in Security Management for Cloud Computing - Chaves, S., Westphall, C., Lamin, F. - International Conference on Networking and Services (2010)
#11: Business-impact analysis and simulation of critical incidents in IT service management - Bartolini, C., Stefanelli, C., Tortonesi, M. - International Symposium on Integrated Network Management (2009)
#12: Measurement of Service Effectiveness and Establishment of Baselines - Donko, D., Traljic, I. - World Scientific and Engineering Academy and Society (2009)
#13: The most applicable KPIs of Problem Management Process in Organizations - Sharifi, M., Ayat, M., Ibrahim, S., Sahibuddin, S. - International Journal of Simulation Systems, Science & Technology (2009)
MetricstoSupportITServiceMaturityModels-ASystematicMappingStudy
333
Table 6: Identified metrics. (Each entry lists the metrics followed by the related CMMI-SVC and MR-MPS-SV processes.)

Amount of incidents that had caused business impact due to performance issues [CMMI-SVC: CAM, IRP; MR-MPS-SV: GIN, GCA]
Percentage of exactness of capacity forecast; Amount of capacity adjustment cases; Amount of resolution hours due to capacity shortage cases; Amount of money for capacity reserves [CMMI-SVC: CAM; MR-MPS-SV: GCA]
Service availability [CMMI-SVC: CAM; MR-MPS-SV: GCD]
Amount of incidents caused by growth rate issues [CMMI-SVC: CAM, IRP, CM; MR-MPS-SV: GIN]
Response time for a change request; Successful/failed change requests; Changes not tested because of due date; Emergency/normal, rejected/accepted, major/minor, released/pending changes; Average interactions with the Change Management process [CMMI-SVC: CM; MR-MPS-SV: GMU]
Frequency of configuration updates; Percentage of configuration correctness; Mean time between versions [CMMI-SVC: CM; MR-MPS-SV: GCO]
Amount of IT service versions [CMMI-SVC: CM; MR-MPS-SV: GLI]
Amount of changes that had caused incidents and problems [CMMI-SVC: CM, IRP; MR-MPS-SV: GMU, GIN, GPL]
Amount of change requests after a transition to production (considering a certain period) [CMMI-SVC: CM, SST; MR-MPS-SV: GMU, GLI]
Amount of incidents caused by change requests [CMMI-SVC: CM, IRP; MR-MPS-SV: GMU, GIN]
Amount of avoided incidents per day; Mean time between incidents; Mean time to restore system; Amount of recurrent, escalated and redirected incidents; Average time to register an incident by phone and system; Average time to categorize, prioritize, start solving action, complete action, solve an incident; Amount of incidents per SLA met, application, period of day, month, support person and support level, resolution way (local/remote), status, priority; Average response time per support level; Percentage of correctness of incident description; Percentage of existence of service desk support script [CMMI-SVC: IRP; MR-MPS-SV: GIN]
Amount of time to find/solve root cause; Rate of closed/on-going problems; Rate of recurrent/new problems; Amount of time between issue start and problem open; Amount of problems solved by known errors; Average cost to solve a problem; Amount of problems per status, month, application, configuration item, with/without root cause, repeated/new, overdue/on time [CMMI-SVC: IRP; MR-MPS-SV: GPL]
Rate of problem number increase compared to incidents; Recurrent incidents with/without an associated problem record to investigate them [CMMI-SVC: IRP; MR-MPS-SV: GIN, GPL]
Rate of onshore vs. offshore allocated resources for projects; Amount of previous projects executed successfully for the same client; Rate of delivered projects with/without cost optimization [CMMI-SVC: IWM; MR-MPS-SV: GPT]
Frequency of organization policies update; Amount of CMMI maturity or capability level matches; Amount of process evaluations; Amount of identified weaknesses; Rate of improvement initiatives completed/pending; Number of cases where the process is being circumvented [CMMI-SVC: OPF; MR-MPS-SV: AMP]
Frequency and amount of hours of people training; Rate of employees who finished the training; Number of trainings per year [CMMI-SVC: OT; MR-MPS-SV: GRH]
Amount of systems maintenance correctness after training [CMMI-SVC: OT, SSD; MR-MPS-SV: GRH, DSS]
Amount of time, frequency and duration used for verification activities [CMMI-SVC: PPQA; MR-MPS-SV: GQA]
Amount of identified risks per severity, area, application, status; Average impact of risks; Rate of deviations from the expected real goals; Amount of reduced deviations; Frequency of backup execution; Amount of hours to execute backup routines [CMMI-SVC: RSKM; MR-MPS-SV: GRI]
Amount of identified issues during security tests [CMMI-SVC: RSKM; MR-MPS-SV: GRI, GSI]
Service outages caused by capacity and availability issues [CMMI-SVC: SCON, CAM; MR-MPS-SV: GCA, GCD]
Frequency of SLA monitoring; Grades of SLA satisfaction level; Amount of services covered by SLA and OLA; Amount of delivered services in accordance with SLA; Average time for SLA change request approval; Amount of fines paid because of SLA failures; Amount of SLAs under review; Number of identified contract breaches [CMMI-SVC: SAM; MR-MPS-SV: GNS]
MTBF (mean time between system failures); Business impact caused by IT service outages; Service interruption number and duration per month, application, configuration item; Business processes with/without continuity agreements; Number of disaster practices, shortcomings and gaps per month, application, configuration item; Number of implemented preventive metrics [CMMI-SVC: SCON; MR-MPS-SV: GCD]
Deployment duration; Release backouts; Automatic/manual release distribution; Failed/succeeded release component acceptance tests; New services released to production per application, month [CMMI-SVC: SST; MR-MPS-SV: GLI]
Grades received on user satisfaction about the received IT service; Support calls received/abandoned per day; Average support call time per day, month and person; Business impact caused by late service deliveries; Service request time per user, month, application; User complaint response time; Service requests on time/late, with correct/wrong description, completed/pending [CMMI-SVC: SD; MR-MPS-SV: ETS]
Retention rate of specific key employees [CMMI-SVC: WMC; MR-MPS-SV: GRH]
Projects delivered in/not in accordance with scope, time, resources and budget; Lessons learned by project; Projects per defined risk status [CMMI-SVC: WMC; MR-MPS-SV: GTR]
Amount of incidents caused by new releases transitioned to production [CMMI-SVC: SST, IRP; MR-MPS-SV: GLI, GIN]
Application defect density and complexity; Requirement defects found per project phase; Service documentation update frequency; Hours spent on rework, review, inspection and tests; Cost and defects per application function point; Correction time per effort, project phase, severity; Function points delivered by developer per day; Application components per business results; Time per application development phase; Failed/accepted acceptance tests; Reduced/increased time for maintenance; Planned/unplanned new services [CMMI-SVC: SSD; MR-MPS-SV: DSS]
ICEIS2015-17thInternationalConferenceonEnterpriseInformationSystems
334
4.1 Results
From the content of the 13 papers in the systematic mapping scope, it was possible to find more than 300 individual metrics. After aggregating metrics with equivalent meanings and removing metrics unrelated to any process area of the maturity models, either CMMI-SVC (Forrester et al., 2010) or MR-MPS-SV (Softex, 2012a), this number dropped to 133. Examples of metrics unrelated to any process area are: Financial (actual price paid for the received service), Service importance to business (utilization rate of IT service functions) and Climate (employees know how the provided service contributes to better performance). Answering the two main research questions, Table 6 presents the identified metrics and the processes to which they relate. Some metrics are related to more than one process area. We aggregated metrics after reading and understanding their intended use and their possible association with the objectives and goals of the service maturity models.
We classified around 24% of the metrics as related to incidents. This may indicate a trend of reducing IT service measurement to incidents, possibly leaving other areas without proper attention. Around 11% of the metrics were classified as related to Service System Development. In the software development version of the target maturity models, the implementation concept relates to software products and the coding of their components. In the service version, on the other hand, implementation relates to the configuration and delivery of all elements required to provide the service, whether or not software development is involved. Even so, a relevant number of metrics were related to coding performance and defects. As we searched for IT services, this shows a trend in academia and industry to consider software development and maintenance as services. Around 9% of the metrics were classified as related to Change Management, which in system development is also a relevant area concerning application maintenance and whose effects on other areas need to be studied further. Around 8% of the metrics were classified as related to Service Delivery, whose goals are to ensure that there are policies, guidelines and documented approaches for service delivery and operation. It ensures that all elements required for service provision (infrastructure, resources, etc.) are available and that the service supply system, automated or not, is ready for operation and receives periodic maintenance so that the agreed services keep being delivered continuously. Around 7% of the metrics were classified as related to Service Continuity, which can often be measured directly by monitoring applications, generating reliable results, and is usually one of the indicators that service providers need to meet in order to comply with the service level agreements in their contracts with customers.
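Several of the identified metrics, such as service availability and MTBF, have simple operational definitions; the sketch below computes them from outage records and checks availability against an assumed SLA target, as an illustration under assumed definitions rather than one prescribed by the selected papers.

```python
# Sketch: service availability and MTBF from outage records, checked against
# an assumed contractual availability target of 99.5% (illustrative data only).
from datetime import timedelta

period = timedelta(days=30)                                        # measurement window
outages = [timedelta(minutes=42), timedelta(hours=1, minutes=5)]   # recorded outages

downtime = sum(outages, timedelta())
availability = 100.0 * (period - downtime) / period
mtbf_hours = (period - downtime) / len(outages) / timedelta(hours=1)

sla_target = 99.5  # assumed SLA availability target (%)
print(f"Availability: {availability:.3f}% (target {sla_target}%)")
print(f"MTBF: {mtbf_hours:.1f} hours over {len(outages)} failures")
print("SLA met" if availability >= sla_target else "SLA breached")
```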
Figure 1: Top 5 CMMI-SVC areas with the most metrics found by the systematic mapping.
In the following, we briefly present some results related to the secondary questions of the systematic mapping. Regarding "What are the metrics' origins?", the literature is the most frequent source (papers #2, 5, 6, 7, 8, 10, 11, 12 and 13 in Table 5). In relation to "Is there any evidence of practical applications for the metrics?", we found that most of the papers indicate usage of the metrics in industry (#1, 2, 3, 4, 5, 8, 9 and 11 in Table 5), so we can say there is already some evaluation of their applicability in organizations. During the analysis of the publications, we looked for "What software tool for planning, collection, analysis and dissemination of metrics is described?". Some tools such as IBM AMS Analytics (#3 in Table 5) and two decision support systems (#1 and #11 in Table 5) were found. With regard to "Which quality improvement models for services are mentioned?", we found CMMI, ISO/IEC 15939, 20000 and 25020, VAL IT, SERVQUAL, PSM and GQM. We also looked into "Is there any technical reference used to identify metrics?" and, even though there are others, ITIL (TSO, 2011) is cited in all publications, showing the high relevance of this source in the IT service management field.
Finally, due to its relevance to the higher maturity levels, we sought to answer "Is there any Statistical Process Control application detailed for the metrics?". Only one example was found (#3 in Table 5), in which the authors present a system that performs advanced analyses to help manage operations. This system would be beneficial for predicting incident volume to support future resource requirements and service performance expectations, allowing fair team sizing without hurting SLAs.
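As a rough illustration of this kind of prediction (a naive moving-average forecast over invented data, not the analytics approach of paper #3), next month's incident volume can be estimated from recent history and turned into a team-sizing estimate under an assumed per-analyst capacity:

```python
# Sketch: naive moving-average forecast of next month's incident volume,
# used only to illustrate sizing a support team (all numbers are assumptions).
import math

monthly_incidents = [120, 135, 128, 142, 150, 147]   # illustrative history
window = 3

forecast = sum(monthly_incidents[-window:]) / window
incidents_per_analyst = 40                            # assumed monthly capacity per analyst
analysts_needed = math.ceil(forecast / incidents_per_analyst)

print(f"Forecast for next month: {forecast:.0f} incidents")
print(f"Analysts needed under the assumed capacity: {analysts_needed}")
```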
4.2 Threats to Validity
This systematic mapping has construct and conclusion threats to validity (Wohlin et al., 2012) that can influence its results.
Construct Threats: The service maturity models that are this study's focus (CMMI-SVC (Forrester et al., 2010) and MR-MPS-SV (Softex, 2012a)) are relatively recent (created in 2009 and 2012, respectively). Therefore, there is still little research about them in the literature. The search scope was therefore directed to the origins of these models, which are the formal sources of IT service quality. It was not possible to find metrics for all process areas of the maturity models, and the publications indicating metrics or indicators for IT service quality do not relate them to the process areas of IT service maturity models. Because of that, the authors interpreted the almost intrinsic relationship between the characteristics of the metrics and the various process areas of the IT service maturity models, considering their relevant aspects (Incident, Service Delivery, Capacity, Availability, Continuity, etc.), and proceeded with the association.
We could not consider only papers presenting experimental evidence about the usage of the proposed metrics. In order to minimize this threat, the authors recorded the level of evidence of practical application for the considered metrics in the general results analysis.
Due to its relevance and comprehensiveness, the Scopus database was chosen as the search source. However, Scopus did not index one of the four control papers. Thus, the paper "Extending the IT Service Quality Measurement Framework through a Systematic Literature Review", which is indexed only by Springer (link.springer.com), was added to the selected publications, aiming to reduce the impact of this Scopus limitation.
Conclusion Threats: After applying the inclusion and exclusion criteria, only 39% of the papers selected by the search string remained in the systematic mapping scope. The full text of some papers was not available for reading. To avoid premature elimination and reduce this threat, an email was sent to the papers' authors asking for the full texts. As a result, in the third step it was possible to access the full text of 84% of the selected papers.
The search period was limited to the last six years. This decision was made because the oldest control paper was from 2010 and the CMMI-SVC maturity model (Forrester et al., 2010) was created in 2009. With 2009 as the cut-off year, 114 papers were found. Moreover, this threat is minimized because one selected paper presents a list of papers selected by a systematic mapping describing IT service metrics, in which only one paper is prior to 2009. Therefore, we believe that the impact of limiting papers to 2009 onwards on this study's results is low.
4.3 Further Work and Remarks
By analyzing the identified metrics, we noticed that they could indicate cause-effect correlations between process areas. For instance, the metric "Amount of incidents caused by changes" considers the relation between the Incidents and Changes areas. This relation is starting to be studied in depth, as we can see in the last Business Process Intelligence Challenge 2014 (www.win.tue.nl/bpi/2014/challenge), which asked participants to propose a method to find the impact of changes on the Service Desk workload of a fictitious organization through process mining analysis.
This paper is part of ongoing research. Our next step is to evaluate the mapping study results and their usage in industry. The first case study (Trinkenreich and Santos, 2015) was performed in a large global company and included a quantitative study based on an experimental correlation test to understand the cause-effect relationship between Changes and Incidents, the identification of how IT service metrics are being used in this real organization, and whether the metrics had been found in the literature. Understanding cause-effect relationships between different areas can help organizations to improve service quality as a whole, instead of only measuring independent indicators of each area.
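A minimal sketch of such a correlation test, assuming SciPy is available and using invented monthly counts (this is not the exact procedure of the cited case study), is shown below.

```python
# Sketch: testing for an association between monthly change volume and
# incident volume using Spearman rank correlation (illustrative data only).
from scipy.stats import spearmanr

changes_per_month   = [30, 45, 28, 60, 52, 41, 38, 70, 33, 58]
incidents_per_month = [110, 140, 105, 180, 160, 130, 125, 200, 115, 170]

rho, p_value = spearmanr(changes_per_month, incidents_per_month)
print(f"Spearman rho = {rho:.2f}, p-value = {p_value:.4f}")
if p_value < 0.05:  # conventional significance level, an assumption here
    print("Monthly change and incident volumes appear to be associated.")
else:
    print("No statistically significant association was found.")
```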
We expect that a list of metrics like the one this paper provides can help organizations in metrics selection. They can use the list as a starting point and choose metrics according to the process area to be measured, speeding up the metrics selection activity.
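To illustrate how such a list could serve as a starting point (a small sketch whose entries mirror the structure of Table 6; the filtering function is hypothetical), metrics can be stored with their process-area tags and filtered by the area an organization intends to measure:

```python
# Sketch: a tiny metrics catalogue tagged with process areas, filtered by the
# area to be measured (entries follow the structure of Table 6 rows).
catalogue = [
    {"metric": "Service availability",
     "cmmi_svc": ["CAM"], "mr_mps_sv": ["GCD"]},
    {"metric": "Amount of incidents caused by change requests",
     "cmmi_svc": ["CM", "IRP"], "mr_mps_sv": ["GMU", "GIN"]},
    {"metric": "Mean time to restore system",
     "cmmi_svc": ["IRP"], "mr_mps_sv": ["GIN"]},
]

def metrics_for(area: str):
    """Return catalogue metrics related to a given CMMI-SVC or MR-MPS-SV process."""
    return [entry["metric"] for entry in catalogue
            if area in entry["cmmi_svc"] or area in entry["mr_mps_sv"]]

# e.g. metrics an organization measuring incident resolution could start from
print(metrics_for("IRP"))
```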
In the context of metrics selection, we understand that another limitation is the absence of information about how to collect and analyse the metrics. The selected papers do not present any information in this sense. Thus, in response to that limitation, we plan to create a metrics catalogue containing the metrics and their operational definitions, which guide, among other things, how to collect and analyse the metrics.
5 FINAL CONSIDERATIONS
This paper presented a mapping study that aimed to identify metrics used to monitor IT service quality that are suitable for service maturity models. Monitoring and Process Control (GP 2.8) is a CMMI-SVC generic practice (Forrester et al., 2010) that indicates 103 examples of service metrics for all process areas, of which only 37 are related to CMMI-SVC-specific process areas. The systematic mapping identified 133 service metrics, 80 of them suitable for CMMI-SVC-specific process areas. This result demonstrates that there are improvement opportunities in the metrics suggested by CMMI-SVC (Forrester et al., 2010).
As future work, we plan to extend the systematic mapping to other research databases. Moreover, we plan to detail how the metrics should be collected and analysed and how to maintain the association between organizational goals and metrics. We also plan to conduct other case studies in industry and to study correlations between process areas, aiming to get a deeper understanding of how one process affects another.
ACKNOWLEDGEMENTS
The authors would like to thank FAPERJ (project E-26/110.438/2014) and CNPq (Process Number 461777/2014-2) for the financial support granted.
REFERENCES
Davenport, T., 2013. Process Innovation: Reengineering
Work Through Information Technology. Harvard
Business Press.
Forrester, E., Buteau, B., Shrum, S., 2010. CMMI for Services: Guidelines for Superior Service. CMMI-SVC Version 1.3, 2nd Edition. SEI, Addison-Wesley.
ISACA, 2012. COBIT 5 – Control Objectives
Management Guidelines Maturity Models: A Business
Framework for the Governance and Management of
Enterprise IT. Information Systems Audit and Control
Association, USA.
ISO/IEC, 2011. ISO/IEC 20.000-1: Information
Technology – Service Management – Part 1: Service
management system requirements. International
Standard Organization/International Electrotechnical
Commission, Switzerland.
Kalinowski, M.,Weber, K. C., Franco, N., Barroso, E.,
Duarte, V., Zanetti, D., Santos, G., 2014. Results of 10
Years of Software Process Improvement in Brazil
Based on the MPS-SW Model. 9th Int. Conf. on the
Quality in Information and Communications
Technology (QUATIC), Guimarães, Portugal, 2014.
Kitchenham, B., Charters, S., 2007. Guidelines for performing systematic literature reviews in software engineering. Technical Report EBSE-2007-01, Keele University.
Lepmets, M., Ras, E., Renault, A., 2011. "A Quality Measurement Framework for IT Services", SRII Global Conference.
Lepmets, M., Cater-Steel, A., Gacenga, F., Ras, E., 2012.
“Extending the IT Service Quality Measurement
Framework through a Systematic Literature Review”,
SRII Global Conference.
Lepmets, M., Mesquida, A., Cater-Steel, A., Mas, A., Ras,
E., 2013. “The Evaluation of the IT Service Quality
Measurement Framework in Industry“, Global Journal
of Flexible Systems Management - Volume 15.
Liu, M., Zhiheng, G., Weiping, L., Jiangping, W., 2011. "Case Study on IT Service Management Process Evaluation Framework Based on ITIL". International Conference on Business Management and Electronic Information.
McGarry, J., Card, D., Jones, C., Layman, B., Clark, E., Dean, J., Hall, F., 2002. Practical Software Measurement: Objective Information for Decision Makers. Addison-Wesley.
Niessink, F., Clerc, V., Tijdink, T., Vliet, H., 2005. The IT
Service Capability Maturity Model - IT Service CMM,
version 1.0RC1.
Parasuraman, A., Zeithaml, V., Berry, L., 1985. A conceptual model of service quality and its implications for future research. Journal of Marketing, vol. 49, pp. 41-50.
Softex, 2012a. MPS.BR – Guia Geral MPS de Serviços.
Available at www.softex.br (Portuguese and Spanish)
Softex, 2012b, MPS.BR – Guia Geral MPS de Software.
Available at www.softex.br (Portuguese and Spanish).
Solingen, R., Berghout, E., 1999. The Goal Question Metric Method: A Practical Guide for Quality Improvement of Software Development. McGraw-Hill.
Trinkenreich, B., Santos, G., 2015 “Metrics to Support IT
Service Maturity Models – A Case Study”, 17th
International Conference on Enterprise Information
Systems (ICEIS), Barcelona, Spain.
TSO (The Stationery Office), 2011. An Introductory
Overview of ITIL. Available at www.tsoshop.co.uk.
Wohlin, C., Runeson, P., Höst, M., Regnell, B., Wesslén,
A., 2012 Experimentation in Software Engineering,
Springer, ISBN: 978-3642290435.
MetricstoSupportITServiceMaturityModels-ASystematicMappingStudy
337