Evidence Collection in Cloud Provider Chains
Thomas Rübsamen¹, Christoph Reich¹, Nathan Clarke² and Martin Knahl³
¹Institute for Cloud Computing and IT Security, Furtwangen University, Robert-Gerwig-Platz 1, Furtwangen, Germany
²Centre for Security, Communications and Network Research, Plymouth University, Portland Square, Plymouth, U.K.
³Furtwangen University, Robert-Gerwig-Platz 1, Furtwangen, Germany
Keywords: Cloud Computing, Audit, Federated Cloud, Security, Digital Evidence
Abstract:
With the increasing importance of cloud computing, compliance concerns come into the focus of businesses more often. Furthermore, businesses still consider security and privacy related issues to be the most prominent inhibitors for an even more widespread adoption of cloud computing services. Several frameworks try to address these concerns by building comprehensive guidelines for security controls for the use of cloud services. However, assurance of the correct and effective implementation of such controls is required by businesses to attenuate the loss of control that is inherently associated with using cloud services. Giving this kind of assurance is traditionally the task of audits and certification. Cloud auditing becomes increasingly challenging for the auditor the more complex the cloud service provision chain becomes. There are many examples of Software as a Service (SaaS) providers that no longer own dedicated hardware for operating their services, but rely solely on cloud providers at the lower layers, such as Platform as a Service (PaaS) or Infrastructure as a Service (IaaS) providers. The collection of data (evidence) for the assessment of policy compliance during a technical audit becomes more difficult the more complex the combination of cloud providers is. Nevertheless, collection at all participating providers is required to assess policy compliance across the whole chain. The main contribution of this paper is an analysis of potential ways of collecting evidence in an automated way across cloud provider boundaries to facilitate cloud audits. Furthermore, a way of integrating the most suitable approaches into the system for automated evidence collection and auditing is proposed.
1 INTRODUCTION
As cloud computing becomes more accepted by mainstream businesses and replaces more and more on-premise IT installations, compliance with regulation, industry best practices and customer requirements becomes increasingly important. The main inhibitor for an even more widespread adoption of cloud services remains the security and privacy concerns of cloud customers (Cloud Security Alliance, 2013). In Germany, a preference for cloud providers that fall under German jurisdiction and also run their own data centers in Germany, or at least inside the European Union, has recently been observed (Bitkom Research GmbH, 2015). This comes as no surprise when the privacy violations that have become known to the general population in recent years are considered (e.g., the NSA and Snowden revelations). A feasible way to assess and ensure compliance of cloud services regularly is by using audits. For any technical audit, information has to be collected in order to assess compliance. In our system, this automated process is called evidence collection. In our previous work on cloud auditing, the focus was put on automating the three major parts of an audit system: i) evidence collection and handling, ii) evaluation against machine-readable policies and iii) presentation of audit results (Rübsamen and Reich, 2013; Rübsamen et al., 2013; Rübsamen and Reich, 2014; Rübsamen et al., 2015).
Today, it is common that not just a single cloud provider but multiple providers are involved in provisioning a service to customers. The composition of multiple services provided by different providers can already be observed where Software as a Service (SaaS) providers host their offering on top of the computing resources provided by an Infrastructure as a Service (IaaS) provider. For instance, Dropbox and Netflix both host their services using Amazon's infrastructure. These composed services - they can be considered to form a chain of cloud providers, hence the term cloud provider chain - can become very complex and opaque with respect to the flow of data between providers. Several new challenges for the auditing of such cloud provider chains can be identified, which will be discussed in
this paper. The other major contribution is a proposed solution for auditing cloud provider chains, which extends our previous work in this area.
This paper is structured as follows: in Section 2,
related research projects and industrial approaches are
discussed. Following that, in Section 3 the authors
elaborate on the definition and properties of cloud
provider chains and auditing. Afterwards, three different approaches to evidence collection for provider chain auditing are discussed in Section 4. In Section 5, the architectural integration of these approaches into a system for automating cloud audits is presented, and their effectiveness is evaluated using a fictitious scenario. Section 6 concludes this
paper.
2 RELATED WORK
Standards and catalogues such as ISO27001 (ISO, 2005), Control Objectives for Information and Related Technology (COBIT) (Information Systems Audit and Control Association, 2012) or NIST 800-53 (National Institute of Standards and Technology, 2013) define information security controls. A major part of these frameworks is auditing, both regular auditing as a control itself and the use of audits to ensure the correct and effective implementation of the controls. They are typically generic, target information systems in general and do not address the specifics of cloud computing.
There are some extensions to the previous frameworks, such as the Cloud Controls Matrix (Cloud Security Alliance, 2014). It aims at the integration of aspects from ISO and COBIT, NIST's more cloud-focused security and privacy protection recommendations 800-144 (National Institute of Standards and Technology, 2011), as well as domain-specific frameworks such as PCI-DSS (PCI Security Standards Council, 2015) or FedRAMP (U.S. General Services Administration, 2014), into a common controls framework for cloud computing that facilitates the risk assessment of using cloud services. CSA's Security, Trust & Assurance Registry (Cloud Security Alliance, 2015) enables the comparison of cloud providers based on provider self-certification using the Cloud Controls Matrix. However, conducting audits based on these standards is still mostly a manual process. Our proposed approach supports the automatic collection and evaluation of evidence based on policies that may stem from these frameworks and therefore could enable continuous certification.
Monitoring systems provide similar functional-
ity to audit systems with respect to the collection
of data and synthesizing metrics that are compared
against defined thresholds. There are several solu-
tions for IT monitoring such as Nagios (Nagios En-
terprises, LLC, 2014) or Ganglia (Ganglia, 2015) and
several big commercial solutions. However, they often have a very distinct heritage in data center, cluster and grid monitoring and are therefore not necessarily suitable for the cloud due to its dynamic infrastructure and the potential chaining of cloud providers. More specialized monitoring systems such as Amazon's CloudWatch (Amazon Web Services, 2016) or Rackspace's monitoring (Rackspace, 2016) are naturally proprietary and do not support chaining outside of the provider's own set of services.
gration of an evidence collection system with such
widely used monitoring systems is of great impor-
tance, since they provide deep insight into cloud ser-
vices and therefore are considered valuable sources of
evidence.
Auditing and monitoring in cloud computing have gained momentum in recent years, and a growing number of research projects are addressing their unique
challenges. Povedano-Molina et al. (2013) propose
Distributed Architecture for Resource manaGement
and mOnitoring in cloudS (DARGOS) that enables
efficient distributed monitoring of virtual resources
based on the publish/subscribe paradigm. They uti-
lize monitor agents to gather information for their
centralized collector node. Katsaros et al. (2012) de-
scribe a similar approach to cloud monitoring with
virtual machine units (VMU) that contain data collec-
tors (scripts). Their focus is on self-adaptation of the
monitoring system by adjusting monitoring intervals
and other parameters. While they introduce isolation
of tenants in cloud environments, they do not at this
stage show how their system would work in a multi-
provider scenario.
Massonet et al. (2011) propose an approach to
monitoring data location compliance in federated
cloud scenarios, where an infrastructure provider is
chained with a service provider (i.e., the service provider uses resources provided by the infrastructure provider). A key requirement of their approach is the collaboration of both providers with respect to collecting monitoring data. Infrastructure monitoring data (from the IaaS provider) is shared with the service provider (SaaS provider), which uses it to generate audit trails. However, their main focus is to monitor virtual execution environments (VEE) that "are fully isolated runtime modules that abstract away the physical characteristics of the resource", which roughly translates to virtual machines. The actual infrastructure layer is out of scope. Also, as opposed to our
approach, monitoring probes (data collectors) cannot be dynamically deployed where needed, but are included in the VEE at deployment time.
Kertesz et al. (2013) follow the idea of tightly inte-
grating monitoring into their management system for
federated clouds, in order to facilitate provider selec-
tion on the basis of availability and reliability met-
rics. They introduce service monitoring by reusing
SALMonADA (Muller et al., 2012). Their approach
is geared towards provider decision making for state-
less services based on performance metrics and includes neither the protection mechanisms nor the dynamic collector distribution that are required in a system for evidence collection in the cloud.
Montes et al. (2013) introduce an important aspect
to cloud monitoring by also including the client-side
in the data collection in addition to the cloud provider.
However, they do not consider the integration of third-party cloud providers.
Xie and Gamble (2012, 2013); Xie et al. (2014)
describe an approach to inter-cloud auditing on the
web service level, where audit assets are requested
from a federated service.
3 COMPLEX CLOUD PROVIDER CHAINS FOR SERVICE PROVISION
While a lot of today’s cloud use cases only involve
one service provider for service provision, there are
also many cases where multiple providers are in-
volved. A prominent example is Dropbox, which heavily uses Amazon's S3 and EC2 services to provide its own SaaS offering (Tom Cook, 2015).
There are several terms for the concept of provider
chains such as federated cloud, inter-cloud and cloud
service composition. In this work these terms are used
synonymously. The concept of a provider chain is de-
fined as follows:
1. At least two cloud providers (characterized by be-
ing either IaaS, PaaS or SaaS providers) are in-
volved in the provision of a service to a cloud con-
sumer (who can be an individual or business).
2. One of the cloud providers acts as a primary ser-
vice provider to the cloud consumer.
3. Subsequent cloud providers do not have a direct
relationship with the cloud consumer.
4. The primary service provider must be, and the subsequent providers can be, cloud consumers themselves if they use services provided by other cloud providers.
5. The data handling policies agreed between the cloud consumer and the primary service provider must not be relaxed if data is processed by a subsequent provider (a minimal sketch of checking this property is given below).
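To illustrate property 5, the following minimal sketch checks that no provider along a chain relaxes the policy agreed with the cloud consumer. The policy dimensions (retention time and allowed locations) and the strictness relation are illustrative assumptions, not part of the definition above.

```java
import java.util.List;
import java.util.Set;

// Minimal model of a provider chain and a check of the non-relaxation property
// (property 5 above); retention and allowed locations are example dimensions.
public class ProviderChain {

    public record DataHandlingPolicy(int maxRetentionDays, Set<String> allowedLocations) {

        /** A policy is at least as strict if it retains data no longer and in no new locations. */
        public boolean isAtLeastAsStrictAs(DataHandlingPolicy agreed) {
            return maxRetentionDays <= agreed.maxRetentionDays()
                    && agreed.allowedLocations().containsAll(allowedLocations);
        }
    }

    public record Provider(String name, DataHandlingPolicy effectivePolicy) {}

    /** Checks property 5 for every provider in the chain, primary provider first. */
    public static boolean policyNotRelaxed(DataHandlingPolicy agreedWithConsumer,
                                           List<Provider> chain) {
        return chain.stream()
                .allMatch(p -> p.effectivePolicy().isAtLeastAsStrictAs(agreedWithConsumer));
    }
}
```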
The terms cloud consumer and cloud customer are used synonymously as well; for the terms cloud consumer and cloud auditor, we rely on the definitions provided by NIST (Liu et al., 2011).
Figure 1 depicts a simplified scenario where three
cloud service providers are involved in the provisioning of a seemingly single service to a cloud consumer. The SaaS provider acts as the primary service
provider, while it uses the PaaS provider’s platform
for hosting its service. The PaaS provider in turn does
not have its own data center but uses resources pro-
vided by an IaaS provider.
The data handling policy applies to the whole
chain (depicted by the dashed rectangle in Figure 1).
Data handling policies thereby govern the treatment
of data such as data retention (the deletion of data
after a certain time), location (geographical restric-
tions) and security requirements (access control rules
and protection of systems that handle the data).
All cloud providers in the chain produce evidence of their cloud operations.
Figure 1: Cloud Provider Chains for Service Provision.
3.1 Evidence of Compliance in Cloud
Provider Chains
At the core of any audit is evidence of compliance or
non-compliance that is being analyzed. The types of
evidence are closely linked to the type of audit (e.g.,
security audit, financial audit etc.) and are - from a
technological perspective - especially diverse in the
cloud due to the heterogeneity of its subsystems, ar-
chitectures, layers and services. The notion of evi-
dence for cloud audits was discussed in our previous
work in more detail (Rübsamen and Reich, 2013).
In general, we follow the definition of digital ev-
idence that is “information of probative value that
is stored or transmitted in binary form" (Scientific Working Groups on Digital Evidence and Imaging Technology, 2015). This means that the types of evidence are diverse and include, for example, logs, traces, files, monitoring data and history data from cloud management systems like OpenStack's Nova service.
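For illustration, such a piece of evidence could be represented as a self-contained record that keeps the source, the collection time, the raw payload and an integrity digest together. The following is a minimal sketch (assuming Java 17); the field names are illustrative and not the data model of our implementation.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.time.Instant;
import java.util.HexFormat;

// Hypothetical evidence record; field names are illustrative assumptions,
// not the data model used by the actual audit system.
public record EvidenceRecord(String sourceId,      // e.g., "openstack-nova@cloud-b"
                             String evidenceType,  // e.g., "vm-lifecycle-event"
                             Instant collectedAt,
                             String payload,       // raw log line, API response, ...
                             String sha256) {

    // Factory that binds the payload to an integrity digest at collection time.
    public static EvidenceRecord collect(String sourceId, String type, String payload) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(payload.getBytes(StandardCharsets.UTF_8));
            return new EvidenceRecord(sourceId, type, Instant.now(), payload,
                    HexFormat.of().formatHex(digest));
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 not available", e);
        }
    }
}
```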
Evidence collection at a single cloud provider is
already a complex task due to the diverse types of
evidence sources and sheer amount of potentially re-
quired data that is being produced continuously. In
a provider chain, these problems are intensified by
the introduction of administrative domains and the
lack of transparency regarding the number of involved
providers and their relationships.
Another problem introduced by the concept of provider chains is that of changing regulatory domains. In a single-provider scenario, there are typically only two regulatory domains to be considered:
i) the one that applies to the cloud consumer and
ii) the one that applies to the cloud provider. With
the addition of more cloud providers, the complexity
of achieving regulatory compliance increases tremen-
dously.
A simple example of such a case is the 2015 decision of the European Court of Justice to declare Safe Harbor invalid, which renders data transfers outside the European Union that were governed solely by Safe Harbor non-compliant. In a provider chain, where a European cloud provider transfers data about European individuals to another provider in the US, regulatory compliance could have been lost overnight. Here, it can be seen that regulatory domains can have a tremendous impact on what a compliance audit may have to look like, and on the type of evidence that may need to be collected at the different providers.
As previously suggested, the third major chal-
lenge for evidence collection in cloud provider chains
is their inherent technological heterogeneity. APIs,
protocols and data formats differ by provider and typi-
cally cannot be integrated easily (e.g., providers offer-
ing proprietary APIs). There are some approaches to
homogenize some of these technologies, such as the CSA CloudTrust Protocol (Cloud Security Alliance, 2016), which aims to provide a well-defined API that enables cloud providers to export transparency-enhancing information to auditors and cloud consumers. In our approach, technological heterogeneity is addressed at the architectural level of the system by ensuring flexibility and extensibility and by enabling the easy development of adapters for different evidence sources.
3.2 Audit Frameworks
Policy compliance assessment and validation is the
main goal of our audit system. Policies can be of var-
ious kinds, for instance, a data protection policy is
a typical tool used by cloud providers to frame their
data protection and handling practices. In such poli-
cies, limits and obligations that a provider has to ful-
fill are defined. Typically, these documents are not
machine-readable and are geared towards limiting li-
ability of the provider.
Additionally, there are well-known standards,
frameworks and industry best practices, which de-
fine various aspects of how data handling and protec-
tion should be implemented in practice. Such frame-
works include ISO27001 for information security management in general, COBIT for IT governance and CSA's Cloud Controls Matrix
(CCM) (Cloud Security Alliance, 2014) for cloud-
specific risk assessment. However, requirements and
obligations stated by these frameworks are typically
not available in a machine-readable format. There are
approaches to making these requirements and obliga-
tions explicit in a machine-readable way, for example
the Accountability PrimeLife Policy Language (Azraoui
et al., 2014) for defining data protection and data
handling-related obligations for data processing in the
cloud.
Traditionally, policy compliance is evaluated us-
ing audits and asserted with a certification of compli-
ance (e.g., ISO27001 compliance certification). Typ-
ically, the intervals in which an audit is repeated are
quite long (often yearly or longer). In the meantime,
policy violations can potentially remain undetected
for extended periods of time. One of our main goals
is to address these periods of uncertainty by enabling
the continuous assessment of cloud operations with
respect to policy compliance. This is an important
step towards continuous certification.
3.3 Auditing Cloud Provider Chains
According to NIST, a cloud auditor is defined as "a party that can conduct independent assessment of cloud services, information system operations, performance and security of the cloud implementation" (Liu
et al., 2011). In our proposed system, the auditor is
supported by a system for automated evidence collec-
tion and assessment. Evidence in the audit system is
any kind of information that is indicative of compli-
ance with policies or a violation of those. Typically,
evidence is collected at the auditee. In general, an au-
ditee is an organization that is being audited, which, in this paper, is always a cloud provider.
Complex cloud service provision scenarios in-
troduce new challenges with respect to auditing.
While in a typical scenario, where there is one cloud
provider and one cloud consumer, policies can be
agreed upon relatively easily between the two, this is
not as easy in a provider chain. In fact, the cloud con-
sumer might not be aware of or even interested in the
fact that there is an unknown third-party that might
have access to his data as long as his expectations re-
garding the protection and processing of his data are
fulfilled. However, to assert compliance, the whole
chain of providers, including data flows that are gov-
erned by the previously mentioned policies, has to be considered. This means that an audit with respect
to a single policy rule may need to be split into several
smaller evidence collection and evaluation tasks that
are distributed among the providers.
For instance, assume there is a restriction on data retention in place that states that certain types of data (e.g., Personally Identifiable Information, PII) have to be deleted by the provider after a certain fixed period of time and that no copies may be left over. This restriction can stem from a regulatory framework such as the European Data Protection Directive or simply from preferences stated by the data subject whose data is being processed in the cloud. Such requirements can be formulated and enforced in, for example, the Accountability PrimeLife Policy Language (A-PPL) and its enforcement engine (A-PPL-E) (Azraoui et al., 2015).
Auditing for compliance with such a policy requires, on a higher level, checking for the implementation of appropriate mechanisms and controls at each provider where the data itself or a copy thereof could have been stored. On a lower level, the correct enforcement of the data retention rule could be evaluated in an audit by using evidence of data deletion that is collected from all the cloud providers. In the overview depicted in Figure 1, that evidence could comprise:
• data deletion enforcement events generated by the service at the primary service provider as a reaction to the retention period being reached,
• database delete log events produced by a database management system at the PaaS provider,
• and scan results on the IaaS level for data that may still be available outside of the running service in a backup subsystem provided by the IaaS provider.
The importance of widening the scope of audits in such dynamic scenarios is apparent, especially if, at the same time, the depth of analysis is extended beyond checking whether or not security and privacy controls are put in place.
4 APPROACHES FOR COLLECTING EVIDENCE IN CLOUD PROVIDER CHAIN AUDITS
There are several approaches available when it comes
to collecting evidence for audit purposes in a service
provider chain. These approaches differ in the follow-
ing aspects:
1. The level of control an auditor has over the extent
of the data that is being published, i.e. whether the
auditor is limited to information that a provider is
already providing or if he has more fine-grained
control and access to a provider’s infrastructure.
2. Technical limitations imposed by the technolog-
ical environment, i.e. the extent to which cloud
providers have to implement additional evidence
collection mechanisms.
3. The expected willingness or acceptance to pro-
vide such mechanisms by the publishing service
provider, i.e. the potential disclosure of confiden-
tial provider information and required level of ac-
cess to the provider’s systems.
In the remainder of this section, three approaches are described and rated according to the above-mentioned factors.
The focus lies thereby on inspecting common
components at two exemplary cloud providers that
form a provider chain for the provision of a service.
These components are:
AuditSys. An audit system that enables automated,
policy-based collection of evidence as well as the
continuous and periodic evaluation of said evi-
dence during audits.
Collector. A component that enables the collection
of evidentiary data such as logs at various archi-
tectural layers of the cloud, while addressing the
heterogeneous nature of said evidence sources by
acting as an adapter.
Source. A location where evidence of cloud opera-
tions is generated.
Implementation details of these components are
discussed in our previous work. The following dis-
cussion focuses on the different approaches to extend
the system for cloud provider chains.
The first approach focuses on reusing already
existing evidence sources by collecting via remote
APIs of a cloud system. The second approach uses
provider-provisioned evidence collectors and the third
approach leverages the mobility of software agents (as
used in the prototype implementation of our system)
for evidence collection.
4.1 Remote API Evidence Collector
The first approach for collecting evidence that is relevant to automated auditing leverages already existing APIs in cloud ecosystems. Several cloud providers,
such as Amazon or Rackspace, already provide im-
proved transparency over their cloud operations by
providing their customers with access to proprietary
monitoring and logging APIs (see (Amazon Web Ser-
vices, 2016; Rackspace, 2016)). The extent to which
data is shared is typically limited to information that is already produced by the cloud provider's system
(e.g., events in the cloud management system) and
restricted to information that immediately affects the
cloud customer (e.g., events that are directly linked to
a tenant).
Data such as logs that are generated by the under-
lying systems are very important sources of evidence,
since they expose a lot of information about the op-
eration of cloud services. A specific example of such evidence are VM lifecycle events (created, suspended, snapshotted, etc.), including the timestamp of the operation and who performed it. These can be requested from OpenStack's Nova service via its REST interface. The type of information is highly dependent on the actual system, the granularity of the produced logs and the scope of the provided APIs. For instance, on the infrastructure level, log events are produced and shared that provide insight into the virtual resource lifecycle (e.g., start/stop events of virtual machines).
Figure 2 depicts such a scenario. The AuditSys at
Cloud A operates a collector that implements the API
of the remote data source at Cloud B. It is configured
with the access credentials of Cloud A, thus enabling
the collector to request evidence from Cloud B. Fur-
thermore, since different services may provide differ-
ent APIs (e.g., OpenStack vs. OpenNebula API), the
collector is service-specific. For instance, a collec-
tor implements the data formats and protocols as de-
fined in the OpenStack Nova API to collect evidence
about the images that are owned or otherwise associ-
ated with Cloud A as a customer of Cloud B.
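As an illustration of such a service-specific collector, the following sketch queries Nova's instance action log (VM lifecycle events) for a single server over the REST interface. The endpoint path and the X-Auth-Token header follow the public Nova API, but the token handling, server identifier and error handling are simplified assumptions rather than our actual implementation.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Minimal remote API collector sketch (assumption: a Keystone token and the
// Nova endpoint of Cloud B have already been obtained out of band).
public class NovaActionCollector {

    private final HttpClient http = HttpClient.newHttpClient();

    /** Fetches the instance action log (VM lifecycle events) for one server. */
    public String collectInstanceActions(String novaEndpoint, String token,
                                         String serverId) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(novaEndpoint + "/servers/" + serverId + "/os-instance-actions"))
                .header("X-Auth-Token", token)      // tenant-scoped credentials of Cloud A
                .header("Accept", "application/json")
                .GET()
                .build();
        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() != 200) {
            throw new IllegalStateException("Nova API returned " + response.statusCode());
        }
        // The JSON body lists actions such as "create", "stop" or "snapshot" with
        // timestamps and the requesting user; it is stored here as raw evidence.
        return response.body();
    }
}
```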
Figure 2: Remote API Evidence Collector.

4.1.1 Level of Auditor Control

The amount of evidence that can be collected is severely limited by the actual APIs that are provided by a cloud provider. Either i) the evidence that
an auditor is looking for is immediately available be-
cause the provider already monitors all relevant data
sources and makes that data accessible via the API or
ii) the data is not available. Since a lot of the cloud
provider’s systems expose remote APIs anyway, they
have to be considered. However, the completeness
of the exposed APIs and therefore the completeness
of the collected evidence is questionable due to the
aforementioned reasons.
If an auditee for some reason does not implement
or provide access to the audit system, an auditor may
still collect evidence to a limited degree using this ap-
proach.
4.1.2 Technical Limitations
If lower-level access to the provider's infrastructure is
required to collect evidence (e.g., log events gener-
ated on the network layer or block storage-level ac-
cess to data), an auditor might not be able to gain ac-
cess to that information.
4.1.3 Acceptance
This approach poses some challenges with respect to
security, privacy and trust required by the auditee.
Since the auditee is already exposing the APIs pub-
licly, it can be expected that they will be used for au-
diting and monitoring purposes. The implementation
of security and privacy-preserving mechanisms on the
API-level is therefore assumed. However, the extent
to which such mechanisms are implemented highly
depends on the actual implementation of the APIs on
the provider side.
While this way of providing evidence to auditors
is likely to be accepted by cloud providers, it may be
too limited with respect to the extent to which evi-
dence can be collected at lower architectural levels.
4.2 Provider Provisioned Evidence
Collector
In this approach, the audit system is still the main component for evidence collection. Here, all cloud providers that are part of the service provision chain
are running a dedicated system for auditing. How-
ever, the instantiation and configuration of the collec-
tor is delegated to the auditee. The auditee assumes
full control over the collector and merely grants the
auditor access to interact with the collector for evi-
dence collection.
The auditee (see Cloud B in Figure 3) provisions evidence collectors and grants the auditor access to them. The auditor (who is using AuditSys at Cloud A) configures evidence collection for the audit to connect to the collectors at Cloud B.
Figure 3: Provider-provisioned Evidence Collector.
4.2.1 Level of Auditor Control
The configuration of the evidence collector can be ad-
justed by the auditor to a degree that is controlled by
the auditee (e.g., applying filters to logs). He is pro-
vided limited means to configure a collector but no
direct, low-level access such as freely migrating the
collector in the auditee’s infrastructure. At any time,
the auditee can disconnect, change or otherwise con-
trol the collector. An auditor may be put off by the
limitations posed by this approach, since he is effec-
tively giving up control over the central part of evi-
dence collection and is relying solely on the cooper-
ation of the auditee. For instance, simple tasks such
as reconfiguring or restarting a collector may require
extensive interaction between the two audit systems
and potentially intervention by a human (e.g., an ad-
ministrator).
4.2.2 Technical Limitations
This approach is only limited by the availability of
collectors for evidence sources.
4.2.3 Acceptance
In this approach, the auditee retains full control over
the collector and the potential evidence that can be
collected by it. The auditor can exert some influence on the filtering of data that is collected from the evidence source and on general parameters, such as whether evidence is pushed by or pulled from the collector. Most of the baseline configuration, though, is performed by the auditee (such as access restrictions and the deployment of the collector). The auditor's ability to influence the collector is severely limited by the restriction of interactions to a well-defined set of configuration parameters and the evidence exchange protocol. This level of control that the auditee has over the evidence collection process may have a positive influence on provider acceptance.
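To make the shape of this restricted interaction concrete, the following sketch shows what a well-defined configuration request from the auditor's AuditSys to a provider-provisioned collector could look like; the class and field names are purely illustrative assumptions and do not describe the actual exchange protocol.

```java
import java.time.Duration;
import java.util.List;

// Illustrative configuration object exchanged between the auditor's AuditSys
// and a provider-provisioned collector; not the actual protocol of the system.
public record CollectorConfig(
        String collectorId,              // collector instance at the auditee (Cloud B)
        List<String> logFilters,         // e.g., "tenant == cloud-a", "event == DELETE"
        DeliveryMode deliveryMode,       // push to AuditSys or pull by AuditSys
        Duration collectionInterval) {   // how often evidence is gathered

    public enum DeliveryMode { PUSH, PULL }
}

// Example: the auditor requests tenant-scoped delete events every 15 minutes.
// new CollectorConfig("db-log-collector-01",
//         List.of("tenant == cloud-a", "event == DELETE"),
//         CollectorConfig.DeliveryMode.PUSH, Duration.ofMinutes(15));
```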
4.3 Mobile Evidence Collector
This approach is specific to a central characteristic of
software agent systems, which is the ability to mi-
grate over a network between runtime environments.
In this approach, the migration of evidence collectors
between separate instances of the audit system run-
ning at both Cloud A and B is proposed.
In our implementation, we opted for the well-known Java Agent Development framework (JADE) (JADE, 2014) for implementing collectors.
The migration of collectors between providers is
thereby performed by using JADE’s mobile agent ca-
pabilities.
As depicted in Figure 4, the auditor prepares the
required collector fully (i.e., agent instantiation and
configuration) and then migrates the collector (shaded
box named Collector) to the auditee (Collector‘).
There, the collector gathers evidence that is sent back
to the auditor for evaluation. Generally, however, agents do not cross from one administrative domain to another, but remain in one. In this case, the collector crosses from Cloud A's administrative domain to Cloud B's. This may have a significant impact on the acceptance of the approach.
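The following sketch illustrates how such a migration could be expressed with JADE's mobility API; the container name, the evidence-gathering placeholder and the result message are simplified assumptions, and migration between two separately administered JADE platforms would additionally require JADE's inter-platform mobility add-on.

```java
import jade.core.AID;
import jade.core.Agent;
import jade.core.ContainerID;
import jade.core.behaviours.OneShotBehaviour;
import jade.lang.acl.ACLMessage;

// Sketch of a mobile evidence collector agent based on JADE.
public class MobileCollectorAgent extends Agent {

    @Override
    protected void setup() {
        // Prepared and configured at the auditor's AuditSys (Cloud A),
        // then migrated to a container operated by the auditee (Cloud B).
        ContainerID destination = new ContainerID("CloudB-Container", null);
        doMove(destination);
    }

    @Override
    protected void afterMove() {
        // Executed in the auditee's runtime environment after migration.
        addBehaviour(new OneShotBehaviour() {
            @Override
            public void action() {
                String evidence = gatherEvidence();   // e.g., read delete log entries
                ACLMessage msg = new ACLMessage(ACLMessage.INFORM);
                msg.addReceiver(new AID("AuditSys@CloudA", AID.ISGUID));
                msg.setContent(evidence);
                send(msg);                            // evidence flows back to Cloud A
            }
        });
    }

    private String gatherEvidence() {
        // Placeholder for source-specific collection logic (log files, DB events, ...).
        return "no evidence collected";
    }
}
```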
Figure 4: Mobile Evidence Collector.
4.3.1 Level of Auditor Control
The auditor retains full control over the type of collec-
tor and its configuration. The auditee may not in any
way change or otherwise influence the collector since
this could be deemed a potentially malicious manipu-
lation.
4.3.2 Technical Limitations
Since the auditor knows most about the actual con-
figuration required for a collector, it is logical to take this approach and simply hand over a fully prepared
collector to the auditee. However, this only works if
both run the same audit system, or the auditee at the
very least provides a runtime environment for the col-
lector. In any case, this approach offers the most com-
plete and most flexible way of collecting evidence at
an auditee due to the comprehensive evidence collec-
tion capabilities.
4.3.3 Acceptance
The main problem with this approach is the trust required from the auditee. Since the collector that is be-
ing handed over to him by the auditor is in fact soft-
ware that the auditee is supposed to run on its in-
frastructure, several security, privacy and trust-related
issues associated with such cross-domain agent mo-
bility need to be addressed. Several security con-
trols need to be implemented in order to make cloud
providers consider the implementation of an audit
system including the proposed approach of using mo-
bile collectors.
The main security concerns of this approach stem
from the fact that the auditee is expected to execute
software on his infrastructure over which he does not
have any control. He cannot tell for certain whether or
not the agent is accessing only those evidence sources
which he expects it to.
Without any additional security measures, it can-
not be expected that any cloud provider is willing to
accept this approach. However, with the addition of
security measures such as ensuring authenticity of the
collector (e.g., using collector code reviews and code
signing) this approach becomes more feasible. The
discussion of such measures depends on the technol-
ogy used by the implementation and is out of scope
of this paper. Without any additional measures, it can
be assumed that this approach is only feasible if the
auditor is completely trusted by the auditee. In that
case, this approach is very powerful and flexible.
4.4 Round-up
All three approaches for evidence collection in
provider chains have their distinct advantages and dis-
advantages. Using remote API evidence collectors is simple, quickly implemented, secure and readily available, but severely limited regarding access to ev-
idence sources. Using provider-provisioned evidence
collectors is more powerful with respect to access to
evidence sources, but requires more effort in the con-
figuration phase and leaves full control to the auditee.
Using mobile evidence collectors is the most flexible
approach that allows broad access to evidence sources
at the auditee’s infrastructure and leaves full control
over the evidence collection to the auditor. There-
fore, a balance has to be struck between broad ac-
cess to evidence sources when using mobile collec-
tors (effectively having low-level access to logs and
other files for evidence collection) and more limited
access when using remote APIs (evidence limited to
what the system that exposes the API provides).
In the audit system, remote APIs (due to their simplicity) and mobile collectors (due to their flexibility and power) are integrated as the main approaches to evidence collection.
5 SCENARIO-BASED PROVIDER CHAIN AUDITING EVALUATION
In Section 4, the approaches that can be
taken when collecting evidence for auditing purposes
in cloud provider chains were described. In this sec-
tion, it is demonstrated how to incorporate the feasi-
ble approaches into an extension of the proposed audit
system to enable automated, policy-driven auditing of
cloud provider chains. The focus is put on the Remote
API Evidence Collector and Mobile Evidence Collec-
tor approaches (see Sections 4.1 and 4.3, respectively).
The approach is validated by discussing a fictitious
use case.
5.1 Audit Agent System
In Figure 5, an example deployment of the automated
audit system is depicted. This deployment is not nec-
essarily representative of real-world cloud environ-
ments but is used to highlight possible combinations of
services and data flows that can happen in a multi-
cloud scenario. There are four cloud providers, which
are directly or indirectly involved in the service pro-
vision. The SaaS provider A1 uses the platform pro-
vided by a PaaS provider B1, which does not have its
own data center but uses computing resources pro-
vided by yet another IaaS provider C1. The IaaS
provider C2 provides a low-level backup as a service
solution that is used by provider C1. To enable au-
diting of the whole provider chain, each provider is
running its own instance of the audit system (Audit-
Sys, as described in Section 4).
5.2 Provider Chain Auditing Extension
The auditor that uses AuditSys at the primary ser-
vice provider A1 defines and configures continuous
audits based on data protection and handling policy
statements. Since these policy statements do not in-
clude any information about the service architecture,
the auditor introduces his knowledge about the cloud
deployment into the audit task by defining evidence
collection tasks that gather data on the PaaS and IaaS
layer and also at the primary service provider. An
audit task consists of collector, evaluator and notifica-
tion agents. The type of evidence collection approach
that has to be taken (as described in Section 4) is also
defined by the auditor.
In this scenario it is assumed that all providers
allow the auditor at A1 to collect evidence using
the mobile evidence collectors and that the infras-
tructure providers also provide the auditor with ac-
cess to their management system’s APIs. As previ-
ously mentioned, the auditor is assumed trustworthy
by all parties, which enables broad access to all cloud
providers. Additionally, it is assumed that all cloud
providers are acting in good faith and see the au-
dit process as an opportunity to transparently demon-
strate that they are acting in compliance with data
handling policies.
As depicted in Figure 5, the auditor uses A1's AuditSys to define an audit task based on the data han-
dling policy that is in effect. That task refers to the
data retention obligation that was described earlier in
Section 3.3. The retention time is defined as 6 months
for every PII data record that is gathered about the
users of provider A1. If the retention time is reached,
the following delete process is executed as part of the
normal operation of the service A1 provides:
1. The delete event fires at A1 because the maximum retention time has been reached, and the event is propagated to B1.
2. The data record is deleted from the database at B1.
3. The database is hosted on virtual machines pro-
vided by C1 and therefore does not require any
delete actions.
4. A backup of B1's database is available in C2's backup system, and the delete action was not triggered in C2.
As a consequence of the delete event, the following evidence is collected by the mobile evidence collectors to build an evidence trail for compliance evaluation at A1:
1. The data retention event is recorded as evidence
by the collector running at A1.
2. The delete action of the database is recorded as
evidence by the collector running at B1.
3. No evidence is recorded by the collector at C1
since there are no leftover copies such as virtual
machine snapshots available.
4. The backup’s meta-data such as creation times-
tamps are recorded as evidence by the collector
at C2.
The evidence from all collectors (A1, B1, C2) is
sent to the AuditSys at A1, where it is evaluated and a
policy compliance statement is generated for the au-
ditor. In this particular case, a policy violation is de-
tected, because the audit trail shows that the record
that should have been deleted is still available in a
backup at C2. Providers A1 and B1 acted compliantly by deleting the data, whereas C1 never stored a copy outside of B1's database.
5.3 Pre-processing and Intermediate Results of Audit Evidence Evaluation
The audit system uses a component at the AuditSys
that is responsible for storing evidence records that
are collected by the collector agents. Externally collecting evidence and merging it at a central evidence store that is only reachable via the network can easily become a bottleneck in audit scenarios where either a lot of evidence records are produced externally or where the records are large. This obviously has a significant impact on the scalability of the whole system. The problem can be addressed by making the evidence store (which is just a specialized form of an agent with a secure storage mechanism) distributable and also by decentralizing parts of the evidence evaluation process. There are generally two concepts:
1. Pre-processing: Pre-processing allows the evidence collector agent to apply filtering and other types of evidence pre-processing. The goal is to reduce the amount of collected evidence to a manageable degree (without negatively impacting the completeness of the audit trail) and to reasonably reduce the number of network operations by grouping evidence records and storing them in bulk. For example, the raw data at the evidence source can be filtered for certain operations, subjects, tenants, or time frames; data that is not immediately required for the audit is filtered out (a minimal filtering sketch is given after this list).
2. Intermediate Result Production: A second pre-
processing strategy is to move (parts of) the eval-
uation process near the collector. This means that
the collected evidence is already reduced to the
significant portions that indicate partial compli-
ance or violation of policies. However, this strat-
egy requires specific audit task types (where an
audit result can be produced by combining several
intermediate results).
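As an illustration of the pre-processing concept, the following sketch (an assumption, not the system's actual filter implementation) keeps only tenant-scoped delete events within the audited time frame so that only relevant records are forwarded in bulk to the evidence store.

```java
import java.time.Instant;
import java.util.List;
import java.util.stream.Collectors;

// Collector-side pre-processing sketch: filter raw source entries and group
// the remaining ones so they can be stored at the evidence store in bulk.
public class EvidencePreProcessor {

    public record RawEntry(String tenant, String operation, Instant timestamp, String line) {}

    public static List<RawEntry> filter(List<RawEntry> rawEntries, String auditedTenant,
                                        Instant from, Instant to) {
        return rawEntries.stream()
                .filter(e -> e.tenant().equals(auditedTenant))      // only the audited tenant
                .filter(e -> e.operation().equals("DELETE"))        // only relevant operations
                .filter(e -> !e.timestamp().isBefore(from)
                          && !e.timestamp().isAfter(to))            // audited time frame
                .collect(Collectors.toList());                      // forwarded as one bulk batch
    }
}
```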
Figure 5: Provider Chain Auditing Architecture.
These concepts bring several implications with respect to privacy and security.
Pre-processing can be considered a manipulation
of evidence. Therefore, the unaltered source upon
which the pre-processing happened should be pro-
tected to later be able to trace pre-processed evidence
back to its unaltered form.
Intermediate result production effectively moves the evidence evaluation step of the audit into the domain of the auditee, where it would be easy for him to manipulate the result. However, the same applies to the collection of evidence as well, where an auditee can intentionally manipulate the evidence source or the collector.
This case is not considered in the current iteration
of the system but it is assumed that cloud providers
(auditees) are acting in good faith. This assumption
can be justified by the potential increase in transparency and the associated strengthening of trust in the cloud provider, which can mean a competitive advantage. On the other hand, intentional manipulation of evidence or intermediate results can, upon detection, have a disastrous impact on a provider's credibility, reputation and trustworthiness.
6 CONCLUSIONS
Cloud auditing is becoming increasingly important as cloud adoption increases and the compliance of data processing comes into the focus of cloud consumers. The key to making cloud audits a useful tool is the effectiveness of the evidence collection process that builds the basis for the evaluation of policy compliance or the lack thereof.
While there are many systems for monitoring
cloud providers (with varying levels of completeness),
there are fewer systems that automate audit tasks
and even fewer still that enable continuous auditing,
which is a key enabler of continuous certification. As
long as there is only one cloud provider involved in
service provisioning to the cloud consumer, monitor-
ing and auditing is relatively simple (with the above
mentioned restrictions). However, in more complex
scenarios where there are chains of providers (or fed-
erations of cloud providers), current approaches are
severely limited.
In this paper, an extension to our previous work on
automating continuous cloud audits that enables the
collection of evidence across the boundaries of mul-
tiple cloud providers in a cloud provider chain was
presented. The concept of cloud provider chains and
three different approaches to evidence collection with
their advantages and disadvantages were discussed.
Furthermore, their implementation in an audit system
was presented and validated using a scenario-based
approach. It was shown how automated cloud audits
can be extended to scenarios, where more than one
cloud provider is involved in the service provision.
In the future, the analysis of the different ap-
proaches and their integration in our system will be extended in two main areas: i) expanding the secu-
rity mechanisms that are already present to account
for the notion of provider chains and ii) demonstrat-
ing the scalability and efficiency of the system.
ACKNOWLEDGEMENTS
This work has been partly funded by the Euro-
pean Commission’s Seventh Framework Programme
(FP7/2007-2013), grant agreement 317550, Cloud
Accountability Project - http://www.a4cloud.eu/ -
(A4CLOUD).
REFERENCES
Amazon Web Services (2016). Amazon cloudwatch. https:
//aws.amazon.com/de/cloudwatch/.
Azraoui, M., Elkhiyaoui, K., Önen, M., Bernsmed, K., De Oliveira, A., and Sendor, J. (2015). A-PPL: An accountability policy language. In Garcia-Alfaro, J., Herrera-Joancomartí, J., Lupu, E., Posegga, J., Aldini, A., Martinelli, F., and Suri, N., editors, Data Privacy Management, Autonomous Spontaneous Security, and Security Assurance, volume 8872 of Lecture Notes in Computer Science, pages 319–326. Springer International Publishing.
Azraoui, M., Elkhiyaoui, K., Önen, M., Bernsmed, K., Santana De Oliveira, A., and Sendor, J. (2014). A-PPL: An accountability policy language. In DPM 2014, 9th International Workshop on Data Privacy Management, September 10, 2014, Wroclaw, Poland.
Bitkom Research GmbH (2015). Cloud Monitor
2015. https://www.kpmg.com/DE/de/Documents/
cloudmonitor%202015 copyright%20 sec neu.pdf.
Cloud Security Alliance (2013). Top threats to cloud com-
puting survey results update 2012. https://downloads.
cloudsecurityalliance.org/initiatives/top threats/
Top Threats Cloud Computing Survey 2012.pdf.
Cloud Security Alliance (2014). Cloud Controls Matrix.
https://cloudsecurityalliance.org/research/ccm/.
Cloud Security Alliance (2015). Security, Trust & Assur-
ance Registry. https://cloudsecurityalliance.org/star/.
Cloud Security Alliance (2016). Cloud Trust Protocol.
https://cloudsecurityalliance.org/research/ctp.
Ganglia (2015). Ganglia. http://ganglia.sourceforge.net/.
Information Systems Audit and Control Association (2012).
Control Objectives for Information and Related Tech-
nology (COBIT) 5. http://www.isaca.org/cobit/.
ISO (2005). ISO27001:2005. http://www.iso.org/iso/
catalogue detail?csnumber=42103.
JADE (2014). Java Agent DEvelopment framework. http://jade.tilab.com.
Katsaros, G., Kousiouris, G., Gogouvitis, S. V., Kyriazis,
D., Menychtas, A., and Varvarigou, T. (2012). A
self-adaptive hierarchical monitoring mechanism for
clouds. Journal of Systems and Software, 85(5):1029
– 1041.
Kertesz, A., Kecskemeti, G., Oriol, M., Kotcauer, P., Acs, S., Rodríguez, M., Mercè, O., Marosi, A., Marco, J., and Franch, X. (2013). Enhancing federated cloud management with an integrated service monitoring approach. Journal of Grid Computing, 11(4):699–720.
Liu, F., Tong, J., Mao, J., Bohn, R., Messina, J., Bad-
ger, L., and Leaf, D. (2011). NIST cloud computing
reference architecture. http://www.nist.gov/customcf/
get pdf.cfm?pub id=909505.
Massonet, P., Naqvi, S., Ponsard, C., Latanicki, J., Rochw-
erger, B., and Villari, M. (2011). A monitoring and
audit logging architecture for data location compli-
ance in federated cloud infrastructures. In Parallel
and Distributed Processing Workshops and Phd Fo-
rum (IPDPSW), 2011 IEEE International Symposium
on, pages 1510–1517.
Montes, J., Sánchez, A., Memishi, B., Pérez, M. S., and Antoniu, G. (2013). Gmone: A complete approach to cloud monitoring. Future Generation Computer Systems, 29(8):2026–2040.
Muller, C., Oriol, M., Rodriguez, M., Franch, X., Marco, J.,
Resinas, M., and Ruiz-Cortes, A. (2012). Salmonada:
A platform for monitoring and explaining violations
of ws-agreement-compliant documents. In Principles
of Engineering Service Oriented Systems (PESOS),
2012 ICSE Workshop on, pages 43–49.
Nagios Enterprises, LLC (2014). Nagios. http://www.
nagios.org/.
National Institute of Standards and Technology (2011).
Guidelines on security and privacy in public cloud
computing. http://csrc.nist.gov/publications/nistpubs/
800-144/SP800-144.pdf.
National Institute of Standards and Technology (2013).
Security and privacy controls for federal information
systems and organizations. http://nvlpubs.nist.gov/
nistpubs/SpecialPublications/NIST.SP.800-53r4.pdf.
PCI Security Standards Council (2015). Payment Card
Industry Data Security Standard (PCI-DSS). https:
//www.pcisecuritystandards.org/.
Povedano-Molina, J., Lopez-Vega, J. M., Lopez-Soler,
J. M., Corradi, A., and Foschini, L. (2013). Dargos: A
highly adaptable and scalable monitoring architecture
for multi-tenant clouds. Future Generation Computer
Systems, 29(8):2041 – 2056.
Rackspace (2016). Rackspace cloud monitoring. http:
//www.rackspace.com/cloud/monitoring.
Rübsamen, T., Pulls, T., and Reich, C. (2015). Secure Evidence Collection and Storage for Cloud Accountability Audits. In CLOSER 2015 - Proceedings of the 5th International Conference on Cloud Computing and Services Science, Lisbon, Portugal, May 20-22, 2015. SciTePress.
Rübsamen, T. and Reich, C. (2013). Supporting cloud accountability by collecting evidence using audit agents. In Cloud Computing Technology and Science (CloudCom), 2013 IEEE 5th International Conference on, volume 1, pages 185–190.
Rübsamen, T. and Reich, C. (2014). An Architecture for Cloud Accountability Audits. In 1. Baden-Württemberg Center of Applied Research Symposium on Information and Communication Systems SInCom 2014.
Rübsamen, T., Reich, C., Wlodarczyk, T., and Rong, C. (2013). Evidence for accountable cloud computing services. http://dimacs.rutgers.edu/Workshops/TAFC/TAFC a4cloud.pdf.
Scientific Working Groups on Digital Evidence and
Imaging Technology (2015). SWGDE and
SWGIT Digital & Multimedia Evidence Glos-
sary. https://www.swgde.org/documents/Current%
20Documents/2015-05-27%20SWGDE-SWGIT%
20Glossary%20v2.8.
Tom Cook (2015). Dropbox at AWS re:Invent
2014. https://blogs.dropbox.com/tech/2014/12/
aws-reinvent-2014/.
U.S. General Services Administration (2014). Federal Risk
and Authorization Program. http://www.fedramp.gov.
Xie, R. and Gamble, R. (2012). A tiered strategy for au-
diting in the cloud. In Cloud Computing (CLOUD),
2012 IEEE 5th International Conference on, pages
945–946.
Xie, R. and Gamble, R. (2013). An architecture for cross-
cloud auditing. In Proceedings of the Eighth Annual
Cyber Security and Information Intelligence Research
Workshop, CSIIRW ’13, pages 4:1–4:4, New York,
NY, USA. ACM.
Xie, R., Gamble, R., and Ahmed, N. (2014). Diagnosing
vulnerability patterns in cloud audit logs. In Han,
K. J., Choi, B.-Y., and Song, S., editors, High Perfor-
mance Cloud Auditing and Applications, pages 119–
146. Springer New York.