4 DISCUSSION & CONCLUSIONS
As shown in the previous sections, our proposal draws on many techniques used in distributed systems. However, these techniques have never been put together, nor tested in combination, and therefore deserve discussion:
Master Cluster Reduction. Some members of the master region might be excluded from active replication. Active replication does not scale well (Wiesmann and Schiper, 2005), so a proper selection of representatives could speed up this process.
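As a minimal illustration, the following Python sketch shows one way a fixed number of representatives could be chosen; the latency-based ranking and all names are our own assumptions rather than part of the proposal.

```python
# Hypothetical sketch: pick k representatives of the master region
# for active replication; the remaining members are updated lazily,
# off the critical path. Metrics and names are illustrative.

def select_representatives(latencies, k):
    """Rank master-region members by average round-trip latency
    (seconds, lower is better) and keep the k best-connected ones
    as active replicas."""
    ranked = sorted(latencies, key=lambda node: latencies[node])
    return ranked[:k], ranked[k:]  # (active, lazily updated)

if __name__ == "__main__":
    latencies = {"m1": 0.004, "m2": 0.012, "m3": 0.003, "m4": 0.020}
    active, passive = select_representatives(latencies, k=2)
    print("active replicas:", active)   # ['m3', 'm1']
    print("lazy replicas:", passive)    # ['m2', 'm4']
```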
Enhance the Takeover Process. A pseudo-primary could perform active replication within its cluster. This role is not an exclusive one: the same node can also be responsible for several smart meters and thus collaborate in the active replication protocol. Recall that there are not many nodes in a given cluster.
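The sketch below, a simplification under our own assumptions (all class and method names are invented), illustrates how a pseudo-primary could combine both duties:

```python
# Hypothetical sketch of a pseudo-primary that actively replicates
# writes inside its own small cluster while still serving the smart
# meters assigned to it. All names are invented for illustration.

class ClusterNode:
    """A plain replica: it simply applies the updates it receives."""
    def __init__(self):
        self.store = {}

    def replicate(self, key, value):
        self.store[key] = value

class PseudoPrimary(ClusterNode):
    """Coordinates replication in the cluster; the role is not
    exclusive, so it also keeps handling its own meter readings."""
    def __init__(self, peers):
        super().__init__()
        self.peers = peers

    def apply(self, key, value):
        # Apply locally, then push to every peer. Since a cluster
        # holds few nodes, contacting all of them remains cheap.
        self.store[key] = value
        for peer in self.peers:
            peer.replicate(key, value)

if __name__ == "__main__":
    peers = [ClusterNode(), ClusterNode()]
    primary = PseudoPrimary(peers)
    primary.apply("meter-42/kwh", 17.3)  # e.g. a smart-meter reading
    assert all(p.store["meter-42/kwh"] == 17.3 for p in peers)
```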
Failure Detection. Active replication within a pseudo-primary cluster may (1) enhance the detection of failed nodes, and (2) speed up the synchronization of a new device joining the replication chain.
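One possible reading of this idea is that replication traffic can double as a failure detector: a peer that misses several acknowledgements in a row is suspected, and its replacement is brought up to date from a live replica. The threshold and names below are illustrative assumptions.

```python
# Hypothetical sketch: replication acknowledgements reused as a
# failure detector. The threshold value is an arbitrary assumption.

SUSPECT_AFTER = 3  # consecutive missed acks before suspecting a peer

class AckMonitor:
    def __init__(self, peers):
        self.missed = {peer: 0 for peer in peers}

    def record(self, peer, acked):
        """Called after every replicated write; returns True when
        the peer should be suspected of having failed."""
        self.missed[peer] = 0 if acked else self.missed[peer] + 1
        return self.missed[peer] >= SUSPECT_AFTER

def resynchronize(replacement_store, healthy_store):
    """Copy a live replica's state into the replacement node so it
    can rejoin the replication chain quickly."""
    replacement_store.update(healthy_store)

if __name__ == "__main__":
    monitor = AckMonitor(["n1", "n2"])
    for _ in range(3):
        suspected = monitor.record("n2", acked=False)
    print("n2 suspected:", suspected)   # True after three misses
    fresh = {}
    resynchronize(fresh, {"meter-7/kwh": 3.2})  # bring replacement up to date
```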
Distributed Computing. Our proposed architecture allows distributed computation to be performed on the read steps. Since the required data travel across the replication chain, each node can perform a piece of the required computation.
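For instance, a read request could fold partial results as it travels along the chain, so the requester receives an already-reduced answer instead of raw data. The sketch below assumes a simple sum aggregation and an invented chain layout:

```python
# Hypothetical sketch: each hop of the replication chain contributes
# its share of a read-time computation (here, summing consumption
# readings). The data layout and aggregation are our own assumptions.

from functools import reduce

def chained_read(chain, combine, initial):
    """Walk the replication chain, letting every node fold its local
    data into the running result instead of shipping raw readings."""
    return reduce(combine, chain, initial)

if __name__ == "__main__":
    # Each node holds the meter readings it is responsible for.
    chain = [
        {"readings": [1.2, 0.7]},
        {"readings": [2.1]},
        {"readings": [0.4, 0.9, 1.1]},
    ]
    total = chained_read(
        chain,
        combine=lambda acc, node: acc + sum(node["readings"]),
        initial=0.0,
    )
    print(f"total consumption: {total:.1f} kWh")  # 6.4 kWh
```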
Dynamic Replication Depth Tuning. If we were able to dynamically adjust this value, our system might adapt better to application requirements. Hence, we could use a cognitive system and apply machine learning techniques (Mitchell, 1997) in order to (1) evaluate the whole system status and (2) predict the optimal value of the replication depth for each data item.
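As one possible instantiation (the feature set, training data, and choice of a decision-tree regressor are all our own assumptions, not part of the proposal), a learned model could map observed system status to a suggested depth:

```python
# Illustrative sketch of learning a per-item replication depth from
# observed system status, using scikit-learn's DecisionTreeRegressor
# purely as a stand-in for "some machine learning technique". The
# features and training data below are invented for illustration.

from sklearn.tree import DecisionTreeRegressor

# Features per data item: (read rate, write rate, node failure rate).
# Targets: the replication depth that performed best in the past.
X = [
    [100.0,  1.0, 0.01],   # read-heavy, stable  -> deep replication
    [  5.0, 50.0, 0.01],   # write-heavy         -> shallow
    [ 20.0,  5.0, 0.20],   # failure-prone       -> deeper for safety
]
y = [5, 2, 4]

model = DecisionTreeRegressor().fit(X, y)

# Predict the depth for a new item given the current system status.
depth = int(round(model.predict([[80.0, 2.0, 0.05]])[0]))
print("suggested replication depth:", depth)
```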
In this paper we have defined a way to distribute and store information across the network so that the computation needed for smart functions can be greatly reduced. This work aims to provide some insight into the world of smart grids from a data perspective. For the sake of simplicity during the presentation of our system, we have only outlined simple scenarios regarding the replication policy and fault-tolerance issues; these need to be treated in detail in future work.
ACKNOWLEDGEMENTS
The research leading to these results has received funding from the European Union European Atomic Energy Community Seventh Framework Programme (FP7/2007-2013, FP7/2007-2011) under grant agreement no. 247938 for Joan Navarro and August Climent, and from the Spanish National Science Foundation (MEC) (grant TIN2009-14460-C03-02) for José Enrique Armendáriz-Íñigo.
REFERENCES
Aguilera, M. K. et al. (2009). Sinfonia: A new paradigm for building scalable distributed systems. ACM Trans. Comput. Syst., 27(3).
Amir, Y. and Tutu, C. (2002). From total order to database
replication. In ICDCS, pages 494–.
Bernstein, P. A. et al. (1987). Concurrency Control and Recovery in Database Systems. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA.
Brown, R. E. (2008). Impact of Smart Grid on distribution system design. In Power and Energy Society General Meeting - Conversion and Delivery of Electrical Energy in the 21st Century, 2008 IEEE, pages 1–4.
Cristian, F. (1991). Understanding fault-tolerant distributed
systems. Commun. ACM, 34(2):56–78.
Das, S., Agrawal, D., and Abbadi, A. E. (2010). ElasTraS: An elastic transactional data store in the cloud. CoRR, abs/1008.3751.
DeCandia, G., Hastorun, D., Jampani, M., Kakulapati, G., Lakshman, A., Pilchin, A., Sivasubramanian, S., Vosshall, P., and Vogels, W. (2007). Dynamo: Amazon's highly available key-value store. In SOSP, pages 205–220.
Golab, L. and Johnson, T. (2011). Consistency in a Stream
Warehouse. In CIDR.
Jiménez-Peris, R., Patiño-Martínez, M., Kemme, B., and Alonso, G. (2002). Improving the scalability of fault-tolerant database clusters. In ICDCS, pages 477–484.
Mitchell, M. (1997). An Introduction to Genetic Algo-
rithms. The MIT Press, Cambridge, Massachusetts.
Palankar, M. R. et al. (2008). Amazon S3 for science grids: a viable solution? In DADC '08: Proceedings of the 2008 International Workshop on Data-Aware Distributed Computing, pages 55–64, New York, NY, USA. ACM.
Patiño-Martínez, M. et al. (2005). MIDDLE-R: Consistent database replication at the middleware level. ACM Trans. Comput. Syst., 23(4):375–423.
Paz, A., Perez-Sorrosal, F., Patiño-Martínez, M., and Jiménez-Peris, R. (2010). Scalability evaluation of the replication support of JOnAS, an industrial J2EE application server. In EDCC, pages 55–60.
Pedone, F., Wiesmann, M., Schiper, A., Kemme, B., and
Alonso, G. (2000). Understanding replication in
databases and distributed systems. In ICDCS.
Vogels, W. (2009). Eventually consistent. Commun. ACM,
52(1):40–44.
White, T. (2009). Hadoop: The Definitive Guide. O'Reilly Media, 1st edition.
Wiesmann, M. and Schiper, A. (2005). Comparison of database replication techniques based on total order broadcast. IEEE Trans. Knowl. Data Eng., 17(4):551–566.