Bolukbasi, T., Wang, J., Dekel, O., and Saligrama, V.
(2017). Adaptive neural networks for efficient infer-
ence. In Proceedings of the 34th International Con-
ference on Machine Learning-Volume 70, pages 527–
536. JMLR.org.
Coleman, A. (2017). How much does it cost to
keep your computer online? (lots, it turns
out). http://www.telegraph.co.uk/business/energy-
efficiency/cost-keeping-computer-online/. Accessed
on 15/01/2019.
El Naqa, I. and Murphy, M. J. (2015). What is machine
learning? In Machine Learning in Radiation Oncol-
ogy, pages 3–11. Springer.
Gao, W., Zhang, X., Yang, L., and Liu, H. (2010). An
improved Sobel edge detection. In 2010 3rd Interna-
tional Conference on Computer Science and Informa-
tion Technology, volume 5, pages 67–71. IEEE.
García-Martín, E. (2017). Energy efficiency in machine
learning: A position paper. In 30th Annual Workshop
of the Swedish Artificial Intelligence Society SAIS
2017, 137(3):68–72.
Garrido-Merchán, E. C. and Hernández-Lobato, D. (2017).
Dealing with integer-valued variables in Bayesian op-
timization with Gaussian processes. arXiv preprint
arXiv:1706.03673.
Garrido-Merchán, E. C. and Hernández-Lobato, D. (2018).
Dealing with categorical and integer-valued variables
in Bayesian optimization with Gaussian processes.
arXiv preprint arXiv:1805.03463.
Harris, C. and Stephens, M. (1988). A combined corner
and edge detector. In Proc. of Fourth Alvey Vision
Conference, pages 147–151.
Koutsoukas, A., Monaghan, K. J., Li, X., and Huan, J.
(2017). Deep-learning: investigating deep neural net-
works hyper-parameters and comparison of perfor-
mance to shallow methods for modeling bioactivity
data. Journal of Cheminformatics, 9(1):42.
Krizhevsky, A. (2018). The CIFAR-10 dataset.
https://www.cs.toronto.edu/~kriz/cifar.html. Accessed
on 18/01/2019.
LeCun, Y., Cortes, C., and Burges, C. J. (2018). The MNIST
database. http://yann.lecun.com/exdb/mnist/. Ac-
cessed on 18/01/2019.
Leroux, S., Bohez, S., De Coninck, E., Verbelen, T.,
Vankeirsbilck, B., Simoens, P., and Dhoedt, B. (2017).
The cascading neural network: building the internet
of smart things. Knowledge and Information Systems,
52(3):791–814.
Li, D., Chen, X., Becchi, M., and Zong, Z. (2016).
Evaluating the energy efficiency of deep convolu-
tional neural networks on CPUs and GPUs. In
2016 IEEE International Conferences on Big Data
and Cloud Computing (BDCloud), Social Comput-
ing and Networking (SocialCom), Sustainable Com-
puting and Communications (SustainCom)(BDCloud-
SocialCom-SustainCom), pages 477–484. IEEE.
Manuskin, A., Jimenez, D., Moritz, D., and Johnstone,
A. (2019). GitHub - amanusk/s-tui: terminal-based
CPU stress and monitoring utility. https://github.com/
amanusk/s-tui. (Accessed on 06/10/2019).
Mike Yi, P. K. (2018). Intel Power Gadget. https://software.
intel.com/en-us/articles/intel-power-gadget-20. Ac-
cessed on 17/01/2019.
Morar, M. T., Knowles, J., and Sampaio, S. (2017). Ini-
tialization of bayesian optimization viewed as part
of a larger algorithm portfolio. http://ds-o.org/
images/Workshop%20papers/Morar.pdf. (Accessed on
06/09/2019).
Oh, C., Tomczak, J. M., Gavves, E., and Welling,
M. (2019). Combinatorial Bayesian optimiza-
tion using graph representations. arXiv preprint
arXiv:1902.00448.
Panda, P., Ankit, A., Wijesinghe, P., and Roy, K. (2017).
FALCON: Feature driven selective classification for
energy-efficient image recognition. IEEE Transac-
tions on Computer-Aided Design of Integrated Cir-
cuits and Systems, 36(12).
Panda, P., Sengupta, A., and Roy, K. (2016). Conditional
deep learning for energy-efficient and enhanced pat-
tern recognition. In 2016 Design, Automation & Test
in Europe Conference & Exhibition (DATE), pages
475–480. IEEE.
Park, E., Kim, D., Kim, S., Kim, Y.-D., Kim, G., Yoon,
S., and Yoo, S. (2015). Big/little deep neural network
for ultra low power inference. In Proceedings of the
10th International Conference on Hardware/Software
Codesign and System Synthesis, CODES ’15, pages
124–132, Piscataway, NJ, USA. IEEE Press.
Roy, D., Panda, P., and Roy, K. (2018). Tree-CNN: a hier-
archical deep convolutional neural network for incre-
mental learning. arXiv preprint arXiv:1802.05800.
Snoek, J., Larochelle, H., and Adams, R. P. (2012). Prac-
tical Bayesian optimization of machine learning algo-
rithms. In Advances in Neural Information Processing
Systems, pages 2951–2959.
Souza, K. D. (2017). PowerKap - A tool for Improv-
ing Energy Transparency for Software Developers on
GNU/Linux (x86) platforms. Master’s thesis, Imperial
College London.
Stamoulis, D., Cai, E., Juan, D.-C., and Marculescu, D.
(2018). Hyperpower: Power-and memory-constrained
hyper-parameter optimization for neural networks. In
2018 Design, Automation & Test in Europe Confer-
ence & Exhibition (DATE), pages 19–24. IEEE.
Sui, Y., Gotovos, A., Burdick, J., and Krause, A. (2015).
Safe exploration for optimization with Gaussian pro-
cesses. In International Conference on Machine
Learning, pages 997–1005.
Teerapittayanon, S., McDanel, B., and Kung, H. (2017).
Distributed deep neural networks over the cloud, the
edge and end devices. In 2017 IEEE 37th Interna-
tional Conference on Distributed Computing Systems
(ICDCS), pages 328–339. IEEE.
Venkataramani, S., Raghunathan, A., Liu, J., and Shoaib,
M. (2015). Scalable-effort classifiers for energy-
efficient machine learning. In Proceedings of the
52nd Annual Design Automation Conference, page 67.
ACM.
SMARTGREENS 2020 - 9th International Conference on Smart Cities and Green ICT Systems