it is possible to provide useful information for planning
measures that promote customers' membership rank-up.
5 CONCLUSIONS
In this paper, we proposed an attribute influence
scoring method that addresses the relationship between
the input and output data of a nonlinear classification
model, using a self-organizing map and local
approximation with linear models. The proposed
method clarifies local characteristics with the LIME
score at each node of the constructed self-organizing
map. It also provides a global attribute influence score
by calculating a weighted average of the LIME scores
using the hit count of every node. Thus, our method
enables analysts to obtain object-wise, cluster-wise, and
global views of the target nonlinear model.
We applied our method to an actual use case of
customers' membership rank-up analysis in digital
marketing to evaluate its validity.
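The global score described above, a weighted average of the per-node LIME scores using each node's hit count, can be sketched as follows. This is an illustrative sketch, not the paper's implementation; the array names are hypothetical:

```python
import numpy as np

def global_attribute_influence(node_lime_scores, node_hit_counts):
    """Hit-count-weighted average of per-node LIME scores.

    node_lime_scores: (n_nodes, n_attributes) local LIME scores,
                      one row per node of the self-organizing map.
    node_hit_counts:  (n_nodes,) number of samples mapped to each node.
    Returns an (n_attributes,) array of global influence scores.
    """
    scores = np.asarray(node_lime_scores, dtype=float)
    hits = np.asarray(node_hit_counts, dtype=float)
    # Nodes that attract more samples contribute more to the global score.
    return hits @ scores / hits.sum()

# Example: 3 SOM nodes, 2 attributes; node 1 attracts most samples.
local = [[0.2, 0.8],
         [0.4, 0.6],
         [1.0, 0.0]]
hits = [10, 30, 10]
print(global_attribute_influence(local, hits))  # → [0.48 0.52]
```

Under this weighting, a locally strong attribute on a sparsely populated node contributes little globally, which matches the intent of combining the LIME score with the hit count.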
REFERENCES
Cai, Z., Fan, Q., Feris, R., Vasconcelos, N., 2016. A unified
multi-scale deep convolutional neural network for fast
object detection. In Proceedings of the 14th European
Conference on Computer Vision.
Agarwal, S., Awan, A., Roth, D., 2004. Learning to detect
objects in images via sparse, part-based representation.
In IEEE Transactions on Pattern Analysis and Machine
Intelligence, 26(11).
Hannun, A., Case, C., Casper, J., Catanzaro, B., Diamos, G.,
Elsen, E., Prenger, R., Satheesh, S., Sengupta, S.,
Coates, A., Ng, A., 2014. Deep Speech: Scaling up end-
to-end speech recognition. In arXiv:1412.5567.
Hannun, A., Maas, A., Jurafsky, D., and Ng, A., 2014. First-
Pass Large Vocabulary Continuous Speech Recognition
using Bi-Directional Recurrent DNNs. In
arXiv:1408.2873.
Ribeiro, M., Singh, S., and Guestrin, C., 2016. Why Should
I Trust You?: Explaining the Predictions of Any
Classifier. In Proceedings of NAACL-HLT 2016.
Le, Q., Ranzato, M., Monga, R., Devin, M., Chen, K.,
Corrado, G., Dean, J., and Ng, A., 2012. Building High-
level Features Using Large Scale Unsupervised
Learning. In Proceedings of the 29th International
Conference on Machine Learning.
Mahendran, A. and Vedaldi, A., 2014. Understanding Deep
Image Representations by Inverting Them. In
arXiv:1412.0035.
Smilkov, D., Thorat, N., Kim, B., Viegas, F., and
Wattenberg, M., 2017. SmoothGrad: removing noise by
adding noise. In arXiv:1706.03825.
Springenberg, J., Dosovitskiy, A., Brox, T., and Riedmiller,
M., 2015. Striving for Simplicity: The All
Convolutional Net. In Proceedings of ICLR-2015.
Koh, P. and Liang, P., 2017. Understanding Black-box
Predictions via Influence Functions. In Proceedings of
the 34th International Conference on Machine
Learning.
Tolomei, G., Silvestri, F., Haines, A., and Lalmas, M., 2017.
Interpretable Predictions of Tree-based Ensembles via
Actionable Feature Tweaking. In KDD-2017.
Selvaraju, R., Cogswell, M., Das, A., Vedantam, R., Parikh,
D., Batra, D., 2017. Grad-CAM: Visual Explanations
from Deep Networks via Gradient-based Localization.
In arXiv:1610.02391v3.