5 CONCLUSIONS
In this paper, we addressed the limitations of hand-crafted features in the context of volume rendering. We introduced a pre-trained 3D CNN deep learning framework built around a new architecture inspired by ResNet50, whose weights are initialized from the 2D ResNet50. The novel CNN allows deeper information to be gathered from the data voxels.
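As an illustration of how 2D pre-trained weights can seed a 3D network, the sketch below inflates a single 2D convolutional kernel into a 3D one by replicating it along the new depth axis and rescaling. It is a minimal NumPy sketch of a generic inflation-style transfer; the function name and the stand-in kernel are hypothetical, and the exact mapping used in our framework may differ.

```python
import numpy as np

def inflate_2d_kernel(w2d, depth):
    """Turn a 2D conv kernel (out_c, in_c, kH, kW) into a 3D kernel
    (out_c, in_c, depth, kH, kW) by replicating it along the new depth
    axis and dividing by depth, so the response to an input that is
    constant along depth is preserved (inflation-style transfer)."""
    w3d = np.repeat(w2d[:, :, None, :, :], depth, axis=2)
    return w3d / depth

# Illustrative stand-in for a pre-trained ResNet50 7x7 stem kernel.
w2d = np.random.randn(64, 3, 7, 7).astype(np.float32)
w3d = inflate_2d_kernel(w2d, depth=7)
print(w3d.shape)  # (64, 3, 7, 7, 7)
```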
We incorporated the 3D CNN into an incremental image-centric method to improve the feature learning process and the classification efficiency. The performance of the image-centric approach was evaluated on three popular 3D datasets and compared against other methods. We demonstrated that the new framework achieves the highest accuracy on all three datasets, and the empirical results confirmed that the 3D CNN improves the performance of visualization-based classification. The framework lets users interact with an intuitive user interface and control the final rendering results. We also compared the training time required by the proposed system with that of other methods and showed that the proposed CNN-based framework outperforms them on all tested datasets. As a result, the proposed method achieves real-time rendering of visual results while users select regions of interest, thanks to the use of incremental classification.
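To make the incremental aspect concrete, the sketch below refines a classifier with only the newly selected voxels after each interaction instead of retraining on all data from scratch. It is a generic illustration using scikit-learn's SGDClassifier and hypothetical 128-dimensional voxel features; it is not the exact classifier or feature dimension used in our framework.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier()
classes = np.array([0, 1])  # background / region of interest

def update_with_selection(features, labels):
    """Refine the classifier with the newly selected voxels only."""
    clf.partial_fit(features, labels, classes=classes)

# Each user interaction triggers a cheap incremental update,
# so the rendering can be refreshed in (near) real time.
for _ in range(3):
    feats = np.random.randn(32, 128)          # 32 voxels, 128-D features
    labs = np.random.randint(0, 2, size=32)   # labels from the GUI selection
    update_with_selection(feats, labs)
```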
In future work, we aim to further improve the network performance and address the remaining limitations of the method. In the current approach, we work with the best slices along the X, Y, and Z directions so that the GUI stays simple and is not crowded with unnecessary images. The choice of slices is based only on entropy; to ensure that the most representative slices are picked, other criteria such as information gain or mutual information could also be used, as sketched below.
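For reference, the sketch below scores every slice along one axis by the Shannon entropy of its intensity histogram and keeps the highest-scoring slice per direction. The histogram binning and the toy volume are illustrative assumptions; the same loop could rank slices with information gain or mutual information instead.

```python
import numpy as np

def slice_entropy(slice_2d, bins=256):
    """Shannon entropy of a slice's intensity histogram."""
    hist, _ = np.histogram(slice_2d, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def best_slice(volume, axis=0):
    """Index of the highest-entropy slice along the given axis."""
    scores = [slice_entropy(np.take(volume, i, axis=axis))
              for i in range(volume.shape[axis])]
    return int(np.argmax(scores))

# Pick one representative slice per direction (X, Y, Z) of a toy volume.
vol = np.random.rand(64, 64, 64)
picks = {ax: best_slice(vol, axis=ax) for ax in (0, 1, 2)}
print(picks)
```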
Moreover, the final result depends strongly on the user's choices. Although the system is intended for domain experts who are familiar with grayscale images and know exactly what they are looking for, a poor selection still leads to unsatisfactory results. Rather than forcing the user to restart the system, the removal of previously selected voxels should be taken into consideration; a decremental classification step could therefore be useful in some cases to produce satisfactory results. Finally, we plan to investigate the effectiveness of the 3D CNN-based method for exploring volumetric data in real-time clinical applications.