this issue but clearly cannot be held responsible
for the overall performance of NeuralQAAD.
Additionally, applying QAAD REASSIGNMENT (see
Table 1), and thus following the smoothing bias of
neural networks, boosts the results further. In
summary, the conducted experiments strongly support
the capabilities and the necessity of the proposed
contributions.
NeuralQAAD also behaves very robustly when the
number of sampled points is reduced. Figure 5 shows
that NeuralQAAD, with and without our training
scheme, outperforms AtlasNetV2 regardless of the
number of sampled points. More importantly, when
the number of sampled points is reduced, the
performance loss is significantly smaller than for
AtlasNetV2. This robustness to sampling demonstrates
the efficiency and scalability of NeuralQAAD with
respect to the size of the processed point clouds.
It also prepares NeuralQAAD for a future in which
the resolution of point clouds will most likely
continue to increase in most application areas.
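To illustrate the kind of evaluation underlying Figure 5, the following minimal sketch subsamples a reference cloud and a reconstruction at decreasing point counts and reports a reconstruction error at each size. It is not the paper's evaluation protocol: the Chamfer distance serves only as a simple stand-in for the EM-kD metric, and the data and sample sizes are placeholders.

import numpy as np
from scipy.spatial import cKDTree

def chamfer(a, b):
    # Symmetric Chamfer distance; a simple stand-in metric,
    # not the EM-kD used in the paper.
    da, _ = cKDTree(b).query(a)
    db, _ = cKDTree(a).query(b)
    return da.mean() + db.mean()

rng = np.random.default_rng(0)
reference = rng.standard_normal((100_000, 3))            # placeholder ground-truth cloud
reconstruction = reference + 0.01 * rng.standard_normal(reference.shape)

# Track how the error behaves as fewer points are sampled per cloud.
for n in (100_000, 10_000, 1_000):
    idx_r = rng.choice(len(reference), n, replace=False)
    idx_p = rng.choice(len(reconstruction), n, replace=False)
    print(n, chamfer(reference[idx_r], reconstruction[idx_p]))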
5 CONCLUSION
We introduced NeuralQAAD, a new scalable and robust
point cloud autodecoder architecture, together with
a novel training scheme. Its scalability stems from
sharing low-level features across multiple foldable
patches. In addition, refraining from a classical
encoder makes NeuralQAAD robust to sampling. Our
novel training scheme is based on two newly developed
algorithms that efficiently determine an approximate
solution following the smoothing bias of neural
networks. We showed that NeuralQAAD outperforms the
previous state of the art applicable to
high-resolution point clouds. Our comparisons are
based on the EM-kD, a novel, scalable, and fast upper
bound for the EMD. In our experiments, the EM-kD has
proven to reasonably reflect visual differences
between point clouds.
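Since the EMD between two equally sized, uniformly weighted point clouds is the minimum average transport cost over all one-to-one assignments, the cost of any explicit bijection upper-bounds it. The sketch below illustrates this principle with a greedy kD-tree matching; it is not the EM-kD algorithm itself, and the function name and greedy strategy are illustrative assumptions only.

import numpy as np
from scipy.spatial import cKDTree

def greedy_matching_upper_bound(p, q):
    # Cost of a greedy one-to-one assignment between two equally sized
    # point sets. Any bijection upper-bounds the uniform-weight EMD, so
    # this yields a fast, if loose, upper bound. Illustrative stand-in,
    # not the EM-kD procedure from the paper.
    assert p.shape == q.shape
    n = p.shape[0]
    tree = cKDTree(q)
    used = np.zeros(n, dtype=bool)
    total = 0.0
    for point in p:
        k = 1
        while True:
            # Query the k nearest candidates and take the closest unused one.
            dists, idxs = tree.query(point, k=k)
            idxs = np.atleast_1d(idxs)
            free = idxs[~used[idxs]]
            if free.size > 0:
                j = free[0]
                used[j] = True
                total += np.linalg.norm(point - q[j])
                break
            k = min(2 * k, n)
    return total / n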
The next steps will be to make our approach
applicable to generative models and to bridge the gap
to correspondence problems. Although generative tasks
seem to be a straightforward extension at first
glance, preliminary results have shown an unstable
training process. Recent advances in continual
learning might be adapted to further diminish the
effects of sampling.
REFERENCES
Achenbach, J., Brylka, R., Gietzen, T., zum Hebel, K.,
Schömer, E., Schulze, R., Botsch, M., and Schwa-
necke, U. (2018). A multilinear model for bidirec-
tional craniofacial reconstruction. In VCBM 2018,
pages 67–76.
Achlioptas, P., Diamanti, O., Mitliagkas, I., and Guibas,
L. J. (2018). Learning representations and generative
models for 3d point clouds. In ICML 2018, pages 40–
49.
Beckman, M. and Koopmans, T. (1957). Assignment prob-
lems and the location of economic activities. Econo-
metrica, pages 53–76.
Bernardini, F., Mittleman, J., Rushmeier, H. E., Silva, C. T.,
and Taubin, G. (1999). The ball-pivoting algorithm
for surface reconstruction. IEEE Trans. Vis. Comput.
Graph., (4):349–359.
Bertsekas, D. P. (1988). The auction algorithm: A dis-
tributed relaxation method for the assignment prob-
lem. Ann. Oper. Res., (1–4):105–123.
Bogo, F., Romero, J., Pons-Moll, G., and Black, M. J.
(2017). Dynamic FAUST: registering human bodies
in motion. In CVPR 2017, pages 5573–5582.
Bruna, J., Zaremba, W., Szlam, A., and LeCun, Y. (2014).
Spectral networks and locally connected networks on
graphs. In ICLR 2014.
Burkard, R. E. (1984). Quadratic assignment problems.
European Journal of Operational Research, (3):283–
289.
Chen, S., Duan, C., Yang, Y., Li, D., Feng, C., and Tian, D.
(2020). Deep unsupervised learning of 3d point clouds
via graph topology inference and filtering. IEEE
Trans. Image Processing, 29:3183–3198.
Deprelle, T., Groueix, T., Fisher, M., Kim, V. G., Russell,
B. C., and Aubry, M. (2019). Learning elementary
structures for 3d shape generation and matching. In
NIPS 2019, pages 7433–7443.
Edmonds, J. and Karp, R. M. (1972). Theoretical improve-
ments in algorithmic efficiency for network flow prob-
lems. J. ACM, (2):248–264.
Fan, H., Su, H., and Guibas, L. J. (2017). A point set gen-
eration network for 3d object reconstruction from a
single image. In CVPR 2017, pages 2463–2471.
Feydy, J., Séjourné, T., Vialard, F.-X., Amari, S.-i., Trouvé,
A., and Peyré, G. (2019). Interpolating between op-
timal transport and MMD using Sinkhorn divergences.
In The 22nd International Conference on Artificial In-
telligence and Statistics, pages 2681–2690.
Gietzen, T., Brylka, R., Achenbach, J., zum Hebel, K.,
Schömer, E., Botsch, M., Schwanecke, U., and
Schulze, R. (2019). A method for automatic foren-
sic facial reconstruction based on dense statistics of
soft tissue thickness. PLOS ONE, pages 1–19.
Girdhar, R., Fouhey, D. F., Rodriguez, M., and Gupta, A.
(2016). Learning a predictable and generative vector
representation for objects. In ECCV 2016, pages 484–
499.