generated region of the object through two three-dimensional coordinate points.
5 CONCLUSIONS
In this work, we proposed 3D-BBGAN for 3D object generation. We demonstrated that our models can generate novel 3D objects with more detailed geometry. By adding conditional information to both the generator and the discriminator during training, we effectively constrain the probability space of the generated objects, which shortens training time and improves the quality of the generated 3D shapes. Moreover, by adjusting the information fed to the generator, we can control the size of the generated object. In future work, we will explore adding further conditional information to guide the category of the generated object.
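The conditioning scheme described above, in which the bounding-box information is supplied to both networks, can be sketched as follows. This is a minimal illustration under assumed conventions, not the authors' implementation: the helper names, the 200-dimensional latent code, the 32^3 voxel grid, and the six-value bounding box (two 3D corner points) are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def conditional_generator_input(z, bbox):
    """Concatenate the latent noise z with the bounding-box condition.

    z:    (latent_dim,) noise vector
    bbox: (6,) two 3D corner points (x1, y1, z1, x2, y2, z2) -- hypothetical encoding
    """
    return np.concatenate([z, bbox])

def conditional_discriminator_input(voxels, bbox):
    """Flatten the voxel grid and append the same bbox condition,
    so the discriminator judges the shape and its condition jointly."""
    return np.concatenate([voxels.ravel(), bbox])

latent_dim = 200                                   # assumed latent size
z = rng.standard_normal(latent_dim)
bbox = np.array([0.1, 0.1, 0.1, 0.9, 0.8, 0.9])    # hypothetical corner points

g_in = conditional_generator_input(z, bbox)        # shape (206,)
voxels = rng.random((32, 32, 32))                  # stand-in for generator output
d_in = conditional_discriminator_input(voxels, bbox)  # shape (32774,)
```

Because the same bounding box is seen by both networks, the discriminator can penalize samples whose extent disagrees with the condition, which is what restricts the probability space of generated shapes.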
Figure 5: Objects generated by the 3D-BBGAN system trained on the ModelNet10 chair class. We list the generated results for three bounding-box conditions. In each column, the image on the left shows the generated 3D object, and the image on the right shows the rendering with the corresponding bounding-box information.
ACKNOWLEDGEMENTS
This work is supported by Sichuan Science and
Technology Program (2015GZ0358, 2016GFW0077,
2016GFW0116, 2018GZ0889) and Chengdu
Science and Technology Program (2018-YF05-
01138-GX).