5.2 Light Placement
Our algorithm can also position lights appropriately
(Fig. 12). The interface is almost identical to that for
camera placement: the user paints the region where
light is needed, and the algorithm creates a spotlight
that illuminates the specified area. The position, direction,
and cut-off of the spotlight correspond to e, n, and θ
of Section 3, respectively. This is particularly useful in
CG production, where lights often need to be placed at
unconventional positions to obtain plausible shading.
Figure 12: Light placement by painting. (a) Painting the ROI. (b) Lighting result.
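To make the mapping concrete, the following C++ sketch shows how the output of the camera-placement step could be reused as a spotlight. It assumes Section 3 yields an eye position e, a unit view direction n, and an angle θ for the painted ROI; the Viewpoint and Spotlight structures and the function name are illustrative and not taken from the paper's implementation.

    // Minimal sketch (not the authors' code): turn the painted-ROI
    // viewpoint of Section 3 into a spotlight, as described in Section 5.2.
    #include <cstdio>

    struct Vec3 { float x, y, z; };

    // Hypothetical result of the painting-based viewpoint computation.
    struct Viewpoint {
        Vec3  e;      // computed eye position
        Vec3  n;      // unit view direction, pointing toward the ROI
        float theta;  // angle subtended by the ROI, in radians
    };

    // Spotlight description in the style of common rendering APIs.
    struct Spotlight {
        Vec3  position;
        Vec3  direction;
        float cutoffDeg;  // half-angle of the light cone, in degrees
    };

    // Map position <- e, direction <- n, cut-off <- theta.
    Spotlight spotlightFromViewpoint(const Viewpoint& v) {
        Spotlight s;
        s.position  = v.e;
        s.direction = v.n;
        s.cutoffDeg = v.theta * 180.0f / 3.14159265f;
        return s;
    }

    int main() {
        // Placeholder values standing in for the painting step's output.
        Viewpoint v = { {2.0f, 3.0f, 5.0f}, {-0.37f, -0.56f, -0.74f}, 0.35f };
        Spotlight s = spotlightFromViewpoint(v);
        std::printf("spotlight at (%.2f, %.2f, %.2f), cutoff %.1f deg\n",
                    s.position.x, s.position.y, s.position.z, s.cutoffDeg);
        return 0;
    }

The resulting spotlight can then be passed to the renderer in place of a manually positioned light.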
6 CONCLUSIONS
We have presented a painting interface for camera
placement. Our algorithm requires only that the user
specify a region of interest by painting; the camera is
then repositioned based on that ROI. We note that our
method is not a replacement for existing techniques;
rather, we intend it to be used together with them. For
instance, our method can provide an initial viewpoint,
which the user then refines with other controls
(e.g., zooming out).
We also have plans for future work. First, a good
up vector needs to be computed. Second, we would
like to extend this method to volumetric models, or to
select a viewpoint by painting directly on a
volume-rendered image.
ACKNOWLEDGEMENTS
The models are courtesy of Georgia Tech (Fig. 1),
AIM@shape (Fig. 6), and the Stanford Digital Michelan-
gelo Project (Fig. 7). This work was partially supported
by KAKENHI (No. 22246018, No. 22700091).