Figure 5: The assignment of a line to a single point. Three clusters are found, rather than only the obvious two.
5 CONCLUSIONS
The proposed Infinite Line Mixture Model extends the familiar Bayesian linear regression model to an infinite number of lines by using a Dirichlet Process as a prior. The model is a fully Bayesian method for detecting multiple lines. In contrast to ad-hoc methods such as the Hough transform or RANSAC, a fully Bayesian method yields optimal inference (Zellner, 1988) given the model and noise definition.
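The single-line building block of such a model is standard conjugate Bayesian linear regression. A minimal NumPy sketch, with purely illustrative hyperparameter values rather than the paper's, of the posterior over intercept and slope under Gaussian noise with known variance:

```python
import numpy as np

def posterior_line(x, y, noise_var=0.1, prior_var=10.0):
    """Posterior over (intercept, slope) for y = w0 + w1*x + noise.

    Zero-mean Gaussian prior N(0, prior_var * I) on the weights and a
    known noise variance; returns the posterior mean and covariance.
    Hyperparameter values are illustrative only.
    """
    Phi = np.column_stack([np.ones_like(x), x])      # design matrix [1, x]
    prec = Phi.T @ Phi / noise_var + np.eye(2) / prior_var
    cov = np.linalg.inv(prec)                        # posterior covariance
    mean = cov @ (Phi.T @ y) / noise_var             # posterior mean
    return mean, cov

# Noisy points on the line y = 1 + 2x
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 1.0 + 2.0 * x + rng.normal(0.0, np.sqrt(0.1), size=x.size)
mean, cov = posterior_line(x, y)
```

With enough points the posterior mean approaches the true coefficients; a Dirichlet Process prior then mixes infinitely many such line models.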
The results in Section 4 show high values for various clustering performance metrics, such as the Rand Index and the Adjusted Rand Index. The Bayesian model is solved through two types of algorithms. Algorithm 1 iterates over individual observations and suffers from slow mixing: the per-point updates make it hard to reassign a large number of points at the same time. Algorithm 2 iterates over entire clusters, which allows updates for groups of points and leads to much faster mixing. Note that even optimal inference results in occasional misclassifications. The dataset is generated by a random process; hence, occasionally two lines are generated with almost the same slope and intercept, and points on these lines are impossible to assign to the proper line.
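The per-observation step of an Algorithm 1-style sampler is a standard collapsed-Gibbs move for a Dirichlet Process mixture. A minimal sketch, with generic log-likelihoods and hypothetical names rather than the paper's exact collapsed form, of the reassignment probabilities for a single point:

```python
import numpy as np

def crp_assignment_probs(point_loglik, cluster_sizes, alpha, new_loglik):
    """Normalised CRP reassignment probabilities for a single point.

    point_loglik[k]: log-likelihood of the point under existing cluster k
    cluster_sizes[k]: points currently in cluster k (with this point removed)
    alpha: DP concentration parameter
    new_loglik: log marginal likelihood of the point under a fresh cluster
    """
    log_w = np.append(np.log(cluster_sizes) + point_loglik,
                      np.log(alpha) + new_loglik)
    log_w -= log_w.max()            # stabilise before exponentiating
    w = np.exp(log_w)
    return w / w.sum()

# Two existing clusters; the point fits cluster 0 much better.
probs = crp_assignment_probs(np.array([-1.0, -5.0]), np.array([10, 3]),
                             alpha=1.0, new_loglik=-4.0)
```

The new assignment is then drawn with `np.random.choice(len(probs), p=probs)`; because only one point moves per step, whole-group reassignments are slow, which is exactly the mixing problem that cluster-level moves address.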
The essential contribution of this paper is the introduction of a fully Bayesian method to infer lines. The postulated model can be extended in two ways towards full-fledged inference in computer vision as required in robotics. First, lines in 2D can be extended to planes in 3D. This is a trivial extension that changes nothing in the model except the dimension of the data points.
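Concretely, the 3D case only widens the design matrix by one column. A minimal sketch, assuming planes of the hypothetical form z = w0 + w1 x + w2 y with Gaussian noise (illustrative hyperparameters, not the paper's):

```python
import numpy as np

def plane_posterior(X, z, noise_var=0.1, prior_var=10.0):
    """Posterior mean/covariance for plane coefficients (w0, w1, w2).

    Identical in form to the 2D line case: only the design matrix gains
    a column, so the model itself is unchanged.
    """
    Phi = np.column_stack([np.ones(len(X)), X])      # [1, x, y]
    prec = Phi.T @ Phi / noise_var + np.eye(3) / prior_var
    cov = np.linalg.inv(prec)
    mean = cov @ (Phi.T @ z) / noise_var
    return mean, cov

# Noisy points on the plane z = 0.5 + 2x - y
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(200, 2))            # (x, y) coordinates
z = 0.5 + 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(0.0, 0.1, 200)
mean, cov = plane_posterior(X, z)
```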
Second, a prior needs to be incorporated to restrict lines of infinite length to line segments. To confine points to a uniform distribution over a line segment, a symmetric Pareto distribution can be used as a prior for the end points. This would subsequently allow for a hierarchical model in which these end points are in turn part of more complicated objects. Hence, the Infinite Line Mixture Model is an essential step towards the use of Bayesian methods (and thus properly formulated priors) for robotic computer vision.
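One way to read the suggested endpoint prior: the Pareto distribution is conjugate to the uniform, so for a segment centred at the origin the half-width admits a closed-form update. A minimal sketch under that assumption (the parameterisation is hypothetical, not taken from the paper):

```python
import numpy as np

def pareto_posterior(points, prior_scale=1.0, prior_shape=1.0):
    """Conjugate update for the half-width of a symmetric uniform segment.

    Illustrative model: points are Uniform(-w, w) on a centred segment,
    with a Pareto(prior_scale, prior_shape) prior on the half-width w.
    The Pareto prior is conjugate: the posterior is again Pareto, with
    scale max(prior_scale, max|x|) and shape prior_shape + n.
    """
    n = len(points)
    post_scale = max(prior_scale, float(np.max(np.abs(points))))
    post_shape = prior_shape + n
    return post_scale, post_shape

# Three observations inside the prior scale: shape grows, scale unchanged.
scale, shape = pareto_posterior(np.array([-0.5, 0.3, 0.9]))
```

A point beyond the current scale pushes the posterior scale outward, which is the mechanism that would let observed points stretch a segment in a hierarchical model of composite objects.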
REFERENCES
Antoniak, C. E. (1974). Mixtures of Dirichlet processes with applications to Bayesian nonparametric problems. The Annals of Statistics, pages 1152–1174.
Bolles, R. C. and Fischler, M. A. (1981). A RANSAC-based
approach to model fitting and its application to finding
cylinders in range data. In IJCAI, volume 1981, pages
637–643.
Bonci, A., Leo, T., and Longhi, S. (2005). A Bayesian approach to the Hough transform for line detection. Systems, Man and Cybernetics, Part A: Systems and Humans, IEEE Transactions on, 35(6):945–955.
Box, G. E. and Tiao, G. C. (2011). Bayesian inference in
statistical analysis, volume 40. John Wiley & Sons.
Buntine, W. L. (1994). Operations for learning with graph-
ical models. JAIR, 2:159–225.
Chen, H., Meer, P., and Tyler, D. E. (2001). Robust regres-
sion for data with multiple structures. In Computer
Vision and Pattern Recognition, 2001. CVPR 2001.
Proceedings of the 2001 IEEE Computer Society Con-
ference on, volume 1, pages I–1069. IEEE.
Dahyot, R. (2009). Statistical Hough transform. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 31(8):1502–1509.
de Finetti, B. (1992). Foresight: Its logical laws, its sub-
jective sources. In Breakthroughs in statistics, pages
134–174. Springer.
Escobar, M. D. and West, M. (1995). Bayesian density estimation and inference using mixtures. Journal of the American Statistical Association, 90(430):577–588.
Fienberg, S. E. et al. (2006). When did Bayesian inference
become “Bayesian”? Bayesian analysis, 1(1):1–40.
Gael, J. V., Teh, Y. W., and Ghahramani, Z. (2009). The infinite factorial hidden Markov model. In Advances in Neural Information Processing Systems, pages 1697–1704.
Gallo, O., Manduchi, R., and Rafii, A. (2011). CC-
RANSAC: Fitting planes in the presence of multiple
surfaces in range data. Pattern Recognition Letters,
32(3):403–410.
Geman, S. and Geman, D. (1984). Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. Pattern Analysis and Machine Intelligence, IEEE Transactions on, (6):721–741.
Ghahramani, Z. and Griffiths, T. L. (2005). Infinite latent feature models and the Indian buffet process. In Advances in Neural Information Processing Systems.
ICPRAM 2016 - International Conference on Pattern Recognition Applications and Methods