framework. Journal of Multiple-Valued Logic and Soft
Computing, 17:255–287.
Bache, K. and Lichman, M. (2013). UCI machine learning
repository. http://archive.ics.uci.edu/ml.
Bouckaert, R. (2004). Naive Bayes classifiers that perform well with continuous variables. In Proceedings of the 17th Australian Conference on Artificial Intelligence, pages 1089–1094.
Breiman, L., Friedman, J. H., Olshen, R. A., and Stone, C. J.
(1984). Classification and regression trees. Chapman
& Hall/CRC.
Cowell, R. G., Dawid, A. P., Lauritzen, S. L., and Spiegel-
halter, D. J. (1999). Probabilistic Networks and Ex-
pert Systems. Statistics for Engineering and Informa-
tion Science. Springer.
Dash, M. and Liu, H. (1997). Feature selection for classification. Intelligent Data Analysis, 1(3):131–156.
Domingos, P. and Pazzani, M. (1996). Beyond independence: Conditions for the optimality of the simple Bayesian classifier. In Proceedings of the International Conference on Machine Learning.
Domingos, P. and Pazzani, M. (1997). On the optimality of the simple Bayesian classifier under zero-one loss. Machine Learning, 29:103–130.
Dougherty, J., Kohavi, R., and Sahami, M. (1995). Supervised and unsupervised discretization of continuous features. In Prieditis, A. and Russell, S., editors, Machine Learning: Proceedings of the Twelfth International Conference, pages 194–202. Morgan Kaufmann, San Francisco.
Fayyad, U. M. and Irani, K. B. (1993). Multi-interval discretization of continuous-valued attributes for classification learning. In Proceedings of the 13th International Joint Conference on Artificial Intelligence (IJCAI-93), pages 1022–1027.
Fernández, A. and Salmerón, A. (2008). Extension of Bayesian network classifiers to regression problems. In Geffner, H., Prada, R., Alexandre, I. M., and David, N., editors, Advances in Artificial Intelligence - IBERAMIA 2008, volume 5290 of Lecture Notes in Artificial Intelligence, pages 83–92. Springer.
Friedman, N., Geiger, D., and Goldszmidt, M. (1997).
Bayesian network classifiers. Machine Learning,
29:131–163.
Hall, M., Frank, E., Holmes, G., Pfahringer, B., Reutemann, P., and Witten, I. H. (2009). The WEKA data mining software: An update. SIGKDD Explor. Newsl., 11(1):10–18.
Hollander, M. and Wolfe, D. A. (1999). Nonparametric Statistical Methods. Wiley, 2nd edition.
Jensen, F. V. and Nielsen, T. D. (2007). Bayesian Networks
and Decision Graphs. Springer.
John, G. H. and Langley, P. (1995). Estimating continuous distributions in Bayesian classifiers. In Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence, pages 338–345.
Kozlov, A. V. and Koller, D. (1997). Nonuniform dynamic discretization in hybrid networks. In Geiger, D. and Shenoy, P., editors, Proceedings of the 13th Conference on Uncertainty in Artificial Intelligence, pages 302–313. Morgan Kaufmann.
Langseth, H., Nielsen, T. D., Pérez-Bernabé, I., and Salmerón, A. (2013). Learning mixtures of truncated basis functions from data. International Journal of Approximate Reasoning.
Lauritzen, S. and Wermuth, N. (1989). Graphical mod-
els for associations between variables, some of which
are qualitative and some quantitative. The Annals of
Statistics, 17:31–57.
López-Cruz, P. L., Bielza, C., and Larrañaga, P. (2013). Learning mixtures of polynomials of multidimensional probability densities from data using B-spline interpolation. International Journal of Approximate Reasoning, In Press.
Lucas, P. J. (2002). Restricted Bayesian network structure learning. In Gámez, J. and Salmerón, A., editors, Proceedings of the 1st European Workshop on Probabilistic Graphical Models (PGM'02), pages 117–126.
Minsky, M. (1963). Steps toward artificial intelligence. Computers and Thought, pages 406–450.
Moral, S., Rumí, R., and Salmerón, A. (2001). Mixtures of Truncated Exponentials in Hybrid Bayesian Networks. In Benferhat, S. and Besnard, P., editors, Symbolic and Quantitative Approaches to Reasoning with Uncertainty, volume 2143 of Lecture Notes in Artificial Intelligence, pages 156–167. Springer.
Morales, M., Rodríguez, C., and Salmerón, A. (2007). Selective naïve Bayes for regression using mixtures of truncated exponentials. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 15:697–716.
Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, San Mateo.
Pérez, A., Larrañaga, P., and Inza, I. (2009). Bayesian classifiers based on kernel density estimation: Flexible classifiers. International Journal of Approximate Reasoning, 50(2):341–362.
R Core Team (2013). R: A Language and Environment for
Statistical Computing. R Foundation for Statistical
Computing, Vienna, Austria. ISBN 3-900051-07-0.
Romero, V., Rumí, R., and Salmerón, A. (2006). Learning hybrid Bayesian networks using mixtures of truncated exponentials. International Journal of Approximate Reasoning, 42:54–68.
Rumí, R., Salmerón, A., and Moral, S. (2006). Estimating mixtures of truncated exponentials in hybrid Bayesian networks. Test, 15:397–421.
Rumí, R., Salmerón, A., and Shenoy, P. P. (2012). Tractable inference in hybrid Bayesian networks with deterministic conditionals using re-approximations. In Proceedings of the Sixth European Workshop on Probabilistic Graphical Models (PGM'2012), pages 275–282.
Sahami, M. (1996). Learning limited dependence Bayesian classifiers. In KDD'96: Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, pages 335–338.
Schuster, E. F. (1985). Incorporating support constraints into nonparametric estimators of densities. Communications in Statistics – Theory and Methods.