Let us explain the results of Table 5. M_LCL contains a BCF formula that has two options. The accuracy of the model is high, which means that LCL is well adapted for modeling experts' reasoning when providing antibiotic recommendations. However, the model does not correctly predict some antibiotics. The reason is that it is not possible to obtain a lexicographic ordering over the two options of the BCF formula. In fact, the learned model should verify the following conditions:
i) only antibiotics with rank 1 satisfy both options of the formula, ii) only antibiotics with rank 2 satisfy the first option and falsify the second option, and iii) only antibiotics with rank 3 satisfy the second option and falsify the first option. No model verifies these conditions in the antibiotic database for the considered clinical situation. The model given here does not verify condition iii). This is certainly due to some inconsistencies in the database (Tsopra et al., 2018).
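The three conditions above can be checked mechanically. A minimal sketch, assuming antibiotics are represented as sets of properties and an option is satisfied when all of its required properties are present; both the encoding and the toy data are assumptions for illustration, not the paper's actual representation:

```python
def check_lexicographic_conditions(ranked_db, opt1, opt2, satisfies):
    """Return True iff ranks 1-3 exhibit the satisfaction patterns
    i) (True, True), ii) (True, False), iii) (False, True)."""
    expected = {1: (True, True), 2: (True, False), 3: (False, True)}
    return all(
        (satisfies(ab, opt1), satisfies(ab, opt2)) == expected[rank]
        for ab, rank in ranked_db if rank in expected
    )

# toy data: an antibiotic satisfies an option when it has all its properties
satisfies = lambda antibiotic, option: option <= antibiotic
db_ok  = [({"a", "b"}, 1), ({"a"}, 2), ({"b"}, 3)]
db_bad = [({"a", "b"}, 1), ({"a"}, 2), ({"a", "b"}, 3)]  # rank 3 violates iii)

print(check_lexicographic_conditions(db_ok, {"a"}, {"b"}, satisfies))   # True
print(check_lexicographic_conditions(db_bad, {"a"}, {"b"}, satisfies))  # False
```

In the second database, the rank-3 antibiotic satisfies both options instead of falsifying the first one, which is exactly the kind of violation of condition iii) described above.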
6 CONCLUSION
We proposed a method for learning preferences in the context of a logic-based preference formalism, LCL. The method is based on an adaptation of association rule mining using the Apriori algorithm. The learned LCL model is qualitative and easily interpretable by the user. To the best of our knowledge, this is the first proposal for learning preferences in the context of LCL.
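The Apriori-based core of the method can be illustrated with a generic frequent-itemset sketch. This shows the standard level-wise Apriori search only, not the paper's specific adaptation to preference databases; the transaction data and support threshold are illustrative:

```python
def apriori_frequent(transactions, min_support):
    """Level-wise Apriori: return {itemset: support count} for all
    itemsets occurring in at least `min_support` transactions."""
    support = lambda s: sum(s <= t for t in transactions)
    current = {s for s in {frozenset([i]) for t in transactions for i in t}
               if support(s) >= min_support}
    frequent, k = {}, 1
    while current:
        frequent.update({s: support(s) for s in current})
        # naive join of frequent k-itemsets into (k+1)-candidates
        current = {c for c in {a | b for a in current for b in current}
                   if len(c) == k + 1 and support(c) >= min_support}
        k += 1
    return frequent

# toy transactions: sets of clinical-feature / antibiotic symbols
tx = [{"amox", "fever"}, {"amox", "fever", "child"}, {"fever"}]
freq = apriori_frequent(tx, min_support=2)
print(freq[frozenset({"amox", "fever"})])  # 2
```

The level-wise pruning (a (k+1)-itemset can only be frequent if its k-subsets are) is what keeps the search tractable and is the property the learning method inherits from Apriori.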
The choice of training data plays an important role in the learned LCL model, which can differ depending on the training data considered. For example, if we consider a preference database with D={1, 2, 3, ∞}, then the learned model will contain an LCL formula with two options. However, if we consider a preference database with D={1, 2, 3, 4, ∞}, then the learned model will contain an LCL formula with three options. The formula learned from D={3, 4, ∞} will certainly be different from the one learned from D={1, 2, 3, 4, ∞}.
The problem of learning LCL preferences is considered as an instance ranking problem, where the set of satisfaction degrees corresponds to the set of labels and the set of outcomes corresponds to the set of interpretations. In future work, we will perform evaluations to compare our method with other preference learning methods, particularly those addressing the instance ranking problem.
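The instance ranking view can be made concrete by turning a preference database into labelled training pairs, where each outcome (an interpretation) carries its satisfaction degree as an ordinal label. A small sketch with illustrative data; the encoding of degree ∞ as None is an assumption:

```python
# preference database: interpretation -> satisfaction degree
# (degree ∞, i.e. a rejected outcome, is encoded here as None)
preference_db = {
    frozenset({"a", "b"}): 1,
    frozenset({"a"}): 2,
    frozenset({"b"}): 3,
    frozenset(): None,
}

# instance ranking data: instances are outcomes, labels are degrees;
# rejected outcomes are excluded from the ranking task
training_pairs = sorted(
    ((outcome, degree) for outcome, degree in preference_db.items()
     if degree is not None),
    key=lambda pair: pair[1],
)
print([deg for _, deg in training_pairs])  # [1, 2, 3]
```

In this form the data can be handed to standard instance ranking or ordinal regression learners for the comparison mentioned above.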
REFERENCES
Agrawal, R., Imieliński, T., and Swami, A. (1993). Mining association rules between sets of items in large
databases. In Proceedings of the 1993 ACM SIGMOD
International Conference on Management of Data,
SIGMOD ’93, pages 207–216, New York, NY, USA.
ACM.
Benferhat, S. and Sedki, K. (2008). Two alternatives for
handling preferences in qualitative choice logic. Fuzzy
Sets and Systems, 159(15):1889–1912.
Bernreiter, M., Maly, J., and Woltran, S. (2022). Choice
logics and their computational properties. Artif. In-
tell., 311:103755.
Boudjelida, A. and Benferhat, S. (2016). Conjunctive
choice logic. In International Symposium on Artifi-
cial Intelligence and Mathematics, ISAIM 2016, Fort
Lauderdale, Florida, USA, January 4-6, 2016.
Brewka, G., Benferhat, S., and Berre, D. L. (2004). Quali-
tative choice logic. Artif. Intell., 157(1-2):203–237.
Cohen, W. W., Schapire, R. E., and Singer, Y. (2011).
Learning to order things. CoRR, abs/1105.5464.
Hüllermeier, E., Fürnkranz, J., Cheng, W., and Brinker, K.
(2008). Label ranking by learning pairwise prefer-
ences. Artif. Intell., 172(16-17):1897–1916.
Joachims, T., Granka, L., Pan, B., Hembrooke, H., and Gay,
G. (2005). Accurately interpreting clickthrough data
as implicit feedback. In SIGIR’05: Proceedings of the
28th annual international ACM SIGIR conference on
Research and development in information retrieval,
pages 154–161, New York, NY, USA. ACM Press.
Fürnkranz, J. and Hüllermeier, E. (2010). Preference Learning. Springer-Verlag New York, Inc., New York, NY,
USA, 1st edition.
Sedki, K., Lamy, J., and Tsopra, R. (2022). Qualitative choice logic for modeling experts' recommendations
of antibiotics. In Proceedings of the Thirty-Fifth Inter-
national Florida Artificial Intelligence Research Soci-
ety Conference, FLAIRS 2022.
Tsopra, R., Lamy, J., and Sedki, K. (2018). Using pref-
erence learning for detecting inconsistencies in clin-
ical practice guidelines: Methods and application to
antibiotherapy. Artificial Intelligence in Medicine,
89:24–33.
Vembu, S. and Gärtner, T. (2010). Label ranking algorithms: A survey. In Preference Learning, pages 45–
64.
Waegeman, W. and De Baets, B. (2010). A survey on ROC-based ordinal regression learning. In Fürnkranz, J. and Hüllermeier, E., editors, Preference Learning, pages
127–154. Springer.
Learning Preferences in Lexicographic Choice Logic