ability to do well on a problem is limited by the quality
of our information about what we are trying to learn.
In other words: garbage in, garbage out.
7 CONCLUSION
We extend the Algorithmic Search Framework from
discrete target sets to a continuous measure of success,
addressing one of the framework's core limitations
and increasing its versatility. We generalize theorems
previously proven using the discrete ASF to the con-
tinuous and quantized cases, and derive novel results.
Specifically, we prove an upper bound on performance
under an arbitrary level of quantization, demonstrating
that increasing the granularity of our success metric
reduces our maximum theoretical performance. We
bound the absolute difference in performance between
the binary and continuous cases. We provide an
example of how the ASF can be applied to a regression
problem and show how different processes for generat-
ing data or measuring success change key terms, like
I(S; F), thus varying our bound on performance.
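To make this last point concrete, here is a minimal, purely illustrative sketch (not from the paper; the variable choices and noise levels are assumptions) that estimates I(S; F) empirically under two hypothetical data-generating processes, showing how the generating process alone changes this key term:

import numpy as np

rng = np.random.default_rng(0)

def plug_in_mi(x, y, bins=8):
    # Plug-in (histogram) estimate of I(X; Y) in bits from paired samples.
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

n = 20_000
# Hypothetical process A: the information resource F is strongly
# predictive of success S (low label noise).
f_a = rng.normal(size=n)
s_a = (f_a + 0.3 * rng.normal(size=n) > 0).astype(float)
# Hypothetical process B: F is nearly uninformative (high label noise).
f_b = rng.normal(size=n)
s_b = (f_b + 3.0 * rng.normal(size=n) > 0).astype(float)

print(f"I(S; F), low-noise process:  {plug_in_mi(s_a, f_a):.3f} bits")
print(f"I(S; F), high-noise process: {plug_in_mi(s_b, f_b):.3f} bits")

Under these assumptions the low-noise process yields a substantially larger mutual information estimate, and hence a correspondingly looser bound on achievable performance.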
These results improve the ability of the ASF to
model machine learning problems that naturally have
continuous measures of success, unlocking the poten-
tial to further the body of existing ASF research. There
remain many opportunities for extension. One possible
application is an information-theoretic analysis of
AutoML algorithms, using the framework to better
understand the performance of this domain of machine
learning algorithms. Strengthening this theoretical framework
will give researchers the tools to analyze learning algo-
rithms with a naturally continuous measure of success.
APPENDIX
Theorem 5.1. For fixed $k \in \mathbb{R}_{\geq 0}$, fixed information resource $f$, decomposable, non-negative satisfaction metric $\varphi$, and minimum acceptable per-query satisfaction $q_{\min}$, we define
$$\tau_k = \left\{ s \in \mathbb{R}^{|\Omega|} \;\middle|\; \sum_{i=1}^{|\Omega|} s_i = k \right\}, \quad \text{and} \quad \tau_{q_{\min}} = \left\{ s \in \tau_k \;\middle|\; \varphi(s, F) \geq q_{\min} \right\}.$$
Then
$$\frac{\mu(\tau_{q_{\min}})}{\mu(\tau_k)} \leq \frac{p}{q_{\min}},$$
where $p$ is the per-query expected satisfaction under uniform random sampling and $\mu$ is the Lebesgue measure.
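The bound has the flavor of a Markov inequality. Below is a minimal proof sketch, resting on two assumptions not stated in this excerpt: (i) decomposability means $\varphi(s, F) = \sum_{i=1}^{|\Omega|} s_i \bar{P}_i(F)$, where $\bar{P}(F)$ is the algorithm's averaged query distribution, mirroring the discrete ASF's expected per-query probability of success; and (ii) satisfaction vectors are non-negative, so that $\tau_k$ is a scaled simplex of finite measure and a uniform draw from it is well defined.

Let $s$ be drawn uniformly (with respect to $\mu$) from $\tau_k$. By exchangeability of the coordinates, $\mathbb{E}[s_i] = k/|\Omega|$ for every $i$, so
$$\mathbb{E}[\varphi(s, F)] = \sum_{i=1}^{|\Omega|} \mathbb{E}[s_i]\,\bar{P}_i(F) = \frac{k}{|\Omega|} \sum_{i=1}^{|\Omega|} \bar{P}_i(F) = \frac{k}{|\Omega|} = p,$$
which is exactly the per-query expected satisfaction of uniform random sampling. Since $\varphi$ is non-negative, Markov's inequality gives
$$\frac{\mu(\tau_{q_{\min}})}{\mu(\tau_k)} = \Pr\left(\varphi(s, F) \geq q_{\min}\right) \leq \frac{\mathbb{E}[\varphi(s, F)]}{q_{\min}} = \frac{p}{q_{\min}}.$$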