when it reaches 50, the LLM cannot balance the
constraints and goals effectively. For problems of this
size, exact methods such as linear programming, or
metaheuristics such as genetic algorithms, are therefore
recommended for the specific calculations and optimization.
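As a concrete illustration of the linear-programming route, the sketch below solves the LP relaxation of a toy patient-to-OR assignment with SciPy. The durations, capacities, and costs are illustrative assumptions, not the model used in this paper.

import numpy as np
from scipy.optimize import linprog

# Toy instance (all values assumed for illustration).
durations = np.array([120, 90, 150, 60])   # surgery length per patient (min)
capacity = np.array([240, 240])            # regular time per OR (min)
cost = np.array([[1.0, 2.0],               # assumed cost of patient p in OR r
                 [2.0, 1.0],
                 [1.5, 1.5],
                 [1.0, 3.0]])
P, R = cost.shape
c = cost.ravel()                           # variables x[p, r], flattened row-wise

# Each patient is assigned to exactly one OR: sum_r x[p, r] = 1.
A_eq = np.zeros((P, P * R))
for p in range(P):
    A_eq[p, p * R:(p + 1) * R] = 1.0
b_eq = np.ones(P)

# OR capacity: sum_p durations[p] * x[p, r] <= capacity[r].
A_ub = np.zeros((R, P * R))
for r in range(R):
    for p in range(P):
        A_ub[r, p * R + r] = durations[p]

res = linprog(c, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * (P * R), method="highs")
print(res.x.reshape(P, R).round(2))  # fractional plan; branch or round for integrality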
Figure 8 shows how the four optimization objectives
of the three methods change as the number of elective
patients increases.
Figure 8: The four objectives of the three methods (LLM, NSGA-II, and LLM-NSGA) versus the number of elective patients.
The LLM ceases to provide allocation plans once
the patient count exceeds 40. When the number of
elective patients is less than 150, LLM-NSGA and
NSGA-II achieve the same values for f2, f3, and f4,
but NSGA-II incurs a higher f1. This indicates that both
methods can schedule surgeries for all patients on
their expected dates, but they differ in how patients
are ordered within the ORs. LLM-NSGA arranges
patients more efficiently, resulting in lower overtime
costs. When the number of patients exceeds 150,
LLM-NSGA finds better allocation plans and rejects
fewer patients. Although this may mean that more
patients do not receive surgery on their expected dates,
it also reduces the peak number of ICU beds required.
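For reference, the sketch below shows how these four objectives might be evaluated for one candidate schedule. The exact definitions of f1-f4 are given earlier in the paper; the mapping assumed here (f1 overtime cost, f2 rejected patients, f3 deviation from expected dates, f4 peak ICU demand) is inferred from the discussion of Figure 8 and should be read as an assumption.

from collections import defaultdict

def evaluate(schedule, patients, regular_minutes=480, overtime_rate=2.0):
    """schedule maps patient id -> (day, room), or None if the patient is rejected."""
    load = defaultdict(int)   # minutes booked per (day, room)
    icu = defaultdict(int)    # ICU beds in use per day
    f2 = f3 = 0
    for pid, slot in schedule.items():
        p = patients[pid]
        if slot is None:
            f2 += 1           # rejected patient
            continue
        day, room = slot
        load[(day, room)] += p["duration"]
        f3 += abs(day - p["expected_day"])
        if p["needs_icu"]:
            for d in range(day, day + p["icu_days"]):
                icu[d] += 1
    f1 = sum(max(0, m - regular_minutes) * overtime_rate for m in load.values())
    f4 = max(icu.values(), default=0)   # peak ICU beds required
    return f1, f2, f3, f4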
5 CONCLUSIONS
In this work, we explore how an LLM can directly
provide solutions for small-scale surgery scheduling
problems and can also serve as an evolutionary
optimizer, where the LLM generates new solutions
based on the current population, providing high-
quality solutions for large-scale cases. Nonetheless,
the LLM still has limitations in handling relatively
large problems. By adjusting the prompts given to the
LLM, it may become possible for the LLM to solve
large-scale problems step by step.
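To make the evolutionary-optimizer role concrete, the loop below sketches how an LLM can act as the variation operator, proposing offspring from a textual description of the current population. The query_llm hook is a hypothetical placeholder for any chat-completion call, and the survival step shown is a simple stand-in: LLM-NSGA would apply NSGA-II's non-dominated sorting and crowding-distance selection instead.

import json

def llm_propose(population, n_offspring, query_llm):
    """Ask the LLM for new schedules given (solution, objectives) pairs."""
    prompt = (
        "Here are candidate surgery schedules and their objective values "
        "(f1, f2, f3, f4), all to be minimized:\n"
        + json.dumps(population)
        + f"\nPropose {n_offspring} new, different schedules as a JSON list."
    )
    reply = query_llm(prompt)       # hypothetical hook for any LLM API
    try:
        return json.loads(reply)
    except json.JSONDecodeError:    # LLM output is not guaranteed to parse
        return []

def evolve(init_pop, evaluate, query_llm, generations=50, pop_size=20):
    pop = [(s, evaluate(s)) for s in init_pop]
    for _ in range(generations):
        offspring = llm_propose(pop, pop_size, query_llm)
        pop += [(s, evaluate(s)) for s in offspring]
        # Stand-in survival step; LLM-NSGA would use non-dominated
        # sorting and crowding distance here.
        pop = sorted(pop, key=lambda x: x[1])[:pop_size]
    return pop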
ACKNOWLEDGEMENTS
This work is partially supported by HarmonicAI
(Human-guided collaborative multi-objective design
of explainable, fair and privacy-preserving AI for
digital health), funded by the European Commission
(Call: HORIZON-MSCA-2022-SE-01-01, Project
number: 101131117) and UKRI (grant number
EP/Y03743X/1).
The authors sincerely acknowledge the financial
support (n°23 015699 01) provided by the
Auvergne-Rhône-Alpes region.