
LLM. Future experiments can extend the evaluation to closed-source LLMs such as GPT to further validate the approach. In conclusion, integrating swarm intelligence algorithms with LLMs has proven to be an effective method for prompt optimization, offering substantial performance gains at reduced resource consumption. This work lays the foundation for further advances in efficient LLM optimization techniques.
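To make the optimization loop concrete, the following is a minimal, self-contained Python sketch of a particle-swarm-style prompt search. The helpers llm_mutate and score are toy placeholders introduced here purely for illustration, not SwarmPrompt's implementation: in practice the mutation step would query an LLM to rewrite a prompt toward the swarm's best member, and the fitness would be task accuracy on a held-out development set.

import random

def llm_mutate(prompt: str, guide: str) -> str:
    # Toy stand-in for an LLM rewrite: mix words from the current prompt
    # and the guiding (best) prompt. A real run would query an LLM here.
    mixed = [random.choice(pair) for pair in zip(prompt.split(), guide.split())]
    return " ".join(mixed)

def score(prompt: str) -> float:
    # Toy fitness favouring shorter prompts; a real run would measure
    # task accuracy on a development set.
    return -float(len(prompt))

def swarm_optimize(seed_prompts, iterations=10):
    swarm = list(seed_prompts)
    best = max(swarm, key=score)  # global best prompt so far
    for _ in range(iterations):
        updated = []
        for prompt in swarm:
            # Each "particle" moves toward the global best: blend the
            # current prompt with the best prompt found so far.
            candidate = llm_mutate(prompt, best)
            # Greedy update: keep the candidate only if it improves.
            updated.append(candidate if score(candidate) > score(prompt) else prompt)
        swarm = updated
        best = max(swarm, key=score)
    return best

if __name__ == "__main__":
    seeds = ["Summarize the following dialogue concisely.",
             "Write a brief and faithful summary of the dialogue below."]
    print(swarm_optimize(seeds))

The greedy per-particle update guarantees the global best score is monotonically non-decreasing across iterations, mirroring the elitist selection common to swarm and evolutionary prompt optimizers.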
ACKNOWLEDGEMENTS
We would like to extend our thanks to Guo Qingyan, who co-authored (Guo et al., 2023) with Microsoft and provided essential insights and guidance on the feasibility of extending this project further.
REFERENCES
Alva-Manchego, F., Martin, L., Bordes, A., Scarton, C., Sagot, B., and Specia, L. (2020). ASSET: A dataset for tuning and evaluation of sentence simplification models with multiple rewriting transformations. arXiv preprint arXiv:2005.00481.
Bharambe, U., Ramesh, R., Mahato, M., and Chaudhari,
S. (2024). Synergies between natural language pro-
cessing and swarm intelligence optimization: A com-
prehensive overview. In Advanced Machine Learn-
ing with Evolutionary and Metaheuristic Techniques,
pages 121–151.
Bharti, V., Biswas, B., and Shukla, K. (2022). Swarm intel-
ligence for deep learning: Concepts, challenges and
recent trends. In Advances in Swarm Intelligence:
Variations and Adaptations for Optimization Prob-
lems, pages 37–57. Springer International Publishing.
Gliwa, B., Mochol, I., Biesek, M., and Wawer, A. (2019). SAMSum corpus: A human-annotated dialogue dataset for abstractive summarization. arXiv preprint arXiv:1911.12237.
Guo, Q., Wang, R., Guo, J., Li, B., Song, K., Tan, X., Liu, G., Bian, J., and Yang, Y. (2023). Connecting large language models with evolutionary algorithms yields powerful prompt optimizers. arXiv preprint arXiv:2309.08532.
Janga Reddy, M. and Nagesh Kumar, D. (2020). Evolution-
ary algorithms, swarm intelligence methods, and their
applications in water resources engineering: a state-
of-the-art review. H2Open Journal, 3(1):135–188.
Lester, B., Al-Rfou, R., and Constant, N. (2021). The power
of scale for parameter-efficient prompt tuning. arXiv
preprint arXiv:2104.08691.
Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., and Neubig,
G. (2023). Pre-train, prompt, and predict: A system-
atic survey of prompting methods in natural language
processing. ACM Computing Surveys, 55(9):1–35.
Ma, R., Wang, X., Zhou, X., Li, J., Du, N., Gui, T.,
Zhang, Q., and Huang, X. (2024). Are large language
models good prompt optimizers? arXiv preprint
arXiv:2402.02101.
Pang, B. and Lee, L. (2005). Seeing stars: Exploiting class
relationships for sentiment categorization with respect
to rating scales. arXiv preprint cs/0506075.
Rajwar, K., Deep, K., and Das, S. (2023). An exhaustive
review of the metaheuristic algorithms for search and
optimization: Taxonomy, applications, and open chal-
lenges. Artificial Intelligence Review, 56(11):13187–
13257.
Selvaraj, S. and Choi, E. (2021). Swarm intelligence algo-
rithms in text document clustering with various bench-
marks. Sensors, 21(9):3196.
Wang, X., Li, C., Wang, Z., Bai, F., Luo, H., Zhang, J., Jojic, N., Xing, E., and Hu, Z. (2023). PromptAgent: Strategic planning with language models enables expert-level prompt optimization. arXiv preprint arXiv:2310.16427.
Yang, C., Wang, X., Lu, Y., Liu, H., Le, Q., Zhou, D., and
Chen, X. (2023). Large language models as optimiz-
ers. arXiv preprint arXiv:2309.03409.
Zhou, Y., Muresanu, A., Han, Z., Paster, K., Pitis, S.,
Chan, H., and Ba, J. (2022). Large language mod-
els are human-level prompt engineers. arXiv preprint
arXiv:2211.01910.