
Langchain4j, exploring Python-based ecosystems for broader tool support, and optimizing model performance through compression techniques. In conclusion, this work demonstrated the feasibility and value of integrating LLMs into the swine certification process, offering a practical solution for a highly specialized domain. Future enhancements to the assistant are expected to further refine its capabilities, empowering veterinarians and technicians with accurate, efficient, and user-friendly tools for managing certification procedures. This study underscores the broader potential of LLMs to revolutionize domain-specific applications, paving the way for innovation in public health and regulatory compliance.
ACKNOWLEDGEMENTS
This research is supported by FUNDESA (project "Combining Process Mapping and Improvement with BPM and the Application of Data Analytics in the Context of Animal Health Defense and Inspection of Animal Origin Products in the State of RS" - UFSM/060496) and FAPERGS, grant no. 24/2551-0001401-2. The research by Vinícius Maran is partially supported by CNPq, grant no. 306356/2020-1 - DT2.
ICEIS 2025 - 27th International Conference on Enterprise Information Systems