APPENDIX
To better illustrate the methodology’s objective criterion, Figure 5 presents a flowchart of the steps followed to compute the cosine similarity between the design assistant’s output produced with an off-the-shelf LLM and the golden standard, and between the output produced with the LLM+OPM integration and the golden standard. The flowchart also shows the metrics calculated to compare these cosine similarity values and to benchmark the improvement attained by integrating the space mission’s OPM system model.
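As an illustration of the comparison described above, the sketch below embeds the golden standard and the two assistant outputs, computes the two cosine similarity values, and derives simple improvement metrics. The embedding model, function names, and metric names are assumptions chosen for readability, not the paper’s actual pipeline.

```python
# Minimal sketch of the benchmarking step (illustrative assumptions only).
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def benchmark(golden: str, llm_output: str, llm_opm_output: str) -> dict:
    """Compare each assistant output against the golden standard."""
    emb_gold, emb_llm, emb_opm = model.encode([golden, llm_output, llm_opm_output])
    sim_llm = cosine_similarity(emb_llm, emb_gold)       # off-the-shelf LLM
    sim_opm = cosine_similarity(emb_opm, emb_gold)       # LLM + OPM integration
    return {
        "similarity_llm": sim_llm,
        "similarity_llm_opm": sim_opm,
        "absolute_improvement": sim_opm - sim_llm,
        "relative_improvement": (sim_opm - sim_llm) / sim_llm,
    }
```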