
Händler, T. (2023). A taxonomy for autonomous LLM-powered multi-agent architectures. In KMIS, pages 85–98.
Hatalis, K., Christou, D., Myers, J., Jones, S., Lambert, K., Amos-Binks, A., Dannenhauer, Z., and Dannenhauer, D. (2023). Memory matters: The need to improve long-term memory in LLM-agents. In Proceedings of the AAAI Symposium Series, volume 2, pages 277–280.
Hu, S., Lu, C., and Clune, J. (2024). Automated design of
agentic systems.
Ilevbare, I. M., Probert, D., and Phaal, R. (2013). A review of TRIZ, and its benefits and challenges in practice. Technovation, 33(2-3):30–37.
Jiang, S. and Luo, J. (2024). AutoTRIZ: Artificial ideation with TRIZ and large language models.
Kone, M. T., Shimazu, A., and Nakajima, T. (2000). The
state of the art in agent communication languages.
Knowledge and Information Systems, 2:259–284.
Kostka, A. and Chudziak, J. A. (2024). Synergizing logical reasoning, long-term memory, and collaborative intelligence in multi-agent LLM systems. In Pacific Asia Conference on Language, Information and Computation (PACLIC 38), Tokyo, Japan. In press.
LangGraph (2023). LangGraph. https://langchain-ai.github.io/langgraph/. Accessed: 10-01-2025.
Loh, H. T., He, C., and Shen, L. (2006). Automatic classification of patent documents for TRIZ users. World Patent Information, 28(1):6–13.
Luing, N. S. S., Toh, G. G., and Chau, G. H. (2024). Application of TRIZ for gantry crane improvement. In Journal of Physics: Conference Series, volume 2772, page 012004. IOP Publishing.
Mahto, D. (2013). Concepts, tools and techniques of problem solving through TRIZ: A review. International Journal of Innovative Research in Science, Engineering and Technology, 2(7).
Minaee, S., Mikolov, T., Nikzad, N., Chenaghlu, M.,
Socher, R., Amatriain, X., and Gao, J. (2024). Large
language models: A survey.
Moehrle, M. G. (2005). What is TRIZ? From conceptual basics to a framework for research. Creativity and Innovation Management, 14(1):3–13.
Nassar, N. and AbouRizk, S. (2016). Introduction to tech-
niques for resolving project performance contradic-
tions. Journal of Construction Engineering and Man-
agement, 142(8):04016027.
Oppenlaender, J. (2022). The creativity of text-to-image generation. In Proceedings of the 25th International Academic Mindtrek Conference, pages 192–202.
Orloff, M. A. (2006). TRIZ. Springer.
Orrù, G., Piarulli, A., Conversano, C., and Gemignani, A. (2023). Human-like problem-solving abilities in large language models using ChatGPT. Frontiers in Artificial Intelligence, 6:1199350.
Ouyang, S., Zhang, J. M., Harman, M., and Wang, M. (2024). An empirical study of the non-determinism of ChatGPT in code generation. ACM Transactions on Software Engineering and Methodology.
Park, J. S., O’Brien, J. C., Cai, C. J., Morris, M. R., Liang,
P., and Bernstein, M. S. (2023). Generative agents:
Interactive simulacra of human behavior.
Sehested, C. and Sonnenberg, H. (2010). Lean innovation:
a fast path from knowledge to value. Springer Science
& Business Media.
Serugendo, G. D. M., Gleizes, M.-P., and Karageorgos, A. (2005). Self-organization in multi-agent systems. The Knowledge Engineering Review, 20(2):165–189.
Sumers, T. R., Yao, S., Narasimhan, K., and Griffiths, T. L.
(2024). Cognitive architectures for language agents.
Sun, R. (2024). Can a cognitive architecture fundamentally enhance LLMs? Or vice versa? arXiv preprint arXiv:2401.10444.
Van Harmelen, F., Lifschitz, V., and Porter, B. (2008).
Handbook of knowledge representation. Elsevier.
Wawer, M., Chudziak, J. A., and Niewiadomska-
Szynkiewicz, E. (2024). Large language models and
the elliott wave principle: A multi-agent deep learn-
ing approach to big data analysis in financial markets.
Applied Sciences, 14(24).
Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B.,
Xia, F., Chi, E., Le, Q., and Zhou, D. (2023). Chain-
of-thought prompting elicits reasoning in large lan-
guage models.
Wu, Q., Bansal, G., Zhang, J., Wu, Y., Li, B., Zhu, E., Jiang, L., Zhang, X., Zhang, S., Liu, J., Awadallah, A. H., White, R. W., Burger, D., and Wang, C. (2023). AutoGen: Enabling next-gen LLM applications via multi-agent conversation.
Wu, S., Oltramari, A., Francis, J., Giles, C. L., and Ritter, F. E. (2024). Cognitive LLMs: Toward human-like artificial intelligence by integrating cognitive architectures and large language models for manufacturing decision-making. Neurosymbolic Artificial Intelligence.
Yao, S., Zhao, J., Yu, D., Du, N., Shafran, I., Narasimhan, K., and Cao, Y. (2023). ReAct: Synergizing reasoning and acting in language models.
Yuan, R., Lin, H., Wang, Y., Tian, Z., Wu, S., Shen, T., Zhang, G., Wu, Y., Liu, C., Zhou, Z., Ma, Z., Xue, L., Wang, Z., Liu, Q., Zheng, T., Li, Y., Ma, Y., Liang, Y., Chi, X., Liu, R., Wang, Z., Li, P., Wu, J., Lin, C., Liu, Q., Jiang, T., Huang, W., Chen, W., Benetos, E., Fu, J., Xia, G., Dannenberg, R., Xue, W., Kang, S., and Guo, Y. (2024). ChatMusician: Understanding and generating music intrinsically with LLM.
Zamfirescu-Pereira, J., Wong, R. Y., Hartmann, B., and Yang, Q. (2023). Why Johnny can't prompt: How non-AI experts try (and fail) to design LLM prompts. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, pages 1–21.
Zhao, Z., Lee, W. S., and Hsu, D. (2023). Large language
models as commonsense knowledge for large-scale
task planning. In Oh, A., Naumann, T., Globerson,
A., Saenko, K., Hardt, M., and Levine, S., editors,
Advances in Neural Information Processing Systems,
volume 36, pages 31967–31987. Curran Associates,
Inc.
TRIZ Agents: A Multi-Agent LLM Approach for TRIZ-Based Innovation