Game-Theoretic LLM: Agent Workflow for Negotiation Games

Abstract

This paper investigates the rationality of large language models (LLMs) in strategic decision-making contexts, specifically within the framework of game theory. We evaluate several state-of-the-art LLMs across a spectrum of complete-information and incomplete-information games. Our findings reveal that LLMs frequently deviate from rational strategies, particularly as the complexity of the game increases with larger payoff matrices or deeper sequential game trees. To address these limitations, we design multiple game-theoretic workflows that guide the reasoning and decision-making processes of LLMs. These workflows aim to enhance the models' ability to compute Nash equilibria and make rational choices, even under conditions of uncertainty and incomplete information. Experimental results demonstrate that adopting these workflows significantly improves the rationality and robustness of LLMs in game-theoretic tasks. Specifically, with the workflows, LLMs exhibit marked improvements in identifying optimal strategies, achieving near-optimal allocations in negotiation scenarios, and reducing their susceptibility to exploitation. Furthermore, we explore the meta-strategic question of whether it is rational for agents to adopt such workflows, recognizing that the decision to use or forgo a workflow constitutes a game-theoretic issue in itself. Our research contributes to a deeper understanding of LLMs' decision-making capabilities in strategic contexts and provides insights into enhancing their rationality through structured workflows. The findings have implications for the development of more robust and strategically sound AI agents capable of navigating complex interactive environments.
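
As a concrete illustration of the equilibrium computation the workflows target, the sketch below brute-forces the pure-strategy Nash equilibria of a small two-player normal-form game. This is a generic textbook check, not the paper's actual workflow; the function name, payoff matrices, and the Prisoner's Dilemma example are illustrative assumptions.

import numpy as np

def pure_nash_equilibria(A, B):
    """Return all pure-strategy Nash equilibria of the bimatrix game (A, B).

    A[i, j] is the row player's payoff and B[i, j] is the column player's
    payoff when row plays action i and column plays action j.
    """
    equilibria = []
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            # (i, j) is an equilibrium iff neither player can gain by
            # unilaterally deviating to a different action.
            row_best = A[i, j] >= A[:, j].max()
            col_best = B[i, j] >= B[i, :].max()
            if row_best and col_best:
                equilibria.append((i, j))
    return equilibria

# Hypothetical example: Prisoner's Dilemma (0 = cooperate, 1 = defect).
A = np.array([[3, 0],
              [5, 1]])  # row player's payoffs
B = np.array([[3, 5],
              [0, 1]])  # column player's payoffs
print(pure_nash_equilibria(A, B))  # -> [(1, 1)], i.e., mutual defection

The check visits every action profile once, so its cost grows with the size of the payoff matrix, consistent with the abstract's observation that larger matrices strain unaided LLM reasoning; mixed-strategy equilibria, which larger games may require, call for heavier machinery such as support enumeration or the Lemke-Howson algorithm.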

Department students and members are invited to meet with Dr. Hua after the presentation.


Wenyue Hua is a computer science postdoctoral researcher at the University of California, Santa Barbara, working primarily on large language models (LLMs) and LLM-based agents. She received her PhD from Rutgers University-New Brunswick under the supervision of Professor Yongfeng Zhang. Her research focuses on the safety and efficiency of LLM agents, multi-agent interaction, and LLM reasoning. She has served as a research scientist intern at Amazon Web Services and Microsoft Research. She has published multiple papers at top natural language processing and machine learning conferences and journals, including the International Conference on Learning Representations (ICLR), the Conference on Neural Information Processing Systems (NeurIPS), the Association for Computational Linguistics (ACL), the Conference on Empirical Methods in Natural Language Processing (EMNLP), the Conference of the European Chapter of the ACL (EACL), and Transactions of the Association for Computational Linguistics (TACL).