Instruction-Driven Game Engines on Large Language Models
Contract ID: Eg5qYDS38TmsVJmGYD6WV6l3Q3wARwQR7Q8sJF5swn0
File Type: PDF
Entry Count: 93
Embedding Model: jina_embeddings_v2_base_en
Index Type: hnsw

The Instruction-Driven Game Engine (IDGE) project aims to democratize game development by enabling a large language model (LLM) to follow free-form game rules and autonomously generate game-play processes. The IDGE allows users to create games by issuing simple natural language instructions, which significantly lowers the barrier for game development. We approach the learning process for IDGEs as a Next State Prediction task, wherein the model autoregressively predicts in-game states given player actions. It is a challenging task because the computation of in-game states must be precise; otherwise, slight errors could disrupt the game-play. To address this, we train the IDGE in a curriculum manner that progressively increases the model's exposure to complex scenarios.

Our initial progress lies in developing an IDGE for Poker, a universally cherished card game. The engine we've designed not only supports a wide range of poker variants but also allows for high customization of rules through natural language inputs. Furthermore, it favors rapid prototyping of new games from minimal samples, proposing an innovative paradigm in game development.
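The abstract frames IDGE learning as Next State Prediction: rules, current state, and a player action go in, and the next state comes out. As a rough illustration only (the tags, field names, and state encoding below are hypothetical assumptions, not the paper's actual data schema), one training sample might be serialized like this:

```python
# Hypothetical sketch of serializing one Next State Prediction sample.
# All tag and field names here are illustrative, not the paper's format.
def format_sample(rules: str, state: str, action: str, next_state: str):
    """Return (prompt, target) for one next-state-prediction example."""
    prompt = (
        f"<rules>{rules}</rules>\n"
        f"<state>{state}</state>\n"
        f"<action>{action}</action>\n"
    )
    target = f"<next_state>{next_state}</next_state>"
    return prompt, target

prompt, target = format_sample(
    rules="Texas Hold'em, 2 players, blinds 1/2",
    state="pot=3; street=preflop; to_act=p1",
    action="p1 raises to 6",
    next_state="pot=8; street=preflop; to_act=p2",
)
```

Under this framing, curriculum training would order such samples from simple scenarios (e.g. standard rules, few actions) toward complex ones (custom rules, long action histories), matching the paper's description of progressively increasing exposure.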

Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, Shujie Liu, Long Zhou, Nan Duan, Alexey Svyatkovskiy, Shengyu Fu, Michele Tufano, Shao Kun Deng, Colin B. Clement, Dawn Drain, Neel Sundaresan, Jian Yin, Daxin Jiang, and Ming Zhou. GraphCodeBERT: Pre-training code representations with data flow. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. URL forum?id=jLoC4ez43PZ.

Akshat Gupta. Are ChatGPT and GPT-4 good poker players? A pre-flop analysis. CoRR, abs/2308.12466, 2023. doi: 10.48550/ARXIV.2308.12466. URL 2308.12466.
id: 70ef241e30f07cef278bd1fa63d3fe77 - page: 11
Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. LoRA: Low-rank adaptation of large language models. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022.

Neel Jain, Ping-yeh Chiang, Yuxin Wen, John Kirchenbauer, Hong-Min Chu, Gowthami Somepalli, Brian R. Bartoldson, Bhavya Kailkhura, Avi Schwarzschild, Aniruddha Saha, Micah Goldblum, Jonas Geiping, and Tom Goldstein. NEFTune: Noisy embeddings improve instruction finetuning. CoRR, abs/2310.05914, 2023. doi: 10.48550/ARXIV.2310.05914. URL arXiv.2310.05914.
id: d0a5bccf35265c2a8e5dcaa30c3243fb - page: 11
Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de Las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lelio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothee Lacroix, and William El Sayed. Mistral 7B. CoRR, abs/2310.06825, 2023. doi: 10.48550/ARXIV.2310.06825.

Juho Kim. PokerKit: A comprehensive Python library for fine-grained multi-variant poker game simulations. CoRR, abs/2308.07327, 2023. doi: 10.48550/ARXIV.2308.07327. URL org/10.48550/arXiv.2308.07327.
id: 96a8a175cfeec6f3f406e5bc684116d0 - page: 12
Heinrich Küttler, Nantas Nardelli, Alexander H. Miller, Roberta Raileanu, Marco Selvatici, Edward Grefenstette, and Tim Rocktäschel. The NetHack learning environment. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. URL paper/2020/hash/569ff987c643b4bedf504efda8f786c2-Abstract.html.

Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model serving with PagedAttention, 2023.
id: ae2741db4b8df510b6eea7c9ccd03f5b - page: 12
How to Retrieve?
# Search

curl -X POST "https://search.dria.co/hnsw/search" \
-H "x-api-key: <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-d '{"rerank": true, "top_n": 10, "contract_id": "Eg5qYDS38TmsVJmGYD6WV6l3Q3wARwQR7Q8sJF5swn0", "query": "What is alexanDRIA library?"}'
        
# Query

curl -X POST "https://search.dria.co/hnsw/query" \
-H "x-api-key: <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-d '{"vector": [0.123, 0.5236], "top_n": 10, "contract_id": "Eg5qYDS38TmsVJmGYD6WV6l3Q3wARwQR7Q8sJF5swn0", "level": 2}'
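The two curl calls above can also be issued from Python with only the standard library. This is a minimal sketch that mirrors the endpoints, headers, and JSON bodies shown in the curl examples; error handling and retries are omitted, and a valid API key is assumed:

```python
import json
import urllib.request

BASE_URL = "https://search.dria.co/hnsw"

def search_payload(query: str, contract_id: str, top_n: int = 10) -> dict:
    """Body for the /search endpoint: a natural-language query, reranked."""
    return {"rerank": True, "top_n": top_n,
            "contract_id": contract_id, "query": query}

def query_payload(vector: list, contract_id: str,
                  top_n: int = 10, level: int = 2) -> dict:
    """Body for the /query endpoint: a raw embedding vector plus HNSW level."""
    return {"vector": vector, "top_n": top_n,
            "contract_id": contract_id, "level": level}

def post(path: str, payload: dict, api_key: str) -> dict:
    """POST the payload as JSON with the x-api-key header; decode the reply."""
    req = urllib.request.Request(
        f"{BASE_URL}/{path}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"x-api-key": api_key, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (requires a valid API key):
# results = post("search", search_payload(
#     "What is alexanDRIA library?",
#     "Eg5qYDS38TmsVJmGYD6WV6l3Q3wARwQR7Q8sJF5swn0"), "<YOUR_API_KEY>")
```

Note that for `/query` the vector must match the dimensionality of the index's embedding model (`jina_embeddings_v2_base_en` here); the two-element vector in the curl example is purely illustrative.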