AIhub.org
 

An approach for automatically determining the possible actions in computer game states


by Sasha Volokh
17 November 2023




Thoroughly testing video game software by hand is very difficult, so it is desirable to have AI agents that can automatically explore different game functionalities. A key requirement for such agents is a model of the player actions, which the agent uses both to determine the set of possible actions in different game states and to perform on the game the action selected by its policy. Typical game engines in use today do not offer such an action model, so existing work either requires human effort to manually define the model or imprecisely guesses the possible actions. In our work, we demonstrate that program analysis is an effective solution to this problem by developing a state-of-the-art analysis of the user input handling logic present in games, which automatically models game actions as a discrete action space.
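To make this requirement concrete, the interface an exploration agent needs might look like the following sketch. The names (`ActionModel`, `valid_actions`, `perform`) are our own illustrations, not an API from the paper or from any game engine:

```python
from abc import ABC, abstractmethod

class ActionModel(ABC):
    """Hypothetical interface for an action model (illustrative names,
    not an API from the paper or from any game engine).

    Today's engines do not provide this, so the model must either be
    written by hand or, as in our work, derived by program analysis.
    """

    @abstractmethod
    def valid_actions(self, game_state) -> list:
        """Return the discrete actions available in `game_state`."""

    @abstractmethod
    def perform(self, action):
        """Simulate the device inputs that trigger `action` on the
        running game and return the resulting game state."""
```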

Our key insight is that the possible actions of a game correspond to the different execution paths that can be taken through the user input handling logic in the game's code. Our methodology first uses techniques such as dependency analysis and program slicing to identify the parts of the code responsible for user input handling. Next, we designed a specialized symbolic execution that evaluates the input handling code with symbolic representations of the user input and game state, giving us the set of conditions under which the different game actions occur. This set of conditions defines a discrete action space for the game, where each action corresponds to a distinct execution path. Finally, we proposed efficient analyses for determining the set of valid actions as the agent plays the game, as well as the set of relevant device inputs to simulate on the game in order to perform a chosen action.
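The following toy example (our own illustration, not code from the paper or from a real game) shows the idea: a simplified input handler with three execution paths, and the kind of per-path conditions a symbolic execution of it would produce.

```python
# Toy illustration of the key idea: the branches of a game's input
# handler induce a discrete action space.

from dataclasses import dataclass

@dataclass
class State:
    on_ground: bool

def handle_input(pressed: set, state: State) -> str:
    """A simplified input handler with three execution paths."""
    if "SPACE" in pressed:
        if state.on_ground:
            return "jump"
        return "double_jump"
    if "LEFT" in pressed:
        return "move_left"
    return "idle"

# What a symbolic execution of handle_input would yield: for each path,
# a condition on the game state plus the device inputs that drive it.
ACTION_SPACE = [
    ("jump",        lambda s: s.on_ground,     {"SPACE"}),
    ("double_jump", lambda s: not s.on_ground, {"SPACE"}),
    ("move_left",   lambda s: True,            {"LEFT"}),
]

def valid_actions(state: State) -> list:
    """Evaluate each path's state condition in the concrete state
    (the role of the runtime valid-action analysis)."""
    return [name for name, cond, _ in ACTION_SPACE if cond(state)]

print(valid_actions(State(on_ground=True)))   # ['jump', 'move_left']
print(valid_actions(State(on_ground=False)))  # ['double_jump', 'move_left']
```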

We implemented a prototype of our action analysis for the Unity game engine, then used it to automate the specification of actions for two popular exploration strategies: simple random exploration, where agents select among the valid actions uniformly at random, and curiosity-driven reinforcement learning, where agents learn over time to prioritize actions more likely to lead to new states. Our key finding was that, for the majority of games in our data set, agents using the actions determined by our analysis matched or exceeded the exploration performance of the ideal case of a manual annotation of the game actions, and achieved better performance on average. This demonstrates a key advantage of the automated analysis: it exhaustively considers all possible execution paths, and therefore often identifies more combinations of valid inputs than a human annotation.
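As a sketch of the first strategy, here is what simple random exploration over an automatically derived action space might look like, assuming a hypothetical `env` object that exposes the `ActionModel`-style interface sketched above plus a `reset` method (none of these names come from the paper):

```python
import random

def random_exploration(env, steps: int, seed: int = 0):
    """At each step, pick uniformly among the currently valid actions
    and simulate the corresponding device inputs on the game."""
    rng = random.Random(seed)
    state = env.reset()
    for _ in range(steps):
        actions = env.valid_actions(state)
        if not actions:
            state = env.reset()  # no valid action: restart the episode
            continue
        state = env.perform(rng.choice(actions))
```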

With the increasing importance of automated testing and analysis techniques for computer games, we believe our work provides a crucial component for the deployment of next-generation game testing tools based on intelligent agents. However, even with our automated approach to identifying valid actions and their relevant device inputs, the exploration of large game state spaces remains difficult. The development of novel exploration strategies, refinements, and heuristics to use with our analysis is an important next step towards better game testing agents.

Read the work in full

Automatically Defining Game Action Spaces for Exploration Using Program Analysis, Sasha Volokh and William G. J. Halfond, Proceedings of the Nineteenth AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE 2023).


This work won the best student paper award at AIIDE 2023.





Sasha Volokh is a PhD Candidate in Computer Science at the University of Southern California.



