The term General Game Playing (GGP) refers to a subfield of Artificial Intelligence that aims at developing agents able to effectively play many games from a particular class (finite, deterministic). It is also the name of the annual competition organized by the Stanford Logic Group at Stanford University, which provides a framework for testing and evaluating GGP agents. In this paper we present our GGP player, which won 4 out of 7 games in the 2012 preliminary round and advanced to the final phase. Our system (named MINI-Player) relies on a pool of playing strategies and autonomously picks the ones that seem best suited to a given game. The chosen strategies are combined with one another and incorporated into the Upper Confidence Bounds applied to Trees (UCT) algorithm. The effectiveness of our player is evaluated on a set of games from the 2012 GGP Competition as well as a few other single-player games. The paper discusses the efficacy of the proposed playing strategies and evaluates the mechanism for switching between them. The proposed idea of dynamically assigning search strategies during play is both novel and promising.
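To make the abstract's reference to UCT concrete: the core of the algorithm is the UCB1 child-selection rule, which balances exploitation (average reward) against exploration (visit counts). The following is a minimal illustrative sketch of that rule only, not the MINI-Player implementation; the helper name `uct_select` and the `(total_reward, visit_count)` representation of child nodes are assumptions for this example.

```python
import math

def uct_select(children, exploration=1.41):
    """Return the index of the child maximizing the UCB1 value
    Q/n + C * sqrt(ln(N) / n), where Q is the child's total reward,
    n its visit count, N the parent's total visits, and C the
    exploration constant.

    `children` is a list of (total_reward, visit_count) pairs.
    Unvisited children are expanded first, as is standard in UCT.
    """
    total_visits = sum(n for _, n in children)
    best_index, best_value = None, float("-inf")
    for i, (reward, visits) in enumerate(children):
        if visits == 0:
            return i  # always try an unvisited child first
        value = reward / visits + exploration * math.sqrt(
            math.log(total_visits) / visits
        )
        if value > best_value:
            best_index, best_value = i, value
    return best_index
```

In a full UCT player, this rule is applied at every node on the way down the game tree; MINI-Player's contribution, as the abstract states, is plugging dynamically selected playing strategies into the simulation phase that follows this descent.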
In this article, we describe the SENSEI system, which helps players improve their skills in popular eSports games. We discuss the main goals of the system and explain the associated challenges. We also present its conceptual architecture, which aims at enabling full automation of the data acquisition and analytic processes. The system is expected to provide in-depth analytics of players’ performance and give practical advice regarding possible improvements. To support this, its architecture allows players to provide feedback and manually label important concepts. Finally, we discuss our first case study – an advisory system for popular collectible card video games.