We summarize the sixth data mining competition organized at the Knowledge Pit platform in association with the Federated Conference on Computer Science and Information Systems series, titled Clash Royale Challenge: How to Select Training Decks for Win-rate Prediction. We outline the scope of this challenge and briefly present its results. We also discuss the problem of acquiring knowledge about new notions from video games through an active learning cycle. We explain how this task is related to the problem considered in the challenge and share the results of experiments that we conducted to demonstrate the usefulness of the active learning approach in practice.
In this paper, we present an efficient construction of a Game Description Language (GDL) interpreter. GDL is a first-order logic language used in the General Game Playing (GGP) framework. Syntactically, the language is a subset of Datalog and Prolog and, like those two, is based on facts and rules. Our aim was to achieve higher execution speed than any of the currently available tools, including other Prolog interpreters applied to GDL. Speed is a crucial factor in the state-space search methods used by most GGP agents, since the faster the GDL reasoner, the more game states can be evaluated in the allotted time. The cornerstone of our interpreter is the resolution tree, which reflects the dependencies between rules. Our paradigm was to shift any heavy workload to the preprocessing step in order to optimize real-time usage. The proposed enhancements effectively maintain a balance between the time needed to build the internal data representation and the time required for data analysis during actual play. Therefore, we refrain from using tree-based dictionary approaches such as TRIE to store the results of logical queries, in favor of a memory-friendly linear representation and dynamic filters that reduce space complexity. Experimental results show that our interpreter outperforms the two most popular Prolog interpreters used by GGP programs, Yet Another Prolog (YAP) and ECLiPSe, in 22 and 26 of the 28 tested games, respectively. We give some insights into the possible reasons for the edge of our approach over Prolog.
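For readers unfamiliar with fact-and-rule evaluation, the following toy sketch shows how new facts can be derived from Datalog-style rules by forward chaining until a fixpoint. The predicates and rules are invented for illustration; this is not the paper's interpreter, which uses a resolution tree and preprocessed data structures.

```python
def is_var(term):
    """Uppercase-initial terms are variables, as in Prolog/GDL conventions."""
    return isinstance(term, str) and term[:1].isupper()

def match(pattern, fact, binding):
    """Extend binding so that pattern matches fact, or return None."""
    if len(pattern) != len(fact):
        return None
    binding = dict(binding)
    for p, f in zip(pattern, fact):
        if is_var(p):
            if binding.setdefault(p, f) != f:
                return None
        elif p != f:
            return None
    return binding

def substitute(atom, binding):
    """Replace variables in an atom with their bound values."""
    return tuple(binding.get(t, t) for t in atom)

def forward_chain(facts, rules):
    """Apply rules to derive new facts until a fixpoint is reached."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            # collect all bindings satisfying every atom in the rule body
            bindings = [{}]
            for atom in body:
                bindings = [b2 for b in bindings for f in facts
                            for b2 in [match(atom, f, b)] if b2 is not None]
            for b in bindings:
                new_fact = substitute(head, b)
                if new_fact not in facts:
                    facts.add(new_fact)
                    changed = True
    return facts

# toy game state: cell(Row, Col, Mark) facts and one derived predicate
facts = {("cell", "1", "1", "x"), ("cell", "2", "2", "x"), ("cell", "3", "3", "o")}
rules = [(("occupied", "R", "C"), [("cell", "R", "C", "M")])]
derived = forward_chain(facts, rules)
```

A real GDL reasoner must also handle negation, recursion, and the `next`/`legal`/`goal` keywords of the GGP framework, which this sketch omits.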
This paper summarizes the AAIA’17 Data Mining Challenge: Helping AI to Play Hearthstone, which was held between March 23 and May 15, 2017, at the Knowledge Pit platform. We briefly describe the scope and background of this competition in the context of a more general project related to the development of an AI engine for video games, called Grail. We also discuss the outcomes of this challenge and demonstrate how predictive models for the assessment of a player’s winning chances can be utilized in the construction of an intelligent agent for playing Hearthstone. Finally, we show a few selected machine learning approaches for modeling state and action values in Hearthstone. We provide an evaluation of a few promising solutions that may be used to create more advanced types of agents, especially in conjunction with Monte Carlo Tree Search algorithms.
We describe a recruitment support system that aims to help recruiters find candidates who are likely to be interested in a given job offer. We present the architecture of the system and explain the roles of its main modules. We also give examples of analytical processes supported by the system. In the paper, we focus on a data processing chain that utilizes domain knowledge for the extraction of meaningful features representing pairs of candidates and offers. Moreover, we discuss the usage of a word2vec model for finding concise vector representations of the offers, based on their short textual descriptions. Finally, we present the results of an empirical evaluation of our system.
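A common way to turn a short textual description into a concise fixed-length vector with word2vec is to average the embeddings of its words. The sketch below illustrates that idea with tiny hand-made stand-in vectors; the paper's system would use embeddings trained on real text, and the words and dimensions here are invented.

```python
# hand-made 2-dimensional stand-ins for trained word2vec embeddings
TOY_EMBEDDINGS = {
    "java": [1.0, 0.0], "developer": [0.8, 0.2],
    "sales": [0.0, 1.0], "manager": [0.1, 0.9],
}

def offer_vector(description, embeddings, dim=2):
    """Average the embeddings of the known words in a short description."""
    words = [w for w in description.lower().split() if w in embeddings]
    if not words:
        return [0.0] * dim
    return [sum(embeddings[w][i] for w in words) / len(words)
            for i in range(dim)]

v = offer_vector("Java developer", TOY_EMBEDDINGS)
```

Offers represented this way can be compared by cosine similarity, so that, e.g., a "Java developer" offer lands closer to other engineering offers than to sales positions.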
We investigate the impact of supervised prediction models on the strength and efficiency of artificial agents that use the Monte Carlo Tree Search (MCTS) algorithm to play the popular video game Hearthstone: Heroes of Warcraft. We give an overview of our custom implementation of MCTS, which is well suited for games with partially hidden information and random effects. We also describe experiments that we designed to quantify the performance of our Hearthstone agent’s decision making.
We show that even simple neural networks can be trained and successfully used for the evaluation of game states. Moreover, we demonstrate that by providing guidance to the game state search heuristic, it is possible to substantially improve the win rate and, at the same time, reduce the required computations.
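For context, the tree-policy step that such guidance plugs into can be sketched with the standard UCT (Upper Confidence Bound applied to trees) rule. The statistics below are toy numbers, not the agent's actual code; in a guided setup, a learned evaluator like the networks above would typically score leaf states in place of full random rollouts.

```python
import math

def uct_score(child_value, child_visits, parent_visits, c=1.4):
    """UCT: mean value plus an exploration bonus for rarely tried moves."""
    if child_visits == 0:
        return float("inf")  # always try unvisited children first
    exploit = child_value / child_visits
    explore = c * math.sqrt(math.log(parent_visits) / child_visits)
    return exploit + explore

def select_child(children, parent_visits):
    """Pick the child maximizing the UCT score."""
    return max(children,
               key=lambda ch: uct_score(ch["value"], ch["visits"], parent_visits))

children = [
    {"name": "a", "value": 3.0, "visits": 5},  # well-explored, decent mean
    {"name": "b", "value": 1.0, "visits": 1},  # barely explored
]
best = select_child(children, parent_visits=6)
```

With these numbers the exploration term dominates, so the rarely visited child is selected, which is exactly the exploration/exploitation trade-off MCTS relies on.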
In this work, we present a method for using Deep Q-Networks (DQNs) in multi-objective environments. Deep Q-Networks provide remarkable performance in single-objective problems, learning from high-level visual state representations. However, in many scenarios (e.g., in robotics and games), the agent needs to pursue multiple objectives simultaneously. We propose an architecture in which separate DQNs are used to control the agent’s behaviour with respect to particular objectives. In this architecture, we introduce decision values to improve the scalarization of multiple DQNs into a single action. Our architecture enables the decomposition of the agent’s behaviour into controllable and replaceable sub-behaviours learned by distinct modules. Moreover, it allows the priorities of particular objectives to be changed after learning, while preserving the overall performance of the agent. To evaluate our solution, we used a game-like simulator in which an agent, provided with high-level visual input, pursues multiple objectives in a 2D world.
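The scalarization step can be sketched as a weighted combination of the per-objective Q-values, with one weight per objective playing the role of a decision value. The Q-values and objective names below are made up for illustration; in the architecture described above, each row would come from a separate trained DQN.

```python
def select_action(q_values_per_objective, decision_values):
    """Weight each objective's Q-values by its decision value and pick the
    action that maximizes the combined score."""
    n_actions = len(q_values_per_objective[0])
    combined = [
        sum(w * q[a] for w, q in zip(decision_values, q_values_per_objective))
        for a in range(n_actions)
    ]
    return max(range(n_actions), key=combined.__getitem__)

# two hypothetical objectives, three actions
q_collect = [0.9, 0.1, 0.2]   # "collect items" objective
q_avoid   = [0.0, 0.8, 0.3]   # "avoid enemies" objective

# safety-first weighting prefers the action favoured by the avoidance module
cautious = select_action([q_collect, q_avoid], decision_values=[0.5, 1.0])
# re-weighting after learning shifts the agent towards collecting
greedy = select_action([q_collect, q_avoid], decision_values=[1.0, 0.1])
```

Because the networks themselves are untouched, swapping the weights changes the agent's priorities post-learning, mirroring the replaceable sub-behaviour idea in the abstract.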
In this paper, we address the problem of safety monitoring in underground coal mines. In particular, we investigate and compare practical methods for the assessment of seismic hazards using analytical models constructed based on sensory data and domain knowledge. For our case study, we use a rich data set collected over a period of more than five years from several active Polish coal mines. We focus on comparing the prediction quality of expert methods, which serve as a standard in the coal mining industry, with that of state-of-the-art machine learning methods for mining high-dimensional time series data. We describe an international data mining challenge organized to facilitate our study. We also demonstrate a technique which we employed to construct an ensemble of regression models able to outperform other approaches used by participants of the challenge. Finally, we explain how we utilized the data obtained during the competition for the purpose of research on the cold start problem in deploying decision support systems at new mining sites.
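In its simplest form, an ensemble of regression models combines base predictions by a weighted average. The sketch below uses trivial lambdas as stand-ins for the challenge models and arbitrary weights; the technique actually employed in the paper may differ in how the base models and weights are chosen.

```python
def ensemble_predict(models, weights, x):
    """Weighted average of base-model predictions (weights assumed to sum to 1)."""
    return sum(w * m(x) for m, w in zip(models, weights))

# placeholder base regressors standing in for trained hazard models
models = [lambda x: 2.0 * x, lambda x: x + 1.0]
weights = [0.25, 0.75]

pred = ensemble_predict(models, weights, 4.0)
```

Weighting lets better-validated models dominate the prediction while still smoothing out the errors of any single model, which is why ensembles are a common choice in data mining competitions.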
We summarize the AAIA’18 Data Mining Competition organized at the Knowledge Pit platform. We explain the competition’s scope and outline its results. We also review several approaches to the problem of representing Hearthstone decks in a vector space. We divide these approaches into categories based on the type of data about individual cards that they use. Finally, we outline experiments aimed at evaluating the usefulness of various deck representations for the task of win-rate prediction.
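One of the simplest deck representations in the card-identity category is a bag-of-cards vector: counts of each card over a fixed card vocabulary. The card names and vocabulary below are invented for illustration; the paper's categories also include representations built from card attributes and text.

```python
# illustrative card vocabulary mapping card names to vector positions
CARD_INDEX = {"fireball": 0, "knight": 1, "archer": 2, "giant": 3}

def deck_to_vector(deck, card_index=CARD_INDEX):
    """Count-based (bag-of-cards) encoding of a deck over a fixed vocabulary."""
    vec = [0] * len(card_index)
    for card in deck:
        vec[card_index[card]] += 1
    return vec

v = deck_to_vector(["fireball", "knight", "knight"])
```

Such vectors are sparse and high-dimensional for real card pools, which is what motivates the denser, attribute-based representations the paper compares them against.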
The goal of General Game Playing (GGP) has been to develop computer programs that can perform well across various game types. It is natural for human game players to transfer knowledge from games they already know how to play to other, similar games. GGP research attempts to design systems that work well across different game types, including unknown new games. In this review, we present a survey of recent advances (2011 to 2014) in GGP for both traditional games and video games. It is notable that research on GGP has been expanding into modern video games. Monte Carlo Tree Search and its enhancements have been the most influential techniques in GGP for both research domains. Additionally, international competitions have become important events that promote and advance GGP research. Recently, a video GGP competition was launched. In this survey, we review recent progress in the most challenging research areas of Artificial Intelligence (AI) related to universal game playing.
In this study, we investigate methods for attribute clustering and their possible applications to the computation of decision reducts from information systems. We focus on high-dimensional datasets, in particular microarray data. For this type of data, the traditional reduct construction techniques can either be extremely computationally intensive or yield poor performance in terms of the size of the resulting reducts. We propose two reduct computation heuristics that combine greedy search with a diverse selection of candidate attributes. Our experiments confirm that by properly grouping similar, in some sense interchangeable, attributes, it is possible to significantly decrease computation time, as well as to increase the quality of the obtained reducts (i.e., to decrease their average size). We examine several criteria for attribute clustering, and we also identify so-called garbage clusters, which contain attributes that can be regarded as irrelevant.
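The idea of combining greedy search with a diverse selection of candidates can be sketched as follows: in each round, the best attribute is drawn from each cluster in turn, so interchangeable attributes from the same cluster do not crowd out others. The scoring function and attribute names below are placeholders, not the paper's discernibility-based measures, and a real reduct heuristic would additionally prune superfluous attributes at the end.

```python
def greedy_reduct(clusters, score):
    """Greedily add attributes, alternating across clusters, while the
    (placeholder) quality score keeps improving."""
    selected, best = [], score([])
    improved = True
    while improved:
        improved = False
        for cluster in clusters:
            candidates = [a for a in cluster if a not in selected]
            if not candidates:
                continue
            a = max(candidates, key=lambda a: score(selected + [a]))
            if score(selected + [a]) > best:
                selected.append(a)
                best = score(selected)
                improved = True
    return selected

# toy additive score; attribute "c1" adds nothing, mimicking a garbage cluster
gains = {"a1": 0.5, "a2": 0.45, "b1": 0.4, "c1": 0.0}
score = lambda attrs: sum(gains[a] for a in attrs)
reduct = greedy_reduct([["a1", "a2"], ["b1"], ["c1"]], score)
```

Note how the zero-gain attribute is never selected: clusters whose members never improve the score behave like the garbage clusters identified in the study.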