Request a demo

We can arrange a demo to show you the capabilities of our products.

Our products are used to:

  • reduce the uncertainty of Machine Learning models,
  • prioritize enterprise data efforts,
  • support experts in the ML loop,
  • improve the quality of ML models, especially in multi-class settings with complex ontologies,
  • reduce the data footprint and compress ML models so they can be used in Internet of Things applications,
  • improve the gaming experience via more challenging and realistic AI in games,
  • create intelligent advisory systems from pre-compiled building blocks.

Fogs of War

Project title:
[EN] Development and validation in real conditions of a system for reasoning, belief management, and coordination of computer agents under uncertainty and incomplete information, with the aim of achieving realistic behavior.
[PL] Opracowanie i weryfikacja w rzeczywistych warunkach rozwiązań służących wnioskowaniu, zarządzaniu przekonaniami oraz koordynacji komputerowych graczy w grach z ograniczoną informacją.

Application number: POIR.01.02.00-00-0207/20
Value of the project: 6 800 070.00 PLN
Co-financing: 4 798 078.00 PLN
Beneficiary: QED Software Sp. z o. o.
Project duration: 2021-03-01 - 2023-06-30
Project realised as a part of: GAMEINN

The Fogs of War project focuses on Artificial Intelligence methods for games with imperfect information and uncertainty. We aim to develop a variety of approaches to problems such as:

  • Inference of missing information based on heterogeneous observation data and multiple reasoning techniques
  • Rule-based reasoning - both design-driven and autonomous (learning-based)
  • Probabilistic reasoning
  • Simple modelling of perception and memory
  • Exploratory behaviors and belief revision based on changing observations of the game world
  • Decision making methods that operate directly on hidden information
  • Coordination and communication between agents in the presence of incomplete knowledge
  • Tools for investigating the motivations behind AI agents' actions - building explanations of particular decisions and justifying them based only on the available information
  • The problem of measuring the believability / human-likeness of AI behavior in video games
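To make the belief-revision idea above concrete, here is a minimal toy sketch (our illustration, not the project's actual implementation): an agent maintains a probability distribution over a hidden opponent's position on a small circular map and updates it from partial observations. All function names, map sizes, and probabilities here are assumptions.

```python
# Toy probabilistic belief revision over a hidden opponent's position.
# The map is a ring of cells; the belief is a list of probabilities.

def normalize(belief):
    total = sum(belief)
    return [p / total for p in belief]

def diffuse(belief, stay=0.6):
    """Motion update: between observations the opponent may move to an
    adjacent cell, so the belief spreads out over time."""
    n = len(belief)
    move = (1.0 - stay) / 2.0
    new = [0.0] * n
    for i, p in enumerate(belief):
        new[i] += p * stay
        new[(i - 1) % n] += p * move
        new[(i + 1) % n] += p * move
    return new

def observe(belief, visible, seen_at=None, p_detect=0.9):
    """Observation update (Bayes rule with a simple sensor model): a
    sighting pins the belief down; an empty scan down-weights the
    scanned cells by the detection probability."""
    new = []
    for i, p in enumerate(belief):
        if seen_at is not None:
            likelihood = 1.0 if i == seen_at else 0.0
        else:
            likelihood = (1.0 - p_detect) if i in visible else 1.0
        new.append(p * likelihood)
    return normalize(new)

# Start with no information: uniform belief over 5 cells.
belief = [0.2] * 5
# The agent scans cells 0 and 1 and sees nothing...
belief = observe(belief, visible={0, 1})
# ...so the unscanned cells become the more likely hiding spots.
```

The same two-step loop (motion update, then observation update) underlies classical Bayesian filtering; richer sensor and motion models slot into the two likelihood terms without changing the structure.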

During this project, we'll develop a suite of libraries, GUI tools, and game engine plugins that let game developers implement and easily debug intelligent NPCs. These NPCs can deal with imperfect information in a structured manner and make reasonable decisions without resorting to cheats, such as access to information that would normally be hidden from human players.
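The "no cheating" principle can be captured by a simple contract, sketched below under our own assumptions (the function and its inputs are illustrative, not part of the project's API): the NPC's decision function receives only its belief state, never the engine's ground truth.

```python
# Hypothetical "no cheating" decision rule: the policy sees only the
# NPC's belief over cells and a travel-cost estimate, not the true state.

def choose_search_cell(belief, travel_cost):
    """Pick the cell with the best expected payoff: the believed
    probability that the opponent is there, minus the cost of
    getting there."""
    scores = [p - c for p, c in zip(belief, travel_cost)]
    return max(range(len(scores)), key=scores.__getitem__)

# The NPC believes the opponent is most likely in cell 1; even with a
# small travel cost, cell 1 remains the best expected choice here.
target = choose_search_cell([0.1, 0.6, 0.3], [0.0, 0.1, 0.0])
```

Because the policy's inputs are restricted to the belief, the same code runs identically whether the opponent is actually in cell 1 or not, which is exactly what makes the behavior fair and debuggable.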

Measuring behavior believability is another important part of the project. We'll create Machine Learning models able to detect behavior sequences that wouldn't normally be performed by human players, given the information available to them about the game world. Potential applications of these models range from detecting bots and cheaters to evaluating AI quality.
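As a toy illustration of this idea (our assumption, not the project's actual model), a simple bigram model fitted on human play traces can flag action sequences whose transitions humans rarely produce; sequences with low average transition likelihood look bot-like.

```python
# Illustrative believability scorer: a smoothed bigram model over
# discrete player actions, trained on traces of human play.
from collections import defaultdict
import math

class BigramBehaviorModel:
    def __init__(self, smoothing=1.0):
        self.counts = defaultdict(lambda: defaultdict(float))
        self.totals = defaultdict(float)
        self.vocab = set()
        self.smoothing = smoothing

    def fit(self, sequences):
        """Count action-to-action transitions in human traces."""
        for seq in sequences:
            for prev, cur in zip(seq, seq[1:]):
                self.counts[prev][cur] += 1
                self.totals[prev] += 1
                self.vocab.update((prev, cur))

    def log_likelihood(self, seq):
        """Average log-probability per transition, with additive
        smoothing; lower values mean less human-like behavior."""
        v = len(self.vocab) or 1
        lp, steps = 0.0, 0
        for prev, cur in zip(seq, seq[1:]):
            num = self.counts[prev][cur] + self.smoothing
            den = self.totals[prev] + self.smoothing * v
            lp += math.log(num / den)
            steps += 1
        return lp / max(steps, 1)

# Fit on (made-up) human traces, then compare a plausible sequence
# against a repetitive, bot-like one.
model = BigramBehaviorModel()
model.fit([["scan", "move", "scan", "move", "shoot"],
           ["move", "scan", "move", "shoot"]])
human_score = model.log_likelihood(["scan", "move", "shoot"])
bot_score = model.log_likelihood(["shoot", "shoot", "shoot"])
```

A production system would of course use far richer features and sequence models, but the scoring interface - train on human behavior, score new sequences by likelihood - stays the same.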

Let’s set up a meeting!