Request a demo. We can arrange one to show you the capabilities of our products.

Our products are used to:

- reduce the uncertainty of Machine Learning models,

- prioritize enterprise data efforts,

- support experts in the ML loop,

- improve the quality of ML models, especially in multi-class settings with complex ontologies,

- reduce data footprint and compress ML models for use in Internet of Things applications,

- improve the gaming experience via more challenging and realistic AI in games,

- create intelligent advisory systems from pre-compiled building blocks.

Product: BrightBox | Uncertainty & Error in Machine Learning | XAI 2.0

BrightBox is a second-generation eXplainable Artificial Intelligence (XAI) software toolbox, aimed at taming uncertainty and identifying the root causes of modeling errors.

BrightBox is intended for Data Science teams communicating with Business Owners: it improves Machine Learning models on the one hand and bridges the gap in business understanding on the other.


01 Error & Uncertainty Diagnostics

Explanation of the most probable reasons for model errors and uncertainty, produced by a proprietary machine learning diagnostics engine.

02 Fix Recommendations

A set of action items for fixing errors and reducing model uncertainty, produced by the same proprietary diagnostics engine.

03 Review & Appraise Errors

Inspection of the most important erroneous cases and intelligent labelling of error severity.

04 True Cost Function Discovery

Data-driven insight into the true nature of the business problem via AI-boosted error analysis.

05 Fix What-if Scenarios

Trade-off analysis of model quality improvements assuming introduction of the suggested fixes.

06 + XAI 1.0.

Fairness & bias analysis, what-if scenarios, and other XAI 1.0 features.
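The what-if capability listed above can be illustrated with a generic sketch. The code below is not BrightBox code; the logistic model, its weights, and the helper names are assumptions made purely for illustration. It shows the basic move behind a what-if analysis: sweep one input feature and re-score the model to see how the prediction responds.

```python
import numpy as np

# Hypothetical linear model; weights and bias are illustrative assumptions.
WEIGHTS = np.array([1.5, -2.0])
BIAS = 0.1

def predict_proba(x):
    """Probability of the positive class under a simple logistic model."""
    z = float(np.dot(WEIGHTS, x) + BIAS)
    return 1.0 / (1.0 + np.exp(-z))

def what_if(x, feature, deltas):
    """Re-score the model while sweeping one feature.

    Returns a list of (delta, probability) pairs, showing how the
    prediction would change if the input were modified.
    """
    results = []
    for d in deltas:
        x_mod = np.array(x, dtype=float)
        x_mod[feature] += d
        results.append((d, predict_proba(x_mod)))
    return results

# Sweep feature 0 around a baseline input; since its weight is positive,
# the predicted probability rises as the feature grows.
scenarios = what_if([0.5, 1.0], feature=0, deltas=[-1.0, 0.0, 1.0])
```

A real tool would additionally report which sweeps reduce the risk of error most, but the core loop is the same.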

Project information

Development of the BrightBox system – an explainable-AI-class tool for improving the interpretability and predictability of learning methods and for diagnosing the correctness of learned AI/ML models.

Application number: MAZOWSZE/0198/19
Value of the project: 8 548 180,00 zł
Grant amount: 6 093 476,00 zł
Beneficiary: QED Software Sp. z o. o.
Project duration: 01.02.2020 – 31.07.2022
Project realised as part of the „Ścieżka dla Mazowsza” competition

Project purpose: BrightBox is an explainable-AI-class tool for improving the interpretability and predictability of learning methods and for diagnosing the correctness of learned AI/ML models. Its purpose is to support:

  1. analysts and data science specialists creating AI / ML models,
  2. field experts validating AI / ML models,
  3. persons responsible for monitoring the operation of AI / ML models,
  4. end users employing AI / ML models in their work.

The software will provide:

  1. Explanation of decisions made by existing (learned) AI/ML models, indicating the reasons for a particular decision and explaining the sources of uncertainty in decision-making (i.e. explaining the risk of the model making a mistake).
  2. 'What-if' analyses and suggestions for optimizing the parameters of processes monitored by AI/ML models, including refining the input data so that the risk of error is minimized.
  3. Periodic or continuous diagnostics of errors made by learned AI/ML models, together with an indication of the most probable causes of those errors.
  4. Design of more interpretable AI/ML training methods, including methods more resistant to noise in the input data and optimized to match the error-scale measures required by field experts.

The software will be useful in those practical areas of application of AI/ML methods where the ability to understand and explain their operation is required by law (or where the introduction of such requirements under new regulations is planned). It should also be emphasized that in many areas (such as cyber security, risk monitoring in industrial processes, telemedicine, etc.), improved transparency and interpretability of AI/ML models is highly desirable regardless of any legal regulations.
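The "risk of a model making a mistake" mentioned above is commonly quantified as predictive uncertainty. As a generic illustration (again, not BrightBox code), one standard score is the entropy of a classifier's class-probability vector: confident predictions score low, near-uniform ones score high.

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of a class-probability vector; higher means more uncertain."""
    p = np.clip(np.asarray(probs, dtype=float), 1e-12, 1.0)  # guard log(0)
    return float(-(p * np.log(p)).sum())

# A confident prediction has low entropy...
confident = predictive_entropy([0.97, 0.02, 0.01])
# ...while a near-uniform one approaches the maximum, ln(3) for 3 classes.
uncertain = predictive_entropy([0.34, 0.33, 0.33])
```

Flagging the highest-entropy predictions is one simple way to surface the cases where a model is most at risk of error.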

Let’s set up a meeting!