BrightBox

Illuminating the unknown

BrightBox provides methods for explaining, auditing and diagnosing machine learning models:
Explainability
To fully understand the causes and risks in AI-augmented processes, we offer human-interpretable explanations both for a model's decisions and for the uncertainty of those decisions (a minimal sketch of this idea follows below).
Auditing
A crucial component in understanding whether a model is stable, reliable and secure, and how its behaviour may change over time. BrightBox offers a set of auditing tools to help increase the credibility of machine learning for business.
Diagnostics
When a model is not working as expected, BrightBox offers a set of diagnostic tools to help identify the problem and then pinpoint whether it lies in the data or in the training methods.
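BrightBox's own interfaces are not shown on this page, so the sketch below is only an illustration of the idea: it explains a single decision of a generic scikit-learn model with a crude leave-one-feature-out attribution, and scores the decision's uncertainty as the entropy of the predicted class distribution. All function names and the choice of techniques are illustrative assumptions, not BrightBox's actual API.

```python
# Illustrative sketch only: generic per-decision attribution and uncertainty,
# NOT BrightBox's actual API. Assumes scikit-learn and numpy are available.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

def explain_decision(model, x, baseline):
    """Score each feature by how much swapping it for a baseline value
    changes the predicted probability (leave-one-feature-out attribution)."""
    p_full = model.predict_proba(x.reshape(1, -1))[0, 1]
    contributions = {}
    for i in range(x.size):
        x_masked = x.copy()
        x_masked[i] = baseline[i]  # "remove" feature i
        p_masked = model.predict_proba(x_masked.reshape(1, -1))[0, 1]
        contributions[i] = p_full - p_masked
    return p_full, contributions

def decision_uncertainty(model, x):
    """Entropy (bits) of the predicted class distribution; 0 = fully confident."""
    p = np.clip(model.predict_proba(x.reshape(1, -1))[0], 1e-12, 1.0)
    return float(-(p * np.log2(p)).sum())

x, baseline = X[0], X.mean(axis=0)  # explain one case against the average case
prob, contrib = explain_decision(model, x, baseline)
top3 = sorted(contrib, key=lambda i: abs(contrib[i]), reverse=True)[:3]
print(f"P(class 1) = {prob:.2f}, uncertainty = {decision_uncertainty(model, x):.2f} bits")
print("most influential features:", top3)
```

Production attribution methods (e.g. SHAP values) are more principled, but the per-decision contract is the same: one contribution per feature, plus an uncertainty estimate a human can act on.
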
Adding explainability to the algorithm-driven decisions in your company solves several contemporary issues:

THE CONSTANT NEED FOR NEW KNOWLEDGE AND BETTER UNDERSTANDING

BrightBox is extremely helpful in learning new facts, gathering information and thus acquiring new knowledge. Only truly explanatory systems can be useful in this respect.

THE NEED FOR CREDIBILITY

Over the past few years, there has been a lot of controversy around systems based on AI/ML models whose results were deemed biased or discriminatory. BrightBox allows you to understand the specific result deduced by a chosen model.

THE NEED FOR BETTER CONTROL OVER AI/ML

Explainability is important not only to clarify a decision; it can also help prevent failures and emergencies. A better understanding of how an AI-based system operates reveals previously unknown gaps and vulnerabilities, which helps to quickly identify and correct errors while situations are still low-critical (debugging).

THE NEED FOR BETTER EXPLAINABILITY

An AI/ML model that can be explained and understood is easier to improve. When users know why the system produced a specific output, they also know how to make it better and more efficient. XAI can thus be the basis for continuous iteration and improvement of AI/ML models.

Explainability is a pressing need for every industry

Healthcare

There are known examples of AI/ML-based systems that made decisions inappropriate from a human point of view, e.g. failing to classify an asthma patient during an attack as one requiring immediate medical assistance. As a consequence, many places had to stop using such systems.

Justice system

AI algorithms are used, for example, to assess the likelihood of recidivism.

Transport and Logistics

Explainability is needed in the context of the rapid development of autonomous vehicle technologies.

Cybersecurity

The ability to understand why a black-box model produced one result rather than another will enable experts at cybersecurity companies to acquire new knowledge and improve their existing AI/ML tools.

Finance industry

There is a need, oftentimes resulting directly from legal regulations, to explain the grounds for refusing a loan, or to trace the basis for a trader's recommendation to perform a given operation (which can have huge financial consequences).

Do you want to know more about BrightBox? Contact us!
Piotr Biczyk
Chief Strategy Officer
Do you want to work for BrightBox? Check job offers

Project information

Development of the BrightBox system, an explainable-AI-class tool for improving the interpretability and predictability of learning methods and for diagnosing the correctness of learned AI/ML models.

Application number: MAZOWSZE/0198/19
Value of the project: PLN 8,548,180.00
Grant amount: PLN 6,093,476.00
Beneficiary: QED Software Sp. z o. o.
Project duration: 01.01.2020 – 30.06.2022
Project realised as part of the „Ścieżka dla Mazowsza” (“Path for Mazovia”) contest

Project purpose
BrightBox is a set of explainable-AI-class tools used to improve the interpretability and predictability of learning methods and to diagnose the correctness of learned AI/ML models. Its purpose will be to support:
1) analysts and data science specialists creating AI/ML models,
2) field experts validating AI/ML models,
3) persons responsible for monitoring the operation of AI/ML models,
4) end users employing AI/ML models in their work.

The software will provide:

  • Explanation of decisions made by existing (learned) AI/ML models, indicating the reasons for a particular decision and explaining the sources of uncertainty in making it (i.e. the risk of the model making a mistake).
  • Conducting ‘what-if’ analyses and indicating possible optimizations of the process control parameters monitored by AI/ML models, including refining the input data so that the risk of error is minimized (a minimal sketch of such an analysis follows this list).
  • Performing periodic or continuous diagnostics of the errors made by learned AI/ML models, together with an indication of the most probable causes of those errors.
  • Designing more interpretable methods of training AI/ML models, including methods more resistant to noise in the input data and optimized for the error measures required by field experts.
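
The ‘what-if’ capability above can be pictured as a sweep over one controllable input while the rest of the case is held fixed. The sketch below shows that idea under stated assumptions: a generic scikit-learn classifier on synthetic data, with all names illustrative rather than BrightBox's interface.

```python
# Minimal "what-if" sweep: vary one controllable feature over a grid, hold the
# rest fixed, and report the value minimizing predicted risk. Illustrative
# only; a generic technique and synthetic data, not BrightBox's actual API.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

def what_if(model, x, feature, grid):
    """Return (candidate value, predicted risk) pairs for one feature."""
    curve = []
    for v in grid:
        x_mod = x.copy()
        x_mod[feature] = v
        risk = model.predict_proba(x_mod.reshape(1, -1))[0, 1]  # P(bad outcome)
        curve.append((v, risk))
    return curve

x = X[0]
grid = np.linspace(X[:, 2].min(), X[:, 2].max(), 25)
best_v, best_risk = min(what_if(model, x, feature=2, grid=grid), key=lambda t: t[1])
print(f"current risk: {model.predict_proba(x.reshape(1, -1))[0, 1]:.2f}")
print(f"setting feature 2 to {best_v:.2f} would lower risk to {best_risk:.2f}")
```

The same loop, run over a grid of several parameters at once, is the basis for suggesting how input data could be refined so that the risk of error is minimized.
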
The software will be useful in those practical areas of application of AI/ML methods where the ability to understand and explain how they operate is required by law (or where such requirements are planned under new regulations). At the same time, it should be emphasized that in many areas (such as cyber security, risk monitoring in industrial processes, telemedicine, etc.) improved transparency and interpretability of AI/ML models is highly expected regardless of any legal regulations.