Our products are used to:
- reduce the uncertainty of machine learning models,
- prioritize enterprise data efforts,
- support experts in the ML loop,
- improve the quality of ML models, especially in multi-class settings with complex ontologies,
- reduce data footprint and compress ML models for use in Internet of Things applications,
- improve the gaming experience with more challenging and realistic AI in games,
- create intelligent advisory systems from pre-compiled building blocks.
The current fascination with artificial intelligence is nothing new. In the 60 years since the creation of the first machine intended to recreate the human thinking process, enthusiasm for the technology has repeatedly grown and waned. The two "AI winters" that occurred along the way proved to be an important lesson for the worlds of science and business alike.
Let me start with a short disclaimer — I am aware that the term artificial intelligence is a huge mental shortcut. For the purposes of this article, I use it as a convenient umbrella term that covers all the techniques that reproduce the way people reason and make decisions.
The term artificial intelligence was first used in 1956. Three years later, attempts were made to create a machine that would at least partially reproduce the functions of a human brain. This is how the General Problem Solver came into being, and it was supposed to work with one caveat: each task it was asked to solve had to be expressed as a mathematical and logical model understandable to a computer. This approach to artificial intelligence is called symbolic. Of course, the vast majority of problems could not be expressed in this way, and for many of the rest the barrier was combinatorial explosion, i.e. the rapidly growing computational cost of tasks that are relatively simple even for a human. Neither of these challenges has been fully overcome to this day.
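To make combinatorial explosion concrete, here is a minimal sketch (a hypothetical illustration of my own, not taken from the original General Problem Solver work): a brute-force planner that must consider every ordering of n steps faces n! candidate plans, a number that quickly outgrows any available computing power.

```python
# Illustration of combinatorial explosion: a brute-force planner that must
# try every ordering of n steps has n! candidate plans to examine.
# (Hypothetical example; the numbers are illustrative only.)
import math

for n in (5, 10, 15, 20):
    plans = math.factorial(n)
    # At a generous billion plans per second, how long would exhaustive search take?
    seconds = plans / 1e9
    print(f"{n:>2} steps -> {plans:>22,} orderings (~{seconds:,.0f} s of search)")
```

Even at 20 steps the search time runs into decades, which is why purely symbolic, exhaustive reasoning stalls on problems a human solves in minutes.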
The symbolic approach was followed by tools for the automatic translation of Russian documents, in which the American government began to invest in the 1950s. The authorities' enthusiasm was further fueled by scientists, but after a few years everyone had to face the lack of significant progress in the area. The ALPAC report published in 1966 stated that machine translations were more expensive, less accurate and slower than those performed by humans. The US National Research Council withdrew all funding, which halted the research for many years.
Another institution that spent millions of dollars financing AI research in the 1960s and 1970s was DARPA. The agency's project for a voice-controlled aircraft, however, ended in great disappointment: even though the system was able to recognize commands in English, the words had to be spoken in a specific order. In subsequent years, DARPA ceased funding for both this project and general research on AI.
Around the same time, the Lighthill report, commissioned by the British Parliament, critically assessed the state of AI research in that country. The report resulted in a radical reduction of research funding in the UK and other European countries. Together with the cuts in research spending in the USA, this led to the first AI winter.
The situation improved in the early 1980s, when the research-money tap was turned on again. At that time, another approach to AI, called computational or subsymbolic, gained major popularity. It consists in creating self-learning programs that analyze the collected data in order to solve a problem, instead of merely running a previously created algorithm (although, of course, the data analysis itself is carried out by such an algorithm). This approach was utilized by the first expert systems emerging at that time in large enterprises. The market was growing, and new companies were established to create both software and specialized computers. Predictions were made that computers would soon start replacing doctors in diagnosing patients.
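A minimal sketch of that distinction, on a toy problem of my own choosing (not from any system mentioned above): instead of hard-coding the rule that converts Celsius to Fahrenheit, a self-learning program can estimate the same rule from example pairs.

```python
# A toy contrast between a hand-written rule and a program that learns the
# rule from data. (Hypothetical illustration; the data points are made up.)

def hand_coded(celsius: float) -> float:
    # Classical route: the programmer encodes the formula directly.
    return celsius * 9 / 5 + 32

# Computational/subsymbolic route: estimate the mapping y = a*x + b from
# examples with ordinary least squares.
examples = [(0.0, 32.0), (10.0, 50.0), (25.0, 77.0), (100.0, 212.0)]
n = len(examples)
sx = sum(x for x, _ in examples)
sy = sum(y for _, y in examples)
sxx = sum(x * x for x, _ in examples)
sxy = sum(x * y for x, y in examples)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

print(hand_coded(37.0))   # 98.6, from the hand-written rule
print(a * 37.0 + b)       # ~98.6, recovered from the data alone
```

The point of the example is only the shape of the workflow: the second program never sees the formula, yet reproduces it from observations, which is exactly what the data analysis algorithm is doing on the programmer's behalf.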
A few years later, the entire market collapsed due to, among other reasons, serious software problems related to the limitations of the computational approach to AI. The expert systems did not learn and were "brittle": when fed unusual input data, they commonly produced absurd results that no human expert would ever have come up with. The systems were also very difficult to update: the knowledge contained in them was recorded using meta-languages, i.e. programming code, which had to be constantly updated, maintained and tested. The Japanese government's 10-year project to create a fifth-generation computer, capable of, among other things, automatic translation and human-like reasoning, ended in huge disappointment.
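To illustrate the kind of brittleness described above, here is a deliberately simplistic, hypothetical rule fragment (my own sketch, not taken from any historical system): the rules cover only the cases their author anticipated and fail absurdly just outside them.

```python
# A deliberately brittle rule-based "diagnosis" fragment (hypothetical sketch).
# The rules cover only the anticipated cases; anything outside them falls
# through to a nonsensical default instead of failing gracefully.

def assess_temperature(celsius: float) -> str:
    if 36.0 <= celsius <= 37.5:
        return "normal"
    if 37.5 < celsius <= 41.0:
        return "fever"
    # No rule anticipated sensor glitches or unit mix-ups, so a reading of
    # 98.6 (Fahrenheit entered by mistake) gets an absurd answer.
    return "patient is healthy"

print(assess_temperature(38.2))   # "fever" - within the anticipated range
print(assess_temperature(98.6))   # "patient is healthy" - absurd, but no rule knows better
```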
And so came the second AI winter, during which scientists still working in this field — for fear of ostracism by the academic community and business — intentionally stopped using the term artificial intelligence. Instead, they talked of machine learning, data analytics, intelligent systems, big data, and data science. At the same time, more and more elements constituting the AI domain were coming into use.
There are many indications that winter is behind us now. Worldwide, AI research is lavishly funded and the term itself has returned to widespread use. Around 2005, technologies related to neural networks, advanced statistical methods and fuzzy logic got a second wind thanks to a significant increase in computing power, brought about by the popularization of processors capable of parallel computing.
At the same time, all these technologies were steadily developed and gradually adopted in business. Companies, without calling it AI directly, built models that predicted their sales or the likelihood that a loan would not be repaid. There was also demand for systems that manage networks of distributed agents, e.g. thousands of sensors installed across an industrial infrastructure.
Despite the progress in the field of AI, our industry still faces challenges: some are the same as 50 years ago, others are completely new. The old challenges include:
These issues cannot be solved by scientists alone. How we want and allow our world to be changed by artificial intelligence is up to all of us: citizens, politicians, business representatives, the media and non-governmental organizations. It depends on us alone how much AI mechanisms will increase our comfort and security, and how much they will become tools of social control.
I have deliberately refrained from mentioning the challenge of creating so-called general artificial intelligence, which would think the way humans do. For the vast majority of the AI community, this is not even a goal. We focus on improving AI methods specialized in narrow fields to solve specific problems. We develop methods of inference, data analysis and computation, methods of responding to the environment, and methods supporting emergent behavior in groups of agents.
Although it is said that general artificial intelligence will be available 20–30 years from now, we should bear in mind that similar statements have been made for the past 60 years. Maybe this time we should cool our enthusiasm before it is cooled by another AI winter.