Two winters and a spring of artificial intelligence

The current fascination with artificial intelligence is nothing new. In the 60 years since the creation of the first machine meant to recreate the human thinking process, enthusiasm for this technology has gone through periods of growth and decline. The two “AI winters” that occurred along the way proved to be an important lesson for the worlds of science and business alike.

Let me start with a short disclaimer — I am aware that the term artificial intelligence is a huge mental shortcut. For the purposes of this article, I use it as a convenient umbrella term that covers all the techniques that reproduce the way people reason and make decisions.

The challenges of 60 years ago and the first AI winter

The term artificial intelligence was first used in 1956. Three years later, attempts were made to create a machine that would at least partially reproduce the functions of a human brain. This is how the General Problem Solver came to be, and it worked with one caveat: each task it was asked to solve had to be expressed as a mathematical and logical model understandable to a computer. This approach to artificial intelligence is called symbolic. Of course, the vast majority of problems could not be expressed in this way, and for some others the barrier was combinatorial explosion, i.e. the rapidly growing computational complexity of tasks that are relatively simple even for a human. Neither of these challenges has been fully overcome to this day.
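
To give a feel for the combinatorial explosion problem, here is a minimal sketch; the branching factor and search depths are illustrative assumptions, not figures from the original research.

```python
# Illustrative only: how fast an exhaustive search space grows.
# Assume each step offers `branching` possible moves and we plan `depth` steps ahead.
branching = 10

for depth in (5, 10, 15, 20):
    states = branching ** depth
    print(f"depth {depth:2d}: {states:>24,} states to examine")
```

Already at a depth of 20 the space holds 10^20 candidate sequences, which is why even tasks that a human finds relatively simple can be hopeless for exhaustive symbolic search.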

The symbolic approach also underpinned the tools for automatic translation of Russian documents in which the American government began to invest in the 1950s. The authorities’ enthusiasm was further fueled by scientists, but after a few years everyone had to face the lack of significant progress in this area. The ALPAC report, published in 1966, stated that machine translations were more expensive, less accurate and slower than those performed by humans. The US National Research Council withdrew all funding, which halted the research for many years.

Another institution that spent millions of dollars financing AI research in the 1960s and 1970s was DARPA. The agency’s voice-controlled aircraft project, however, ended in great disappointment: even though the system was able to recognize commands in English, the words had to be spoken in a specific order. In subsequent years, DARPA ceased funding both this project and general research on AI.

Around the same time, the Lighthill report, commissioned by the British Parliament, critically assessed the state of AI research in the UK. The result was a radical reduction in research funding in the UK and other European countries. Together with the cuts in research spending in the USA, this led to the first AI winter.

Expert systems and the second AI winter

The situation improved in the early 1980s, when the tap of research money was turned on again. At that time, another approach to AI, called computational or subsymbolic, gained major popularity. It consists in creating self-learning programs that analyze collected data in order to solve a problem, instead of merely running a previously written algorithm (although, of course, the data analysis itself is carried out by such an algorithm). This approach was adopted by the first expert systems emerging at that time in large enterprises. The market was growing, and new companies were established to create both software and specialized computers. Predictions were made that computers would soon start replacing doctors in diagnosing patients.

A few years later, the entire market collapsed due to, among other reasons, serious software problems related to the limitations of the computational approach to AI. The expert systems did not learn and were “brittle”: when presented with unusual input data, they commonly generated absurd results that no human expert would ever have produced. The systems were also very difficult to update, as the knowledge they contained was recorded in meta-languages, i.e. programming code, which had to be constantly updated, maintained and tested. The Japanese government’s ten-year project aimed at creating a fifth-generation computer, capable of, among other things, automatic translation and human-like reasoning, ended in huge disappointment.

And so came the second AI winter, during which scientists still working in this field, for fear of ostracism by the academic community and business, deliberately stopped using the term artificial intelligence. Instead, they talked of machine learning, data analytics, intelligent systems, big data, and data science. At the same time, more and more elements of the AI domain were coming into everyday use.

Hello spring and new challenges

There are many indications that the winter is now behind us. Worldwide, AI research is lavishly funded and the term itself has returned to widespread use. Around 2005, technologies related to neural networks, advanced statistical methods and fuzzy logic got a new wind in their sails thanks to a significant increase in computing power, resulting from the popularization of processors capable of parallel computation.

At the same time, all these technologies were continuously developed and gradually adopted in business. Companies, without calling it AI outright, created models that predicted their sales or the likelihood of a loan not being paid back. There was also demand for systems that manage networks of distributed agents, e.g. thousands of sensors installed in industrial infrastructure.

Despite the progress in the field of AI, our industry still faces challenges: some are the same as 50 years ago, others are completely new. The old challenges include:

  • Model decay, i.e. a gradual deterioration of a model’s results, stemming from both data drift and concept drift. Data drift is a change in the characteristics of the input data, resulting from changing reality, technology, or simply human habits. Concept drift, in turn, is a progressive change in the meaning of data: data that were once labeled one way may after some time require re-labeling. This may result, for instance, from changing cultural codes or a different perception of what the data mean. For example: a changing understanding of “a healthy diet” (fat vs lean, salty vs unsalted), or a shifting boundary between formal language and street language (utterances we would have described as slang twenty years ago may now be labeled “a politician’s speech”). A minimal sketch of detecting data drift follows this list.
  • The already mentioned brittleness of systems, i.e. their limited ability to recognize that a result generated from non-standard data is absurd.
  • Acquisition and preparation of the right data, in the right quantity and quality, and with characteristics that correspond to real and expected relationships.
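
Where such drift shows up in practice, a minimal sketch of how it can be detected is shown below. It compares a feature’s training-time distribution with its current production distribution using a two-sample Kolmogorov–Smirnov test; the simulated data, window sizes and threshold are illustrative assumptions, not details of any system mentioned above.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Reference window: the feature distribution the model was trained on (simulated here).
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)

# Current window: the same feature observed in production, with a shifted mean
# simulating changing reality, technology, or human habits (data drift).
current = rng.normal(loc=0.4, scale=1.0, size=5_000)

# Two-sample Kolmogorov-Smirnov test: a very small p-value means the two samples
# are unlikely to come from the same distribution, i.e. the input data have drifted.
statistic, p_value = ks_2samp(reference, current)
print(f"KS statistic = {statistic:.3f}, p-value = {p_value:.2e}")

if p_value < 0.01:  # illustrative threshold
    print("Data drift detected: consider retraining or re-labeling.")
```

Concept drift is harder to catch automatically, because the inputs may look unchanged while their correct labels shift; in practice it usually requires periodically re-labeling a sample of production data and monitoring the model’s live accuracy.
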
As for the new challenges that AI practitioners currently face, they can be divided into technical issues and challenges related to the introduction of AI into the existing world order. The technical issues include:

  • Adversarial machine learning, i.e. dealing with attacks on algorithms. For example: someone finds out that the systems installed in autonomous vehicles recognize road signs based on only a few characteristic points and starts to obscure those points. Similarly, in the field of cyber security, an arms race between defensive and offensive algorithms has already begun. A minimal sketch of such an attack appears after this list.
  • Explainability of algorithm operation. Only if system operators and ordinary people understand the basis on which AI makes decisions will they be willing to trust it and expand its range of automated activities. It is also about ensuring that automatic decisions do not violate the rights and freedoms guaranteed by law.
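
To make the road-sign example above concrete, below is a minimal sketch of an FGSM-style adversarial perturbation against a toy linear classifier. The weights, input and perturbation size are made-up illustrative values, not a real road-sign model.

```python
import numpy as np

# A toy linear "road sign" classifier: score > 0 means "stop sign" (weights are made up).
w = np.array([1.2, -0.7, 0.5, 2.0, -1.5, 0.8])
b = -0.3

def score(x: np.ndarray) -> float:
    return float(w @ x + b)

# A clean input that the classifier handles correctly.
x = np.array([0.6, 0.1, 0.4, 0.9, 0.2, 0.5])
print("clean score:", round(score(x), 2))          # positive -> recognized as a stop sign

# FGSM-style perturbation: for a linear model the gradient of the score with respect
# to the input is simply w, so shifting every feature slightly against sign(w) is the
# most damaging small change an attacker can make.
epsilon = 0.4                                      # maximum change per feature
x_adv = x - epsilon * np.sign(w)
print("perturbed score:", round(score(x_adv), 2))  # negative -> the sign is no longer recognized
```
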
No less important are the socio-cultural issues:

  • Transfer of human cognitive errors and prejudices into AI systems, as well as the problem of making predictions based on historical data, which may replicate the undesirable tendencies represented in those data.
  • Fragility of a society that depends on its supporting systems. The dependence of large areas of the economy on automatic decisions can produce unexpected effects when machine errors overlap. In other words, we sacrifice safety, understood as resilience to shocks and disturbances, for the sake of greater efficiency and convenience.
  • Ethics of automated decisions. The more processes and decisions we automate, the more of them raise ethical dilemmas (“should an autonomous car choose to kill a passenger or a pedestrian?”).
  • Impact of algorithms on human behavior. People who are aware of an algorithm’s existence and assumptions begin to “game” it for their own benefit, causing adverse effects for others (e.g. a system assessing the quality of education leads teachers to focus on maximizing the indicators the system measures).
  • Social acceptance of AI’s presence. Many people are reluctant to let AI into new areas of life, especially when it is designed to come very close to fully human behavior. A good way to win them over is to use AI to remove barriers and enable previously inaccessible activities, e.g. in supporting the elderly.


These issues cannot be solved by scientists alone. How we want and allow our world to be changed by artificial intelligence is up to all of us: citizens, politicians, business representatives, the media and non-governmental organizations. It is up to us to decide how much AI mechanisms will increase our comfort and security, and how much they will become tools of social control.

I have deliberately refrained from mentioning the challenge of creating so-called general artificial intelligence, one that thinks the way humans do. For the vast majority of the AI community this is not even a goal. We focus on improving AI methods specialized in narrow fields to solve specific problems. We develop methods of inference, data analysis, computation, response to the environment, and methods supporting emergent behavior in groups of agents.

Although it is said that general artificial intelligence will arrive 20–30 years from now, we should bear in mind that similar statements have been made for the past 60 years. Maybe this time we should cool our enthusiasm before it is cooled by another AI winter.