Artificial Intelligence: Past, Present…and what about the Future?

What is the dream of artificial intelligence researchers? In most cases, they dream of realizing a tricky scientific project: understanding the principles and workings of the human mind well enough to reproduce them on a machine. The expression “Artificial Intelligence” (A.I.) captures this dream by naming its final goal.

The history of Artificial Intelligence does not follow a linear path, and its roots go back much further than we usually imagine. To 1272, precisely.

Legend has it that the Spanish theologian Raymond Lull, while on a spiritual retreat, had a vision of a set of basic principles that, properly combined, could generate every science and every form of knowledge. Over the following centuries, a number of philosophers, mathematicians and polymaths paved the way to the theoretical model of a computing machine able to perform symbolic calculations on an infinite tape. In 1936, the Turing machine was born.

In 1955, the term Artificial Intelligence was officially coined. John McCarthy, a professor at Dartmouth College, wrote “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence” and organized the corresponding six-week summer workshop, inviting a dozen colleagues with very different backgrounds but a common interest in formal logic, computation and models of thinking. The invited members came from fields such as mathematics, engineering, psychology and neurology. The ultimate purpose was to devise the theory of an artificial brain. Many researchers predicted that a machine as intelligent as a human being would exist within a generation, and they received millions of dollars in funding to make this vision come true.

What happened next?

The First A.I. Winter

The first A.I. winter set in during the early 70s, roughly fifteen years after the Dartmouth Summer Research Project took place. At the time, leading researchers and scientists strongly believed that A.I. would soon outmatch humans in many everyday tasks and take over most of their jobs. Eventually, it became clear that machines were still far from that level of capability, so the Advanced Research Projects Agency (ARPA, today DARPA), the research arm of the U.S. Department of Defense and the primary funder of A.I. research and development, cut its funding for A.I. research. The remaining money was directed at specific projects with clear objectives, such as autonomous tanks and battle management systems, rather than at undirected research.

Another reason why A.I. projects fell short of their promises is that the dominant way of creating software at the time was still rule-based programming: developers had to explicitly specify every rule that defined the behavior of a computer program.

The Rise and the (Re-)Fall 

In the 80s the appearance of Expert Systems triggered a second Artificial Intelligence boom. Expert Systems focused on solving domain-specific problems rather than trying to emulate the general problem-solving abilities of the human brain: by restricting themselves to a small domain of specific knowledge, they made programs simple to build and then to modify once in place, as the sketch below illustrates.
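To make the idea concrete, here is a minimal, hypothetical sketch of that rule-based style: the domain knowledge lives in explicit, human-readable if-then rules that can be inspected and extended one at a time. The diagnostic rules and facts are invented for illustration and are not taken from any historical system.

```python
# A minimal, hypothetical sketch of the expert-system idea: domain knowledge is
# captured as explicit if-then rules that are easy to inspect and to extend.
# The rules and facts below are illustrative only, not taken from a real system.

RULES = [
    # (condition on known facts, conclusion to add)
    (lambda f: f.get("engine_cranks") is False, "battery may be dead"),
    (lambda f: f.get("engine_cranks") and not f.get("engine_starts"), "check fuel supply"),
    (lambda f: f.get("battery_age_years", 0) > 5, "consider replacing battery"),
]

def diagnose(facts):
    """Apply every rule whose condition matches and collect its conclusion."""
    return [conclusion for condition, conclusion in RULES if condition(facts)]

if __name__ == "__main__":
    # Example consultation: the user states what they observe.
    observations = {"engine_cranks": False, "battery_age_years": 6}
    print(diagnose(observations))  # ['battery may be dead', 'consider replacing battery']
```

Adding a new piece of expertise means appending one more rule; nothing has to be retrained, which is exactly what made these systems easy to build and maintain within their narrow domains.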

Investments came back into the Artificial Intelligence research field. The Japanese Ministry of International Trade and Industry earmarked 850 million dollars for the Fifth Generation Computer Project, with the goal of writing programs and building machines that could interpret pictures, translate languages and reason like humans. Just a few years later, in 1984, Marvin Minsky, a pioneer of the A.I. industry, predicted a collapse of the Artificial Intelligence market. In 1987 the A.I. bubble burst.

A period of reduced funding, interest and research took place from 1987 to 1993.

The 21st Century Boom

In the first decades of the 21st century the idea of neural networks started to become reality, as access to big data and affordable computing power made machine learning applicable to many problems. A machine learning solution does not consist of a human-built, interpretable routine that explores a fixed number of cases and reaches one of many predefined conclusions; instead, it uses statistical methods to learn a model from data, and that model can then predict a number or assign a category. At the same time, deep learning research was delivering great results too, helping computers perform tasks that are hard to codify with static rules, such as computer vision and voice recognition.
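By way of contrast with the rule-based sketch above, here is a minimal machine-learning sketch under the same caveat: the toy data, feature names and library choice (scikit-learn) are illustrative assumptions, not part of this article. Instead of a programmer writing the decision rule, the model's parameters are estimated from labelled examples.

```python
# A minimal sketch of the machine-learning approach described above: instead of
# hand-writing rules, a statistical model infers the mapping from data.
# Data and library choice are illustrative; this assumes scikit-learn is installed.

from sklearn.linear_model import LogisticRegression

# Toy training data: each row is [hours_of_use, error_count]; the label says
# whether the machine needed maintenance (1) or not (0). Values are made up.
X_train = [[10, 0], [200, 1], [850, 7], [40, 0], [600, 5], [900, 9]]
y_train = [0, 0, 1, 0, 1, 1]

# "Learning" here means estimating the model's parameters from the examples,
# rather than a programmer encoding the decision boundary by hand.
model = LogisticRegression()
model.fit(X_train, y_train)

# The fitted model can now assign a category to cases it has never seen.
print(model.predict([[700, 6], [25, 0]]))  # e.g. [1 0]
```

The trade-off against the expert-system style is clear: the learned model handles cases nobody wrote a rule for, but its behavior is shaped by the training data rather than by rules a human can read line by line.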

Soon high-tech companies triggered another hype cycle for the A.I. industry. They talked up the advantages of machine learning (ML) and deep learning (DL), sometimes in misleading ways, which caused both excitement and fear about A.I. The term Artificial Intelligence, which had long been denigrated because of its many unrealized promises, came back into popular use thanks to media outlets and a renewed interest in the field.

Nowadays we use A.I. and machine learning in every aspect of our lives: ride-sharing apps, autopilots in commercial flights, facial filters on social networks and e-mail categorization all depend on them. Yet other hyped projects, such as self-driving cars or chatbots able to engage in meaningful conversations, are still far from being completely functional. Yann LeCun, VP & Chief AI Scientist at Facebook, declared that “AI has gone through a number of AI winters because people claimed things they couldn’t deliver”. So, are we headed towards another halt in the development of Artificial Intelligence?

We don’t know yet but, if you are interested in delving deeper into the topic, download BaxEnergy’s “AI for executives and policy makers” below.

BaxEnergy’s A.I. for executives and policy makers.
