
Artificial Intelligence, a brief history

Let’s give a simple definition

Artificial intelligence (AI) is a broad branch of computer science focused on building smart machines capable of performing tasks that typically require human intelligence.

Etymology

The term artificial intelligence was coined in the 1950s by the authors (John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon) of a proposal for a workshop to study the subject, held at Dartmouth College, USA, in 1956. In choosing the word ‘artificial’, McCarthy was reflecting the belief held at the time that the rather primitive computers available in the 1950s would never develop to the point where they would exhibit ‘real’ intelligence. The thinking then was, and this view is still held by many, that human reasoning was real and machine reasoning was a lesser, artificial version. Current work, especially on neural networks, suggests that such programs work in essentially the same way as human brains, the difference between the two being one of scale and complexity rather than of kind.

Early Times 1956-1974

AI research in the 1950s explored topics like problem solving and symbolic methods. The programs developed in the years after the Dartmouth Workshop were, to most people, simply astonishing: computers were solving algebra word problems, proving theorems in geometry and learning to speak English. Researchers expressed an intense optimism, predicting that a fully intelligent machine would be built in less than 20 years. In the 1960s, the US Department of Defense took interest in this type of work and began training computers to mimic basic human reasoning. For example, the Defense Advanced Research Projects Agency (DARPA) completed street mapping projects in the 1970s.

AI winter 1974-1993

In the 1970s, AI was subject to critiques and financial setbacks. AI researchers had failed to appreciate the difficulty of the problems they faced, and their early successes proved hard to scale up.

One of the major bottlenecks was limited computer power: there was not enough memory or processing speed to accomplish anything truly useful. For example, Ross Quillian’s successful work on natural language was demonstrated with a vocabulary of only twenty words, because that was all that would fit in memory.

Another setback was Moravec’s paradox: proving theorems and solving geometry problems are comparatively easy for computers, but a supposedly simple task like recognising a face or crossing a room without bumping into anything is extremely difficult.

The business community’s fascination with AI rose and fell in the 1980s in the classic pattern of an economic bubble. A big part of the hype was due to important progress made in 1982 by the physicist John Hopfield, who showed that a form of neural network (now called a Hopfield network) could learn and process information in a completely new way. Around the same time, Geoffrey Hinton and David Rumelhart popularised a method for training neural networks called backpropagation.
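
For readers curious about what backpropagation actually does, here is a minimal sketch in Python using only NumPy. The tiny XOR dataset, the network size, the learning rate and the step count are illustrative choices, not anything taken from the historical systems; the point is simply that the output error is pushed backwards through the layers to produce weight updates.

    import numpy as np

    # Minimal backpropagation sketch: a one-hidden-layer network learning XOR.
    # All sizes, the learning rate and the step count are illustrative choices.
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
    y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(10000):
        # Forward pass
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Backward pass: send the output error back through each layer
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)

        # Gradient-descent updates
        W2 -= 0.5 * h.T @ d_out
        b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * X.T @ d_h
        b1 -= 0.5 * d_h.sum(axis=0)

    print(np.round(out, 2))  # approaches [0, 1, 1, 0] once training converges

Modern deep-learning frameworks automate exactly this kind of gradient computation, just at a vastly larger scale.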

Despite this progress, the initial optimism had raised expectations impossibly high, and when the promised results failed to materialise, funding for AI disappeared.

AI, the dormant giant of the ’90s and 2000s

The field of AI, by then more than half a century old, finally achieved some of its oldest goals. It began to be used successfully throughout the technology industry, although somewhat behind the scenes. Some of the success was due to increasing computer power and some was achieved by focusing on specific isolated problems and pursuing them with the highest standards of scientific accountability. Still, the reputation of AI, in the business world at least, was less than pristine. Inside the field there was little agreement on the reasons for AI’s failure to fulfill the dream of human-level intelligence that had captured the imagination of the world in the 1960s.

On 11 May 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov. The supercomputer was a specialized machine built by IBM, capable of evaluating 200,000,000 chess positions per second. The event was broadcast live over the internet and received over 74 million hits.

In 2003, DARPA produced intelligent personal assistants, long before Siri, Alexa or Cortana were household names. In 2005, a Stanford robot won the DARPA Grand Challenge by driving autonomously for 131 miles along an unrehearsed desert trail. Two years later, a team from CMU won the DARPA Urban Challenge by autonomously navigating 55 miles in an urban environment while responding to traffic hazards and adhering to all traffic laws.

This early work paved the way for the automation and formal reasoning that we see in computers today, including decision support systems and smart search systems that can be designed to complement and augment human abilities.

Deep Learning and Big Data made AI what it is today

AI has become increasingly popular in recent years thanks to deep learning breakthroughs, bigger volumes of digital data, and increasing computing power and storage. 

Big data refers to collections of data that cannot be captured, managed, and processed by conventional software tools within a reasonable time frame. Turning such data into decision-making power, insight, and process optimization requires new processing models.

Deep Learning is a subfield of machine learning concerned with algorithms, called artificial neural networks, that are inspired by the structure and function of the brain. For example, deep learning is used to classify images, recognize speech, detect objects and describe content. Systems such as Siri and Cortana are powered, in part, by deep learning. Amazon and Netflix have popularized the notion of a recommendation system with a good chance of knowing what you might be interested in next, based on past behavior.
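
To make the idea a little more concrete, here is a minimal sketch of a deep-learning classifier, assuming the PyTorch library is available. The layer sizes, the ten classes and the random “images” are hypothetical placeholders standing in for a real dataset, so this shows only the overall shape of such a model rather than a production system.

    import torch
    from torch import nn

    # A small feed-forward classifier; sizes and class count are illustrative.
    model = nn.Sequential(
        nn.Flatten(),            # turn each 28x28 "image" into a vector
        nn.Linear(28 * 28, 128), # hidden layer
        nn.ReLU(),
        nn.Linear(128, 10),      # scores for 10 hypothetical classes
    )

    images = torch.randn(64, 1, 28, 28)   # a fake batch of grayscale images
    labels = torch.randint(0, 10, (64,))  # fake class labels
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    for _ in range(5):                    # a few illustrative training steps
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()                   # backpropagation computes the gradients
        optimizer.step()                  # gradient descent updates the weights

Real systems swap the random tensors for labelled data and stack many more layers, but the training loop keeps this same shape.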

New classes of neural networks (NNs) have been developed that are well suited to applications like text translation and image classification. We also have far more data available for building neural networks with many deep layers, including streaming data from the Internet of Things (IoT), textual data from social media, and notes and investigative transcripts.

Computational advances in distributed cloud computing and graphics processing units (GPUs) have put incredible computing power at our disposal. This level of computing power is necessary to train deep neural networks.
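
As a small illustration of why this matters in practice (again assuming PyTorch, with hypothetical tensors standing in for real data), moving a model and its data onto a GPU when one is available is usually just a matter of choosing the right device:

    import torch
    from torch import nn

    # Illustrative only: pick a GPU if one is present, otherwise fall back to the CPU.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    model = nn.Linear(28 * 28, 10).to(device)    # any model; a single layer keeps the sketch short
    batch = torch.randn(64, 28 * 28).to(device)  # the data must live on the same device
    scores = model(batch)                        # the forward pass now runs on that device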

From Science Fiction to Reality

While Hollywood movies and science fiction novels depict AI as human-like robots that take over the world, the current evolution of AI technologies isn’t that scary – or quite that smart. At a cultural level, artificial intelligence is often met with fear and disdain, but at a practical level we’ve already embraced it as a common facet of our everyday lives. As the technology continues to advance, so too will our perceptions of it, but that doesn’t necessarily mean we should do away with that fear altogether.

After walking through this page of history we’ve experienced the hype and the disappointment, the interest followed by neglect, the struggle followed by progress. Yet one question remains unanswered: will the breakthroughs made by deep learning finally fulfill the dreams of AI’s founding fathers, or will another setback prevent those expectations from materialising?
