A Brief History of Artificial Intelligence

Artificial Intelligence (AI) has come a long way since its introduction in the 1950s. From early problem-solving programs to modern deep learning and big data advancements, AI has faced cycles of breakthroughs and setbacks. This article explores AI’s history, key milestones, and its ongoing evolution toward shaping the future.

What is Artificial Intelligence (AI)?

Artificial intelligence (AI) is a broad branch of computer science focused on building smart machines capable of performing tasks that typically require human intelligence.

Who coined the term “artificial intelligence”?

The term artificial intelligence was coined in the 1950s by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon, the authors of a proposal for a workshop to study the subject, held at Dartmouth College, USA, in 1956.

McCarthy's choice of words reflected the belief, held at the time, that the rather primitive computers available in the 1950s would never develop to the point where they exhibited 'real' intelligence. The thinking then, and this view is still held by many, was that human reasoning was real and machine reasoning a lesser, artificial version.

However, current work, particularly in the development of neural networks, suggests that such computer programs function similarly to human brains; the difference between the two is one of scale and complexity rather than of kind.

The Early Years of AI (1956-1974)

AI research in the 1950s explored topics like problem-solving and symbolic methods. The programs developed in the years following the Dartmouth Workshop were simply astonishing to most people. Computers were solving algebra word problems, proving theorems in geometry and learning to speak English. Researchers expressed intense optimism, predicting that a fully intelligent machine would be built within 20 years.

In the 1960s, the US Department of Defense took an interest in this field and began training computers to mimic basic human reasoning. For example, the Defense Advanced Research Projects Agency (DARPA) completed street mapping projects in the 1970s. 

Challenges and Setbacks of AI (1974-1993)

In the 1970s, AI was subject to critiques and financial setbacks. AI researchers had failed to appreciate the difficulty of the problems they faced and were unable to scale up their early successes.

  1. One major bottleneck was limited computer power. There was not enough memory or processing speed to accomplish anything truly useful. For example, Ross Quillian's successful work on natural language was demonstrated with a vocabulary of only twenty words, because that was all that would fit in memory.
  2. Another setback was Moravec's paradox: proving theorems and solving geometry problems are comparatively easy for computers, but a supposedly simple task like recognising a face or crossing a room without bumping into anything is extremely difficult.
  3. The business community's fascination with AI rose and fell in the 1980s in the classic pattern of an economic bubble. A big part of the hype was due to important progress made in 1982 by the physicist John Hopfield, who showed that a form of neural network (now called a Hopfield net) could learn and process information in a completely new way. Around the same time, Geoffrey Hinton and David Rumelhart popularised a method for training neural networks called backpropagation (a minimal sketch of the idea follows this list).
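To make backpropagation concrete, here is a minimal sketch of the idea in plain Python/NumPy: a tiny one-hidden-layer network learns the XOR function by repeatedly propagating its output error backwards and nudging the weights. The network size, learning rate, and toy data are illustrative assumptions, not the historical implementation.

```python
# Minimal backpropagation sketch: a 2-4-1 network learns XOR (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, a classic problem a single-layer perceptron cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights and biases for the two layers.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate (an assumed value for this toy problem)
for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)      # hidden-layer activations
    out = sigmoid(h @ W2 + b2)    # network output

    # Backward pass: propagate the output error back through the layers.
    delta_out = (out - y) * out * (1 - out)       # error signal at the output
    delta_hid = (delta_out @ W2.T) * h * (1 - h)  # error signal at the hidden layer

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ delta_out
    b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_hid
    b1 -= lr * delta_hid.sum(axis=0)

print(np.round(out, 2))  # the outputs should approach [0, 1, 1, 0]
```

Each pass computes the error at the output, sends it back through the hidden layer via the chain rule, and adjusts every weight slightly in the direction that reduces the error; stacking many more layers and much more data onto this same procedure is, in essence, what later became deep learning.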

Despite this progress, the initial optimism had raised expectations impossibly high, and when the promised results failed to materialise, funding for AI disappeared.  

AI’s Progress and Early Successes

The field of AI, now more than half a century old, finally achieved some of its oldest goals. It began to be used successfully throughout the technology industry, although somewhat behind the scenes.

Some of the success was due to increasing computer power, and some was achieved by focusing on specific isolated problems and pursuing them with the highest standards of scientific accountability. Still, the reputation of AI, in the business world at least, was less than pristine. Inside the field there was little agreement on the reasons for AI's failure to fulfill the dream of human-level intelligence that had captured the imagination of the world in the 1960s.

Some major AI breakthroughs include:

  • 11 May 1997: Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov. The supercomputer was a specialized version of a framework produced by IBM, capable of evaluating 200,000,000 positions per second. The event was broadcast live over the internet and received over 74 million hits.
  • 2003: DARPA produced intelligent personal assistants, long before Siri, Alexa or Cortana were household names.
  • 2005: a Stanford robot won the DARPA Grand Challenge by driving autonomously for 131 miles along an unrehearsed desert trail.
  • 2007: a team from CMU won the DARPA Urban Challenge by autonomously navigating 55 miles in an urban environment while responding to traffic hazards and adhering to all traffic laws.

This early work paved the way for the automation and formal reasoning that we see in computers today, including decision support systems and smart search systems that can be designed to complement and augment human abilities.

The Rise of Deep Learning and Big Data

AI has become increasingly popular in recent years thanks to deep learning breakthroughs, bigger volumes of digital data, and increasing computing power and storage. 

What is Big Data?

Big data refers to collections of data so large that they cannot be captured, managed, and processed by conventional software tools within a reasonable time frame. Extracting the decision-making, insight, and process-optimization value locked inside such data requires new processing models.

What is Deep Learning?

Deep Learning is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain called artificial neural networks. For example, deep learning is used to classify images, recognize speech, detect objects and describe content. Systems such as Siri and Cortana are powered, in part, by deep learning. Amazon and Netflix have popularized the notion of a recommendation system with a good chance of knowing what you might be interested in next, based on past behavior.
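To ground the idea, here is a minimal, hypothetical sketch of the kind of network used for image classification, written in PyTorch. The layer sizes, the 32x32 input resolution, and the ten output classes are illustrative assumptions, not any particular product's architecture.

```python
# Minimal sketch of a deep-learning image classifier (illustrative sizes only).
import torch
import torch.nn as nn

class SmallImageClassifier(nn.Module):
    """A tiny convolutional network mapping 32x32 RGB images to 10 class scores."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # learn low-level edges and colours
            nn.ReLU(),
            nn.MaxPool2d(2),                               # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),   # learn higher-level patterns
            nn.ReLU(),
            nn.MaxPool2d(2),                               # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

# Usage: a batch of 4 random tensors stands in for real images.
model = SmallImageClassifier()
logits = model(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10]) -- one score per class
```

Each convolutional layer learns progressively more abstract visual features, and the final linear layer turns those features into one score per class; in practice such a model would be trained on labelled images using backpropagation, the same procedure sketched earlier.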

AI’s Growth with More Data and Computing Power

New classes of neural networks (NNs) have been developed that are well suited to applications like text translation and image classification. We also have far more data available for training networks with many deep layers, including streaming data from the Internet of Things (IoT), textual data from social media, and notes and investigative transcripts.

Computational advances such as distributed cloud computing and graphics processing units (GPUs) have put incredible computing power at our disposal. This level of computing power is necessary to train deep networks.

From Science Fiction to Reality

While Hollywood movies and science fiction novels depict AI as human-like robots that take over the world, the current evolution of AI technologies isn’t that scary – or quite that smart. Artificial Intelligence is, at a cultural level, already met with fear and disdain, but at a practical level, we’ve already embraced it as a common facet of our everyday lives. As the technology continues to advance, so too will our perceptions of it, but that doesn’t necessarily mean we should do away with that fear altogether.

After walking through this page of history we have experienced the hype and the disappointment, the interest followed by neglect, the struggle followed by progress, but one question remains unanswered: will the breakthroughs made by deep learning finally fulfill the dreams of AI's founders, or will another obstacle prevent those expectations from materialising?

AI Revolutionizing Cancer Treatment

AI has already made significant strides in various industries, and healthcare is no exception. One of the most impactful applications is in medical imaging, where our AI-based contouring software, Mediq RT, is revolutionizing cancer treatment. By quickly and accurately mapping tumors in CT and MRI scans, Mediq automates a time-consuming process, allowing doctors to focus on treatment planning and patient care. With the power of machine learning, it speeds up diagnoses and helps clinics treat more patients efficiently.

This innovation demonstrates how AI is not just shaping the future—it is actively transforming lives today.
