AI sets out from Hanover (USA) to conquer the entire world

AI has come a long way

Behind the hype

The idea of AI predates the first computers. The initial euphoria soon gave way to a longer period of disillusionment. But AI technologies are now poised to play an important role in more and more areas of life. Which minds shaped this progress? What technologies are working in the background? What are the factors that will shape its development?

1920

Robots

The word ‘robot’ is derived from the Slavic robota, which means ‘serf labour’. Czech author Karel Čapek first used the word to refer to the human-like machines in his play ‘Rossum’s Universal Robots’.

1950

Turing Test

British mathematician Alan Turing proposed a test of a machine’s ability to exhibit intelligent behaviour indistinguishable from that of a human.

1951

Neural Network Learning Machine

SNARC, the world’s first neural network machine, built by Marvin Minsky and Dean Edmonds, had only 40 synapses. It simulated the behaviour of lab rats and could calculate the fastest path through a maze.

1956

Artificial Intelligence

Artificial intelligence was founded as a research discipline at the Dartmouth Summer Research Project, an academic conference held at Dartmouth College in Hanover, New Hampshire.

1960

Ability to learn

The Mark I Perceptron, developed by U.S. psychologist and computer scientist Frank Rosenblatt, was built on his concept that a machine could learn new skills by trial and error. In doing so, Rosenblatt laid the cornerstone for neural networks.

1966

Chatbot

Computer scientist Joseph Weizenbaum developed ELIZA, a computer program that acted in the manner of a psychotherapist. It responded to keywords and often answered with questions and generalisations.

1970s

AI winter

The U.S. government cut funding for AI because the field had delivered little of the success it promised. In 1973, the British mathematician Sir James Lighthill argued in an influential report that machines would never exceed the level of an ‘experienced amateur’ at chess.

1972

Medicine

MYCIN was an expert system written in the programming language Lisp at Stanford University. It was designed to diagnose bacterial infections and recommend antibiotic treatments.

1982

Speech recognition

James and Janet Baker developed a prototype for speech recognition software in the late 1970s and early 1980s, which later formed the basis for their commercial dictation systems. The Bakers founded Dragon Systems in May 1982.

1986

Voice computer

By entering sample sentences and phonetic strings, T. J. Sejnowski and C. Rosenberg taught their program NETtalk how to speak. It could read and pronounce words and apply what it had learned to unfamiliar words.

1997

Chess

World chess champion Garry Kasparov was defeated by IBM’s Deep Blue. The computer could analyse up to 200 million chess positions per second.

2011

Jeopardy!

Watson, a computer program, competed in the quiz show Jeopardy! against Ken Jennings and Brad Rutter, the show’s two most successful champions, and won. It understands natural-language questions and searches for answers in its database.

2017

Siri

Apple’s voice assistant recognises natural language and answers questions. Siri initially understood only English, German and French; it now understands more than 20 languages.

2018

Driver of growth

The McKinsey Global Institute (MGI) forecasts that AI has the potential to deliver additional global economic activity of $13 trillion by the year 2030.

Why AI?

Intelligence is a tricky thing

Opinions differ vastly when it comes to defining artificial intelligence. In day-to-day business, however, the perfect definition of AI is less important than the right application. We consider AI systems to be systems that can make decisions automatically or independently and respond to input such as images, written text or spoken language.

AI technologies toil away in the background, often without users being aware of them: from route planning and automatic translation to filtering the flood of messages on social media.

Why now?

Data, storage, algorithms – The three pillars of AI

Nearly all of the data ever created in human history have been produced within the last few years – whether by people, machines, sensors or website input. At the same time, it is becoming ever more economical to store these data: storage costs are falling dramatically, while computational power and algorithms continue to advance. Specialised graphics processing units (GPUs) and methods such as deep learning reduce the time and effort required to develop new applications.


"Have you ever suffered from the fact that, despite your enormous intelligence, you depend on people to carry out your tasks?"
"Not at all. I like to work with people"

Computer HAL 9000 (2001: A Space Odyssey)

Machine learning

Learning and enabling others to learn

It is not possible to have intelligence without learning or modelling patterns, and it is no different with artificial intelligence. Machine learning (ML) is the ability to learn a model automatically from data. There are three main types of ML. In supervised learning, experts determine the correct decision for each case by providing labelled training data. In unsupervised learning, the system analyses the data based on their similarities, without experts having to provide training data. In reinforcement learning, the system learns through direct feedback on its actions rather than from training examples. The sketch below illustrates the first two types.
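
A minimal sketch, assuming Python with scikit-learn as an example library and invented toy data: supervised learning fits a model to expert-provided labels, while unsupervised learning groups the same data purely by similarity.

```python
# A minimal sketch of supervised vs. unsupervised learning,
# assuming scikit-learn as an example library.
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Toy data: 200 points in two groups; y holds the "expert" labels.
X, y = make_blobs(n_samples=200, centers=2, random_state=0)

# Supervised learning: the model learns from labelled training data.
clf = LogisticRegression().fit(X, y)
print(clf.predict(X[:5]))  # predicted classes for five inputs

# Unsupervised learning: no labels; the system groups data by similarity.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_[:5])  # cluster assignments found by the algorithm

# Reinforcement learning (not shown) would instead learn from reward
# signals received as direct feedback on its actions.
```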

  • Detecting anomalies

Searches for patterns or data points that are significant in the context of the application and are generated by disturbances or other influences. Frequently, these patterns only emerge from data sequences over time or as a function of environmental variables (a simple sketch follows after this list).

  • Image recognition

    Image recognition uses algorithms to attempt to identify objects in images and assign these to a category. It enables systems to observe their surroundings.

  • Chatbots

Text-based dialogue systems that take text as input and return text as output. They are based on the analysis and production of natural language (a toy keyword-matching example also follows after this list).

  • Pattern recognition

    An umbrella term for different application areas of ML that deal with the interpretation of recurring patterns. Examples are image recognition and speech-to-text.

  • Natural Language Generation (NLG)

Natural language generation produces the text – and, via speech synthesis, the audio signals – that allow the system to pass information back to the user through speech. ML methods are used here, making it possible to apply stylistic devices, emphasis and the like.

  • Natural Language Processing (NLP)

Systems recognise the connections and meaning within spoken and written language. Language is not unambiguous: it is characterised by stylistic devices that are easily misinterpreted. As NLP improves, communication with machines is becoming more and more natural.

  • Speech-To-Text (STT)

Applications convert the spoken word into written text. To do so, the system must suppress background noise and recognise words despite the wide variety of possible pronunciations.
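
A minimal sketch of anomaly detection, assuming nothing more than NumPy and an invented series of sensor readings: points that deviate strongly from the mean are flagged as outliers.

```python
# A toy anomaly-detection sketch using z-scores (NumPy assumed).
# The sensor readings below are invented example data.
import numpy as np

readings = np.array([20.1, 20.3, 19.8, 20.0, 35.7, 20.2, 19.9])

mean, std = readings.mean(), readings.std()
z_scores = np.abs(readings - mean) / std

# Flag readings more than two standard deviations from the mean.
anomalies = np.where(z_scores > 2)[0]
print(anomalies)  # -> [4], the 35.7 spike
```

Real systems typically use rolling statistics or learned models rather than a single global threshold, but the principle is the same.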
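And a toy chatbot in the spirit of the 1966 ELIZA entry above; the keywords and canned answers are invented purely for illustration.

```python
# A keyword-matching chatbot sketch: scan the input for known keywords
# and answer with a canned question or generalisation (all rules invented).
RULES = {
    "mother": "Tell me more about your family.",
    "always": "Can you think of a specific example?",
    "sad": "Why do you think you feel sad?",
}
FALLBACK = "Please, go on."

def reply(user_input: str) -> str:
    text = user_input.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return FALLBACK

print(reply("I am always tired."))  # -> "Can you think of a specific example?"
```

Modern chatbots replace such hand-written rules with learned language models, but the text-in, text-out loop is the same.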


Our AI topics at a glance

Are you interested in other topics related to Artificial Intelligence? Here, everything revolves around your AI activities and how you can achieve your goals with AI.

Find out how AI can support you in your work, what potential lies behind the technology, and which topics concern decision-makers.

Learn more


Do you have any questions?

Artificial intelligence is not a replacement for human discussion

Are you wondering what possibilities AI can open up for your company? Would you like to learn more about its applications and the technology behind it? We cannot hand you a pack of stock answers, but we can share with you our specialist knowledge, our curiosity about your company and our passion for technology.

We would be delighted to talk to you.

Contact
