
What is AI?

From household helper to doomsday scenario - there’s hardly a topic where public perception, the state of research and reality seem as incongruent as with artificial intelligence. Reason enough to shed some light on this subject with a series of articles.

The aim is to explain, in a comprehensible and approachable manner, how AI really works, what it can (and cannot) do today and what can be expected in the future.

The Big Why: Four Reasons to Understand AI Correctly

AI is currently THE hot tech topic. As with most hyped topics, there’s a strong contrast between public coverage and the actual facts. In the case of AI, this contrast is especially stark, for a number of reasons:

  • Prejudices: The term AI is heavily coloured by movies, books, TV shows and computer games
  • Lack of knowledge: There is still widespread public ignorance of information technology - the very foundation AI is built on
  • Attention economy: Ours is a time dominated by emotional and sensational headlines - in this kind of environment, passing on information objectively has become even more difficult
  • Marketing: AI is an important new business area - especially the industry’s top dogs with their massive marketing budgets have little interest in an objective debate and would rather reinforce the magical, science-fiction-like aura of AI

So while the average citizen has at least a basic understanding of the combustion engine and can accordingly follow the “dieselgate” scandal in a reasonably informed way, even IT professionals often have no real idea of how a neural network actually works. This creates false and sometimes dangerous preconceptions about the current state of AI and about where its dangers and risks lie. It can also be counterproductive when planning and executing IT projects: unrealistic ideas about the possibilities of a technology inevitably lead to problems. On the other hand, opportunities remain untapped if one does not know what is possible.

Is this AI? - A small quiz

Before talking about a complex topic like AI, one should first try to define the topic at hand. So let’s kick things off with a small quiz. Which of the following applications is AI in your opinion?

  1. A recommender system, e.g. Amazon offering you additional products or YouTube suggesting new videos
  2. Math software able to solve complex differential equations
  3. Face recognition, e.g. Facebook linking faces in a photo to the associated accounts
  4. An image classification that detects whether the animal in a photo is a dog or a cat
  5. The ghosts from PacMan
  6. An unbeatable Tic-Tac-Toe program
  7. IBM’s Deep Blue, the first software to beat a human chess world champion
  8. DeepMind’s AlphaGo, the first software to beat a human Go world champion
  9. A credit agency’s scoring system
  10. Apple’s Siri
  11. Amazon’s Alexa
  12. IBM’s Watson software, which has beaten the best human Jeopardy players

Obviously, this was a trick question. Each item is an example of artificial intelligence - even the ghosts in PacMan are an example of an (albeit very, very simple) game AI; a short code sketch at the end of this section shows just how simple such a game AI can be. We like to ask audiences this question when giving a talk about AI fundamentals. It’s interesting to see how much opinions differ from person to person. It also shows how the popular perception of AI has gradually but distinctly changed over the decades. In the nineties, for example, nobody would have doubted that a chess computer that beats a world champion is an AI. Today it’s an entirely different story, as this quote from an office neighbour reveals:

I’m talking about real AI, like the Google stuff, not the chess computers from the ’90s.

The idea of what AI is therefore changes constantly due to technical development and, in particular, habituation. Above all, there is a gap between the definition used by professionals on the one hand and the public understanding on the other. As a result, people talk past each other and are quoted out of context.
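To make the lower end of this spectrum concrete: a “game AI” like a PacMan ghost can boil down to a single hand-written rule. The following Python sketch is purely illustrative - the function name and grid representation are made up for this article, and the original arcade ghosts follow slightly different (though similarly simple) per-ghost targeting rules:

```python
# Purely illustrative sketch of a rule-based "game AI": at every step the
# ghost moves to the free neighbouring cell that minimizes the Manhattan
# distance to the player.

def chase_step(ghost, player, walls):
    """Return the ghost's next (x, y) position, greedily closing in on the player."""
    gx, gy = ghost
    candidates = [(gx + 1, gy), (gx - 1, gy), (gx, gy + 1), (gx, gy - 1)]
    free = [pos for pos in candidates if pos not in walls]
    # The Manhattan distance is the entire "evaluation function" of this AI.
    return min(free, key=lambda p: abs(p[0] - player[0]) + abs(p[1] - player[1]))

# Example: ghost at (2, 2) chasing a player at (5, 2) on an open field.
print(chase_step(ghost=(2, 2), player=(5, 2), walls=set()))  # -> (3, 2)
```

A dozen lines of greedy rule-following - and yet it falls under the same umbrella term as AlphaGo. This is exactly why opinions in the quiz diverge so much.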

What is AI? - Attempt at a definition

But what is the definition of an AI? There is no official, ISO-standardized specification, but a solid starting point is the definition given by the standard textbook on artificial intelligence, “Artificial Intelligence - A Modern Approach” by Stuart Russell and Peter Norvig. According to it, a system is an AI if it meets one or more of the following four criteria:

             human                            rational
  thought    I. It thinks like a human        II. It thinks rationally
  action     III. It behaves like a human     IV. It behaves rationally

We will come back to this definition repeatedly in future articles, but one observation is already interesting: the definition runs along two axes:

  1. Thought vs. action: What matters more? Should the outcome of an action meet our expectations, or rather the “thinking process” behind it? If we want practical results, the focus is on action. If the goal is to understand the nature of thinking and the human mind, it is on thought.
  2. Rationality vs. humanity: Humans do not always act strictly rationally (in fact, they rarely do). Do we expect an AI to imitate humans or a rational ideal of the mind? Both can be desirable, depending on the context.

This definition is intentionally kept rather loose, as there is a multitude of algorithms labelled AI and a wide spectrum of opinions about what AI really is. Its breadth also shows how difficult it is to pin AI down at all.
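As a concrete illustration of quadrant IV (acting rationally), consider item 6 from the quiz, the unbeatable Tic-Tac-Toe program. One classic way to build it is a minimax search that always picks the move with the best guaranteed outcome. The following Python sketch is a minimal, self-contained version under simplifying assumptions (a flat list of nine cells, no pruning); it is meant to illustrate the idea, not any particular product:

```python
# Minimal minimax sketch for an unbeatable Tic-Tac-Toe player.
# Board: a list of 9 cells, each "X", "O" or None; indices 0-8, row by row.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return "X" or "O" if one side has three in a row, otherwise None."""
    for a, b, c in WIN_LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def make_move(board, move, player):
    new_board = list(board)
    new_board[move] = player
    return new_board

def minimax(board, player):
    """Best achievable score for the side to move: +1 win, 0 draw, -1 loss."""
    win = winner(board)
    if win is not None:
        return 1 if win == player else -1
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0  # draw
    opponent = "O" if player == "X" else "X"
    # The opponent also plays optimally, so their best result is our worst case.
    return max(-minimax(make_move(board, m, player), opponent) for m in moves)

def best_move(board, player):
    """Pick the move that maximizes the minimax score - i.e. act "rationally"."""
    opponent = "O" if player == "X" else "X"
    moves = [i for i, cell in enumerate(board) if cell is None]
    return max(moves, key=lambda m: -minimax(make_move(board, m, player), opponent))

# Example: X threatens to complete the top row and does so.
board = ["X", "X", None,
         "O", "O", None,
         None, None, None]
print(best_move(board, "X"))  # -> 2
```

Nothing here “thinks like a human” - it exhaustively checks every continuation. But measured purely by its behaviour, it is perfectly rational within its tiny world, which is exactly what quadrant IV asks for.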

At what point does artificial intelligence begin?

The state of research and the insights into human intelligence have always been subject to constant change and cultural influences. The same applies, of course, to the goals of AI research. At the beginning of the computer age, the very idea of a computer was basically perceived as an AI - this can be seen in old SciFi movies: it goes without saying that the computer speaks and finds solutions on its own. When the spaceship captain asks the on-board computer “Computer, what are our options?”, this SciFi cliché perfectly reflects the perception of AI at the time. Finding exact mathematical solutions to complex problems - at that time, this was a task only particularly intelligent and educated people could accomplish. A machine capable of doing this as well must therefore be “intelligent”, too. But it quickly became clear that systems of equations are (comparatively) easy for computers. Distinguishing cats from dogs in pictures or answering the spaceship captain’s requests, on the other hand, are incredibly complex challenges.

This is a dilemma for AI researchers: whenever they have solved a problem, they understand how the computer does it - and it no longer counts as a “real” AI. When Kasparov lost to Deep Blue in 1997, it was perfectly clear that this was state-of-the-art AI. After all, chess was a highly complex game requiring a lot of rational thought and intuition. Twenty years later, even IT students sometimes have a problem considering a chess program AI - although it definitely is. John McCarthy summarized this moving-target problem in the following - somewhat frustrated - statement:

As soon as it works, no-one calls it AI anymore.

Nowadays we have reached the point where a computer actually can distinguish cats from dogs and is, among other things, gradually mastering human language - a huge leap forward compared to the early days of computing. Because problems like these went unsolved for years, people naturally assume that this, finally, is “real” AI (whatever that may be). Some warn of an impending apocalypse, others predict social collapse caused by robots replacing us and making all human labor redundant. Yet all these fears appear rather far-fetched when you consider the technical reality. What we are witnessing right now is just another level of digitization. There is no AI as seen in SciFi movies, with common-sense knowledge, solving universal problems and acting autonomously. There is only a new paradigm of software development that can solve new, previously unsolvable problems. Like any new technology, this offers great new prospects and, of course, risks. In order to use the possibilities correctly and to counter the risks sensibly, however, education is necessary. As Arthur C. Clarke so beautifully put it:

Any sufficiently advanced technology is indistinguishable from magic.

Right now, magical thinking still predominates when it comes to AI. In order to change that, we want to explain in the forthcoming parts of this series what AI actually is and how it works. We will start by introducing basic concepts of AI: What defines a “strong” or a “weak” AI? What is the difference between “classical AI” and Deep Learning? Knowing the differences, strengths and weaknesses of both approaches is crucial for assessing the current (and probably future) progress.

Here is the second part of the series “Understanding AI”: “Symbolic AI, Neural Networks and Deep Learning”, in which you will learn how to distinguish strong AI from weak AI and how to classify Deep Learning and Neural Networks.