
7 common misconceptions about artificial intelligence

Punchkick Interactive • November 10, 2014

Artificial intelligence, or AI, is a field that has captivated our imagination for decades. However, the field has had a rocky past with several periods of reduced research funding because of failure to deliver on grandiose expectations. Now, once again, the field is garnering significant attention as technology giants invest billions into promising new AI research that seeks to build intelligent machines based on an understanding of how cognition works in the human brain. Yet major misconceptions about AI still persist among the general public, as well as within academia and the technology industry. We sought to clear up some of these misconceptions about AI below.

Turing test underway at the University of Reading

Misconception 1: if you can fake it, you can make it

In 1950, Alan Turing described a thought experiment that sought to answer the question of whether a machine can think. Rather than delve into the ambiguous philosophical aspects of this question, Turing proposed an existence test to answer it. In a hypothetical exercise he called “The Imitation Game,” he proposed that if a computer could respond in a way indistinguishable, to a human observer, from the responses of a human thinker, then the computer is, in fact, thinking. This test, known as the Turing Test, has since served as the benchmark for evaluating machines claimed to be artificially intelligent. As a result, in designing such machines, most AI engineers have focused squarely on mimicking human-like behavior rather than trying to define what intelligence itself is and how it can be modeled. The result is an end product that is highly contrived and does not possess the adaptability, resilience, or flexibility of the human brain.
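The protocol Turing described can be sketched as a simple loop: a judge questions two hidden respondents and then tries to identify the machine. The participants below are hypothetical stand-ins for illustration, not any real system:

```python
import random

def imitation_game(judge, human_respondent, machine_respondent, questions):
    """Run one round of Turing's imitation game.

    The judge sees only anonymized transcripts from two respondents
    and must guess which one is the machine. All three participants
    are plain functions here -- illustrative stand-ins only.
    """
    # Randomly assign the two respondents to anonymous labels A and B.
    labels = {"A": human_respondent, "B": machine_respondent}
    if random.random() < 0.5:
        labels = {"A": machine_respondent, "B": human_respondent}

    # Pose every question to both respondents and record the answers.
    transcript = {"A": [], "B": []}
    for question in questions:
        for label, respondent in labels.items():
            transcript[label].append((question, respondent(question)))

    guess = judge(transcript)  # judge returns "A" or "B"
    machine_label = "A" if labels["A"] is machine_respondent else "B"
    # The machine "passes" this round if the judge guesses wrong.
    return guess != machine_label
```

Note that the test says nothing about *how* the machine produces its answers, which is exactly the behavior-centric blind spot the paragraph above describes.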


Misconception 2: good at one thing is good enough

This is actually a symptom of the behavior-centric approach that has dominated AI since its earliest days. Many AI engineers approach the problem of building an intelligent machine by clearly defining a specific task that needs to be performed and then thinking about how to design software to solve that particular problem. Though sophisticated statistical techniques may be applied, the resulting technologies have proven to be very brittle, working only within the very constrained set of criteria for which they were designed. If we are to build machines with real intelligence, we must begin with a theory of how the human brain performs cognition and model that process. Such an intelligent machine should be able to adapt to a variety of disparate tasks in the same manner that a human is able to. The underlying assumption here is that whether you are walking down a flight of stairs, catching a ball, or even predicting the word that comes at the end of this sentence, your brain is essentially doing the same thing as it processes and responds to incoming information.
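The brittleness described above can be made concrete with a deliberately narrow, entirely hypothetical "stair-descending" routine: it performs fine inside the criteria it was designed for, and simply has no answer outside them. The numbers are illustrative, not from any real system:

```python
def narrow_stair_walker(step_height_cm):
    """A task-specific 'AI' tuned to stairs of a known size.

    It works only within the narrow operating envelope it was
    designed for; anything else -- a curb, a ladder, a ball to
    catch -- is outside its world entirely.
    """
    if 15 <= step_height_cm <= 20:
        return "step down"
    # No graceful adaptation: inputs outside the design criteria fail.
    raise ValueError("input outside designed operating range")

print(narrow_stair_walker(17))   # works: inside the envelope
```

A human, by contrast, handles a 35 cm ledge without being redesigned; the same cognitive machinery generalizes.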


Misconception 3: AI needs a robot body

It’s hard to think of examples of artificial intelligence without conjuring up images of the entertaining, helpful, and sometimes evil humanoid robots that science fiction has so successfully ingrained in our imaginations. So prevalent is the association between these two fields that many academic institutions combine AI and robotics studies into a single program. It’s easy to understand how machines that move and physically interact with the world in a humanoid body would benefit from cognitive functions that could allow them to adapt to new information. However, robotics is by no means the most useful application for AI. If we are able to build intelligent machines that successfully replicate the brain’s cognitive functions, there would be no reason to limit the machines to the physical capabilities of a human or to limit their input sources to the five senses we are familiar with. Examples of far more useful applications include an intelligent weather prediction system that receives a variety of meteorological data from around the world, or an intelligent security system that receives input from various cameras and sensors distributed throughout a building.


Misconception 4: modern computers aren’t powerful enough for AI

For decades, we have heard the familiar refrain that truly intelligent machines are just around the corner, and that it is merely a matter of sufficiently compact and powerful computers becoming available. This perpetual stalling, whether rooted in a desire to ensure funding or in genuine ignorance, assumes that traditional AI methods have been on the right track all along. In reality, the behavior-centric framing of intelligence, stemming from an over-reliance on the Turing Test as its sole metric, has been the chief reason for the field’s limited success over the last several decades, despite drastic improvements in computing power over the same period. AI researchers have recently realized that if we want to build machines that can interact with the world with the same degree of adaptability, resilience, and flexibility as humans, we must first try to understand and then model the brain’s cognitive functions. As of 2014, the world’s fastest supercomputer, Tianhe-2, can perform nearly 34 quadrillion operations per second. By contrast, an individual neuron can fire and reset itself only about 200 times per second. Yet a human can perform tasks within a second that these supercomputers cannot complete in the same amount of time, or cannot perform at all. This tells us that whatever the brain is doing to perform these tasks, it is going about it in a fundamentally different way than today’s most powerful machines.
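The arithmetic behind that comparison is worth making explicit. A quick sketch, assuming the 2014 TOP500 leader Tianhe-2 at roughly 33.86 quadrillion operations per second and the ~200 Hz neuron firing rate cited above:

```python
# Back-of-the-envelope comparison of raw speed vs. serial depth.
supercomputer_ops_per_sec = 33.86e15  # Tianhe-2, ~33.86 petaflops (2014)
neuron_firings_per_sec = 200          # approximate maximum firing rate

# A task a human completes in ~0.5 s allows at most ~100 sequential
# neural steps -- yet the brain solves it anyway. This is the classic
# "100-step" observation: the brain must be computing massively in
# parallel, not racing through long serial instruction streams.
task_duration_sec = 0.5
max_serial_steps = neuron_firings_per_sec * task_duration_sec

speed_ratio = supercomputer_ops_per_sec / neuron_firings_per_sec

print(f"Serial depth available to the brain: ~{max_serial_steps:.0f} steps")
print(f"Raw speed ratio (machine vs. one neuron): ~{speed_ratio:.1e}")
```

Despite a raw speed advantage of some fourteen orders of magnitude per element, the machine still loses on these tasks, which is the paragraph's point: the brain's architecture, not its clock rate, is doing the work.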


Misconception 5: for true AI, we need a quantum computer

This misconception requires a brief explanation of what a quantum computer is, so bear with us. Classical (or digital) computers require data to be encoded into binary digits, which always possess one of two definite states, indicated by 1 or 0. By contrast, a quantum computer uses data encoded into qubits, which can exist in “superpositions” of states. This means that qubits can represent 1s, 0s, or some variable gray area in between. While quantum computing is still in its infancy, such a machine would theoretically be able to efficiently solve certain types of mathematical problems that are computationally infeasible for a digital computer. So what does all this have to do with AI? Actually, not much. Some AI researchers believe that digital computers are inherently incapable of replicating the flexibility and adaptability of the human brain because they can only represent data with exact fidelity. It may be that a quantum computer, if ever built, could provide significant advantages over a digital computer in modeling intelligence—but it would be shortsighted to assume that this has been the primary barrier to success. As of yet, there is no reason to believe that by modeling the brain’s cognitive functions using digital computers, we will not be able to produce machines with real intelligence.
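The difference between a bit and a qubit can be sketched in a few lines: a qubit is a pair of amplitudes whose squared magnitudes give the probabilities of measuring 0 or 1, and a classical bit is just the special case where one amplitude is exactly 1. This toy model uses real amplitudes and ignores phase interference, which is where quantum algorithms get their actual power:

```python
import math
import random

def make_qubit(alpha, beta):
    """A single qubit as two amplitudes with |alpha|^2 + |beta|^2 = 1.

    The amplitudes are normalized so the measurement probabilities
    sum to one. A classical bit is the case (1, 0) or (0, 1).
    """
    norm = math.sqrt(abs(alpha) ** 2 + abs(beta) ** 2)
    return (alpha / norm, beta / norm)

def measure(qubit):
    """Collapse the superposition: return 0 or 1 with probability
    given by the squared amplitudes (the Born rule)."""
    alpha, _beta = qubit
    return 0 if random.random() < abs(alpha) ** 2 else 1

# An equal superposition: measuring yields 0 or 1, each with
# probability 1/2 -- the "gray area" a classical bit cannot occupy.
plus = make_qubit(1, 1)
```

A classical bit such as `make_qubit(1, 0)` always measures 0; only the superposed state behaves probabilistically, which is the distinction the paragraph above draws.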


Misconception 6: a thinking machine has to feel

Recent endeavors to build machines with real intelligence focus on replicating the human brain’s cognitive functions, which deal with working memory, reasoning, and problem solving. The neurocircuitry that underlies these functions is believed to be distinct from that which controls emotional and motivational processes. Any human-like characteristics that such intelligent machines exhibit will therefore be limited to those associated with cognition. Even if engineers attempted to build an intelligent machine that also modeled the brain’s emotional and motivational functions, doing so would be very difficult: these brain functions are considered significantly more complex and are still very poorly understood. The notion of an intelligent machine that spontaneously develops Machiavellian tendencies is completely baseless.


Misconception 7: the “singularity” will make robots our masters

In the context of computing, the singularity is the idea that once AI surpasses human intelligence, it will set off a runaway effect of ever-smarter generations of machines that forever alters human history—or, perhaps, ends it. While a machine that exceeds human abilities in various cognitive functions is certainly possible, describing a machine as “more intelligent than a human” in general terms is far less straightforward. It is quite likely that when we successfully build machines that replicate cognitive functions, they will exceed human brains in speed and memory capacity. But will that necessarily mean these machines also possess the drive, inspiration, and creativity to build even more advanced intelligent machines? This question, upon which the entire notion of the “singularity” rests, cannot be answered easily. For now, those concerned about rogue AI machines can rest assured: there will always be a place for human ingenuity, even when computers outthink us at math and science.
