by Charles Simon

Artificial intelligence really isn’t all that intelligent

feature
Mar 29, 2022 | 7 mins
Analytics, Artificial Intelligence, Machine Learning

Narrow AI applications such as Google Search and Amazon Alexa are great at solving specific problems, but only as long as you stick to the script.


From self-driving cars to dancing robots in Super Bowl commercials, artificial intelligence (AI) is everywhere. The problem with all of these AI examples, though, is that they’re not really intelligent. Rather, they represent narrow AI – an application that can solve a specific problem using artificial intelligence techniques. And that is very different from what you and I possess.

Humans (hopefully) display general intelligence. We can solve a wide range of problems and work out problems we haven’t encountered before. We can adapt to new situations and learn new things. We understand that physical objects exist in a three-dimensional environment and are subject to various physical attributes, including the passage of time. The ability to replicate human-level thinking artificially, or artificial general intelligence (AGI), simply does not exist in what we today think of as AI.

That’s not to take anything away from the overwhelming success AI has enjoyed to date. Google Search is an outstanding example of AI that most people regularly use. Google is capable of searching volumes of information at an incredible speed to provide (usually) the results the user wants near the top of the list.

Similarly, Google Voice Search allows users to speak search requests. Users can say something that sounds ambiguous and get a result back that is properly spelled, capitalized, punctuated, and, to top it off, usually what the user meant. 

How does it work so well? Google has historical data from trillions of searches, along with which results users chose. From this, it can predict which searches are likely and which results users will find useful. But there is no expectation that the system understands what it is doing or any of the results it presents.

This highlights narrow AI’s need for huge amounts of historical data. The approach works well in search because every user interaction creates another item of training data. But when the training data has to be tagged manually, that becomes an arduous task. Further, any bias in the training set will flow directly into the results. If, for example, a system is developed to predict criminal behavior and it is trained on historical data that includes a racial bias, the resulting application will have a racial bias as well.
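To make that point concrete, here is a toy sketch in Python, using entirely made-up data, of how a bias baked into historical records flows straight through to a model’s predictions. The “model” is nothing more than counting; real systems are far more elaborate, but the dynamic is the same.

```python
# Toy sketch with hypothetical data: a "risk" model trained on biased
# historical records simply reproduces the bias it was trained on.

# Historical records: (neighborhood, prior_incidents, was_flagged).
# The flag correlates with neighborhood, not with behavior.
history = [
    ("north", 0, 1), ("north", 0, 1), ("north", 1, 1), ("north", 0, 0),
    ("south", 0, 0), ("south", 1, 0), ("south", 2, 1), ("south", 0, 0),
]

def train(records):
    """'Learn' P(flagged | neighborhood) by simple counting."""
    totals, flags = {}, {}
    for hood, _, flagged in records:
        totals[hood] = totals.get(hood, 0) + 1
        flags[hood] = flags.get(hood, 0) + flagged
    return {hood: flags[hood] / totals[hood] for hood in totals}

model = train(history)

# Two people with identical behavior get very different scores,
# purely because the historical data was skewed by neighborhood.
print(model["north"])  # 0.75
print(model["south"])  # 0.25
```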

Personal assistants such as Alexa or Siri follow scripts with numerous variables and so are able to create the impression of being more capable than they really are. But as all users know, anything you say that is not in the script will yield unpredictable results.

As a simple example, you can ask a personal assistant, “Who is Cooper Kupp?” The phrase “Who is” triggers a web search on the variable remainder of the phrase and will likely produce a relevant result. With many different script triggers and variables, the system gives the appearance of some degree of intelligence while actually doing symbol manipulation. Because of this lack of underlying understanding, only 5% of people say they never get frustrated using voice search.
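A stripped-down sketch of that trigger-and-variable pattern shows how little machinery is needed to create the appearance of intelligence, and how quickly it breaks when the input doesn’t match a script. The trigger phrases and handlers below are invented for illustration.

```python
# Toy sketch of a script-driven "assistant": match a trigger phrase,
# treat the rest of the utterance as a variable, and hand it to a handler.
# Nothing here understands the question; the handlers are hypothetical.

def web_search(query):
    return f"Top result for '{query}' (pulled from an index, not understood)"

def weather(location):
    return f"Forecast for {location}: partly cloudy"  # canned placeholder

SCRIPTS = {
    "who is ": web_search,
    "what is ": web_search,
    "what's the weather in ": weather,
}

def assistant(utterance):
    text = utterance.lower().strip().rstrip("?")
    for trigger, handler in SCRIPTS.items():
        if text.startswith(trigger):
            return handler(text[len(trigger):])  # the variable part of the script
    return "Sorry, I don't know how to help with that."  # off-script fallback

print(assistant("Who is Cooper Kupp?"))   # hits the "who is" script
print(assistant("Why do blocks stack?"))  # off-script: unhelpful fallback
```

Real assistants use far more triggers and far better matching, but the shape of the system is the same: recognized patterns with variable slots, and a fallback for everything else.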

A massive program like GPT-3 or Watson has such impressive capabilities that the concept of a script with variables is entirely invisible, allowing it to create an appearance of understanding. These programs are still mapping specific inputs to specific output responses, though. The data sets at the heart of the AI’s responses (the “scripts”) are now so large and variable that the underlying script is hard to notice until the user goes off it. As with all of the other AI examples cited, off-script input generates unpredictable results. In the case of GPT-3, the training set is so large that eliminating the bias has thus far proven impossible.

The bottom line? The fundamental shortcoming of what we today call AI is its lack of common-sense understanding. Much of this is due to three historical assumptions:

  • The principal assumption underlying most AI development over the past 50 years was that simple intelligence problems would fall into place if we could solve difficult ones. Unfortunately, this turned out to be a false assumption. It was best expressed as Moravec’s Paradox. In 1988, Hans Moravec, a prominent roboticist at Carnegie Mellon University, stated that it is comparatively easy to make computers exhibit adult-level performance on intelligence tests or when playing checkers, but difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility. In other words, often the difficult problems turn out to be simpler and the apparently simple problems turn out to be prohibitively difficult.
  • The next assumption was that if you built enough narrow AI applications, they would eventually grow together into a general intelligence. This also turned out to be false. Narrow AI applications don’t store their information in a generalized form that other narrow AI applications can use to broaden their scope. Language processing applications and image processing applications can be stitched together, but they cannot be integrated in the way a child effortlessly integrates vision and hearing.
  • Lastly, there has been a general feeling that if we could just build a machine learning system big enough, with enough computer power, it would spontaneously exhibit general intelligence. This hearkens back to the days of expert systems that attempted to capture the knowledge of a specific field. These efforts clearly demonstrated that it is impossible to create enough cases and example data to overcome the underlying lack of understanding. Systems that are simply manipulating symbols can create the appearance of understanding until some “off-script” request exposes the limitation.

Why aren’t these issues the AI industry’s top priority? In short, follow the money.

Consider, for example, the development approach of building up the capabilities of a three-year-old, such as stacking blocks. It is entirely possible, of course, to develop an AI application that would learn to stack blocks just like that three-year-old. It is unlikely to get funded, though. Why? First, who would put millions of dollars and years of development into an application that performs a single task any three-year-old can do, but nothing else, nothing more general?

The bigger issue, though, is that even if someone would fund such a project, the AI is not displaying real intelligence. It does not have any situational awareness or contextual understanding. Moreover, it lacks the one thing that every three-year-old can do: become a four-year-old, and then a five-year-old, and eventually a 10-year-old and a 15-year-old. The innate capabilities of the three-year-old include the capability to grow into a fully functioning, generally intelligent adult.

This is why the term artificial intelligence doesn’t work. There simply isn’t much intelligence going on here. Most of what we call AI is based on a single algorithm, backpropagation. It goes under the monikers of deep learning, machine learning, artificial neural networks, even spiking neural networks. And it is often presented as “working like your brain.” If you instead think of AI as a powerful statistical method, you’ll be closer to the mark.
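If the “powerful statistical method” framing sounds abstract, a minimal sketch of backpropagation-style learning makes it concrete: the system nudges numeric parameters to reduce error on examples, which is curve fitting, not comprehension. The two-parameter model below is deliberately trivial and stands in for far larger networks.

```python
# Minimal sketch: backpropagation-style learning is iterative curve fitting.
# A two-parameter model nudges w and b to reduce squared error on examples;
# nothing in this loop involves understanding what the data means.

data = [(x, 2 * x + 1) for x in range(-5, 6)]  # points on the line y = 2x + 1

w, b, lr = 0.0, 0.0, 0.01  # parameters and learning rate
for _ in range(1000):
    for x, y in data:
        pred = w * x + b
        err = pred - y
        # Gradient of the squared error with respect to w and b
        # (the constant factor is folded into the learning rate)
        w -= lr * err * x
        b -= lr * err

print(round(w, 2), round(b, 2))  # converges to roughly 2.0 and 1.0
```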

Charles Simon, BSEE, MSCS, is a nationally recognized entrepreneur and software developer and the CEO of FutureAI. Simon is the author of Will the Computers Revolt?: Preparing for the Future of Artificial Intelligence, and the developer of Brain Simulator II, an AGI research software platform. For more information, visit https://futureai.guru/Founder.aspx.

New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to newtechforum@infoworld.com.