Piaget Is Based on Building Blocks, While Pavlov Is More of a Response


There are two approaches to using artificial intelligence (AI) to understand or interpret human language, and they can be explained by analogy to the learning theories of Russian physiologist Ivan Pavlov and Swiss psychologist Jean Piaget.

Pavlovian approaches are akin to stimulus-response conditioning, while Piaget’s cognitive theory of learning is built upon cumulative knowledge.

Is your natural language processing Pavlovian or Piagetian? Let’s explore.

First, you need to understand that discrete machine learning models are each built on a specific corpus and produce binary outcomes. A model is trained to recognize a very specific pattern of speech, and when that pattern matches, a conclusion is reached.

If subsequent independent models are created, they have no knowledge of each other. I want you to think about Facebook. In a universe with billions of conversations taking place, each conversation has one or many themes. Imagine each theme as a planet in that universe; the closer two planets are, the more similar their themes.

For example, conversations about soccer and football would sit close together, while conversations about fishing and fashion design would be far apart. Traditional AI approaches are Pavlovian, and each conversation is a different universe.
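Before looking at what goes wrong, it helps to ground the metaphor: theme distance corresponds roughly to measurable text similarity. The sketch below is purely hypothetical, using invented snippets and a simple bag-of-words cosine similarity rather than how any particular product measures theme distance, but it shows how nearby planets share vocabulary while distant ones do not.

```python
# Illustrative only: "planet distance" sketched as bag-of-words cosine
# similarity over invented snippets. Real systems use richer
# representations, but the geometry is the same idea.
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between two bag-of-words term vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm_a = math.sqrt(sum(c * c for c in va.values()))
    norm_b = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

soccer   = "our team played a great match and the fans cheered every goal this season"
football = "our team played a tough game and the fans cheered every touchdown this season"
fishing  = "spent the weekend fishing for bass with a new rod and reel"
fashion  = "the runway show featured bold colors and layered fall outfits"

print(cosine_similarity(soccer, football))  # high  -- nearby planets
print(cosine_similarity(fishing, fashion))  # low   -- distant planets
```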

But what happens when multiple models detect the same pattern and create false positives?

There is no way to tune the models, compare the intersection of their results, or refine the granularity of the themes. As the number of models increases and the universes multiply, the problem grows nonlinearly: with n independent models there are on the order of n² pairwise overlaps that can never be reconciled.

Simply, bells ring and dogs drool.
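To make the bells-and-drool point concrete, here is a minimal, hypothetical sketch of the discrete approach: two models built in isolation, each reduced to a binary outcome. The keyword sets and the sample conversation are invented for illustration.

```python
# Hypothetical sketch: two discrete "Pavlovian" models, each built in
# isolation and reduced to a binary yes/no. The keyword sets and the
# sample conversation are invented.

FISHING_PATTERNS = {"fishing", "rod", "reel", "bait", "vest"}
FASHION_PATTERNS = {"fashion", "style", "fabric", "outfit", "vest"}

def fishing_model(text: str) -> bool:
    """Discrete model #1: fires if any fishing pattern appears."""
    return bool(set(text.lower().split()) & FISHING_PATTERNS)

def fashion_model(text: str) -> bool:
    """Discrete model #2: fires if any fashion pattern appears."""
    return bool(set(text.lower().split()) & FASHION_PATTERNS)

conversation = "looking for a comfortable fishing vest with lots of style"

# Both bells ring; both dogs drool. Each model returns only True/False,
# so there is nothing to tune, compare, or refine between them.
print(fishing_model(conversation))  # True
print(fashion_model(conversation))  # True
```

Because each model lives in its own universe, the collision surfaces as two unrelated positives rather than as evidence that a finer-grained theme exists.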

Piaget’s cognitive theory of learning is based upon building-block experiences. A child might mistake a pea for a toy: the pea looks like a ball, but it doesn’t bounce, and it can be eaten.

Therefore, it is not a toy; it is food.

Natural language interpretation requires the Piaget approach: incremental learning within a single model of human language, one shared universe. When two competing interpretations (false positives) arise, they can be detected, analyzed, and resolved by creating new planets that distinguish new themes.

Now, let’s go back to Facebook, where the themes of those billions of conversations look like planets in the universe.

Earlier, I stated that fishing conversational themes and fashion conversational themes would be far apart. What if a conversation were about comfortable fishing vests? It is about both fashion and fishing.

In a whole-language-model approach, this would be like two colliding planets that share a percentage of their landscape.

We now have the ability both to see and to dissect the intersection, creating new themes of “comfort fishing” and “fashion.” Think of “fishing” as the planet, with “comfort” and “fashion” as moons around it.
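As a contrast to the Pavlovian sketch above, here is an equally hypothetical sketch of a single theme universe that detects the collision and spawns a moon. The class names, vocabularies, and splitting rule are invented for illustration; this is not Topos Labs’ implementation.

```python
# Hypothetical sketch of one shared theme "universe": because all themes
# live in a single structure, a collision can be detected, dissected,
# and resolved by creating a new subtheme ("moon"). Names, vocabularies,
# and the splitting rule are invented.

from dataclasses import dataclass, field

@dataclass
class Theme:
    name: str
    vocabulary: set
    moons: list = field(default_factory=list)  # finer-grained subthemes

class ThemeUniverse:
    def __init__(self):
        self.planets = []

    def add_planet(self, name, vocabulary):
        self.planets.append(Theme(name, set(vocabulary)))

    def interpret(self, text):
        """Return every planet whose vocabulary overlaps the text."""
        words = set(text.lower().split())
        return [p for p in self.planets if words & p.vocabulary]

    def resolve(self, text):
        """When two planets compete for the same text, dissect their
        intersection and spawn a moon instead of reporting two
        unrelated positives (the Pavlovian failure mode)."""
        matches = self.interpret(text)
        if len(matches) < 2:
            return matches
        first, second = matches[0], matches[1]
        shared = first.vocabulary & second.vocabulary
        moon = Theme(f"{first.name}+{second.name}", shared)
        first.moons.append(moon)  # incremental learning: the universe
        return [moon]             # is now one theme finer-grained

universe = ThemeUniverse()
universe.add_planet("fishing", {"fishing", "rod", "reel", "bait", "vest"})
universe.add_planet("fashion", {"fashion", "style", "outfit", "vest"})

result = universe.resolve("looking for a comfortable fishing vest")
print([m.name for m in result])  # ['fishing+fashion'] -- a new moon
```

Each resolved collision leaves the universe smarter than it found it, which is exactly the building-block learning Piaget describes.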

It’s the difference between food and a toy. The Pavlovian approach cannot accommodate this learning.

The Piaget approach allows for infinite incremental learning, creating new galaxies, planets, and moons.

A global language model drastically reduces false positives, provides intersection analysis, and creates an environment for incremental learning. It gets smarter as it gains more experiences.

Discrete models are costly and time-consuming, and they produce high false-positive rates as the number of models increases. When choosing a product or strategy, you must weigh the benefits of a whole-model approach against discrete modeling. Are you building intelligence or creating single-use models? Piaget or Pavlov?


An entrepreneur and technologist, Dr. Russell Couturier has successfully launched and sold several companies and holds 11 patents for security, artificial intelligence, and machine learning inventions. He is a technical advisor and co-founder of Topos Labs and a Distinguished Engineer at IBM. He spends his spare time teaching sustainable logging practices. His email address is [email protected]