Einstein lost time theorizing a cosmological constant in his gravitational field equations, based upon the idea of a static universe. Of course, it was later discovered that the universe is actually expanding, rendering any such “constant” a source of compounding error and diminishing accuracy. Yet while this episode is often cited as one of Einstein’s notable blunders, it was the cognitive dissonance it created that enabled the next generational leap in our understanding of our expanding universe.

A similar scenario is unfolding with artificial intelligence (AI), or more specifically, a subset of AI called natural language processing (NLP). NLP enables the automated analysis, classification, and interpretation of human language. For example, using NLP it is possible to train computers to recognize that a news article is about terrorism, that a research study focuses on childhood obesity, or that a tweet contains hate speech. But like our expanding universe, language is not static, and there is no magical constant. Written language varies wildly in length, and its concepts shift as the text progresses, diluting any single label in proportion to the length of the piece. This is especially true of conversational content, such as customer service calls, telehealth sessions, or social media threads.
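To make that traditional setup concrete, here is a minimal sketch of the kind of single-label text classifier described above. The categories, toy training snippets, and scikit-learn pipeline are illustrative assumptions, not any particular vendor’s actual system.

```python
# A minimal sketch of traditional single-label text classification.
# Toy data and category names are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "The attack was claimed by a known extremist group.",
    "Authorities raised the national threat level after the bombing.",
    "The study tracked body-mass index in children over five years.",
    "Researchers linked sugary drinks to rising childhood obesity rates.",
]
train_labels = ["terrorism", "terrorism", "childhood_obesity", "childhood_obesity"]

# Bag-of-words features plus a linear classifier: one document in, one label out.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

print(model.predict(["New school lunch guidelines aim to curb obesity in kids."]))
# expected output: ['childhood_obesity']
```

Note the shape of the interface: the entire text, however long, goes in and exactly one label comes out.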

Traditional NLP tools are designed to treat these texts as single “documents”, assigning each one classification label. In short, the current NLP paradigm falls short because it is built on the false idea of a static language with universal constants. To quote IBM CTO and cybersecurity thought leader Dr. Russell Couturier, “The curve has an initial asymmetrical asymptote that projects accuracy to zero as the length of the analytic enlarges.” In other words, the longer the text or conversation, the less accurate the results tend to be. It’s clear that to take the next generational leap, NLP must level up to accommodate the reality that human language is dynamic. Of course, this is easier said than done. Even the exciting new platform GPT-3 takes a more traditional NLP approach, requiring a gargantuan model and data set (175 billion parameters trained on roughly 45 terabytes of text) to produce its impressive results.

“While OpenAI’s GPT-3 is an excellent exercise in attribution scale for predicting a small linear progression of language,” Couturier continued, “it treats language as a universal constant similar to Einstein’s gravitational theories of a static universe. Simply put, the math works great for a few sentences but lacks any predictable model based upon the contextual change of the conversation.”

By embracing the dynamic language paradigm, AI can move beyond simply identifying what a conversation is about and start predicting where a conversation is going. This natural language trajectory tracking capability has huge implications for a wide range of market segments and use cases, including cybersecurity, defense (DoD), mental health care, law enforcement, public safety, finance, and customer care, to name a few. Imagine the ability to analyze text to spot people who are trending toward self-harm, radicalization, or violence, or perhaps even to detect early-onset Alzheimer’s.

This deep learning approach applies successive windowing techniques to the language to detect changes in theme, sentiment, location, tone, subject, mental/emotional state, and any other semantic label that is applied. It is akin to the way people solve a jigsaw puzzle: group similar pieces by color and/or shape, start with the edge pieces to establish a framework or “theme”, then connect the grouped pieces until a clear picture emerges and the image can be properly interpreted.
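As a rough illustration of the windowing idea (a sketch with an assumed window size and step, and a stand-in keyword scorer rather than a trained model), the example below slides a fixed window across a conversation and labels each window separately, so shifts in topic or tone show up as a trajectory of labels rather than a single tag.

```python
# A rough sketch of windowed ("trajectory") analysis over a conversation.
# The scoring function is a stand-in for a real classifier.
from typing import List

def windows(sentences: List[str], size: int = 3, step: int = 1) -> List[List[str]]:
    """Slide a fixed-size window over the sentences of a conversation."""
    return [sentences[i:i + size]
            for i in range(0, max(1, len(sentences) - size + 1), step)]

def toy_label(window: List[str]) -> str:
    """Stand-in classifier: keyword counts instead of a trained model."""
    text = " ".join(window).lower()
    scores = {
        "billing": sum(text.count(w) for w in ("invoice", "charge", "refund")),
        "frustration": sum(text.count(w) for w in ("angry", "ridiculous", "cancel")),
        "resolution": sum(text.count(w) for w in ("thanks", "resolved", "great")),
    }
    return max(scores, key=scores.get)

conversation = [
    "Hi, I was charged twice on my last invoice.",
    "I'd like a refund for the duplicate charge.",
    "This is ridiculous, it happened last month too.",
    "If this isn't fixed I'm going to cancel my account.",
    "Okay, the refund just came through, thanks.",
    "Great, I appreciate you resolving this so quickly.",
]

# One label per window yields a trajectory instead of a single document tag.
trajectory = [toy_label(w) for w in windows(conversation)]
print(trajectory)  # -> ['billing', 'billing', 'frustration', 'resolution']
```

A real system would swap the keyword scorer for trained classifiers and tune the window size and overlap, but the output shape is the point: a sequence of labels whose direction can be tracked over the course of the conversation.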

“GPT-3 is a helpful and necessary evolution in NLP analytics, but amounts to little more than mathematical wizardry that accelerates the need for tracking natural language trajectories,” added Couturier. “It’s an exciting and impressive starting point, but far from the finish line, as the creators themselves have stated.”

When you are exploring technologies to analyze, classify, label, and prioritize data, be sure to ask about capabilities such as accuracy as a function of document length, windowing, classification analogs, chunking (intelligent parsing), and contextual trending.
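On the first item in that list, one simple way to pressure-test a tool is to bucket a labeled evaluation set by document length and measure accuracy per bucket. A minimal sketch, assuming you supply your own classifier and labeled examples:

```python
# Sketch: measure how classification accuracy changes with document length.
# `classify` and the evaluation data are placeholders you would supply.
from collections import defaultdict
from typing import Callable, Dict, Iterable, Tuple

def accuracy_by_length(
    examples: Iterable[Tuple[str, str]],   # (text, true_label) pairs
    classify: Callable[[str], str],        # your classifier under test
    bucket_size: int = 200,                # bucket width in words
) -> Dict[int, float]:
    correct = defaultdict(int)
    total = defaultdict(int)
    for text, true_label in examples:
        bucket = (len(text.split()) // bucket_size) * bucket_size
        total[bucket] += 1
        correct[bucket] += int(classify(text) == true_label)
    return {b: correct[b] / total[b] for b in sorted(total)}

# Usage (with your own data and model):
# print(accuracy_by_length(labeled_docs, my_model.predict_one))
# A steep drop-off in the longer buckets is the warning sign described above.
```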

Contextual classification is the next big thing in the automated comprehension of natural language. Do you agree? Can you see the value of using AI to predict the direction of conversational text in your market or business? I’d love to hear your thoughts, ideas, and feedback as we explore this exciting new shift in NLP.

 

Stephenson is the co-founder and CEO of Topos Labs, a Boston-based cognitive computing company. Connect with him on LinkedIn, Twitter or Email.