Machine Learning (ML) is an extremely powerful tool that, despite becoming more mainstream in recent months, has remained distinctly out of reach for the average person. For my fellow low-techies, ML is an artificial intelligence (AI) technology that learns to identify patterns within data. ML can be trained to recognize pictures with dogs in them, for example, or to discern the sentiment of a social media post. But while it’s pretty straightforward to understand at a high level what ML does, programming and implementing it has traditionally required highly skilled engineers. Speaking as a liberal arts major with no formal technology education or training, I’m thankful this is changing.

Those who have worked with ML understand that it is a highly technical and involved process, often requiring years of experience and familiarity with several programming tools. Building and optimizing ML models can take a lot of time, especially if you don’t already have the data needed to train and test them. Model build times often stretch to several days or even weeks, depending on the nature and size of the models, and updating existing models means feeding in still more data. Heaven help you when you realize that the original question you were trying to answer was slightly off, requiring you to start over from scratch.

Into my life came Gracie…

I’m fortunate that my first job out of college was with Topos Labs, the creators of a no-code cognitive computing platform called Gracie. Gracie enables professionals like me, a Philosophy major, to build machine learning models that interpret and classify unstructured text with a high degree of accuracy, without any programming. The only skill required is the ability to read. Models that I and other non-technical colleagues (‘Dictioneers’) have built range from document-type classifiers (W2, obituary, RFP, etc.) to detectors of more abstract concepts such as opaque language, patient frailty, and urgency.

The secret to Gracie’s rapid adaptability and accuracy lies in what we call ‘human-augmented machine learning’. As the human in this equation, I know when something is opaque or what a W2 looks like. That knowledge, plus the ability to use a web browser, is all I need to tune Gracie on virtually any domain of interest. As the machine in this equation, Gracie analyzes text according to the materials and simple human guidance used to train her, and I curate her results to refine her understanding. This marriage of ML and human guidance drastically cuts down on the amount of training text required, while patented technologies make build times almost instantaneous and the entire process intuitive. The end result is a balanced feedback loop in which both Gracie and the user agree on what constitutes a ‘yes’ and a ‘no’.
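For curious readers, the general idea behind human-in-the-loop learning can be sketched in a few lines of Python. To be clear, this is a hypothetical illustration using scikit-learn, not Gracie’s actual (proprietary, no-code) implementation: a model trains on a small labeled sample, surfaces the texts it is least confident about for a human to judge, and retrains on the corrected labels.

```python
# A minimal human-in-the-loop text-classification loop (hypothetical sketch;
# not Gracie's implementation). The model trains on a small labeled set,
# asks a human about its least-confident prediction, and retrains.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny illustrative corpus: 1 = "urgent", 0 = "not urgent".
labeled_texts = ["Respond immediately, the server is down",
                 "Lunch menu attached for next week"]
labels = [1, 0]
unlabeled_texts = ["Please review at your convenience",
                   "Critical outage, act now",
                   "Quarterly newsletter draft"]

vectorizer = TfidfVectorizer()
model = LogisticRegression()

for round_num in range(3):  # a few rounds of human curation
    X = vectorizer.fit_transform(labeled_texts)
    model.fit(X, labels)

    if not unlabeled_texts:
        break

    # Score the unlabeled pool and find the text the model is least sure about.
    probs = model.predict_proba(vectorizer.transform(unlabeled_texts))
    uncertainty = 1 - probs.max(axis=1)
    idx = int(np.argmax(uncertainty))

    # The human steps in here: confirm or correct the model's guess.
    text = unlabeled_texts.pop(idx)
    human_label = int(input(f"Is this urgent (1/0)? {text!r} "))

    labeled_texts.append(text)
    labels.append(human_label)
```

The point of the sketch mirrors the description above: instead of labeling thousands of documents up front, the human only rules on the examples the machine is least sure about, so far less training text is needed to reach agreement on ‘yes’ and ‘no’.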

If Gracie is any indication, I sense that we are at the very early stages of the democratization of AI. This should be great news for my fellow Humanities and Liberal Arts majors.