2022 Trends in Semantic Technologies: Humanizing Artificial Intelligence

Not long ago, semantic technologies were considered a taboo, almost esoteric branch of data management that few people talked about or openly admitted to using.

Today, with the burgeoning popularity of knowledge graphs (ubiquitous in solutions spanning everything from data preparation to analytics) and the increasing ascendance of neuro-symbolic Artificial Intelligence (which couples AI’s knowledge foundation with its statistical foundation), semantic technologies are actively sought for sundry use cases across industries.

The most cogent of these use cases, and the best suited to these capabilities, involve almost any form of natural language technology, in deployments ranging from implementing workflows with Cognitive Processing Automation to applications of conversational AI.

According to expert.ai CTO Marco Varone, “A lot of things are happening in the semantic language understanding space. Many more things have happened in the last three, four years than in the previous 10 to 15. In the last few years the change has been from experiments in semantics and language to real projects.”

What’s most significant about these projects is that they frequently simplify multiple aspects of AI related to Natural Language Processing. Moreover, by utilizing the semantic inferencing approach that’s foundational to symbolic AI deployments, organizations are creating an effect that’s as profound as it is undeniable.

What they’re doing is making AI itself more human-like, explainable, and dependable in production settings, thereby spurring this pivotal series of technologies to the next phase of its evolution and its enterprise utility.

“For many, the idea is the next technology will be so smart that it will be able to learn and somehow manage itself,” Varone reflected. “This is not possible and companies have finally understood that you need human-in-the-loop.”

Human-in-the-Loop

The precept of human-in-the-loop is one of the means by which enterprise AI is becoming more humanlike via semantic approaches. People are instrumental in devising the business rules that form the basis of the machine reasoning at the core of the symbolic AI method that semantic technologies underpin.

Moreover, humans are indispensable to AI approaches involving solely AI’s knowledge foundation, to those involving its connectionist foundation exemplified by machine learning, and to those predicated on intertwining the two for neuro-symbolic AI applications. “Human-in-the-loop will shape many things in the coming year because it means you need to organize your processes in a way that humans can always add the final part of the value that only humans can do,” Varone explained.

Human Experts, Machine Reasoning

There are two principal ways humans are directly responsible for the underlying worth of symbolic reasoning for natural language technology use cases. The first involves subject matter experts “enriching the knowledge graph, which is a super trend for working,” Varone disclosed. Knowledge graphs might be compiled for any number of domains including regulations, legal matters, or products; human expertise is pivotal for populating these applications with the most relevant, curated knowledge. To that end, the second way humans fortify deployments of semantic inferencing is by assembling the vocabularies, taxonomies, thesauri, and rules on which these intelligent systems reason for applications like text analytics.
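To make the first of these two points concrete, below is a minimal sketch of how an expert’s curation might be captured as knowledge graph enrichment. It uses the open source rdflib library, and the namespace, drug and regulation terms, and query are invented for illustration rather than drawn from any deployment described here.

```python
# A minimal sketch of expert-curated knowledge graph enrichment.
# Assumes the open source rdflib package (pip install rdflib); the
# example.org namespace and the drug/regulation terms are invented.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/domain#")

g = Graph()
g.bind("ex", EX)

# An expert asserts curated facts: a class hierarchy and relationships
# that a purely statistical model would otherwise have to guess at.
g.add((EX.Anticoagulant, RDFS.subClassOf, EX.Drug))
g.add((EX.Warfarin, RDF.type, EX.Anticoagulant))
g.add((EX.Warfarin, EX.regulatedBy, EX.FDA_CFR_Title21))
g.add((EX.Warfarin, RDFS.label, Literal("warfarin")))

# Downstream components can now reason over the curated structure,
# e.g. find every product that falls under the Drug class or a subclass.
query = """
PREFIX ex: <http://example.org/domain#>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?product ?label WHERE {
    ?product rdf:type/rdfs:subClassOf* ex:Drug ;
             rdfs:label ?label .
}
"""
for row in g.query(query):
    print(row.product, row.label)
```

The point of the sketch is that every triple is a deliberate, expert-asserted fact that downstream reasoning can rely on, rather than a statistical inference.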

“You need to have your expert person that can put the knowledge, that can use the abstraction capability of the human person that can really decide which are the important things and which are the things that are only noise,” Varone mentioned. Text analytics applications are pivotal for surmounting the unstructured data divide in numerous areas including understanding market forces in finance or retail, researching new solutions in pharma and healthcare, and fortifying security for various intelligence agencies. “With text analytics knowledge navigation you have a big amount of information collected internally, externally, and a combination of the two, and you want to extract information to help your knowledge workers,” Varone specified.
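As a similarly hedged illustration of the second point, the sketch below tags unstructured text against an expert-assembled vocabulary, which is the simplest form of the rule-driven text analytics described above; the terms, taxonomy labels, and sample sentence are hypothetical placeholders.

```python
# A minimal sketch of symbolic text analytics: tagging unstructured text
# against an expert-curated vocabulary. Terms, taxonomy categories, and
# the sample sentence are hypothetical placeholders.
import re
from collections import defaultdict

# Expert-assembled vocabulary: surface forms mapped to taxonomy concepts.
VOCABULARY = {
    "warfarin": "Drug/Anticoagulant",
    "atrial fibrillation": "Condition/Cardiac",
    "fda": "Regulator",
}

def tag_text(text: str) -> dict:
    """Return taxonomy concepts found in the text, with their mentions."""
    found = defaultdict(list)
    for term, concept in VOCABULARY.items():
        # Whole-word, case-insensitive matching; a real system would add
        # lemmatization, synonyms, and disambiguation rules here.
        for match in re.finditer(rf"\b{re.escape(term)}\b", text, re.IGNORECASE):
            found[concept].append(match.group(0))
    return dict(found)

sample = "The FDA reviewed warfarin dosing guidance for atrial fibrillation."
print(tag_text(sample))
# {'Drug/Anticoagulant': ['warfarin'], 'Condition/Cardiac': ['atrial fibrillation'],
#  'Regulator': ['FDA']}
```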

Humanizing Machine Learning

The human expertise that’s central to creating the previously mentioned tools (knowledge graphs, rules, taxonomies, and glossaries) for exploiting AI’s semantic knowledge base for natural language technologies is equally applicable to statistical AI deployments. In particular, humans have come to play a vital role in everything from the creation of advanced machine learning models to the efficacy of their ongoing performance. The main ways subject matter expertise can positively affect these connectionist techniques include:

  • Training Data: Data scientists and predictive modelers should frequently consult with subject matter experts when refining models with additional training data. “Even to give more data to train your models, you need a person to say this is a valuable and available source of information so use it, or this is not good because of all of this noise,” Varone noted. The intimacy of the knowledge pertaining to their domains that experts have, which may escape the notice of data scientists, is critical for delivering the best training data.
  • Bias: Detecting, rectifying, and eliminating model bias is foundational to upholding models to responsible AI standards. “Statistical models can learn biases very quickly,” Varone admitted. “They can learn wrong things and if you don’t have an expert it can take a lot of time to spot. If you have an expert in the loop you can immediately spot when something’s wrong or not relevant.”
  • Accuracy: Ultimately, employing experts to validate the outputs of advanced machine learning models inherently boosts their accuracy, for instance by monitoring for events such as model drift (see the sketch after this list). “Experts need to be part of any language knowledge process,” Varone posited. “Because then you can be sure what you’re getting is top quality and…you’ll get better results in the end and spend less resources.”
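A minimal sketch of the human-in-the-loop pattern these bullets describe appears below: predictions the model is unsure about, or signals of drift, are escalated to a subject matter expert rather than accepted automatically. The thresholds, data structures, and example inputs are illustrative assumptions, not a prescribed implementation.

```python
# A minimal human-in-the-loop sketch: uncertain predictions and suspected
# drift are queued for expert review instead of being accepted blindly.
# Thresholds, structures, and example values are illustrative only.
from dataclasses import dataclass, field
from typing import List, Tuple

CONFIDENCE_THRESHOLD = 0.80   # below this, ask a human
DRIFT_THRESHOLD = 0.15        # acceptable shift in positive-label rate

@dataclass
class ReviewQueue:
    items: List[Tuple[str, str, float]] = field(default_factory=list)

    def submit(self, text: str, label: str, confidence: float) -> None:
        # In production this would notify a subject matter expert.
        self.items.append((text, label, confidence))

def route_prediction(text: str, label: str, confidence: float,
                     queue: ReviewQueue) -> str:
    """Accept confident predictions; escalate uncertain ones to an expert."""
    if confidence < CONFIDENCE_THRESHOLD:
        queue.submit(text, label, confidence)
        return "needs_review"
    return "accepted"

def drift_detected(train_label_rate: float, live_label_rate: float) -> bool:
    """Flag drift when the live label distribution strays from training."""
    return abs(train_label_rate - live_label_rate) > DRIFT_THRESHOLD

queue = ReviewQueue()
print(route_prediction("Contract clause with penalty terms", "risk", 0.62, queue))  # needs_review
print(route_prediction("Invoice total mismatch", "anomaly", 0.93, queue))            # accepted
print(drift_detected(train_label_rate=0.30, live_label_rate=0.52))                   # True
```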

Human-Led Composite AI

The optimal way to conserve resources, increase efficiency, and perfect the output of AI with semantic techniques is to pair machine learning with symbolic reasoning in what Varone characterized as a “hybrid approach.” Such hybridization is part of the notion of composite AI that Gartner introduced, in which organizations invoke a plethora of AI methodologies to generate these ideal outcomes. There are numerous ways to use AI’s reasoning and learning capacities to enhance machine understanding of language. Labeling the training data required for supervised learning deployments is one of the prime inhibitors of that approach. Varone cited an example in which, for this purpose, firms might consult an expert who says “yes, you need to [annotate data], but it will take 30 days to do so.”

However, by employing that subject matter expert to devise business rules for annotating the necessary training data, “we can do it in three days,” Varone concluded, which expedites time to value. There are also instances in which organizations can utilize machine learning methods to refine or populate the knowledge base upon which to create symbolic AI rules. Supervised learning methods are typically included in these efforts, although Varone hinted at the efficacy of “creating or enriching a knowledge graph in an unsupervised mode.” Regardless of which approach is used, human involvement is crucial for succeeding with these AI opportunities for processing language, although the reliance on connectionist and rules-based approaches might not be equal. “Why you see more and more interest in the hybrid approach is because mixing symbolic is super efficient in terms of resources; it’s a thousand times more efficient,” Varone observed.
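To illustrate the rules-assisted annotation Varone describes (the 30-days-to-3-days example), here is a hedged sketch in which expert-written labeling rules bootstrap training data from unlabeled text. The rules, labels, and documents are invented for the example; a real deployment would reconcile conflicting rules and route disagreements back to the expert.

```python
# A minimal weak-supervision sketch: expert-authored rules pre-label raw
# documents so a statistical model can be trained far faster than with
# fully manual annotation. Rules, labels, and documents are invented.
import re
from typing import Callable, List, Optional, Tuple

LabelingRule = Callable[[str], Optional[str]]

def rule_contract_risk(text: str) -> Optional[str]:
    pattern = r"\b(penalty|indemnif\w+|liquidated damages)\b"
    return "RISK" if re.search(pattern, text, re.I) else None

def rule_benign_boilerplate(text: str) -> Optional[str]:
    return "NO_RISK" if re.search(r"\bgoverning law\b", text, re.I) else None

RULES: List[LabelingRule] = [rule_contract_risk, rule_benign_boilerplate]

def auto_label(docs: List[str]) -> List[Tuple[str, Optional[str]]]:
    """Apply expert rules; unmatched documents go to human annotators."""
    labeled = []
    for doc in docs:
        votes = [label for rule in RULES if (label := rule(doc)) is not None]
        # Naive policy: take the first firing rule; real systems reconcile
        # conflicting rules and send disagreements to an expert.
        labeled.append((doc, votes[0] if votes else None))
    return labeled

docs = [
    "The supplier shall indemnify the buyer against all claims.",
    "This agreement is subject to the governing law of New York.",
    "Delivery schedules are listed in Appendix B.",
]
for doc, label in auto_label(docs):
    print(label or "SEND_TO_HUMAN", "-", doc)
```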

Managing AI with Semantics

Semantic technologies and tenets are some of the most effective ways to oversee enterprise use cases of AI for natural language technologies. These capabilities have been formed by human-curated knowledge, which is why the notion of human-in-the-loop is so prominent today.

By extending this concept to statistical AI approaches, or to those involving a composite of AI’s reasoning and learning capabilities, firms are able to expedite their deployments, improve them, and make them conform all the more to the human standards by which their underlying value is ultimately judged. “Human-in-the-loop is finally understood by everybody that people will be needed in the future,” Varone summarized. “You can’t do without people. You need to give people the best tools, yes. But, people must be in the loop.”

The underlying systems people supervise can’t help but gain from this development, as indeed they are.

About the Author

Jelani Harper is an editorial consultant serving the information technology market. He specializes in data-driven applications focused on semantic technologies, data governance, and analytics.
