Freddy Lecue | On The Role Of Knowledge Graphs In Explainable Machine Learning
20m
Machine Learning (ML), as one of the key drivers of Artificial Intelligence, has demonstrated disruptive results in numerous industries. However, one of the most fundamental problems of applying ML models, and particularly Artificial Neural Networks, in critical systems is their inability to provide a rationale for their decisions. For instance, an ML system may recognize an object as a warfare mine by comparing it with similar observations. No human-transposable rationale is given, mainly because common-sense knowledge and reasoning are out of scope for ML systems. We present how knowledge graphs can be applied to expose more human-understandable machine learning decisions, and present an asset combining ML and knowledge graphs that exposes a human-like explanation when recognizing an object of any class in a knowledge graph of 4,233,000 resources.
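As a rough illustration of the idea (not the actual system presented in the talk), the minimal Python sketch below pairs a classifier's predicted label with facts retrieved from a toy knowledge graph to assemble a human-readable explanation. The triples, class names, and confidence value are all illustrative assumptions.

```python
# Minimal sketch: turning a bare classifier prediction into an explanation
# grounded in knowledge-graph facts. The tiny in-memory graph, class names,
# and confidence score are illustrative assumptions only.

# Toy knowledge graph as (subject, predicate, object) triples.
KG = [
    ("warfare_mine", "subClassOf", "explosive_device"),
    ("warfare_mine", "typicalShape", "cylindrical"),
    ("warfare_mine", "typicalLocation", "seabed"),
    ("explosive_device", "hazardLevel", "high"),
]

def facts_about(entity, graph):
    """Return all triples whose subject is the given entity."""
    return [(s, p, o) for (s, p, o) in graph if s == entity]

def explain(prediction, confidence, graph):
    """Attach graph facts (and one hop of inherited facts) to a prediction."""
    lines = [f"Recognized '{prediction}' (confidence {confidence:.0%}) because:"]
    for subj, pred, obj in facts_about(prediction, graph):
        lines.append(f"  - {subj} {pred} {obj}")
        # Follow subClassOf links one hop to add inherited context.
        if pred == "subClassOf":
            for s, p, o in facts_about(obj, graph):
                lines.append(f"  - (inherited) {s} {p} {o}")
    return "\n".join(lines)

# A hypothetical classifier output stands in for a real neural network.
print(explain("warfare_mine", 0.93, KG))
```

In this toy setup the neural network still performs the recognition; the knowledge graph only supplies the human-readable context around the label.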
Freddy Lecue of Thales Canada presents what he and his team have been working on. Drawing on their experience applying machine learning to their project, Freddy explains some of the issues that come with using AI to gather information and make decisions based on it; one of these issues is that the rationale behind an AI's decision is sometimes incorrect.
Freddy also explains what deploying such a system in a critical setting entails, and argues that the question of how users can trust a system that makes decisions without any rationale is central to the need for trustworthy knowledge graphs. For this, Freddy says, one solution is to incorporate knowledge-based techniques that use underlying knowledge to validate any outcome the system produces.
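A minimal sketch of that validation idea, again with an illustrative toy graph and a made-up consistency rule rather than anything from the talk: a prediction is flagged when its observed context contradicts what the knowledge graph records about the predicted class.

```python
# Minimal sketch of outcome validation against underlying knowledge:
# an ML prediction is accepted only if it does not contradict facts
# in the knowledge graph. Graph, context, and rule are assumptions.

KG = {
    ("warfare_mine", "typicalLocation"): {"seabed", "shoreline"},
    ("bird", "typicalLocation"): {"air", "tree", "ground"},
}

def validate(prediction, observed_location, graph):
    """Flag a prediction whose observed context contradicts the graph."""
    expected = graph.get((prediction, "typicalLocation"))
    if expected is None:
        return True, "no knowledge available; accepting by default"
    if observed_location in expected:
        return True, f"consistent: '{prediction}' expected at {sorted(expected)}"
    return False, (f"inconsistent: '{prediction}' expected at "
                   f"{sorted(expected)}, observed at '{observed_location}'")

ok, reason = validate("warfare_mine", "desert", KG)
print(ok, "-", reason)  # False - inconsistent: ...
```

#knowledgegraphs #knowledgegraphconference #knowledgegraphsmachinelearning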