Freddy Lecue

Dr. Freddy Lecue is the Chief Artificial Intelligence (AI) Scientist at CortAIx (Centre of Research & Technology in Artificial Intelligence eXpertise) at Thales in Montreal, Canada. He is also a research associate at INRIA, in the WIMMICS team, Sophia Antipolis, France. Before joining Thales's new R&T lab dedicated to AI, he was AI R&D Lead at Accenture Labs in Ireland from 2016 to 2018. Prior to joining Accenture, he was a research scientist and lead investigator in large-scale reasoning systems at IBM Research from 2011 to 2016, a research fellow at The University of Manchester from 2008 to 2011, and a research engineer at Orange Labs from 2005 to 2008.

2021 Talk: On the role of knowledge graphs in explainable machine learning

Machine Learning (ML), one of the key drivers of Artificial Intelligence, has demonstrated disruptive results in numerous industries. However, one of the most fundamental problems of applying ML, and particularly Artificial Neural Network models, in critical systems is their inability to provide a rationale for their decisions. For instance, an ML system recognizes an object as a warfare mine through comparison with similar observations; no human-understandable rationale is given, mainly because common-sense knowledge and reasoning are out of scope for ML systems. We show how knowledge graphs can be applied to make machine learning decisions more human-understandable, and present an asset combining ML and knowledge graphs to produce a human-like explanation when recognizing an object of any class in a knowledge graph of 4,233,000 resources.
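For illustration only, the following minimal Python sketch (not the Thales asset described above) shows the general idea: a classifier's bare label is paired with "is-a" links from a toy knowledge graph so that the prediction is reported as a chain of human-readable facts. All class names, edges and the confidence value below are hypothetical.

# Hypothetical sketch: pair a classifier's label with a toy knowledge graph
# so that the prediction is reported together with a human-readable rationale.
# All classes, edges and the confidence value are invented for illustration.

KNOWLEDGE_GRAPH = {  # "is-a" edges: class -> superclass
    "warfare_mine": "explosive_device",
    "explosive_device": "hazardous_object",
    "hazardous_object": "physical_object",
}

def explain(label: str, confidence: float) -> str:
    """Walk the is-a hierarchy upwards and phrase the prediction as a chain of facts."""
    chain = [label]
    while chain[-1] in KNOWLEDGE_GRAPH:  # follow superclass links up to the root
        chain.append(KNOWLEDGE_GRAPH[chain[-1]])
    readable = " is a kind of ".join(c.replace("_", " ") for c in chain)
    return f"Recognized: {readable} (confidence {confidence:.0%})."

# The (label, confidence) pair is assumed to come from an ML classifier.
print(explain("warfare_mine", 0.92))
# Output: Recognized: warfare mine is a kind of explosive device is a kind of
# hazardous object is a kind of physical object (confidence 92%).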


2019 Talk: On the role of Knowledge Graphs for the adoption of Machine Learning Systems in Industry


Despite a surge of innovation focusing on Machine Learning-based AI systems, major industries remain puzzled about their impact at scale. This is particularly true in the context of critical systems, where robustness, trust and, in particular, explanation are required for mass adoption. Pure Machine Learning-based approaches have emerged but fail to address the core principle of explainability, i.e., how to communicate the knowledge that explains a decision in a human-comprehensible way. Knowledge graphs have shown fit-for-purpose characteristics for tackling the problem of explainability in Machine Learning. We will review how knowledge graphs can be adopted to advance the explanation of AI systems and, ultimately, the adoption of AI at scale.