| Location | Room 5 (Virtual) |
| Date | May 3, 2022 |
| Time | 9:00 AM to 10:30 AM |
This workshop will address use cases and techniques for reasoning with imperfect knowledge graphs, i.e., graphs including knowledge that is uncertain, incomplete or inconsistent. Deductive reasoning is of limited use for extending imperfect knowledge graphs, as it assumes knowledge that is wholly truthful – i.e., perfect knowledge. This workshop investigates the use of plausible reasoning over knowledge graphs that include imperfect knowledge, where mathematical proof is replaced by plausible arguments for and against a given premise – in other words, everyday reasoning! The foundations for this were laid by Alan Collins and his colleagues in the 1980s, building upon earlier work on semantic networks, and upon far earlier ideas about argumentation from Ancient Greece. We plan to focus primarily, but not exclusively, on use cases for healthcare, and will discuss technical approaches to plausible reasoning in all of its forms, including, but not limited to, causal, argumentation-based, case-based, abductive, and qualitative reasoning, as well as fuzzy logic. We also welcome demonstrations and evaluations of proof-of-concept implementations.
This workshop will take place online and will consist of a mix of short talks, demonstrations and round-table discussions around use cases, requirements and techniques for extending imperfect knowledge graphs. We invite position statements, which will be used to guide the design of the workshop agenda.
Knowledge graphs provide a flexible means to integrate a wide variety of information sources, including models describing concepts, their properties and their inter-relationships. When it comes to dealing with incomplete, uncertain and inconsistent knowledge, it is often impractical to apply formal logic or statistical approaches, even though imperfect knowledge is commonplace in everyday life. Plausible reasoning has been studied since Ancient Greece, e.g., Carneades and his guidelines for argumentation, followed by a long line of philosophers. Today, it is widely used in court cases, where the prosecution and defence make plausible arguments for and against the innocence of the accused.
Plausible reasoning can be used with causal models to provide explanations or predictions, and is also an important part of natural language understanding: finding the most plausible interpretation of an utterance involves selecting the meaning of each word that best fits the context, as inferred from nearby words and prior knowledge. Plausible reasoning moves from plausible premises to conclusions that are less plausible but nonetheless rational, and is based on the way things generally go in familiar situations. Plausibility is reinforced when listeners can call examples to mind. Examples can be used to confirm or refute reasoning, and questions can be used to probe reasoning at a deeper level, as well as to seek evidence that strengthens or weakens an argument.
Plausible reasoning further includes fuzzy reasoning, in which a system is treated as belonging to a mix of different states, to different degrees, at the same time, and qualitative reasoning, which deals with qualitative modelling of physical processes. Plausible reasoning is related to Bayesian inference, which provides a statistical basis for dealing with uncertainty when the corresponding statistics are available. In the absence of such statistics, plausible reasoning relies on qualitative equivalents of prior knowledge and conditional likelihoods.
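As a concrete illustration of the Bayesian case, a single application of Bayes' rule updates a prior belief using conditional likelihoods. The numbers below are purely illustrative, not drawn from any study:

```python
# Toy illustration of Bayesian updating, for the case where the relevant
# statistics ARE available.  All figures are made up for the example.
def bayes_update(prior, likelihood, false_positive_rate):
    """Return P(D | T) given P(D), P(T | D) and P(T | not D)."""
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Prior P(D) = 1%, test sensitivity P(T|D) = 90%, false positives P(T|~D) = 5%.
posterior = bayes_update(0.01, 0.90, 0.05)
print(round(posterior, 3))  # prints 0.154
```

When such statistics are missing, plausible reasoning substitutes qualitative judgements (e.g., "likely", "rarely") for these numeric priors and likelihoods.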
In the 1980s, Alan Collins and colleagues developed a core theory of plausible reasoning based upon an analysis of recordings of people's reasoning. Collins and Michalski found that:
- There are several categories of inference rules that people commonly use to answer questions.
- People weigh the evidence that bears on a question, both for and against.
- People are more or less certain depending on the certainty of the premises, the certainty of the inferences, and whether different inferences lead to the same or opposite conclusions.
- When facing a question for which there is no directly applicable knowledge, people search for other knowledge that, together with applicable inference rules, could help answer it.
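The second and third findings above can be sketched in code. This is a minimal simplification of our own, not Collins and Michalski's actual certainty calculus: each piece of evidence is assumed to carry a certainty for its premises and a certainty for its inference, the strength of an argument is taken to be their product, and opposing strengths are compared:

```python
# Toy sketch of weighing evidence for and against a premise.
# Assumption: argument strength = premise certainty * inference certainty.
def weigh(evidence):
    """evidence: list of (supports: bool, premise_certainty, inference_certainty).

    Returns a score in [-1, 1] (positive favours the premise), or None when
    there is no applicable knowledge at all.
    """
    score_for = sum(p * i for s, p, i in evidence if s)
    score_against = sum(p * i for s, p, i in evidence if not s)
    total = score_for + score_against
    if total == 0:
        return None
    return (score_for - score_against) / total

# Two weak arguments for, one strong argument against: the net score is negative.
print(weigh([(True, 0.8, 0.5), (True, 0.6, 0.5), (False, 0.9, 0.9)]))
```

Note how the score drops both when premises are uncertain and when inferences pull in opposite directions, mirroring the observations above.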
Here are some examples of health-related use cases involving imperfect knowledge:
- Computer-aided diagnosis: a clinician typically deals with an incomplete picture of the patient and, using plausible reasoning such as abduction, generalization, and analogy, narrows the many possible diagnoses down to a few.
- Computer-interpretable guidelines: a clinical action (e.g., the prescription of a drug) will have an intended effect, but only with a certain belief (or likelihood). The effects of drugs can also have probability distributions over time, as established by pharmacology studies. Given a workflow of tasks, each affecting the patient in different ways and with different likelihoods, planning a “care path” involves searching for an optimal path towards a target state.
- Incomplete health KGs: missing causal associations in a health KG, such as those between diagnoses and treatments, can be found via plausible reasoning over curated, large-scale medical knowledge. For instance, using medical hierarchies and relations, one can find the most specific body part to which both a diagnosis and a treatment apply.
- Literature-based discovery: a literature-based KG, built from relations and concepts extracted from the clinical literature using NLP, is typically imperfect; applying plausible reasoning (e.g., via knowledge graph embeddings such as TransE) to identify missing relations may lead to the discovery of previously unknown links in the medical literature.
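The last use case can be illustrated with a toy version of TransE-style link prediction. TransE models a relation r as a vector translation, h + r ≈ t, and scores a candidate triple by the distance ||h + r − t||. The tiny hand-set embeddings and the "treats" relation below are hypothetical (real systems learn these vectors from a large KG):

```python
# Toy sketch of TransE-style scoring for a missing link (aspirin, treats, ?).
# The 3-d embeddings below are hand-picked for illustration, not learned.
def dist(h, r, t):
    """Euclidean distance ||h + r - t||; smaller means more plausible."""
    return sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)) ** 0.5

entities = {
    "aspirin":  [0.9, 0.1, 0.0],
    "headache": [0.1, 0.9, 0.0],
    "fracture": [0.0, 0.1, 0.9],
}
treats = [-0.8, 0.8, 0.0]  # hypothetical embedding of the "treats" relation

# Rank candidate tails for the incomplete triple (aspirin, treats, ?).
ranked = sorted(entities, key=lambda e: dist(entities["aspirin"], treats, entities[e]))
print(ranked[0])  # prints "headache"
```

A production system would train the embeddings on known triples and filter out trivial candidates (such as the head entity itself), but the ranking principle is the same.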
We welcome short descriptions of health-related use cases, such as those above, as well as use cases from other sectors, that can help to identify requirements for extending knowledge graphs to deal with imperfect knowledge.
An example of a potential notation for plausible knowledge, and its use for reasoning, can be seen in the web-based demo at: https://www.w3.org/Data/demos/chunks/reasoning/. This work was inspired by Collins's theory of plausible reasoning, and points to opportunities for further work on richer ways to reason, including support for Daniel Kahneman's System 1 and System 2 thinking.