This page has been archived and holds the information for KGC 2020 Workshops and Tutorials. For the most up-to-date information, visit https://www.knowledgegraph.tech/program/.
[ List of Workshops and Tutorials ]
Workshops are stand-alone sub events of the conference. They have separate calls for papers and their own program and organizing committee.
Tutorials are learning sessions that include both lecture-style and hands-on sessions. Each tutorial will run for half a day unless otherwise specified.
Note: All times are EST
You can check out the Conference Schedule here.
Please check out the workshop page for more details: https://knowledgegraphsocialgood.pubpub.org/
Vivek Khetan, AI Researcher and Technology R&D Specialist – Accenture Labs
Colin Puri, Technology Evangelist and R&D Principal – Accenture Labs
Lambert Hogenhout, Chief Data Analytics, Partnerships and Technology Innovation
Statement of Objective
The United Nations (UN) supports 17 Sustainable Development Goals (SDGs) covering topics from poverty to healthcare, education, and beyond. This workshop will gather a community to discuss ongoing industry work and academic research efforts on topics drawn from the 17 SDGs, and will include a collaborative ideation exercise.
The workshop will provide speaking slots to participants based on accepted paper submissions (2–4 pages, in Springer Lecture Notes in Computer Science format; deadline: April 1, 12:00 am EST). We will accept unpublished recent work and works in progress in the context of the SDGs. Participants may base their work on the provided data sets or use their own. All submissions will be made electronically through PubPub, and accepted submissions will be published on the workshop website.
In addition, our workshop will build off the ideas presented in a collaborative ideation exercise comprising an onsite group brainstorm about the SDGs. We encourage participants to prepare in advance of the exercise (e.g., by creating proofs of concept) for a lively discussion, as part of a forum in which all can participate to define and discuss solutions to the SDGs that revolve around the following key technical areas:
- Extracting knowledge representations from unstructured/structured data sources
- Semantically representing the SDGs
- Discovering missing links across and within SDGs, and how one SDG impacts another
- Reasoning over time with SDG data
- Managing data and models of the SDGs as they change
We invite all workshop attendees to participate in discussions of knowledge representation solutions for climate, education, finance and economic growth, healthcare, and more, in the context of the various topic areas of the SDGs. Workshop participants will be exposed to real data and given the opportunity to converse with leading experts on that same data.
If you are working in areas that align with the SDGs, this workshop is a great opportunity to meet practitioners in similar fields, present your work, get access to UN data, and brainstorm with other participants to find the best solutions (e.g., for climate, education, finance and economic growth, healthcare, etc.). Additional details will follow on the workshop website.
Date: May 4, 2020 12:00PM – 2:00PM
Please check out the workshop page for more details: https://suitclub.ischool.utexas.edu/PHKG2020/index.html#home
Abstract: Electronic health records (EHRs) have become a popular source of observational health data for learning insights that could inform the treatment of acute medical conditions. Their utility for informing preventive care and the management of chronic conditions, however, has remained limited. For this reason, the addition of social determinants of health (SDoH) and ‘observations of daily living’ (ODL) to the EHR has been proposed. This combination of medical, social, behavioral, and lifestyle information about the patient is essential for allowing medical events to be understood in the context of one’s life and, conversely, for allowing lifestyle choices to be considered jointly with one’s medical context; it would be generated by both patients and their providers and would potentially be useful to both for decision-making.
We propose that the personal health knowledge graph is a semantic representation of a patient’s combined medical records, SDoH, and ODLs. While there are some initial efforts to clarify what personal knowledge graphs are and how they may be made specific to health [4, 5], much remains to be determined about how to operationalize and apply such a knowledge graph in daily life and in clinical practice. There are challenges in collecting, managing, integrating, and analyzing the data required to populate the knowledge graph, and subsequently in maintaining, reasoning over, and sharing aspects of the knowledge graph. Importantly, we recognize that it would not be fruitful to design a universal personal health knowledge graph; the design should rather be use-case driven. In this workshop, we aim to gather health practitioners, health informaticists, knowledge engineers, and computer scientists working on defining, building, consuming, and integrating personal health knowledge graphs to discuss the challenges and opportunities in this nascent space.
[1] Crews DC, Ross D, Adler N, Diez Roux AV, Katz M. Social and behavioral information in electronic health records: New opportunities for medicine and public health. American Journal of Preventive Medicine, 49:980–3, 2015.
[2] Uba Backonja, Katherine Kim, Gail R Casper, Timothy Patton, Edmond Ramly, and Patricia Flatley Brennan. Observations of daily living: putting the “personal” in personal health records. American Medical Informatics Association, 2012.
[3] Krisztian Balog and Tom Kenter. Personal knowledge graphs: A research agenda. In Proceedings of the 2019 ACM SIGIR International Conference on Theory of Information Retrieval, pages 217–220, 2019.
[4] Amelie Gyrard, Manas Gaur, Saeedeh Shekarpour, Krishnaprasad Thirunarayan, and Amit Sheth. Personalized health knowledge graph. http://knoesis.org/sites/default/files/personalized-asthma-obesity, 20(2814):29, 2018.
[5] Tania Bailoni, Mauro Dragoni, Claudio Eccher, Marco Guerini, and Rosa Maimone. PerKApp: A context aware motivational system for healthier lifestyles. In 2016 IEEE International Smart Cities Conference (ISC2), pages 1–4. IEEE, 2016.
Date: May 5, 2020 9:00AM – 5:30PM
Presenter: Eric Little, PhD – CEO LeapAnalysis
Knowledge graphs have proven to be a highly useful technology for connecting data of various kinds into complex, logic-based models that are easily understood by both humans and machines. Their descriptive power rests in their ability to logically describe data as sets of connected assertions (triples) at the metadata level. However, knowledge graphs have suffered from problems of scale when used against large data sets with lots of instance data, which has by and large hampered their adoption at enterprise scale. In the meantime, big data systems (using statistics) have matured to handle instance data at massive scale, but these systems often lack expressive power: they rely on indexing, which is often incomplete for solving advanced analytical problems.
LeapAnalysis is a new product that marries these two worlds by using graph technologies for metadata while leaving all instance data in its native source. This allows the knowledge graph to stay small and computationally tractable, even at high scale in environments with billions of pieces of instance-level data. LeapAnalysis uses API connectors that translate graph-based queries (from the knowledge graph) into other data formats (e.g., CSV, relational, tabular) to fetch the corresponding instance data from source systems without the expensive step of migrating or transforming the data into the graph. Data stays truly federated, and the knowledge graph is virtualized across those sources. Machine learning algorithms read the schema of the data source and allow users to quickly align those schemas to their reference model (in the knowledge graph). Using this technique, graph-based SPARQL queries can be run natively against a wide range of data sources and produce extremely fast query response times, with instance data coming in as fragments from multiple sources all in one go.
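The connector idea described above can be sketched in a few lines of plain Python. This is a hypothetical illustration, not LeapAnalysis code: the `ex:` predicate names, the mapping table, and the CSV source are all invented for the example. The point is that the graph side holds only the predicate-to-column alignment (metadata), while a triple-pattern query is answered by reading the instance data in its native CSV form.

```python
import csv
import io

# Hypothetical CSV source standing in for a native data system.
CSV_SOURCE = io.StringIO(
    "id,name,supplier\n"
    "p1,Widget,AcmeCorp\n"
    "p2,Gadget,Globex\n"
)

# Alignment of graph predicates to CSV columns (the "reference model" mapping).
PREDICATE_TO_COLUMN = {"ex:name": "name", "ex:supplier": "supplier"}

def query_csv(predicate, obj=None):
    """Answer the triple pattern (?s, predicate, obj) against the CSV source."""
    column = PREDICATE_TO_COLUMN[predicate]
    CSV_SOURCE.seek(0)
    for row in csv.DictReader(CSV_SOURCE):
        if obj is None or row[column] == obj:
            yield (row["id"], predicate, row[column])

# Which products does Globex supply? The instance data never enters the graph.
results = list(query_csv("ex:supplier", "Globex"))
```

A real system would translate SPARQL algebra rather than single patterns, but the division of labor is the same: metadata in the graph, instance data fetched on demand.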
Date and time: May 4, 2020 1:30PM – 5:30PM
Presenter: Giovanni Tummarello, Ph.D
Abstract: A knowledge graph in the real world is often composed of more than just records linked together. One might have unstructured data (text), large amounts of transactions, streaming logs, and more. When it comes to analysis, a single approach (e.g., a graph visualization tool) is often not enough, and one would benefit from the convergence of multiple approaches such as BI-style dashboards, full-text search and textual content analysis, time-series tools, and geospatial tools (on top, of course, of link analysis).
The Siren platform, community edition, is a free tool for this kind of analysis. In this tutorial we’ll take a hands-on approach: starting from CSVs, we will produce an interesting mixed knowledge graph, and once we have it, we’ll ask interesting questions of it.
Text will be processed via NLP and linked to records, and a data model will be created along with rich analytics dashboards. Associative navigation will be explained, and we’ll use it to answer some complex queries. With link analysis we’ll get answers to further questions on the same dataset that could not be answered with dashboards alone.
Date and time: May 4, 2020 9:00AM – 1:00PM
Presenters: Elias Kärle, Umutcan Simsek, and Dieter Fensel (STI Innsbruck, University of Innsbruck)
Abstract: Building and hosting a Knowledge Graph requires some effort and a lot of experience with semantic technologies. Turning this Knowledge Graph into a useful resource for problem solving requires even more effort. An important consideration is to provide cost-sensitive methods for building a Knowledge Graph that is a useful resource for various applications: “There are two main goals of Knowledge Graph refinement: (a) adding missing knowledge to the graph, i.e., completion, and (b) identifying wrong information in the graph, i.e. error detection.” [Paulheim, 2017] This tutorial targets the process from knowledge creation through knowledge hosting and knowledge curation to knowledge deployment, applied to a Knowledge Graph that uses schema.org and domain-specific extensions of schema.org as an ontology. The tutorial is based on a book the lecturers co-authored, “Knowledge Graphs – Methodology, Tools and Selected Use Cases” [Fensel et al., 2020], and is an extended and adapted version of a tutorial the lecturers gave at SEMANTICS 2019.
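The two refinement goals quoted from Paulheim can be made concrete with a toy example. This sketch is illustrative only and not from the tutorial: the `ex:` identifiers, the inverse-property rule, and the range constraint are invented for the illustration.

```python
# A toy triple set with one deliberately wrong assertion.
triples = {
    ("ex:Alice", "ex:worksFor", "ex:STI"),
    ("ex:STI", "ex:locatedIn", "ex:Innsbruck"),
    ("ex:Bob", "ex:worksFor", "ex:Innsbruck"),   # suspicious: a city, not an organization
}

organizations = {"ex:STI"}

# (a) Completion: infer the inverse ex:employs edge for every ex:worksFor edge.
completed = {(o, "ex:employs", s) for s, p, o in triples if p == "ex:worksFor"}

# (b) Error detection: flag ex:worksFor triples whose object is not an organization.
errors = {t for t in triples if t[1] == "ex:worksFor" and t[2] not in organizations}
```

Real refinement methods range from such hand-written rules to statistical and embedding-based approaches, but they all target one of these two goals.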
Date and time: May 4, 2020 9:00AM – 1:00PM
Presenter: Juan Sequeda, data.world
Abstract: Knowledge Graphs are fulfilling the vision of creating intelligent systems that integrate knowledge and data at large scale. We observe the adoption of Knowledge Graphs by the Googles of the world. However, not everybody is a Google. Enterprises still struggle to understand their relational databases, which consist of thousands of tables and tens of thousands of attributes, and to understand how the data all works together. How can enterprises adopt Knowledge Graphs successfully to integrate data without boiling the ocean?
This tutorial will be hands-on and will focus on two parts: design and building. In the design part, we will present an agile methodology for creating knowledge graph schemas (i.e., ontologies) and mappings to relational databases. Furthermore, we will cover different types of relational-database-to-knowledge-graph mapping patterns.
In the building part of the tutorial, we will apply the methodology and create the knowledge graph with data coming from relational databases.
The content of this tutorial is applicable to knowledge graphs being built either with Property Graph or RDF Graph technologies.
The audience will take away concrete steps on how to effectively start designing and building knowledge graphs that can be widely useful within their enterprise.
Date and time: May 5, 2020 9:00AM – 1:00PM
Presenter: Vassil Momtchev, Ontotext
Abstract: Enterprise knowledge graphs help modern organizations preserve the semantic context of the abundant information accessible to them. They are becoming the backbone of enterprise knowledge management and AI technologies, with the ability to differentiate things from strings. Still, beyond the hype of repackaging semantic web standards for the enterprise, few practical tutorials demonstrate how to build and maintain an enterprise knowledge graph.
This tutorial teaches you how to take an enterprise knowledge graph beyond the RDF database and SPARQL using the GraphQL protocol. You will learn to overcome critical challenges such as exposing a simple-to-use interface for data consumption to users who may be unfamiliar with information schemas, controlling information access by implementing robust security, and opening the graph for updates while preserving its consistency and quality.
You will walk through a step-by-step process: (1) start a knowledge graph from a public RDF dataset, (2) generate a GraphQL API to abstract the RDF database, (3) take a quick GraphQL crash course with examples, and (4) develop a sample web application. Finally, we will discuss other possible directions, such as extending the knowledge graph with machine learning components, extending the graph with additional services, adding monitoring dashboards, and integrating external systems.
The tutorial is based on Ontotext GraphDB and Platform products and requires basic RDF and SPARQL knowledge.
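The core idea of a GraphQL layer over an RDF store can be sketched without any framework. This is a hypothetical illustration, not the Ontotext Platform's implementation: the `ex:` triples and the `resolve` function are invented. Each requested field in a GraphQL-style selection set is resolved by a triple-pattern lookup, so the client never sees the underlying RDF schema.

```python
# Toy RDF-like data: a book, its title, and a nested author resource.
triples = [
    ("ex:book1", "ex:title", "Knowledge Graphs"),
    ("ex:book1", "ex:author", "ex:author1"),
    ("ex:author1", "ex:name", "Dieter Fensel"),
]

def get_object(subject, predicate):
    """Return the object of the first matching triple."""
    return next(o for s, p, o in triples if s == subject and p == predicate)

def resolve(subject, selection):
    """Resolve a nested GraphQL-like selection set against the triples.

    A selection like {"title": None, "author": {"name": None}} mirrors the
    GraphQL query { title author { name } }; None marks a leaf field.
    """
    result = {}
    for field, sub_selection in selection.items():
        value = get_object(subject, f"ex:{field}")
        result[field] = value if sub_selection is None else resolve(value, sub_selection)
    return result

response = resolve("ex:book1", {"title": None, "author": {"name": None}})
```

A production layer additionally compiles whole selection sets into single SPARQL queries and enforces access control, but the field-to-pattern translation is the heart of the abstraction.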
Date and time: May 5, 2020 1:30PM – 5:30PM
Presenter: Dr. Gavin Mendel-Gleason and Cheukting Ho (DataChemist)
Abstract: A hands-on tutorial that will introduce logic knowledge graphs via TerminusDB to those beginning or looking to develop their knowledge graph journey. We will introduce TerminusDB’s underlying data store, whose delta encoding has git-like features and allows git-style operations on data. The tutorial will begin by covering the theoretical side of the topics and move into a practical implementation of those ideas in the second part. It will introduce logic knowledge graphs and argue that they are a better way to build a knowledge graph. It will then introduce the TerminusDB data store, contextualise the development of the store, and explain the git-like features of the database. Finally in this section, we will situate the TerminusDB store in the context of modern DataOps, which will provide industrial practicality for those in attendance. In the second part of the tutorial, we will introduce participants to TerminusDB and the web object query language (WOQL). We will provide a suitably complex data set and guidance in building a knowledge graph with TerminusDB. Participants will then learn how to use WOQL to build complex queries. By the end of the tutorial, participants will have been exposed to the TerminusDB logic knowledge graph and will understand how to query the graph using WOQL.
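The git-like delta encoding mentioned above can be illustrated with a minimal sketch. This is not TerminusDB's actual implementation; the commit structure and function names are invented for the example. Each commit records only the triples inserted and deleted, and any historical state of the graph is reconstructed by replaying the deltas, which is what makes git-style operations such as checkout and diff possible.

```python
# Each commit stores only its delta: inserted and deleted triples.
commits = []

def commit(inserts=(), deletes=()):
    commits.append({"inserts": set(inserts), "deletes": set(deletes)})

def state_at(commit_index):
    """Rebuild the graph as of a given commit by replaying deltas (a 'checkout')."""
    graph = set()
    for c in commits[: commit_index + 1]:
        graph -= c["deletes"]
        graph |= c["inserts"]
    return graph

# Commit 0: assert one edge. Commit 1: retract it and assert another.
commit(inserts={("ex:a", "ex:knows", "ex:b")})
commit(inserts={("ex:b", "ex:knows", "ex:c")},
       deletes={("ex:a", "ex:knows", "ex:b")})
```

Because no commit is ever rewritten, every past state stays addressable, just as in git.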
Date and time: May 5, 2020 9:00AM – 1:00PM
Presenter: Souripriya Das, Matthew Perry, and Eugene I. Chong (Oracle)
Abstract: Modeling your data as a graph has a significant advantage: the schema does not need to be explicitly defined or specified ahead of time, so you can add data to your graph without being constrained by any schema. One of the less recognized problems with adding data to a graph, however, is the potential loss of backward compatibility for queries designed before the changes were made. Using RDF quads (W3C RDF 1.1 Recommendation, 25 February 2014) as your graph data model allows the schema evolution caused by adding data to your graph to preserve backward compatibility for pre-existing queries.
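The backward-compatibility idea can be sketched in plain Python. This is a hypothetical illustration of the quads concept, not the presenters' approach: the `ex:` names and the `match` helper are invented. Because each assertion carries a fourth element naming its graph, new data can land in a new named graph while pre-existing queries scoped to the original graph keep returning exactly their old answers.

```python
# Quads: (subject, predicate, object, named graph).
quads = [
    ("ex:emp1", "ex:name", "Ada", "ex:g2019"),
]

def match(graph=None):
    """Return the triples in one named graph, or in the whole dataset."""
    return [(s, p, o) for s, p, o, g in quads if graph is None or g == graph]

# A query written in 2019, scoped to the 2019 graph.
before = match(graph="ex:g2019")

# Schema evolution: a new attribute arrives, recorded in a new named graph.
quads.append(("ex:emp1", "ex:badgeColor", "blue", "ex:g2020"))

after = match(graph="ex:g2019")   # identical results: backward compatible
```

Queries over the whole dataset (`match()`) see the new data, while graph-scoped queries are insulated from it: that separation is what the quad's graph component buys you.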
Date and time: May 5, 2020 1:30PM – 5:30PM