Connecting the Knowledge Ecosystem
Founded in 2019 at Columbia University, The Knowledge Graphs Conference is emerging as the premier source of learning around knowledge graph technologies. We believe knowledge graphs are an underutilized yet essential force for solving complex societal challenges like climate change, democratizing access to knowledge and opportunity, and capturing business value made possible by the AI revolution.
KGC bridges the gap between industry, which is increasingly recognizing the necessity of integrated data, and academia, where semantic technologies have been developing for over twenty years. Our events, education, content, and community efforts facilitate meaningful exchange among these diverse groups and increase awareness, development, and adoption of this powerful technology.
Conference – bridging the gap between research and industry
We organize workshops and tutorials to advance a number of Tech4Good themes, targeting objectives such as the United Nations Sustainable Development Goals and the development of a COVID-19 vaccine. Our most recent conference drew 530 attendees, representing over thirty industries across forty-two countries. Speakers ranged from Bell Labs pioneer John Sowa to Morgan Stanley, AstraZeneca, and leading academics from Europe and the United States. Workshops and tutorials covered several Tech4Good themes, from the UN SDGs to personal health graphs and fake news.
KGC Vision and Values
Our goal is to build the community and become a leading source of learning around knowledge graphs.
We will achieve this by engaging and convening industry leaders and innovators, across sectors.
We will focus on the diversity of perspectives:
Professional diversity: industry practitioners, business users, faculty, scientists, and students
Gender and age diversity
We will gather, share and publish content to increase learning.
We will build the community online and in-person through our content, meetups and conferences.
Explainable AI using Background Knowledge
One of the current key challenges in Explainable AI is correctly interpreting the activations of hidden neurons. Accurate interpretations would provide insight into what a deep learning system has internally detected as relevant in its input, thus lifting some of the black-box character of deep learning systems.
The state of the art on this front indicates that hidden node activations are, at least in some cases, interpretable in a way that makes sense to humans. Yet systematic, automated methods that can first hypothesize an interpretation of a hidden neuron's activations and then verify it are mostly missing.
In this presentation, we provide such a method and demonstrate that it yields meaningful interpretations. It is based on large-scale background knowledge (a class hierarchy of approximately 2 million classes curated from the Wikipedia Concept Hierarchy) together with concept induction, a symbolic reasoning approach based on description logics that was originally developed for applications in the Semantic Web field. Our results show that, through a hypothesis-and-verification process, we can automatically attach meaningful labels from the background knowledge to individual neurons in the dense layer.
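The hypothesis-and-verification loop can be sketched in simplified form. The toy Python below is an illustration only, not the presented method: it replaces description-logic concept induction over the 2-million-class Wikipedia hierarchy with a simple overlap score between "inputs on which a neuron fires" and "inputs belonging to a candidate class"; all function names and data are hypothetical.

```python
# Toy sketch of the hypothesize-and-verify idea described above.
# A simplified stand-in: the actual method uses description-logic
# concept induction over a ~2M-class hierarchy; here we just pick
# the candidate class whose members best overlap the neuron's
# high-activation inputs (Jaccard score), then re-check that score
# on held-out inputs.

def hypothesize_label(activations, input_classes, threshold=0.5):
    """Hypothesize a concept label for one neuron: the class that best
    covers the inputs on which the neuron fires above `threshold`."""
    firing = {i for i, a in enumerate(activations) if a > threshold}
    if not firing:
        return None
    best = (None, 0.0)
    candidates = {c for classes in input_classes for c in classes}
    for cls in candidates:
        members = {i for i, cs in enumerate(input_classes) if cls in cs}
        # Jaccard overlap between "neuron fires" and "input is in cls"
        score = len(firing & members) / len(firing | members)
        if score > best[1]:
            best = (cls, score)
    return best

def verify_label(label, activations, input_classes,
                 threshold=0.5, min_score=0.6):
    """Verify a hypothesized label on held-out inputs: keep it only
    if the overlap score still clears `min_score`."""
    cls, _ = label
    firing = {i for i, a in enumerate(activations) if a > threshold}
    members = {i for i, cs in enumerate(input_classes) if cls in cs}
    if not (firing | members):
        return False
    score = len(firing & members) / len(firing | members)
    return score >= min_score

# Tiny worked example: a neuron that fires on dog images.
train_acts = [0.9, 0.8, 0.1, 0.7, 0.2, 0.1]
train_cls = [{"Dog", "Animal"}, {"Dog", "Animal"}, {"Car"},
             {"Dog", "Animal"}, {"Plant"}, {"Cat", "Animal"}]
label = hypothesize_label(train_acts, train_cls)

test_acts = [0.85, 0.15, 0.75]
test_cls = [{"Dog", "Animal"}, {"Car"}, {"Dog", "Animal"}]
print(label[0], verify_label(label, test_acts, test_cls))
# → Dog True
```

Here "Dog" beats the broader "Animal" because the neuron stays silent on the cat image, so the narrower class overlaps the firing set more tightly; the verification step then confirms the label generalizes to unseen inputs.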