This is the GitHub organization where we fork all of the repositories featured in our survey paper.
To cite this work, please use the following:
@ARTICLE{10603423,
author={DeLong, Lauren Nicole and Mir, Ramon Fernández and Fleuriot, Jacques D.},
journal={IEEE Transactions on Neural Networks and Learning Systems},
title={Neurosymbolic AI for Reasoning Over Knowledge Graphs: A Survey},
year={2024},
volume={},
number={},
pages={1-21},
keywords={Cognition;Artificial intelligence;Artificial neural networks;Surveys;Taxonomy;Knowledge graphs;Semantics;Graph neural networks (GNNs);hybrid artificial intelligence (AI);knowledge graphs (KGs);neurosymbolic AI;representation learning},
doi={10.1109/TNNLS.2024.3420218}}
🔍 Our blog post, summarizing this paper with a crime scene analogy, is now on AIHub.
🧬 An abridged, biomedical version of this survey was presented as a (non-archival) workshop paper at ICML 2023's Workshop on Knowledge and Logical Reasoning in the Era of Data-driven Learning.
🤔 In this paper, we survey methods that combine symbolic approaches with deep learning to accomplish reasoning tasks over graph structures. This hybrid style, called neurosymbolic AI 🤖, is a relatively new type of AI.
We break these methods down into the following hierarchy:
Logically-Informed Embedding Approaches are the most straightforward: they augment a graph with logical inference and then feed the augmented graph into a knowledge graph (KG) embedding method to learn patterns and make predictions:
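As a rough illustration of this idea (a minimal sketch, not taken from any surveyed method), the augmentation step might forward-chain a transitivity rule to completion before the triples are handed to an embedding model; the entities, relation name, and rule here are all hypothetical:

```python
# Illustrative sketch: augment a knowledge graph by forward-chaining a simple
# transitivity rule, then pass the augmented triples to any KG embedding method.

def forward_chain(triples, relation):
    """Add (a, r, c) whenever (a, r, b) and (b, r, c) exist, until fixpoint."""
    triples = set(triples)
    changed = True
    while changed:
        changed = False
        new = {(a, relation, c)
               for (a, r1, b) in triples if r1 == relation
               for (b2, r2, c) in triples if r2 == relation and b2 == b}
        if not new <= triples:
            triples |= new
            changed = True
    return triples

# Toy graph: the rule infers that Edinburgh is located in the UK.
kg = {("edinburgh", "located_in", "scotland"),
      ("scotland", "located_in", "uk")}
augmented = forward_chain(kg, "located_in")
# `augmented` now also contains ("edinburgh", "located_in", "uk") and would be
# the training input for a KG embedding model such as TransE.
```

The key design point is that inference happens once, up front: the embedding method itself is unchanged and simply sees a denser graph.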
The methods under Learning with Logical Constraints essentially restrict the training of a graph embedding method with logic. They typically accomplish this by imposing a regularizing transformation on the embedding space or by adding a penalty term to the loss function whenever logical rules are violated:
Finally, methods which Learn Rules for Graph Reasoning often learn rule confidences, or weights, using an iterative, back-and-forth procedure. In many of these cases, the model alternately trains a graph embedding method and performs logical inference: the embedding scores are used to update the weights of the rule base, and the results of logical inference are used to guide the embedding method. Some of these methods even alter the pool of rules so that each iteration works with a different rule set:
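The back-and-forth loop might look roughly like this (a hypothetical sketch of the general pattern, not any particular method's algorithm): each iteration retrains the embeddings under the current rule confidences, then re-weights each rule by how plausible the embeddings find the triples it infers. The toy rules, triples, and scorer are invented for illustration:

```python
# Illustrative sketch: alternating rule-confidence updates and embedding training.

def training_loop(rules, train_embeddings, n_iters=3):
    """rules maps a rule name to the triples it infers; train_embeddings is a
    stand-in for embedding training and returns a triple-scoring function."""
    confidences = {rule: 1.0 for rule in rules}
    for _ in range(n_iters):
        # 1. Logical inference, weighted by current confidences, guides training.
        score_fn = train_embeddings(rules, confidences)
        # 2. Embedding scores update the rule confidences.
        confidences = {rule: sum(score_fn(t) for t in inferred) / len(inferred)
                       for rule, inferred in rules.items()}
    return confidences

# Toy demo: a fixed scorer standing in for a trained embedding model.
rules = {"r1": [("a", "p", "b")], "r2": [("c", "p", "d")]}
toy_scores = {("a", "p", "b"): 0.9, ("c", "p", "d"): 0.2}

def train_embeddings(rules, confidences):
    return lambda triple: toy_scores[triple]

conf = training_loop(rules, train_embeddings)
# r1, whose inferences the "embeddings" score highly, keeps a high confidence;
# r2's confidence drops accordingly.
```

Methods that also modify the rule pool would additionally prune or propose rules between iterations, so that each pass of the loop sees a different rule set.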
To read more, our paper is available on arXiv.
Feel free to contact us for the following maintenance:
➕ If you would like us to add a GitHub repository containing a published method for neurosymbolic reasoning on graph structures
🔄 If you'd like us to fork a more recent version of your repository than we have here