### Capturing discriminative attributes (SemEval 2018 Task 10)

**Capturing discriminative attributes (SemEval 2018 Task 10)** is a binary classification task where participants were asked to identify whether an attribute could help discriminate between two concepts. Unlike other word similarity prediction tasks, this task focuses on the semantic differences between words.

e.g. red (attribute) can be used to discriminate apple (concept1) from banana (concept2) -> label 1
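
As a rough illustration of the task format (and not one of the systems in the table below), a hypothetical word-embedding baseline could score such a (concept1, concept2, attribute) triple by comparing cosine similarities; the toy vectors and the decision margin in this sketch are assumptions, not part of any submitted system.

```python
# Hypothetical cosine-similarity baseline for the discriminative-attribute task:
# predict 1 if the attribute is noticeably closer to concept1 than to concept2.
import numpy as np

# Toy vectors for illustration only; a real system would load GloVe/word2vec embeddings.
EMB = {
    "apple":  np.array([0.9, 0.1, 0.3]),
    "banana": np.array([0.1, 0.9, 0.3]),
    "red":    np.array([0.8, 0.0, 0.2]),
}

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def is_discriminative(concept1, concept2, attribute, margin=0.1):
    """Return 1 if `attribute` discriminates concept1 from concept2, else 0."""
    return int(cosine(EMB[concept1], EMB[attribute]) -
               cosine(EMB[concept2], EMB[attribute]) > margin)

print(is_discriminative("apple", "banana", "red"))  # -> 1
```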

| Model | Explainability | F1 Score | Paper / Source | Code |
| ----- | -------------- | -------- | -------------- | ---- |
|**SVM** with GloVe |**None**|**0.76**|[SUNNYNLP at SemEval-2018 Task 10: A Support-Vector-Machine-Based Method for Detecting Semantic Difference using Taxonomy and Word Embedding Features](https://aclweb.org/anthology/S18-1118)|[Author's](https://github.com/Yermouth/sunnynlp)|
|**SVM** with ConceptNet, Wikipedia articles and WordNet synonyms | None | 0.74 |[Luminoso at SemEval-2018 Task 10: Distinguishing Attributes Using Text Corpora and Relational Knowledge](https://aclweb.org/anthology/S18-1162)|[Author's](https://github.com/LuminosoInsight/semeval-discriminatt)|
|**MLP** combining information from various DSMs, PMI, and ConceptNet | None | 0.73 |[THU NGN at SemEval-2018 Task 10: Capturing Discriminative Attributes with MLP-CNN model](https://aclweb.org/anthology/S18-1157)||
|**Gradient boosting** with co-occurrence count features and JoBimText features | None | 0.73 |[BomJi at SemEval-2018 Task 10: Combining Vector-, Pattern- and Graph-based Information to Identify Discriminative Attributes](https://aclweb.org/anthology/S18-1163)||
| LexVec, word co-occurrence, and ConceptNet data combined using **maximum entropy classifier**| None | 0.72 |[UWB at SemEval-2018 Task 10: Capturing Discriminative Attributes from Word Distributions](https://aclweb.org/anthology/S18-1153)|[Author's](https://github.com/dpaperno/DiscriminAtt)|
| Composes explicit **vector spaces** from WordNet Definitions, ConceptNet and Visual Genome |**Fully Explainable**|**0.69**|[Identifying and Explaining Discriminative Attributes](https://arxiv.org/abs/1909.05363)|[Author's](https://github.com/ab-10/Hawk)|
|**Word2Vec** cosine similarities of WordNet glosses | Transp. (No expl.) | 0.69 |[Meaning space at SemEval-2018 Task 10: Combining explicitly encoded knowledge with information extracted from word embeddings](https://aclweb.org/anthology/S18-1154)|[Author's](https://github.com/cltl/meaning_space)|
| Use of Wikipedia and ConceptNet | Transp. (No expl.) | 0.69 |[ELiRF-UPV at SemEval-2018 Task 10: Capturing Discriminative Attributes with Knowledge Graphs and Wikipedia](https://aclweb.org/anthology/S18-1159)||

### New York Times Corpus

The standard corpus for distantly supervised relationship extraction is the New York Times (NYT) corpus, published in [Riedel et al., 2010](http://www.riedelcastro.org//publications/papers/riedel10modeling.pdf).

This contains text from the [New York Times Annotated Corpus](https://catalog.ldc.upenn.edu/ldc2008t19) with named entities extracted from the text using the Stanford NER system and automatically linked to entities in the Freebase knowledge base. Pairs of named entities are labelled with relationship types by aligning them against facts in the Freebase knowledge base. (The process of using a separate database to provide labels is known as 'distant supervision'.)

Example:
 > **Elevation Partners**, the $1.9 billion private equity group that was founded by **Roger McNamee**
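
To make the distant supervision step concrete, here is a minimal sketch that labels an entity pair by looking it up in a stand-in knowledge base; the KB fact, the relation name, and the lookup logic are illustrative placeholders rather than the actual pipeline of Riedel et al.

```python
# Minimal sketch of distant-supervision labelling: an entity pair found in a
# sentence inherits whatever relation a knowledge base records for that pair.
# The tiny KB below stands in for Freebase-style facts.
KB = {
    ("Roger McNamee", "Elevation Partners"): "company_founded",  # placeholder relation name
}

def distant_label(entity1: str, entity2: str) -> str:
    """Return the KB relation for an entity pair, or 'NA' when the KB has no fact."""
    return KB.get((entity1, entity2), "NA")

# The example sentence above mentions both entities, so the pair gets the KB relation.
print(distant_label("Roger McNamee", "Elevation Partners"))  # -> 'company_founded'
print(distant_label("Roger McNamee", "New York Times"))      # -> 'NA'
```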

| Model | P@10% | P@30% | Paper / Source | Code |
| ----- | ----- | ----- | -------------- | ---- |
| HRERE (Xu et al., 2019) | 84.9 | 72.8 |[Connecting Language and Knowledge with Heterogeneous Representations for Neural Relation Extraction](https://arxiv.org/abs/1903.10126)|[HRERE](https://github.com/billy-inn/HRERE)|
| PCNN+noise_convert+cond_opt (Wu et al., 2019) | 81.7 | 61.8 |[Improving Distantly Supervised Relation Extraction with Neural Noise Converter and Conditional Optimal Selector](https://arxiv.org/pdf/1811.05616.pdf)||
| Intra- and Inter-Bag (Ye and Ling, 2019) | 78.9 | 62.4 |[Distant Supervision Relation Extraction with Intra-Bag and Inter-Bag Attentions](https://arxiv.org/pdf/1904.00143.pdf)|[Code](https://github.com/ZhixiuYe/Intra-Bag-and-Inter-Bag-Attentions)|
| RESIDE (Vashishth et al., 2018) | 73.6 | 59.5 |[RESIDE: Improving Distantly-Supervised Neural Relation Extraction using Side Information](http://malllabiisc.github.io/publications/papers/reside_emnlp18.pdf)|[RESIDE](https://github.com/malllabiisc/RESIDE)|
| PCNN+ATT (Lin et al., 2016) | 69.4 | 51.8 |[Neural Relation Extraction with Selective Attention over Instances](http://www.aclweb.org/anthology/P16-1200)|[OpenNRE](https://github.com/thunlp/OpenNRE/)|
| MIML-RE (Surdeanu et al., 2012) | 60.7+ | - |[Multi-instance Multi-label Learning for Relation Extraction](http://www.aclweb.org/anthology/D12-1042)|[Mimlre](https://nlp.stanford.edu/software/mimlre.shtml)|
| MultiR (Hoffmann et al., 2011) | 60.9+ | - |[Knowledge-Based Weak Supervision for Information Extraction of Overlapping Relations](http://www.aclweb.org/anthology/P11-1055)|[MultiR](http://aiweb.cs.washington.edu/ai/raphaelh/mr/)|
| (Mintz et al., 2009) | 39.9+ | - |[Distant supervision for relation extraction without labeled data](http://www.aclweb.org/anthology/P09-1113)||

(+) Obtained from results in the paper "Neural Relation Extraction with Selective Attention over Instances"

### FewRel

The Few-Shot Relation Classification Dataset (FewRel) uses a different setting from the previous datasets. It consists of 70K sentences expressing 100 relations, annotated by crowdworkers on a Wikipedia corpus. The few-shot learning task follows the N-way K-shot meta-learning setting. It is currently both the largest supervised relation classification dataset and the largest few-shot learning dataset.
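
To make the N-way K-shot setting concrete, the sketch below samples one episode from a toy dataset: N relations are drawn, then K support sentences and a small query set per relation. The relation names and sentences are placeholders, not actual FewRel data.

```python
# Illustrative N-way K-shot episode sampler for a FewRel-style dataset.
import random

def sample_episode(dataset, n_way=5, k_shot=1, n_query=1):
    """Sample N relations, then K support and n_query query sentences for each."""
    relations = random.sample(sorted(dataset), n_way)
    support, query = [], []
    for rel in relations:
        sents = random.sample(dataset[rel], k_shot + n_query)
        support += [(s, rel) for s in sents[:k_shot]]
        query += [(s, rel) for s in sents[k_shot:]]
    return support, query

# Placeholder data: 10 relations with 10 sentences each.
toy = {f"relation_{i}": [f"sentence {j} about relation {i}" for j in range(10)]
       for i in range(10)}
support, query = sample_episode(toy, n_way=5, k_shot=1)
print(len(support), len(query))  # 5 support and 5 query instances
```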
The public leaderboard is available on the [FewRel website](http://www.zhuhao.me/fewrel/).

### Multi-Way Classification of Semantic Relations Between Pairs of Nominals (SemEval 2010 Task 8)

[SemEval-2010](http://www.aclweb.org/anthology/S10-1006) introduced 'Task 8 - Multi-Way Classification of Semantic Relations Between Pairs of Nominals'. The task is, given a sentence and two tagged nominals, to predict the relation between those nominals and its direction, choosing from nine general semantic relation types together with a tenth 'Other' relation.
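
For illustration, sentences in this dataset mark the two nominals with `<e1>` and `<e2>` tags; a minimal sketch of pulling the tagged pair out of such a sentence is shown below. The example sentence and the relation label in the comment are made up for illustration, not taken from the dataset.

```python
# Minimal sketch: extract the two tagged nominals from a SemEval-2010 Task 8
# style sentence. The example sentence and relation label are illustrative.
import re

def extract_nominals(sentence: str):
    e1 = re.search(r"<e1>(.*?)</e1>", sentence).group(1)
    e2 = re.search(r"<e2>(.*?)</e2>", sentence).group(1)
    return e1, e2

sent = "The <e1>company</e1> fabricates plastic <e2>chairs</e2>."
print(extract_nominals(sent))  # ('company', 'chairs') -> e.g. Product-Producer(e2,e1)
```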

The scores reported here are the highest achieved by the model using any external resources.

<a name="footnote">*</a>: It uses external lexical resources, such as WordNet, part-of-speech tags, dependency tags, and named entity tags.

#### Dependency Models
| Model | F1 | Paper / Source | Code |
### TACRED

TACRED is a large-scale relation extraction dataset covering 41 relation types, as well as a _no_relation_ type used when no defined relation holds.

| Model | F1 | Paper / Source | Code |
| ----- | -- | -------------- | ---- |
| C-GCN + PA-LSTM (Zhang et al., 2018) |**68.2**|[Graph Convolution over Pruned Dependency Trees Improves Relation Extraction](http://aclweb.org/anthology/D18-1244)|[Official](https://github.com/qipeng/gcn-over-pruned-trees)|
| PA-LSTM (Zhang et al., 2017) | 65.1 |[Position-aware Attention and Supervised Data Improve Slot Filling](http://aclweb.org/anthology/D17-1004)|[Official](https://github.com/yuhaozhang/tacred-relation)|