Reading the Manual: Event Extraction as Definition Comprehension



@inproceedings{chen-etal-2020-reading,
    title = "Reading the Manual: Event Extraction as Definition Comprehension",
    author = "Chen, Yunmo  and
      Chen, Tongfei  and
      Ebner, Seth  and
      White, Aaron Steven  and
      Van Durme, Benjamin",
    booktitle = "Proceedings of the Fourth Workshop on Structured Prediction for NLP",(EMNLP的SPNLP workshop)
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}

Background (the problem addressed)

  1. There is a gap between human and machine event extraction. A human annotator is typically given an annotation manual plus a handful of examples and then does the extraction; machine event extraction instead relies on a large set of examples from which the model learns on its own, so the annotation manual drops out of the loop.

Contributions (novel points)

  1. For every event type, a **bleached statement** is constructed to bring in the external knowledge from the annotation manual, and event extraction is then solved as **cloze-style MRC** (not the traditional question-style MRC);
  2. To handle the case where one argument role has multiple spans (i.e. multiple answers), a multiple-span selector is proposed;
  3. The approach can be applied in few-shot and zero-shot settings.

Terminology

1. A Bleached Statement

A "bleached" statement? Ha, the name sounds rather grand. Take the LIFE:BE-BORN event from the ACE 2005 dataset as an example: its bleached statement is a template sentence in which generic placeholders (e.g. "someone") stand in for the event's arguments (the exact statement is shown in the paper's figure).

Filling in those placeholders with spans from the text yields the completed statement.

The task in this paper is therefore: given a bleached statement and a piece of text, produce the filled-in bleached statement.
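
To make the input/output of this cloze-style formulation concrete, here is a tiny illustration (a sketch only: the event type is from ACE 2005, but the statement wording, example sentence, and field names are invented for illustration and not taken from the paper's data):

```python
# Hypothetical illustration of the task: a bleached statement carries the
# annotation-manual knowledge for an event type, and the model replaces its
# generic placeholders with spans from the text.
example = {
    "event_type": "LIFE:BE-BORN",
    "bleached_statement": "someone was born in some place",   # invented template
    "text": "Kennedy was born in Brookline, Massachusetts.",
    "filled_statement": "Kennedy was born in Brookline, Massachusetts",  # desired output
}
```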

Method

0. An Example of the Method (Overview)

As the overview figure in the paper shows, the method proceeds by stepwise refinement (refine step by step): the bleached statement is filled in one placeholder at a time, over several rounds.
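
A minimal sketch of this refinement loop, assuming an argument selector is available as a callable (the function names and the toy statement below are illustrative, not the paper's API):

```python
def extract_event(text, statement, placeholders, fill_argument):
    """Stepwise refinement of a bleached statement (schematic sketch).

    fill_argument(text, statement, placeholder) is assumed to return the list
    of argument spans selected for that placeholder (possibly empty).
    """
    for placeholder in placeholders:
        spans = fill_argument(text, statement, placeholder)
        if spans:
            # Replace the generic placeholder with the extracted span(s);
            # the refined statement is what the next round conditions on.
            statement = statement.replace(placeholder, " and ".join(spans))
    return statement


# Toy usage with a hard-coded selector, just to show the data flow.
toy_selector = lambda text, stmt, ph: {"someone": ["Kennedy"],
                                       "some place": ["Brookline"]}.get(ph, [])
print(extract_event("Kennedy was born in Brookline.",
                    "someone was born in some place",
                    ["someone", "some place"],
                    toy_selector))
# -> Kennedy was born in Brookline
```

Every filled placeholder immediately becomes part of the input for the next round, which is why the extraction order matters (see the Weaknesses section below).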

1. Multiple Argument Selector

The setting in this paper differs somewhat from traditional MRC:

  1. in traditional MRC the query is a question, whereas here it is a cloze-style problem;
  2. traditional MRC usually has a single answer span, whereas here one argument role may be filled by multiple arguments, so the multiple-argument case must be handled.

The paper addresses this with sequence tagging: a CRF tagger over the text, using the BIO tagging scheme, so any number of argument spans can be recovered.

The following example illustrates how the paper's CRF works.

Take the text from the earlier example. In Round 2 the placeholder is "someone else". Each token of the text acts as a query, and each token of the placeholder acts as a key and value, giving an attentive representation of the placeholder at every text position.

A concatenation of these features (as given in the paper's equation) then serves as the input to the CRF's potential function.

The potential function itself is just a multi-layer feed-forward neural network.
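
A minimal PyTorch-style sketch of this selector (a sketch under assumptions: the hidden size, number of attention heads, exact feature concatenation, and the 3-tag BIO inventory are my own illustrative choices, and CRF training/decoding is not shown):

```python
import torch
import torch.nn as nn

class ArgumentSelector(nn.Module):
    """Placeholder-aware attention feeding an MLP that scores BIO tags per token."""

    def __init__(self, hidden=768, num_tags=3):            # tags: B, I, O
        super().__init__()
        # Text tokens are the queries; placeholder tokens are keys and values.
        self.attn = nn.MultiheadAttention(hidden, num_heads=8, batch_first=True)
        # "Potential function" emission part: a small feed-forward net over
        # the concatenated token and attended-placeholder representations.
        self.mlp = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, num_tags)
        )
        # Label-transition scores a linear-chain CRF would combine with the
        # emissions (Viterbi decoding / log-likelihood omitted in this sketch).
        self.transitions = nn.Parameter(torch.zeros(num_tags, num_tags))

    def emissions(self, text_repr, placeholder_repr):
        # text_repr: (batch, T, hidden); placeholder_repr: (batch, P, hidden)
        attended, _ = self.attn(text_repr, placeholder_repr, placeholder_repr)
        features = torch.cat([text_repr, attended], dim=-1)  # (batch, T, 2*hidden)
        return self.mlp(features)                            # per-token BIO scores


selector = ArgumentSelector()
text = torch.randn(1, 12, 768)         # e.g. encoder representations of 12 text tokens
placeholder = torch.randn(1, 2, 768)   # representations of the placeholder "someone else"
print(selector.emissions(text, placeholder).shape)  # torch.Size([1, 12, 3])
```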

2. Trigger Identification

The trigger can be viewed as a special kind of argument, so the paper reuses the argument selection model described above for trigger identification. The "placeholder" in this case consists of all words of the bleached statement except the argument placeholders (the underlined portion in the paper's figure).
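
A tiny sketch of how such a trigger query could be assembled, assuming the argument placeholders are known strings (the statement and whitespace tokenization are illustrative; the paper's exact construction may differ):

```python
def trigger_placeholder(bleached_statement, argument_placeholders):
    """Return the bleached-statement words that are NOT argument placeholders;
    these act as the 'placeholder' when querying for the trigger."""
    arg_words = {w for ph in argument_placeholders for w in ph.split()}
    return [w for w in bleached_statement.split() if w not in arg_words]

print(trigger_placeholder("someone was born in some place",
                          ["someone", "some place"]))
# -> ['was', 'born', 'in']
```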

3. Training Data Generation

The argument extractor takes triples of the form $(S, I, T)$ as input. During training, each refined bleached statement is the gold bleached statement rather than the statement predicted in the previous step. (This looks like standard practice, essentially teacher forcing; the joint models I read earlier are trained the same way.)
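
A rough sketch of this teacher-forced data construction (my own reading of the triple, for illustration only: $S$ as the partially refined statement, $I$ as the placeholder to fill next, $T$ as the text; the function and variable names are hypothetical):

```python
def training_instances(gold_filled_args, bleached_statement, text):
    """Build (S, I, T) triples for one event, using GOLD statements at every
    refinement step (teacher forcing) rather than the model's own predictions."""
    instances = []
    statement = bleached_statement
    for placeholder, gold_span in gold_filled_args.items():
        instances.append((statement, placeholder, text))        # (S, I, T)
        statement = statement.replace(placeholder, gold_span)   # refine with gold
    return instances


for s, i, t in training_instances(
        {"someone": "Kennedy", "some place": "Brookline"},
        "someone was born in some place",
        "Kennedy was born in Brookline."):
    print(i, "->", s)
# someone -> someone was born in some place
# some place -> Kennedy was born in some place
```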

4. Negative Sampling

Negative sampling is used to augment the trigger-identification training data. For each example, negative instances are built by sampling $\alpha\%$ of the event types that do not occur in it. (Only trigger identification lends itself to this kind of augmentation.)
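
A minimal sketch of that sampling step (the event-type list, the α value, and the function name are illustrative):

```python
import random

def sample_negative_event_types(all_event_types, gold_event_types, alpha=0.1, seed=0):
    """Sample a fraction alpha of the event types NOT present in the example;
    each sampled type yields a negative trigger-identification instance
    in which no trigger should be found."""
    negatives = sorted(set(all_event_types) - set(gold_event_types))
    k = max(1, int(alpha * len(negatives)))
    return random.Random(seed).sample(negatives, k)

print(sample_negative_event_types(
    ["LIFE:BE-BORN", "LIFE:DIE", "MOVEMENT:TRANSPORT", "CONFLICT:ATTACK"],
    ["LIFE:BE-BORN"], alpha=0.5))
```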

Experiments

Dataset

ACE 2005.

Results

Since the experiments are exploratory, the results are not particularly strong (see the results table in the paper).

Error Analysis

The paper's error analysis of the argument extractor is actually quite interesting:

  1. Relative clauses. For example, the model extracts Mosul as the argument while the gold answer is *Mosul, where U.S. troops killed 17 people in clashes earlier in the week*. The two are essentially the same, but the attached clause causes an error;
  2. Counts. For example, the gold answer is 300 billion yen, but the model extracts 300 billion;
  3. Durations. For example, the gold answer is lasted two hours, but the model extracts two hours.

This error analysis shows that without entity mentions, extraction based purely on start-end spans can yield answers that are semantically equivalent to the gold but imprecise at the span level.

Thinking

Useful references

  1. Trung Minh Nguyen and Thien Huu Nguyen. 2019. One for all: Neural joint modeling of entities and events. In Proc. AAAI, pages 6851–6858.
  2. Lifu Huang, Heng Ji, Kyunghyun Cho, Ido Dagan, Sebastian Riedel, and Clare Voss. 2018. Zero-shot transfer learning for event extraction. In Proc. ACL, pages 2160–2170.
  3. Xiaoya Li, Fan Yin, Zijun Sun, Xiayu Li, Arianna Yuan, Duo Chai, Mingxin Zhou, and Jiwei Li. 2019. Entity-relation extraction as multi-turn question answering. In Proc. ACL, pages 1340–1350. (QA for relation extraction)
  4. Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, and Tom Kwiatkowski. 2019. Matching the blanks: Distributional similarity for relation learning. In Proc. ACL, pages 2895–2905. (Cloze-style MRC for relation extraction)
  5. Xuchen Yao, Benjamin Van Durme, Chris Callison-Burch, and Peter Clark. 2013. Answer extraction as sequence tagging with tree edit distance. In Proc. NAACL, pages 858–867. (An early model that tackled extractive reading comprehension with sequence tagging)
  6. John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. ICML, pages 282–289. (The classic paper on applying CRFs to sequence labeling)

Weaknesses

  1. Refining the bleached statement step by step implicitly imposes an order on argument extraction. There is no principled logic behind that order, and while arguments extracted earlier can inform later ones, later arguments cannot influence earlier ones, which does not seem ideal.

Things I did not understand

  1. I had not come across the potential function of a CRF mentioned in the paper before; there is quite a bit behind it (a standard write-up of the decomposition follows after this list):

    The potential function at each position of the input sequence in a neural CRF is typically decomposed into an emission function (of the current label and the vector representation of the current word) and a transition function (of the previous and current labels)

    https://www.aclweb.org/anthology/2020.findings-emnlp.236.pdf

  2. I do not quite understand what the ANCHORTRIGGER(S, t) operation performed after trigger identification means, nor how the trigger influences argument extraction.
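
Regarding point 1, the decomposition described in the quoted passage is usually written as follows (standard neural linear-chain CRF notation; not copied from this paper's equations):

$$
\psi_i(y_{i-1}, y_i, \mathbf{x}) \,=\, \underbrace{\phi_{\mathrm{emit}}(y_i, \mathbf{h}_i)}_{\text{emission: current label vs. current token representation}} \,+\, \underbrace{\phi_{\mathrm{trans}}(y_{i-1}, y_i)}_{\text{transition: previous vs. current label}},
\qquad
p(\mathbf{y} \mid \mathbf{x}) \,\propto\, \exp\Big(\sum_{i} \psi_i(y_{i-1}, y_i, \mathbf{x})\Big).
$$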

Ideas worth borrowing

  1. Trigger-based method: split the data by trigger, and make sure to incorporate trigger information, possibly even the trigger's position, since the trigger position matters a great deal for argument extraction.
  2. This paper can serve as a baseline for comparison in our own experiments!
  3. The paper uses a CRF to tag argument spans, whereas we have been using start-end pointer tagging; it is unclear how the two choices affect the results.
  4. Bring in newer pre-trained language models.
