Touché @ ArgMining21 Frame Identification
- Task: Given an argument and a number of candidate frames, choose the best fitting one.
- Input: Arguments and candidate frames for a set of selected topics.
- Submission: TBD
- Register: TBD
Framing is the process of selecting a subset of the available information, guided by some principle and aimed at an effect on the target audience. In argumentation, framing is the process of emphasizing a specific aspect of a controversial topic in order to convince the audience. For example, when discussing the legalization of drugs, various aspects (called "frames" here) such as health, economics, or ethics may be emphasized. An explicit handling of framing (be it by uncovering or by introducing frames) is necessary for various computational argumentation tasks such as persuasiveness prediction and analysis, argument summarization, and argument search.
In this shared task, we seek systems that are able to choose the "primary frame" of a given argument from a set of provided candidate frames. We encourage all participants of this shared task to submit their work to the 'Argument Mining' workshop at EMNLP. The accepted submissions will be included in the workshop proceedings.
Given an argument (for a specific topic) and a set of candidate frames, the task is to choose the primary frame of the argument from those available for the topic.
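The task interface can be illustrated with a trivial lexical-overlap baseline: pick the candidate frame whose name shares the most words with the argument. This is only a sketch with made-up inputs, not a provided baseline or the expected submission format.

```python
def choose_frame(argument: str, candidates: list[str]) -> str:
    """Naive baseline: return the candidate frame whose name has the
    largest word overlap with the argument text."""
    arg_tokens = set(argument.lower().split())

    def overlap(frame: str) -> int:
        return len(set(frame.lower().split()) & arg_tokens)

    # max() keeps the first candidate on ties, so candidate order matters
    return max(candidates, key=overlap)

# Hypothetical argument and candidate set for the drug-legalization topic
print(choose_frame(
    "Legalizing drugs would reduce health risks from unregulated substances",
    ["economics", "health", "ethics", "border security"],
))  # → health
```

A real system would of course need to model the argument's content rather than the frame name's surface form, since most arguments never mention their frame explicitly.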
The training data is a subset of the Webis-argument-framing corpus. The corpus contains 12 326 arguments that have been extracted from the debate platform 'debatepedia.com'. Each argument is labeled with its primary frame, where the labels (the names of the frames) are derived from the platform's debate structure. The corpus covers 465 controversial topics with a total of 1623 distinct primary frames. The dataset used in the shared task contains 3380 arguments on 331 topics and covers the 100 most frequently used frames.
Statistics about the frame distribution in the dataset:
- The most frequent frame is 'economics' (used in 50 arguments)
- The mean number of arguments per frame is 34
- The standard deviation of arguments per frame is 13
- The least occurring frame is 'border security' (used in 18 arguments)
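Such distribution statistics can be derived from a simple frame-to-count table. A minimal sketch with a hypothetical, much smaller counts dictionary (the real per-frame counts for all 100 frames are not listed here):

```python
from statistics import mean, pstdev

# Hypothetical per-frame argument counts; the actual dataset covers
# the 100 most frequent frames
counts = {"economics": 50, "health": 40, "ethics": 30, "border security": 18}

most_frequent = max(counts, key=counts.get)
least_frequent = min(counts, key=counts.get)
print(most_frequent, least_frequent)  # → economics border security
print(mean(counts.values()), round(pstdev(counts.values()), 1))
```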
The test data includes a subset of the frames that are covered in the training set. For each argument in the test set, four candidate frames are given (one of them is the primary).
The evaluation of the submitted systems will be based on the F1 score. You will be able to self-evaluate your software using the TIRA service; you can find the user guide here. While the results of the automatic metrics are shared on the leaderboard throughout the duration of the task, those of the manual evaluation will be shared later. This is primarily a cost constraint: only the top-performing models on the automatic metrics are selected for the subsequent manual evaluation via crowdsourcing.
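For self-checking before submission, a macro-averaged F1 over frame labels can be computed in plain Python. Note that the averaging scheme (micro vs. macro) is not specified in this description, so macro averaging here is an assumption; the gold and predicted labels below are made up.

```python
def macro_f1(gold: list[str], pred: list[str]) -> float:
    """Macro-averaged F1: compute per-label precision/recall/F1,
    then average the F1 scores over all labels."""
    labels = set(gold) | set(pred)
    f1s = []
    for lab in labels:
        tp = sum(g == lab and p == lab for g, p in zip(gold, pred))
        fp = sum(p == lab and g != lab for g, p in zip(gold, pred))
        fn = sum(g == lab and p != lab for g, p in zip(gold, pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Hypothetical gold and predicted primary frames
gold = ["health", "economics", "health", "ethics"]
pred = ["health", "economics", "ethics", "ethics"]
print(round(macro_f1(gold, pred), 3))  # → 0.778
```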
Will be updated shortly.