[Commlist] cfp: 1st Workshop on Novel Evaluation Approaches for Text Classification Systems on Social Media
Fri Mar 25 13:25:20 GMT 2022
Call for Papers - Deadline extended
1st Workshop on Novel Evaluation Approaches for Text Classification
Systems on Social Media
Co-located with ICWSM 2022, 6 June 2022, Hybrid format - Atlanta,
Georgia (US) and online
https://neatclass-workshop.github.io/
The automatic or semi-automatic analysis of textual data is a key
approach for analysing the massive amounts of user-generated content
online, from the identification of sentiment in text and topic
classification to the detection of abusive language, misinformation or
propaganda. However, the development of such systems faces a crucial
challenge. Static benchmarking datasets and performance metrics are the
primary method for measuring progress in the field, and the publication
of research on new systems typically requires demonstrating an
improvement over state-of-the-art approaches in this way. Yet, these
performance metrics can obscure critical failings in current models.
Improvements in metrics often do not reflect improvements in the
real-world performance of models. There is a clear need to rethink
performance evaluation if text classification and analysis systems are
to be usable and trustworthy.
If unreliable systems achieve astonishing scores with traditional
metrics, how do we recognise progress when we see it? The goal of the
Workshop on Novel Evaluation Approaches for Text Classification Systems
on Social Media (NEATCLasS) is to promote the development and use of
novel metrics for abuse detection, hate speech recognition, sentiment
analysis and similar tasks within the community, to better measure
whether models genuinely improve upon the state of the art, and to
encourage a wide range of models to be tested against these new metrics.
Recently there have been attempts to address the problem of benchmarks
and metrics that do not represent performance well. For example, in
abusive language detection, there are both static datasets of
hard-to-detect examples (Röttger et al. 2021) and dynamic approaches for
generating such examples (Calabrese et al. 2021). On the platform
DynaBench (Kiela et al. 2021), benchmarks are dynamic and constantly
updated with hard-to-classify examples, which avoids overfitting to a
predetermined dataset. However, these approaches address only a small
fraction of the issues with benchmarking. There is still much work to do.
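To illustrate the kind of evaluation such approaches enable, below is a
minimal sketch in Python of a HateCheck-style functional test suite,
which reports per-capability accuracy rather than a single aggregate
score. The classify() placeholder and the test cases are invented for
illustration; they are not taken from HateCheck or DynaBench.

    # Minimal sketch of a HateCheck-style functional test suite.
    # classify() is a hypothetical stand-in for any abusive-language
    # classifier; the test cases below are invented illustrations.
    from collections import defaultdict

    def classify(text):
        # Placeholder model: flags any text containing the marker "[SLUR]".
        return "abusive" if "[SLUR]" in text else "not abusive"

    # Each functional test targets one capability a robust model should have.
    TEST_SUITE = [
        ("negation", "I would never call you a [SLUR].", "not abusive"),
        ("counter-speech", 'Saying "[SLUR]" to anyone is wrong.', "not abusive"),
        ("plain abuse", "You are a [SLUR].", "abusive"),
    ]

    results = defaultdict(lambda: [0, 0])  # functionality -> [correct, total]
    for functionality, text, expected in TEST_SUITE:
        results[functionality][0] += classify(text) == expected
        results[functionality][1] += 1

    # Report accuracy per functionality, so specific failure modes
    # (e.g. negation) are not averaged away by an aggregate score.
    for functionality, (correct, total) in results.items():
        print(f"{functionality}: {correct}/{total}")

On this toy suite, the keyword-matching placeholder scores perfectly on
plain abuse but fails on negation and counter-speech, exactly the kind
of failure mode that a single aggregate benchmark score can hide.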
For the first edition of the workshop on Novel Evaluation Approaches for
Text Classification Systems (NEATCLasS), we welcome submissions that
discuss such new evaluation approaches, introduce new approaches or
refine existing ones, and promote the use of novel metrics for abuse
detection, sentiment analysis and similar tasks within the community.
Furthermore,
the workshop will promote discussion on the importance, potential and
danger of disagreement in tasks that require subjective judgements. This
discussion will also focus on how to evaluate human annotations, and how
to find the most suitable set of annotators (if any) for a given
instance and task. The workshop will solicit, among others, research
papers about:
* Issues with current evaluation metrics and benchmarking datasets
* New evaluation metrics
* User-centred (qualitative or quantitative) evaluation of social media
text analysis tools
* Adaptations and translations of novel evaluation metrics for other
languages
* New datasets for benchmarking
* Increasing data quality in benchmarking datasets, e.g., avoidance of
selection bias, identification of suitable expert human annotators for
tasks involving subjective judgements
* Systems that facilitate dynamic evaluation and benchmarking
* Models that perform better on hard-to-classify instances and under
novel evaluation approaches such as AAA, DynaBench and HateCheck
* Bias, error analysis and model diagnostics
* Phenomena not captured by existing evaluation metrics (such as models
making the right predictions for the wrong reason)
* Approaches to mitigating bias and common errors
* Alternative designs for NLP competitions that evaluate a wide range of
model characteristics (such as bias, error analysis, cross-domain
performance)
* Challenges of downstream applications (in industry, computational
social science, computational communication science, and others) and
reflections on how these challenges can be captured in evaluation metrics
Format and Submissions
The workshop will take place as a full-day meeting on 6 June.
Participants will be invited to trial an innovative format for paper
presentations: presenters will first be given 5 minutes to describe
their research questions and hypotheses, followed by a group discussion.
Presenters will then be given 5 more minutes to describe their method
and results, followed by a further group discussion on the
interpretation and implications of those results. In the afternoon there
will be collaborative group activities to bring researchers together and
collect ideas for new evaluation approaches and future work in the
field. We will discuss how we can organise competitions when there are
multiple evaluation metrics and benchmarking datasets are dynamic.
We invite research papers (8 pages), position and short papers (4
pages), and demo papers (2 pages). Submissions must be original and
should not have been published previously or be under consideration for
publication while being evaluated for this workshop. Submissions will be
evaluated by the program committee based on the quality of the work and
its fit to the workshop themes. All submissions should be anonymised
for double-blind review, and a high-resolution PDF of the paper should
be uploaded to the EasyChair submission site (link below) before the
paper submission deadline. All papers must be formatted in AAAI
two-column, camera-ready style. Authors of accepted papers will have the
opportunity to publish their papers through workshop proceedings by the
AAAI Press. Submission instructions will be uploaded to the workshop web
page in due course: https://neatclass-workshop.github.io/
Timeline
* Submission link: https://easychair.org/conferences/?conf=neatclass2022
* UPDATED Papers submission deadline: April 10, 2022
* UPDATED Paper acceptance notification: April 29, 2022
* UPDATED Final camera-ready paper due: May 6, 2022
* Workshop Day: June 6, 2022
All deadlines are 11:59pm AoE (Anywhere on Earth).
Organisers
Björn Ross, University of Edinburgh
Roberto Navigli, Sapienza University of Rome
Agostina Calabrese, University of Edinburgh
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.