EMNLP 2021 in tweets


by Nedjma Ousidhoum
26 November 2021



The Conference on Empirical Methods in Natural Language Processing (EMNLP 2021) took place from the 7th to the 11th of November, both in Punta Cana and online. If you did not have time to follow the papers and keynotes at the main conference, here are the livetweeted keynotes and papers, sorted by the language of the livetweets.

Keynotes

Where next? Towards multi-text consumption via three inspired research lines

The Language System in the Human Brain

LT4All!? Rethinking the Agenda

 

Papers

Brazilian Portuguese livetweets

Transformer Feed-Forward Layers Are Key-Value Memories

CIDEr-R: Robust Consensus-based Image Description Evaluation

CLIPScore: A Reference-free Evaluation Metric for Image Captioning

Machine-in-the-Loop Rewriting for Creative Image Captioning

English livetweets

Grammatical Profiling for Semantic Change Detection

Stepmothers are mean and academics are pretentious: What do pretrained language models learn about you?

We Need to Talk About Train-dev-test Splits

AVocaDo: Strategy for Adapting Vocabulary to Downstream Domain

Effects of Parameter Norm Growth During Transformer Training: Inductive Bias from Gradient Descent

HypMix: Hyperbolic Interpolative Data Augmentation

On Homophony and Rényi Entropy

The Effect of Efficient Messaging and Input Variability on Neural-Agent Iterated Language Learning

Competency Problems: On Finding and Removing Artifacts in Language Data

Filling the Gaps in Ancient Akkadian Texts: A Masked Language Modelling Approach

Coarse2Fine: Fine-Grained Text Classification on Coarsely-grained Annotated Data

Information-theoretic Characterization of Fusion

AligNART: Non-autoregressive Neural Machine Translation by Learning to Estimate Alignment and Translate

IndoBERTweet: A Pretrained Language Model for Indonesian Twitter with Effective Domain-Specific Vocabulary Initialization

MindCraft: Theory of Mind Modeling for Situated Dialogue in Collaborative Tasks

Multimodal Pretraining Unmasked

Vision-and-Language or Vision-for-Language? On Cross-Modal Influence in Multimodal Transformers

COVR: A Test-Bed for Visually Grounded Compositional Generalization with Real Images

On Pursuit of Designing Multi-modal Transformer for Video Grounding

Inflate and Shrink: Enriching and Reducing Interactions for Fast Text-Image Retrieval

Robust Open-Vocabulary Translation from Visual Text Representations

Boosting Cross-lingual Transfer via Self-learning with Uncertainty Estimation

It Is Not As Good As You Think!

A Generative Framework for Simultaneous Machine Translation

Controlling Machine Translation for Multiple Attributes with Additive Interventions

BERT, mBERT, or BiBERT? A Study on Contextualized Embeddings for Neural Machine Translation

Multilingual Unsupervised Neural Machine Translation with Denoising Adapters

Indonesian livetweets

Disentangling Representations of Text by Masking Transformers

Aligning Faithful Interpretations with their Social Attributes

How Does Counterfactually Augmented Data Impact Models for Social Computing Constructs?

Idiosyncratic but not Arbitrary: Learning Idiolects in Online Registers Reveals Distinctive yet Consistent Individual Styles

Multi-domain Multilingual Question Answering

IndoNLI: A Natural Language Inference Dataset for Indonesian

MindCraft: Theory of Mind Modeling for Situated Dialogue in Collaborative Tasks

UNKs Everywhere: Adapting Multilingual Language Models to New Scripts

Finally, here is an interesting selection by Iftitahu Nimah.




Nedjma Ousidhoum is a postdoc at the University of Cambridge.
