EMNLP 2021 in tweets


by Nedjma Ousidhoum
26 November 2021



The Conference on Empirical Methods in Natural Language Processing (EMNLP 2021) took place from the 7th to the 11th of November, both in Punta Cana and online. If you did not have time to follow the papers and keynotes at the main conference, here are the livetweeted keynotes and papers, sorted by the language of the livetweets.

Keynotes

Where next? Towards multi-text consumption via three inspired research lines

The Language System in the Human Brain

LT4All!? Rethinking the Agenda

 

Papers

Brazilian Portuguese livetweets

Transformer Feed-Forward Layers Are Key-Value Memories

CIDEr-R: Robust Consensus-based Image Description Evaluation

CLIPScore: A Reference-free Evaluation Metric for Image Captioning

Machine-in-the-Loop Rewriting for Creative Image Captioning

English livetweets

Grammatical Profiling for Semantic Change Detection

Stepmothers are mean and academics are pretentious: What do pretrained language models learn about you?

We Need to Talk About Train-dev-test Splits

AVocaDo: Strategy for Adapting Vocabulary to Downstream Domain

Effects of Parameter Norm Growth During Transformer Training: Inductive Bias from Gradient Descent

HypMix: Hyperbolic Interpolative Data Augmentation

On Homophony and Rényi Entropy

The Effect of Efficient Messaging and Input Variability on Neural-Agent Iterated Language Learning

Competency Problems: On Finding and Removing Artifacts in Language Data

Filling the Gaps in Ancient Akkadian Texts: A Masked Language Modelling Approach

Coarse2Fine: Fine-Grained Text Classification on Coarsely-grained Annotated Data

Information-theoretic Characterization of Fusion

AligNART: Non-autoregressive Neural Machine Translation by Learning to Estimate Alignment and Translate

IndoBERTweet: A Pretrained Language Model for Indonesian Twitter with Effective Domain-Specific Vocabulary Initialization

MindCraft: Theory of Mind Modeling for Situated Dialogue in Collaborative Tasks

Multimodal Pretraining Unmasked

Vision-and-Language or Vision-for-Language? On Cross-Modal Influence in Multimodal Transformers

COVR: A Test-Bed for Visually Grounded Compositional Generalization with Real Images

On Pursuit of Designing Multi-modal Transformer for Video Grounding

Inflate and Shrink: Enriching and Reducing Interactions for Fast Text-Image Retrieval

Robust Open-Vocabulary Translation from Visual Text Representations

Boosting Cross-lingual Transfer via Self-learning with Uncertainty Estimation

It Is Not As Good As You Think!

A Generative Framework for Simultaneous Machine Translation

Controlling Machine Translation for Multiple Attributes with Additive Interventions

BERT, mBERT, or BiBERT? A Study on Contextualized Embeddings for Neural Machine Translation

Multilingual Unsupervised Neural Machine Translation with Denoising Adapters

Indonesian livetweets

Disentangling Representations of Text by Masking Transformers

Aligning Faithful Interpretations with their Social Attributes

How Does Counterfactually Augmented Data Impact Models for Social Computing Constructs?

Idiosyncratic but not Arbitrary: Learning Idiolects in Online Registers Reveals Distinctive yet Consistent Individual Styles

Multi-domain Multilingual Question Answering

IndoNLI: A Natural Language Inference Dataset for Indonesian

MindCraft: Theory of Mind Modeling for Situated Dialogue in Collaborative Tasks

UNKs Everywhere: Adapting Multilingual Language Models to New Scripts

Finally, here is an interesting selection by Iftitahu Nimah.




Nedjma Ousidhoum is a postdoc at the University of Cambridge.



