The Conference on Empirical Methods in Natural Language Processing (EMNLP 2021) took place from 7 to 11 November, both in Punta Cana and online. If you did not have time to follow the papers and keynotes at the main conference, here are the live-tweeted keynotes and papers, sorted by language.
✍️Live Notes of EMNLP 2021 #EMNLP2021 Keynote by Ido Dagan on 3 directions that #NLProc should pursue: https://t.co/LLeBjcffOP @emnlpmeeting
— Zhijing Jin@NAACL2024 Mexico (@ZhijingJin) November 7, 2021
At #EMNLP2021 Evelina Fedorenko makes a strong case to defuse criticism that neural language models cannot "think". Neither can the human language modules in the brain, she argues, based on human brain studies. #EMNLP2021livetweet pic.twitter.com/WXiUZkWZTy
— Zeta Alpha (@ZetaVector) November 8, 2021
#EMNLP2021livetweet
Should language technology (LT) be one-size-fits-all? In his keynote speech, Prof. Steven Bird argues that we should *not* assume the approaches and assumptions for developing LT apply to all languages & communities.
1/7 #emnlp2021en https://t.co/S5onMBSRu1 — Renny P. Kusumawardani (@rennypradina) November 10, 2021
#EMNLP2021 #EMNLP2021livetweet
[PT-BR]
For those interested in interpreting what Transformer models are learning, here is a very interesting piece of work, "Transformer Feed-Forward Layers Are Key-Value Memories", by Mor Geva et al. https://t.co/COXgTeN2cb — Gabriel Oliveira (@GOdS_briel) November 9, 2021
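The paper's core observation is that a transformer feed-forward block has the same algebraic form as an unnormalized key-value memory: the first linear layer matches the input against learned keys, and the resulting ReLU coefficients weight the rows of the second layer, the values. A minimal numpy sketch of that reading (the dimensions and random weights here are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_ff = 8, 32                 # hidden size; number of "memories"

K = rng.normal(size=(d_ff, d_model))  # keys: rows of the first FF weight matrix
V = rng.normal(size=(d_ff, d_model))  # values: rows of the second FF weight matrix
x = rng.normal(size=d_model)          # one token's hidden representation

m = np.maximum(K @ x, 0.0)            # ReLU coefficients: how well x matches each key
y = m @ V                             # output: coefficient-weighted sum of values
```

The paper's interpretability angle comes from inspecting which inputs activate each key and what the corresponding value promotes in the output distribution.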
#EMNLP2021livetweet
[PT-BR] Today is the day to promote my own work 😀 CIDEr-R: Robust Consensus-based Image Description Evaluation
✏️ Gabriel Oliveira (me), Esther Colombini (@esthercolombini), Sandra Avila (@sandraavilabr)
— Gabriel Oliveira (@GOdS_briel) November 11, 2021
#EMNLP2021LiveTweet
[PT-BR] For those interested in evaluation methods for multimodal tasks, here is a reading recommendation: CLIPScore: A Reference-free Evaluation Metric for Image Captioning
✏️ Jack Hessel(@jmhessel) et al.
— Gabriel Oliveira (@GOdS_briel) November 10, 2021
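As background on the metric: CLIPScore rates a caption by the cosine similarity between CLIP's image embedding and its text embedding, with no reference captions involved; the paper rescales the similarity as 2.5 · max(cos, 0). A sketch of that final computation over precomputed embeddings (the 2-d vectors below are placeholders — real CLIP embeddings are much larger):

```python
import numpy as np

def clip_score(image_emb, text_emb, w=2.5):
    """Reference-free caption score: w * max(cosine(image, caption), 0)."""
    cos = image_emb @ text_emb / (np.linalg.norm(image_emb) * np.linalg.norm(text_emb))
    return w * max(cos, 0.0)

# Placeholder embeddings standing in for CLIP's image and text encoders.
img = np.array([0.6, 0.8])
cap = np.array([0.8, 0.6])
score = clip_score(img, cap)
```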
#EMNLP2021LiveTweet
[PT-BR] A reading recommendation for those interested in design combined with machine learning: Machine-in-the-Loop Rewriting for Creative Image Captioning
✏️ Vishakh Padmakumar (@vishakh_pk) et al.
📽️ https://t.co/Zwz43NJKz6 — Gabriel Oliveira (@GOdS_briel) November 10, 2021
Most works on semantic change use neural word representations (w2v, BERT). How do features compare? Surprisingly well. @glnmario Andrey Kutuzov, Lidia Pivovarova #EMNLP2021livetweet #CoNLL2021 https://t.co/M5MPTvB4Fw pic.twitter.com/iEcdVrsiSW
— Leshem Choshen 🤖🤗 (@LChoshen) November 11, 2021
#EMNLP2021 Stepmothers are mean and academics are pretentious: What do pretrained language models learn about you? https://t.co/ARACENSLxr
Rochelle Choenni, Ekaterina Shutova, Robert van Rooij (no Twitter?!) pic.twitter.com/lAwnnzI2JT — Leshem Choshen 🤖🤗 (@LChoshen) November 7, 2021
#EMNLP2021 We Need to Talk About train-dev-test Splits @robvanderg
Tuning is done on the dev set, so people use the test set to compare many models, and the test sets become unreliable. We should use another split. pic.twitter.com/dwsz4Hyjvl — Leshem Choshen 🤖🤗 (@LChoshen) November 8, 2021
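The fix suggested above can be made concrete: reserve the test set for one final evaluation and compare candidate models on a separate tuning split. A minimal sketch of such a four-way split (the split names and fractions are illustrative, not from the paper):

```python
import random

def four_way_split(examples, seed=0, frac=(0.7, 0.1, 0.1)):
    """Split data so the test set is only touched once.

    'train' is for gradient updates, 'dev' for early stopping and
    hyperparameters, 'tune' for comparing candidate models, and 'test'
    is reserved for a single final evaluation of the chosen model.
    """
    rng = random.Random(seed)
    shuffled = list(examples)
    rng.shuffle(shuffled)
    n = len(shuffled)
    a = int(frac[0] * n)          # end of train
    b = a + int(frac[1] * n)      # end of dev
    c = b + int(frac[2] * n)      # end of tune; the rest is test
    return {
        "train": shuffled[:a],
        "dev": shuffled[a:b],
        "tune": shuffled[b:c],
        "test": shuffled[c:],
    }

splits = four_way_split(range(100))
```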
#EMNLP2021 #EMNLP2021livetweet https://t.co/3Y79Hf4xwP https://t.co/vn0PU0wyGZ
Jimin Hong, Taehee Kim, Hyesu Lim, Jaegul Choo
AVocaDo: Strategy for Adapting Vocabulary to Downstream Domain
How to change the vocabulary to fit our data (with a pretrained model). pic.twitter.com/oFu1ihn02J— Leshem Choshen 🤖🤗 (@LChoshen) November 8, 2021
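The general idea of vocabulary adaptation can be illustrated with a toy word-level vocabulary: add frequent out-of-vocabulary domain words before fine-tuning, so domain terms are no longer fragmented or mapped to UNK. This sketch only illustrates the idea; it is not the paper's actual subword procedure:

```python
from collections import Counter

def adapt_vocab(base_vocab, domain_corpus, top_k=1):
    """Extend a vocabulary with the most frequent out-of-vocabulary domain words."""
    counts = Counter(w for text in domain_corpus for w in text.split())
    oov = [w for w, _ in counts.most_common() if w not in base_vocab]
    new_vocab = dict(base_vocab)
    for word in oov[:top_k]:
        # In a real model, a new embedding row would be added for each new id.
        new_vocab[word] = len(new_vocab)
    return new_vocab

base = {"the": 0, "model": 1}
corpus = ["the transformer model", "transformer layers", "transformer attention"]
adapted = adapt_vocab(base, corpus, top_k=1)
```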
#EMNLP2021
William Merrill @yoavgo @royschwartzNLP @RamanujanVivek @nlpnoah https://t.co/MkZGAqcuio https://t.co/Uem2c0nixV pic.twitter.com/2xelzI37wr — Leshem Choshen 🤖🤗 (@LChoshen) November 7, 2021
#EMNLP2021 #EMNLP2021livetweet https://t.co/HBmHH9qJvv @ramitsawhney Megh Thakkar @AGARvalaAgarwal Di Jin @Diyi_Yang Lucie Flek
HypMix: Hyperbolic Interpolative Data Augmentation https://t.co/IwrMoNfg7N pic.twitter.com/Hia0BqveuH — Leshem Choshen 🤖🤗 (@LChoshen) November 8, 2021
#EMNLP2021 #EMNLP2021livetweet https://t.co/Cl1EQsMyey
On Homophony and Rényi Entropy @tpimentelms @clara__meister Simone Teufel @ryandcotterell pic.twitter.com/MdIDVtjnUd — Leshem Choshen 🤖🤗 (@LChoshen) November 9, 2021
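As background for the title: the Rényi entropy of order α, H_α(X) = (1/(1−α)) · log Σᵢ pᵢ^α, generalizes Shannon entropy, which is recovered in the limit α → 1. A small sketch (the toy distribution is not from the paper):

```python
import math

def renyi_entropy(probs, alpha):
    """Rényi entropy of order alpha (alpha > 0, alpha != 1), in nats."""
    return math.log(sum(p ** alpha for p in probs)) / (1.0 - alpha)

# For a uniform distribution over n outcomes, every order gives log(n).
uniform = [0.25] * 4
```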
#EMNLP2021 #EMNLP2021livetweet https://t.co/G4eb9NIn4W
The Effect of Efficient Messaging and Input Variability on Neural-Agent Iterated Language Learning https://t.co/XgxRK7ioqz @ArijRiabi @AriannaBisazza @VerhoefTessa
Emerging language and linguistics,
gotta love this https://t.co/404mruzv1h— Leshem Choshen 🤖🤗 (@LChoshen) November 9, 2021
#EMNLP2021 Competency Problems: On Finding and Removing Artifacts in Language Data https://t.co/XpxixDEvnc
William Merrill, @JesseDodge, @nlpmattg, Matthew Peters, @alexisjross, @sameer_, @nlpnoah — Leshem Choshen 🤖🤗 (@LChoshen) November 7, 2021
#EMNLP2021 #EMNLP2021livetweet @LazarKoren Benny Saret, Asaf Yehudai, Wayne Horowitz, Nathan Wasserman, @GabiStanovsky https://t.co/xf6LZ1VE05 https://t.co/GUlSv6wNx9
— Leshem Choshen 🤖🤗 (@LChoshen) November 8, 2021
#EMNLP2021 Coarse2Fine: Fine-grained Text Classification on Coarsely-grained Annotated Data @MekalaDheeraj, @VarunGangal, @shangjingbo https://t.co/xR953Pga9a
Can we learn fine-grained classes from data with only coarse-grained annotations? pic.twitter.com/nCtD2d3Apd
— Leshem Choshen 🤖🤗 (@LChoshen) November 7, 2021
#EMNLP2021 #EMNLP2021livetweet https://t.co/K8OJ0hQNLj https://t.co/KHuMVZ3a25
Information-Theoretic Characterization of Fusion @neil_rathi @OlgaZamaraeva @ethanachi
The first quantitative measure of how morphologically fusional a language is.
p.s. Kudos: the 1st author is in high school! pic.twitter.com/jipozLYao2 — Leshem Choshen 🤖🤗 (@LChoshen) November 9, 2021
#EMNLP2021 AligNART: Non-autoregressive Neural Machine Translation by Jointly Learning to Estimate Alignment and Translate
Jongyoon Song, @sungwon__kim, Sungroh Yoon https://t.co/YHrHPvBfYW pic.twitter.com/YBX7yi7PnC — Leshem Choshen 🤖🤗 (@LChoshen) November 7, 2021
To give you some context on the significance of the following work, here is an article on why, in Indonesia, simultaneously no one and everyone speaks the national language, Bahasa Indonesia: https://t.co/MKdt6HMmCu
1/4 #EMNLP2021livetweet #emnlp2021en — Renny P. Kusumawardani (@rennypradina) November 11, 2021
#EMNLP2021 How do you model what people think they know about what others think? And how do dialogues influence such beliefs over the course of a collaboration? “MindCraft: Theory of Mind Modeling for Situated Dialogue in Collaborative Tasks” aims to answer these questions.
1/n #emnlp2021en pic.twitter.com/cQYrLzq3Io — Renny P. Kusumawardani (@rennypradina) November 7, 2021
#EMNLP2021LiveTweet 🧵
📃 Multimodal Pretraining Unmasked
✏️ E. Bugliarello (@ebugliarello), R. Cotterell (@ryandcotterell), N. Okazaki (@chokkanorg), D. Elliott (@delliott)
🎬 https://t.co/pQYz4grx61
📚 https://t.co/bZMM48JfOU
/1 pic.twitter.com/CPgVI6bDit— Max (@mxmeij) November 10, 2021
#EMNLP2021LiveTweet 🧵
📃 HypMix: Hyperbolic Interpolative Data Augmentation
✏️ Ramit Sawhney (@ramitsawhney), Megh Thakkar, Shivam Agarwal (@Shivamag12), Di Jin, Diyi Yang (@Diyi_Yang), Lucie Flek (@lucie_nlp)
🎬 https://t.co/WmEuMlEBHs
📚 https://t.co/1lk8bGojG6
/1 pic.twitter.com/GFvGxhIHKZ— Max (@mxmeij) November 10, 2021
#EMNLP2021LiveTweet 🧵
📃 Vision-and-Language or Vision-for-Language? On Cross-Modal Influence in Multimodal Transformers
✏️ Stella Frank (@stellalalallets), Emanuele Bugliarello (@ebugliarello), Desmond Elliott (@delliott)
🎬 https://t.co/Wj0OTkN07T
📚 https://t.co/dAGcjgmmWT
/1 pic.twitter.com/mjbNq4DvNH— Max (@mxmeij) November 10, 2021
#EMNLP2021LiveTweet 🧵
📃 COVR: A Test-Bed for Visually Grounded Compositional Generalization with Real Images
✏️ Ben Bogin (@ben_bogin), Shivanshu Gupta, Matt Gardner (@nlpmattg), Jonathan Berant (@JonathanBerant)
🎬 https://t.co/FTB5z72v8s
📚 https://t.co/uOrRMc8lLn
/1 pic.twitter.com/8hq5YO7LQe— Max (@mxmeij) November 9, 2021
#EMNLP2021LiveTweet 🧵
📃 On Pursuit of Designing Multi-modal Transformer for Video Grounding
✏️ Meng Cao, Long Chen, Mike Zheng Shou, Can Zhang, Yuexian Zou
🎬 https://t.co/r3lslpe5kH
📚 https://t.co/XpKL4zBO1I
/1 pic.twitter.com/LvGlxqR0RT— Max (@mxmeij) November 9, 2021
#EMNLP2021LiveTweet 🧵
📃 Inflate and Shrink: Enriching and Reducing Interactions for Fast Text-Image Retrieval
✏️ Haoliang Liu, Tan Yu (@taoyds), Ping Li
🎬 https://t.co/edKFuvLo4n
📚 https://t.co/sghqJCyp3Y
/1 pic.twitter.com/XO2CqjwtuK— Max (@mxmeij) November 9, 2021
#EMNLP2021LiveTweet 🧵
📃 Robust Open-Vocabulary Translation from Visual Text Representations
✏️ Elizabeth Salesky (@esalesk), David Etter, Matt Post (@mjpost)
🎬 https://t.co/yMSTxXajrM
📚 https://t.co/fNaLOptVEv
/1 pic.twitter.com/nTJnlUMZZ8— Max (@mxmeij) November 8, 2021
#EMNLP2021LiveTweet 🧵
📃 Boosting Cross-Lingual Transfer via Self-Learning with Uncertainty Estimation
✏️ Liyan Xu, Xuchao Zhang, Xujiang Zhao, Haifeng Chen, Feng Chen, Jinho D. Choi (@Jinho_D_Choi)
🎬 https://t.co/JwxdHKwGjz
📚 https://t.co/ehrt2bAxlp
/1 pic.twitter.com/CHdKDJsej0— Max (@mxmeij) November 8, 2021
#EMNLP2021LiveTweet 🧵
📃 It Is Not As Good As You Think!
✏️ J. Zhao (@michellezhaozh1), P. Arthur (@_philip_arthur), G. Haffari (@haffari), T. Cohn (@trevorcohn), E. Shareghi (@EhsanShareghi)
🎬 https://t.co/ta9id1iENw
📚 https://t.co/CRG5OKh66g
/1 pic.twitter.com/IN1aLpky6C— Max (@mxmeij) November 8, 2021
#EMNLP2021LiveTweet 🧵
📃 A Generative Framework for Simultaneous Machine Translation
✏️ Yishu Miao, Phil Blunsom, Lucia Specia (@lspecia)
🎬 https://t.co/OHjq2rAKiH
📚 https://t.co/27ujMOgf33
/1 pic.twitter.com/OPC5ksdPKA— Max (@mxmeij) November 8, 2021
#EMNLP2021LiveTweet 🧵
📃 Controlling Machine Translation for Multiple Attributes with Additive Interventions
✏️ Andrea Schioppa, Katja Filippova (@fajtak), Artem Sokolov, David Vilar
🎬 https://t.co/3dAzLA8zPx
📚 https://t.co/c2qLWUOoKs
/1 pic.twitter.com/EUl4L9s0MW— Max (@mxmeij) November 8, 2021
#EMNLP2021LiveTweet 🧵
📃 BERT, mBERT, or BiBERT? A Study on Contextualized Embeddings for Neural Machine Translation
✏️ Haoran Xu (@fe1ixxu), Benjamin Van Durme (@ben_vandurme), Kenton Murray (@kentonmurray)
🎬 https://t.co/TcH8ghZpsM
📚 https://t.co/LqfEtftWGj
/1 pic.twitter.com/iCaag8tLzL— Max (@mxmeij) November 8, 2021
#EMNLP2021LiveTweet 🧵
📃 Multilingual Unsupervised Neural Machine Translation with Denoising Adapters
✏️ Ahmet Üstün (@ahmetustun89), Alexandre Berard , Laurent Besacier (@laurent_besacie), Matthias Gallé (@mgalle)
🎬 https://t.co/9yVc5C0wHw
📚 https://t.co/9OPlZnKGLU
/1 pic.twitter.com/M32JF4JauN— Max (@mxmeij) November 8, 2021
#EMNLP2021 "Disentangling Representations of Text by Masking Transformers", by Xiongyi Zhang, Jan-Willem van de Meent, and Byron Wallace https://t.co/1vsqMJeSCw
— Iftitahu Nimah (@IftitahuNimah) November 7, 2021
#EMNLP2021 "Aligning Faithful Interpretations with their Social Attribution", by Alon Jacovi and Yoav Goldberg https://t.co/Q8ZjF6e3SP
— Iftitahu Nimah (@IftitahuNimah) November 7, 2021
#EMNLP2021 "How Does Counterfactually Augmented Data Impact Models for Social Computing Constructs?", by Indira Sen et al. https://t.co/QqVlsl8qfL
— Iftitahu Nimah (@IftitahuNimah) November 7, 2021
#EMNLP2021 "Idiosyncratic but not Arbitrary: Learning Idiolects in Online Registers Reveals Distinctive yet Consistent Individual Styles", by Zhu and Jurgens https://t.co/wVTc3CTaHa
— Iftitahu Nimah (@IftitahuNimah) November 7, 2021
Interested in Question Answering systems?
A tutorial on Multi-Domain Multilingual Question Answering by @seb_ruder and @aviaviavi__ is underway right now https://t.co/2ANGZe8eve #EMNLP2021livetweet #emnlp2021id pic.twitter.com/7JNBWTL4UD
— Said al faraby (@SaidFaraby) November 11, 2021
“IndoNLI: A Natural Language Inference Dataset for Indonesian” by @rmahendrarm, @AlhamFikri, @samuel_louvan, Fahrurrozi Rahman, & @claravania presents the first natural language inference (NLI) dataset for Indonesian constructed by human annotators.
1/4 #EMNLP2021livetweet #emnlp2021id — Renny P. Kusumawardani (@rennypradina) November 11, 2021
#EMNLP2021 How do you model what people think about other people's thoughts? How do conversations influence those thoughts over the course of a collaboration? “MindCraft: Theory of Mind Modeling for Situated Dialogue in Collaborative Tasks” sets out to answer these questions.
1/n #emnlp2021id pic.twitter.com/K3OBpFTubi — Renny P. Kusumawardani (@rennypradina) November 7, 2021
#EMNLP2021 #EMNLP2021livetweet "UNKs Everywhere: Adapting Multilingual Language Models to New Scripts", by Jonas Pfeiffer et al. https://t.co/SazkNKfVpf
— Iftitahu Nimah (@IftitahuNimah) November 9, 2021
A list of #EMNLP2021 keynote talks and papers that I've been binge-watching during the conference to get a quick grasp of each paper's main message (only a few of them are live-tweeted):
/1
— Iftitahu Nimah (@IftitahuNimah) November 12, 2021