AIhub.org
 

A critical survey towards deconstructing sentiment analysis: Interview with Pranav Venkit and Mukund Srinath

by Lucy Smith
02 November 2023




Mukund Srinath (left) and Pranav Venkit (right).

In their paper The Sentiment Problem: A Critical Survey towards Deconstructing Sentiment Analysis, Pranav Venkit and Mukund Srinath, together with co-authors Sanjana Gautam, Saranya Venkatraman, Vipul Gupta, Rebecca J. Passonneau and Shomir Wilson, present a review of the sociotechnical aspects of sentiment analysis. In this interview, Pranav and Mukund tell us more about sentiment analysis, how they went about surveying the literature, and their recommendations for researchers in the field.

Could you give us a bit of background to sentiment analysis and why you decided to review the literature?

Sentiment analysis, often referred to as opinion mining, is a branch of natural language processing (NLP) that focuses on determining and extracting the emotional tone or sentiment expressed in text data, such as reviews, social media posts, or any other written content. This is, in brief, the definition most commonly used in NLP. The field, which examines the emotions and opinions expressed in text, is a vital component of sociotechnical systems. These systems find extensive application in domains like healthcare, finance, and policy making. Notably, sociotechnical systems reflect the intrinsic harmony between social and technical elements, fostering the seamless integration of sentiment analysis in real-world scenarios.
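To make the standard definition concrete, here is a minimal, hypothetical sketch of a lexicon-based sentiment scorer in Python. It is purely illustrative and not a method from the paper; the word lists and the sentiment_score function are made-up stand-ins for the much richer lexicons and trained models used in practice.

```python
# A toy lexicon-based sentiment scorer: it counts words that a small,
# hypothetical lexicon marks as positive or negative and reports the balance.
# Real systems use far larger lexicons or trained models; this only sketches
# the basic idea of mapping text to a sentiment polarity.

POSITIVE = {"good", "great", "excellent", "love", "helpful"}
NEGATIVE = {"bad", "poor", "terrible", "hate", "useless"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]; positive values suggest positive sentiment."""
    tokens = [token.strip(".,!?") for token in text.lower().split()]
    pos = sum(token in POSITIVE for token in tokens)
    neg = sum(token in NEGATIVE for token in tokens)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment_score("The staff were helpful and the treatment was excellent."))  # 1.0
print(sentiment_score("Terrible side effects, I hate this drug."))                 # -1.0
```

Even this tiny example already smuggles in a particular notion of sentiment (a polarity score over opinion words), which is exactly the kind of implicit assumption the interview goes on to question.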

Efforts abound to enhance the precision of sentiment measurement, a critical pursuit when considering its wide-ranging applications. Nevertheless, when viewed through the lenses of non-technical disciplines such as sociology or philosophy, the concept of sentiment takes on a wholly distinct character. This discrepancy leads us to confront a fundamental query: What does “sentiment” truly signify in the context of sentiment analysis? We embarked on an exploration to ascertain whether this term boasts a well-defined and universally understood meaning, essential for the development of effective sociotechnical systems.

How did you go about surveying the field – what was your methodology?

To comprehensively investigate the underlying concepts of sentiment within the domain of sentiment analysis, our study was designed to encompass a wide array of themes within the peer-reviewed literature. Our research aimed to provide a holistic view of the sentiment analysis landscape by examining sources from reputable venues, including the ACL Anthology, NeurIPS proceedings, Scopus, and Semantic Scholar. While our goal was not to compile an exhaustive catalog of all sentiment analysis publications, we sought diversity in our sources to cover the multifaceted aspects of this field.

Our curation process focused on peer-reviewed literature commonly found in sentiment analysis, spanning models, applications, survey papers, and frameworks. Additionally, we employed reference graphs to expand the scope of our study and identify pertinent papers within the field. We initiated our research with an initial set of 256 papers. Subsequently, our team evaluated each paper’s relevance and classified the papers we retained into the research themes that best aligned with our study objectives. Ultimately, our survey encompassed 189 papers.
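The reference-graph expansion step mentioned above can be sketched roughly as follows. This is a hypothetical illustration, not the authors’ actual pipeline; the Paper class, the toy corpus, and the relevance check are invented for the example.

```python
# A sketch of reference-graph expansion for a survey: start from a seed set of
# papers, follow their citation links one hop outward, then apply a relevance
# filter. All names and data here are hypothetical stand-ins.
from dataclasses import dataclass, field

@dataclass
class Paper:
    title: str
    references: list[str] = field(default_factory=list)  # titles of cited papers

def expand_by_references(seed: dict[str, Paper], corpus: dict[str, Paper]) -> dict[str, Paper]:
    """Add every paper cited by a seed paper, when it is available in the corpus."""
    expanded = dict(seed)
    for paper in seed.values():
        for cited in paper.references:
            if cited in corpus:
                expanded.setdefault(cited, corpus[cited])
    return expanded

def curate(papers: dict[str, Paper], is_relevant) -> dict[str, Paper]:
    """Keep only papers that pass a (manually defined) relevance check."""
    return {title: p for title, p in papers.items() if is_relevant(p)}

# Toy usage: one seed paper citing another that exists in the corpus.
corpus = {"A": Paper("A", references=["B"]), "B": Paper("B")}
expanded = expand_by_references({"A": corpus["A"]}, corpus)  # contains "A" and "B"
final = curate(expanded, lambda p: True)                     # relevance is judged manually in practice
```

In the study itself, this filtering was done by the research team reading each paper, so the is_relevant step stands in for human judgment rather than an automated rule.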

Each category of papers underwent distinct scrutiny to unveil the field’s weaknesses. However, our central inquiry remained consistent across all categories: the definition of sentiment and sentiment analysis. We endeavored to determine whether these works acknowledged the social dimensions of sentiment and whether the field had established well-defined and widely accepted definitions.

What were some of your main findings?

One of the most prominent findings of our work was shedding light on the lack of effort put into defining sentiment and sentiment analysis within the field. Across the papers we surveyed, we found that only about 3% attempted to define sentiment from a technical lens, and only about 30% defined what sentiment analysis means. We expect works in this field to define such terms more carefully, as different strands of research use these terms from multiple perspectives. This is even more important when sentiment analysis is used in healthcare to monitor patients’ responses to certain treatments or drugs: knowing what sentiment means in that context is highly important, yet we saw little effort put into defining it.

Our study also exposes a lack of explicit definitions and frameworks for characterizing sentiment, resulting in potential challenges and biases. Among the works that do define sentiment and sentiment analysis, we saw varying or oversimplified definitions of these terms. We also commonly saw terms such as opinion, valence, subjectivity, and sometimes emotion used synonymously with sentiment, even though these terms are used very differently in social contexts.

Has the review highlighted some challenges, or gaps in the literature, that should be addressed?

Certainly. In our research, we aimed to shed light on the challenges that arise when applying a technical concept in a social context. It’s a common issue we encounter in the field, exemplified by the complexities of interpreting notions like bias and emotion. However, what stood out is the relative lack of attention given to these challenges in the context of sentiment and sentiment analysis.

One fundamental issue that demands immediate attention in the realm of sentiment analysis is the absence of a well-defined framework for sentiment and a clear-cut definition for sentiment analysis. Given that sentiment is inherently a deeply human and social concept, its measurement should ideally draw from fields like sociology and psychology. Surprisingly, much of the existing work in sentiment analysis overlooks this crucial requirement and fails to delve into the essence of the term “sentiment.”

Through our survey, we brought to the forefront this absence of a comprehensive framework, which consequently leads to the lack of a standardized definition for sentiment analysis. As a result, sentiment analysis is treated with a broad spectrum of interpretations, spanning from opinion mining to the detection of emotional states. Adding to the complexity, even when sentiment analysis is interpreted uniformly, there is a dearth of standardized metrics and methods to measure sentiment. This results in inconsistency in what is being measured and carries the potential for introducing biases and harm in real-world applications, particularly in sensitive domains such as government policy and healthcare.
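As a hypothetical illustration of this inconsistency, consider two toy tools that both claim to measure “sentiment” but operationalize it differently; neither reflects any specific system discussed in the paper, and both functions are invented for this example.

```python
# Two hypothetical "sentiment analysis" tools that operationalize sentiment
# differently: one as opinion polarity, one as an emotion label. Both are toy
# stand-ins; the point is only that, without a shared definition and metric,
# the same input yields outputs that cannot be compared.

def polarity_tool(text: str) -> str:
    """Treats sentiment as opinion polarity: negative or neutral here."""
    negative_markers = {"worried", "afraid", "pain"}
    return "negative" if set(text.lower().split()) & negative_markers else "neutral"

def emotion_tool(text: str) -> str:
    """Treats sentiment as an emotional state drawn from a fixed label set."""
    lowered = text.lower()
    return "fear" if ("worried" in lowered or "afraid" in lowered) else "none"

sentence = "I am worried about the new treatment"
print(polarity_tool(sentence))  # -> "negative"
print(emotion_tool(sentence))   # -> "fear" (same text, different construct)
```

A downstream user who takes these outputs at face value, for example in a healthcare monitoring setting, is effectively measuring two different things under the same name.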

Do you have any recommendations for researchers conducting work on sentiment analysis?

Yes! Many! We’ve taken a comprehensive approach to address these issues, leading us to create an ethics sheet that serves as a valuable resource for both researchers and developers. The aim is to enhance awareness of the limitations and potential applications of sentiment analysis models. Our recommendations extend not only to researchers engaged in sentiment analysis but also to developers and users of these models.

A significant contribution of our work is the development of this ethics sheet, which is applicable to stakeholders involved with any sentiment analysis model. It serves as a practical tool to help them navigate the common challenges associated with socio-technical systems. Within this ethics sheet, we’ve outlined 10 key questions meticulously crafted to empower users, enabling them to make informed decisions when selecting a model tailored to their specific needs. These questions prompt stakeholders to consider vital aspects such as transparency, consistency, reliability, robustness, and interpretability.

Among our recommendations, a paramount point, from our perspective, is to thoroughly grasp the specific use cases for the model in question. Once these are defined, it’s crucial to develop a precise and shared understanding of the concept of sentiment employed in these works. Failure to establish clear frameworks leaves room for misinterpretation, an issue that must be avoided. Oversimplifying a complex concept like sentiment, especially within a socio-technical context, can introduce biases into the models, making it vital to address this concern.

About the authors


Mukund Srinath is a PhD student in the College of Information Sciences and Technology at Pennsylvania State University. His research is centered on creating scalable information retrieval and natural language processing systems, with a strong emphasis on ensuring the trustworthiness and fairness of machine learning models. His thesis work concentrates on analyzing and simplifying privacy policies at scale, for which he designed and developed PrivaSeer, a privacy policy search engine, and PrivBERT, a privacy policy language model. Mukund is passionate about AI for social good and improving society through technology.


Pranav Venkit is a fourth-year doctoral student in the College of Information Sciences and Technology at Pennsylvania State University, pursuing a PhD in Informatics. He is currently working as a research assistant in the Human Language Technologies Lab at Penn State, led by Dr. Shomir Wilson. His research primarily focuses on the fairness and inclusivity of AI systems, with a particular emphasis on analyzing the bias and harm these systems can cause to minority populations. Pranav is deeply committed to understanding the potential risks associated with such technologies and developing effective solutions to foster a safer environment for their use by minority communities. Pranav is also part of a collaborative initiative with the University of Chicago to understand the language and behavior of broadcast police communications toward minority populations.

Find out more

Read the paper: The Sentiment Problem: A Critical Survey towards Deconstructing Sentiment Analysis, by Pranav Venkit, Mukund Srinath, Sanjana Gautam, Saranya Venkatraman, Vipul Gupta, Rebecca J. Passonneau and Shomir Wilson.
Lucy Smith, Managing Editor for AIhub.



