Interview with AAAI Fellow Mausam: talking information extraction, mentorship, and creativity

by Lucy Smith
06 June 2024





Each year the AAAI recognizes a group of individuals who have made significant, sustained contributions to the field of artificial intelligence by appointing them as Fellows. Over the course of the next few months, we’ll be talking to some of the 2024 AAAI Fellows. In the first interview in the series, we met Professor Mausam and found out about his research, career path, mentorship, and why it is important to add some creative pursuits to your life.

Could you start by giving us a quick introduction?

My name is Mausam. I am a professor at the Indian Institute of Technology (IIT), Delhi, in the Computer Science department. I also have a secondary appointment with the Yardi School of Artificial Intelligence. It’s a new academic unit within IIT Delhi that was started about four years ago, and I was the founding head of the school until last year. I am also an affiliate professor at the University of Washington, Seattle.

My research areas of interest within AI are quite broad. Much of my work has centred around natural language processing (NLP) and its connections to reasoning and knowledge, such as information extraction, question-answering, and dialogue systems. I have also worked in traditional AI areas such as automated planning and Markov decision processes. More recently, I have explored the field of neuro-symbolic artificial intelligence, where traditional AI systems and modern neural deep learning systems interact to complement each other’s weaknesses. I have also worked a fair bit on applications of AI to crowdsourcing, and have recently been exploring many other applications of AI, including in healthcare, materials discovery, and robotics.

I wanted to ask about your career path to this point. How did you reach your current position?

In India we have a very difficult exam that all engineering students take for entrance to the top universities – the IITs. I had a good education in high school and got a good score in this exam, so I got into a great college (IIT Delhi) as an undergraduate student in the computer science department.
After finishing my Bachelors, I decided to pursue graduate studies and landed at the University of Washington (UW). I actually took my first AI course there and, as that field felt much more exciting to me than the field I had originally gone to work in, I moved into AI. When I graduated from UW in 2007, I was looking for industry research positions and had received a few good offers. But I got a lucky break: Professor Oren Etzioni at UW had been awarded significant funding to start a new centre, and he offered me a research assistant professor role there. Getting a faculty position at a top university was no less than a dream come true. This role gave me an opportunity to strengthen my research as well as develop advising skills, under the mentorship of two senior faculty members – Oren and my PhD advisor, Professor Dan Weld. This experience was really valuable for me – I co-advised seven PhD students during this time, and I also taught a few courses.

Upon completing my six years as a faculty member at UW, I moved back to India – something I had been planning for several years. I returned to my undergraduate alma mater, IIT Delhi. By then, I was starting to become somewhat visible in the research field, and I became one of the first few hires in modern AI at IIT Delhi.

I moved in 2013, which was also the time when AI was undergoing a transformation with a newfound focus on deep learning. That was challenging, as I was simultaneously starting a new research group here, so I had to deal with multiple novel challenges at once. But, over time, I was able to create a culture of top-quality AI research in my group. Initially, I worked on problems I brought back from the US. But, very soon, I realized that the Indian ecosystem offered several potential collaborators and that many other exciting problems could be researched here. That is exactly what I have done. I have been lucky to get some excellent PhD students and many amazing Master’s and undergraduate students, who have really broadened the research in my group, along with several senior collaborators. Our papers are consistently published at top venues, even though most of them have first-time student authors. Several of our projects have been fairly influential in the research world, and, generally, the group has developed a strong reputation. Over time I have been lucky to become one of the more visible AI researchers in the country – I am the first AAAI Fellow working out of India. I feel very satisfied to have developed a strong research career in my country.

Is there a successful project that really stands out in your career so far?

In terms of successful projects, I think one of my longest-running projects is the Open Information Extraction (Open IE) project. This is a project that I inherited from my post-PhD mentor, Professor Oren Etzioni, at UW, and I brought it to India when I returned. The goal is to generate a structured, machine-readable representation of the information in text by extracting textual tuples and providing them to the user for rapid exploration of a text corpus.

People have been studying the field of information extraction for 40 years now, but our specific vantage point is to not be tied to a certain vocabulary or ontology. That allows us to extract information in a domain-agnostic fashion. This is called “open information extraction” – “open” because we are not tied to a particular vocabulary or ontology. Open IE allows us to rapidly process any piece of text, in any domain, and output information tuples, which we can then put through a search engine so that the right information gets to the user for a wide variety of queries.
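To make the idea of “information tuples” concrete, here is a minimal, illustrative Python sketch of the kind of (subject, relation, object) tuples an Open IE system produces, and of how such tuples could back a simple search over a corpus. The example sentence and the extracted tuples are hand-written for illustration only; they are not the output of the actual Open IE tool discussed below.

```python
from collections import namedtuple

# An Open IE extraction is essentially a (subject, relation, object) tuple
# pulled from a sentence, with no fixed vocabulary or ontology.
Extraction = namedtuple("Extraction", ["subject", "relation", "obj"])

sentence = "Mausam is a professor at IIT Delhi and returned to India in 2013."

# Hypothetical tuples an Open IE system might extract from the sentence above.
extractions = [
    Extraction("Mausam", "is a professor at", "IIT Delhi"),
    Extraction("Mausam", "returned to", "India in 2013"),
]

# A very simple keyword lookup over the tuples, sketching how extractions
# from a large corpus could be indexed and queried like a search engine.
def search(tuples, query):
    q = query.lower()
    return [t for t in tuples if q in " ".join(t).lower()]

print(search(extractions, "professor"))
# [Extraction(subject='Mausam', relation='is a professor at', obj='IIT Delhi')]
```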

This project has become quite successful for us. A couple of years ago, my team released the sixth iteration of the “Open Information Extraction” tool, developed using the latest neural models. The first four versions were developed at UW, and I continued the development from India and released the next generations of the tool. This particular project has not only been impactful in the research community (one of my works from 2012 received the ten-year Test-of-Time award at ACL in 2022), but it has also been used by several companies and by researchers from different fields. Google has licensed it, and startups and larger companies alike have been able to benefit from the tools that we have released.

How do you decide on which research projects to pursue?

That’s a good question. I think of it in a two-pronged fashion. One is that there are certain topics that are core to my way of thinking about the fundamental AI question – that is, “how can we make machines more intelligent?” As we assess existing AI solutions in the context of specific problem settings, we can easily identify some limitations. This automatically forces us to define a research problem: how can we strengthen the current solutions by reducing these limitations? This is a more algorithmic, traditional computer-science way of thinking.

But another, more pragmatic, way of picking research problems is by bringing together researchers with different skills and viewpoints. I bring a certain vantage point on AI to the table, and there are lots of researchers around the world who think differently to me. By working together, we can bring in those complementary views, with the goal of making the whole bigger than the sum of its parts. This is how I have been able to interact with, say, doctors on healthcare applications, with materials scientists on materials discovery applications, or even with other AI researchers whose skillsets differ from mine. This approach is particularly interesting now, given that AI is touching every field of inquiry.

These are the two ways I’ve picked my research problems. With some of my core work – the more NLP-style work or traditional planning-style work – I used the first approach. In contrast, with some of the applied problems that require domain expertise, I have worked with specific collaborators, and we have collectively defined research problems towards bridging the gap between the domain’s needs and AI’s abilities.

Could you talk about some of the changes in AI research that you’ve witnessed since you started your career?

Well, I started in 2002, and anyone will tell you that AI has made humongous progress in the last 22+ years. When I started in AI, it was not yet the coolest thing to do. Garry Kasparov had been defeated by Deep Blue about six years earlier, and that was the only big success we talked about at the time. Neural networks had been known since the 1960s but, when I started in AI, people thought of them as black magic: unpredictable, with results that were not easy to reproduce. Bayesian was the name of the game, with researchers working on mathematically rigorous, foundational Bayesian models for various kinds of applications. Additionally, large datasets were not very common – datasets were generally small and hand-annotated.

So, back in the early 2000s, we worked in AI primarily because we really believed in the goal of making machines intelligent; we wanted to investigate the nature of intelligence, and related conceptual questions. Almost nobody thought about the ethics of AI. In fact, if you talked about AI being super powerful, people would generally laugh at you. At that point, the only real success had been in the game of chess, and it was clear that those successes would not translate to much else directly.

2013 marked the start of an AI revolution. AlexNet reduced the error on object recognition drastically, Richard Socher’s work on sentiment mining started coming out, Word2Vec was developed, and suddenly everybody started to gravitate towards neural models, because they were giving unprecedented successes. That forced a change of research methodologies for almost everyone. I also had to adapt, particularly because I had not been trained much in neural modelling in the 2000s. In fact, it was not only me: the shift created a common starting point for almost all research groups anywhere in the world – they all had to develop expertise in neural models. This benefited China the most, because it had already put together significant infrastructure but did not have a strong research legacy. The resurgence of neural models brought a reset where legacy was not relevant; China took advantage of that, really took research in the country forward, and became an AI leader over time.

Then, as large datasets got curated and large models got trained, suddenly we were staring at the possibility that this is real, that AI is going to make a real impact in the world, in a way that until then had been found only in science fiction. Governments sprang into action and developed their AI policies. People started to think about applications of AI more than about AI itself. AI became the hype, and researchers from all around started flocking to AI, not because they wanted to think about the fundamental questions, but because AI was the next “in” thing. A lot of application-oriented research started to show real successes and then, in parallel, complementary research initiatives started to ask about ethics and responsible AI – when AI gets deployed in the real world, what guardrails do we need to put in place so that AI gets used in a fair, accountable and ethical fashion?

I have seen all this in the last 22 years. It has been a very, very interesting and fascinating journey so far, with so much change, progress and excitement in the field.

I was wondering if there were any duties or engagements that you have to carry out in your role as an AAAI Fellow?

AAAI has a very interesting program called “Lunch with a Fellow”, which I’m very excited to contribute to. The idea is that the Fellows get an opportunity to take a few young students out for lunch at the AAAI conference and have a conversation about research and more. I feel that every successful researcher has many stories to share, especially about the times when they were not so successful. I have benefited from hearing these experiences, and it is very important for young students to hear them too, because research can sometimes be daunting. It can be a struggle for a young student because, as an undergraduate, the whole class studies the same topics, and there is often one right answer to every question, and that answer is at the back of the book. From that experience, the transition to research – working at the edge of scientific knowledge and pushing it further – is a fairly challenging one. All the senior people have gone through it, and not always easily, but young students don’t realise that. They feel that their experience is unique, that their challenges are unique. Sometimes they feel like an impostor, or that they are not good enough to pursue a research career. This feeling needs to be very strongly dispelled, and it can best be dispelled by people who have achieved a certain amount of success. We need to tell the young students that whatever they are feeling is not out of place, that we all felt it, and that maybe some of us still feel it. But, if they keep at it, research is not as hard as it seems. I hope I can share this message in my Fellow lunch with students.

Have you got any interesting hobbies or any interests outside of your research you’d like to mention?

I do a lot of Indian classical music. I play a few instruments and I have done some singing. I feel that music has really given me a very beautiful safe space to express myself. It has also potentially increased my creativity, which, I want to believe, has helped my research.

My theory is that, for a perfect life, we must strive to achieve an equilibrium, a balance of three things – some physical activity, some aesthetic activity and some intellectual activity. These are the three main pillars that build who we are as a person. Physical activity is important for keeping our body fit – if you have fewer pains as you age, you can have longer-term productivity. Then there is food for the brain, which keeps it sharp – so any kind of intellectual activity is a must. And then, finally, we also need to give the right focus to our heart. I feel that pursuing some art is critical for that, because it nurtures our creativity. I believe that if one can create this balance, such that we are really good at, say, one sport, one art form and one science subject, that would, for me, make a complete person. I should say that I am not that person, because I’m really bad at sports. But at least I have felt strongly that doing more music has really improved my creativity in my AI research.

About Professor Mausam

Mausam is a Professor of Computer Science at IIT Delhi, and served as the founding head of the Yardi School of Artificial Intelligence until September 2023. He is also an affiliate professor at the University of Washington, Seattle. He has over 100 archival papers to his credit, along with a book, two best paper awards, and one Test-of-Time award. Mausam was named an AAAI Fellow in 2024 for his sustained contributions to the field of artificial intelligence. He has had the privilege of being a program chair for two top conferences, AAAI 2021 and ICAPS 2017. He received his PhD from the University of Washington in 2007 and a B.Tech. from IIT Delhi in 2001. You can find out more at his webpage.





Lucy Smith , Managing Editor for AIhub.





