Interview with Amy McGovern – creating trustworthy AI for environmental science applications


by Lucy Smith
24 February 2021




Dr Amy McGovern leads the NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography (AI2ES), and is based at the University of Oklahoma. We spoke about her research, setting up the Institute, and some of the exciting projects and collaborations on the horizon.


Firstly, could you give us a general overview of the research you are involved in at the moment?

In terms of the Institute, we were funded as one of the inaugural NSF AI Institutes in September 2020, and our focus is on creating trustworthy AI for weather, climate, and coastal oceanography applications. However, we are aiming for a broad set of applications, so we named ourselves AI2ES to reflect environmental science (ES) generally.

We’re developing AI hand-in-hand with meteorologists, oceanographers, climate scientists, and risk communication specialists, who are social scientists. The risk communication specialists are working on understanding whether the AI we’re developing gets used or not, gets trusted or not, why trustworthiness matters, and what we can do to make the AI better.

Prior to the Institute, I had been working on AI for high-impact weather for years, mostly forecasting future events. Our work was mostly concerned with post-processing: taking the model output and using AI to try to improve the predictions.
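To give a flavour of the kind of post-processing McGovern describes, here is a minimal sketch in which a gradient-boosted regressor learns to correct raw numerical weather prediction (NWP) output against matched observations. The file name and column names are hypothetical, used purely for illustration.

```python
# Minimal sketch of statistical post-processing of NWP output.
# Assumes a hypothetical CSV in which each row pairs raw forecast
# variables with the observed value we want to correct towards.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

df = pd.read_csv("nwp_forecasts_with_obs.csv")  # hypothetical file

features = ["fcst_temp", "fcst_wind_speed", "fcst_humidity", "lead_time_hours"]
target = "observed_temp"

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df[target], test_size=0.2, random_state=0
)

# Learn a correction from raw model output to observations.
model = GradientBoostingRegressor(random_state=0)
model.fit(X_train, y_train)

raw_mae = mean_absolute_error(y_test, X_test["fcst_temp"])
post_mae = mean_absolute_error(y_test, model.predict(X_test))
print(f"MAE raw forecast: {raw_mae:.2f} | MAE post-processed: {post_mae:.2f}")
```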

How did you get involved in this field and what motivated you to pursue this line of research?

That’s an interesting question because all of my training is in computer science and machine learning. When I started at the University of Oklahoma, I was hired specifically to create a link between the School of Computer Science and the School of Meteorology, with the idea that I would do AI for weather. Nobody really knew what that meant; they just thought it would be a cool idea.

I’ve always been interested in the weather and Earth science. If you’d asked the seven-year-old me, I wanted to be an astronaut. So, working on weather and Earth science related projects is very interesting to me. I don’t like working on methods for the sake of methods; I like working on methods to make a real difference to humanity. For me, weather is that. We’re not going to stop a tornado, a hailstorm, or a hurricane, but we can predict and understand them better, and thus we can save lives. The concept that we could create AI methods that would actually save lives was just awesome to me. For me, that was the hook, right there.

What can you say about the increase in the use of AI in forecast prediction modelling over recent years?

Over the last few years it has increased dramatically – I’d say exponentially. For years I’ve been the chair or the vice-chair of the American Meteorological Society AI conference and (not counting 2020/21, which was affected by the pandemic) our submission numbers have gone up exponentially. For many years there was just a small group of us doing AI and, all of a sudden, people started to see that it really works, and now everybody wants to do AI. We’re actually in the process of planning a summer school on trustworthy AI for environmental science.

The Institute has only recently been set up so presumably you are at the stage of getting everything in place. How is it all going? It must’ve been quite challenging setting everything up during a pandemic.

We spent all of September through December in a sprint – we had to put a strategic plan together. Now we are actually starting to do the research, which is exciting.

We haven’t met people face-to-face this year and we are doing everything over video at the moment.

There are between 50 and 60 people involved in the Institute, so it’s a lot. We have seven academic institutions, plus NCAR (the National Center for Atmospheric Research), plus four private industry partners. It’s a lot of people spread across the whole of the US. We probably wouldn’t all be coming together even if there weren’t a pandemic. We would have been having local meetings at the different institutions, but we’d mostly have been seeing each other on video screens anyway.


Could you describe what a typical day might involve for you?

Lots and lots of video meetings – like everybody else in academia. Today, I have two meetings that are involved with the Institute. One is about setting up a summer school on AI for environmental science. In the other we are kicking off the explainable AI project. We’ll be creating the smaller groups that are going to be working on the different topics. So we’ll be getting the grad students, postdocs and faculty together.

I’m doing this in-between meetings with my own grad students and postdocs, faculty meetings and teaching. I’m teaching an AI ethics course this semester, so that’s a new preparation for me. The course is entirely online and asynchronous. It’s called “AI, Ethics and Geoethics”, because we wanted to talk about the ethics of AI specifically for what we’re doing – climate, weather and so on – making sure that what we’re producing is ethical and that we’re not biasing the predictions.

An example that we’re thinking about concerns crowdsourced data. The assumption is that people have smartphones or instruments for collecting data. However, there is clearly a lot less of this equipment in poorer areas. Does that affect the modelling? Does it affect the predictions? We don’t know yet, but it’s an idea we’ve come up with that we need to investigate further. We do know that tornados and hail are reported more frequently in places with bigger populations. It’s not that they are happening more frequently there; it’s just that there are more reports of them there. Does that also affect the predictions? Farmers care about hail, for example, because it destroys their fields, but they’re not likely to have a whole bunch of people out there reporting.

What is the most challenging aspect of the research – is it the data collection, the modelling side, or both?

There are definitely difficulties in both. The modelling is incredibly computationally intensive, and one of the things that we are just starting to do is figure out how to put AI into the modelling to make it faster while still keeping the physics that matters. Initial attempts to put AI into the modelling tend to blow up because AI is basically physics-agnostic, so investigating ways that we can put AI in there and still respect the laws of physics is one of our projects. There have been demonstrations of using AI for pieces of numerical weather prediction systems where they can take something that took, for example, a minute and replace it with something that takes a second. If you are calling that code 1000 times per time step, that kind of saving really adds up.
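As a rough illustration of the “keep the physics” idea, the sketch below trains a small neural-network emulator for an expensive model component and penalises its output whenever it violates a toy conservation constraint. It is a generic, hypothetical example of a physics-constrained loss, not the Institute’s actual method.

```python
# Toy sketch of a physics-constrained surrogate: a small network emulates
# an expensive model component, and the loss adds a penalty when the
# prediction violates a hypothetical conservation constraint.
import torch
import torch.nn as nn

class Surrogate(nn.Module):
    def __init__(self, n_in: int, n_out: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_out),
        )

    def forward(self, x):
        return self.net(x)

def conservation_residual(x, y_pred):
    # Hypothetical constraint: the column totals in must equal those out.
    return (x.sum(dim=1) - y_pred.sum(dim=1)).abs().mean()

model = Surrogate(n_in=8, n_out=8)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()
lambda_phys = 0.1  # weight on the physics penalty

# Synthetic stand-in for (input state, expensive-scheme output) pairs.
x = torch.randn(256, 8)
y_true = x + 0.01 * torch.randn_like(x)

for step in range(200):
    optimiser.zero_grad()
    y_pred = model(x)
    loss = mse(y_pred, y_true) + lambda_phys * conservation_residual(x, y_pred)
    loss.backward()
    optimiser.step()
```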

The data collection is also critical, as we need labeled data to train machine learning models. As I mentioned before, sometimes the data is biased, and sometimes the data that we care about simply does not exist or there are not enough examples. Both aspects make training machine learning for extreme weather an interesting challenge.

Are there any projects you are particularly excited about?

I’m definitely excited about the project I just mentioned, integrating AI into models. Another project, and one which I’m more involved in, is explainable AI. I think there are some really cool ways we’re going to tie the explainable AI methods to physics. What we’re aiming to show is that the inside of a deep network respects the laws of physics and that what it learns is physically plausible. If it’s explainable, people will trust it more. I’m pretty excited about that project. Scientists – climate scientists, ocean scientists or atmospheric scientists – all respect the laws of physics because that’s what the atmosphere does. If you show them something that is just data-driven and doesn’t have the laws of physics in it, they are less likely to trust it. My real hope is that we’re going to be able to do something that amounts to scientific discovery. So, in addition to just improving prediction, we can improve our understanding of the phenomena.
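For readers unfamiliar with explainable AI, the sketch below shows one of the simplest techniques, a gradient-based saliency map: it highlights which parts of an input field a trained network’s prediction is most sensitive to. The model and input are hypothetical stand-ins, not AI2ES code.

```python
# Minimal sketch of a gradient-based saliency map: compute how sensitive
# a network's prediction is to each value of a single 2D input field.
import torch
import torch.nn as nn

# Hypothetical convolutional model mapping a 2D atmospheric field
# (e.g. a 32x32 patch of radar reflectivity) to a hazard probability.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 1), nn.Sigmoid(),
)
model.eval()

x = torch.randn(1, 1, 32, 32, requires_grad=True)  # one input patch
prob = model(x)[0, 0]                               # scalar prediction
prob.backward()                                     # d(prob)/d(input)

# Large values mark the grid cells the prediction depends on most.
saliency = x.grad.abs().squeeze()
print(prob.item(), saliency.shape)
```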

You’ve mentioned that your work is a collaborative effort between academia and industry partners. Could you say a bit about how those partnerships are working, or will work?

It will work in different ways for the different partners. One of our partners is IBM, and they are really interested in R2O [Research-to-Operations]: taking what we’re doing, putting it on phones and pushing the AI predictions out. I’m excited about that because it means more impact.

Google (Research) is another one of our partners and, as well as their interest in R2O, they’re also interested in fundamental research, because they do both. We’ve also got partnerships with NVIDIA and a small company called Disaster Tech. NVIDIA are more interested in the research end and Disaster Tech are more interested in the R2O side. Both aspects are interesting to me, and I’m excited to see that happen.

We’ve also had discussions with a car company. They care about extreme weather predictions because their cars that are left outside could get damaged. They wanted to see if we could create an application for them. You could potentially develop a similar application for the airline industry.

What advice would you give to students who are looking to get into this field?

I would say: read the papers and contact the faculty. A lot of us are hiring right now – this is hiring season for graduate students. All of the universities involved with our Institute are hiring this year, and some of us are also hiring undergraduates, so don’t be afraid to ask faculty members about getting involved in their research.

What are your ambitions and plans for the Institute, short-term and long-term?

We’re going to be creating fabulous new AI that is trustworthy – that’s our long-term goal over the five years. My own long-term goal is that we’re going to revolutionise our understanding and our predictions.

Even longer term, I see a need for AI, broadly across environmental sciences, to be a multi-agency and multi-sector effort. We need academia, industry, and government funded labs working together, and we need to work with international partners because climate is an urgent problem. We must improve our resiliency and create AI that can help humanity do that. We come back to the fact that we can’t stop the hurricanes, we can’t stop the tornados, we can’t stop the flooding, but we can predict them better. We can figure out what we need to do to make humanity more resilient to all these changes. Eventually we’d like to mitigate the changes too, but at the very least we can tackle the resiliency part. So, the long-term goal I see is that we have multi-agency funding and a much larger Institute that is working on AI for environmental science at a very broad level.

Biography

Amy McGovern leads the NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography. She is the Lloyd G. and Joyce Austin Presidential Professor in the School of Computer Science at the University of Oklahoma. She obtained a Ph.D. in Computer Science from the University of Massachusetts Amherst. Her research focuses on developing and applying machine learning and data mining methods for real-world applications, with a special interest in high-impact weather.

Links to find out more

NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography (AI2ES)
Amy McGovern
Amy McGovern publication list
20th Conference on Artificial Intelligence for Environmental Science





Lucy Smith is Senior Managing Editor for AIhub.



