 

New voices in AI: environmental conservation with Lily Xu

by Joe Daly
21 September 2022




Welcome to episode 9 of New Voices in AI.

Lily Xu shares her work and adventures in using AI for environmental conservation.

You can find out more about her work on her site. You can also see her previous interview with AIhub.

All episodes of New Voices in AI are available here.

The music used is ‘Wholesome’ by Kevin MacLeod, licensed under Creative Commons.



transcript

Daly: Hello, and welcome to New Voices in AI, the series where we celebrate the voices of PhD students, early career researchers, and those with a new perspective on AI. I am Joe Daly and this week I am talking to Lily Xu about her work. And without any further ado, let’s begin.

Daly: First of all, thank you so, so much for joining us today. If you could just introduce yourself: who you are, where you are and a little bit about what you’re working on at the moment.

Xu: It’s great to chat with you, Joe. Thank you so much for reaching out and letting me speak to the AIhub community. My name is Lily Xu and I’m a PhD student at Harvard University studying computer science. My work focuses on applications of AI for environmental conservation. I’m thinking about questions of how to strategically allocate limited resources — which is always the constraint that we have for sustainability tasks.

My primary application area is environmental conservation through poaching prevention, to help rangers in protected areas around the world strategically determine where poaching hotspots might be and how they ought to plan patrols. This is using machine learning, online learning, and reinforcement learning to try to figure out: Where should we be going? Where do we need to collect more data from to improve our models? How do we do sequential decision making and planning in these changing and uncertain environments?
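As a rough illustration of the online-learning framing described here (deciding where to patrol and where to collect more data under tight resource constraints), below is a minimal sketch that treats candidate patrol regions as arms of a multi-armed bandit and picks among them with a UCB1 rule. The region names, detection rates, and all parameters are invented for illustration; this is not the actual system discussed in the interview.

```python
import math
import random

# Illustrative sketch only: treat each park region as a bandit arm whose
# unknown reward is the chance of detecting poaching activity there, and
# pick where to patrol next with a UCB1 rule (explore rarely visited
# regions, exploit regions with high observed detection rates).
TRUE_DETECTION_RATES = {"river_bend": 0.30, "ridge": 0.10, "waterhole": 0.45}  # hypothetical

counts = {r: 0 for r in TRUE_DETECTION_RATES}         # times each region was patrolled
detections = {r: 0.0 for r in TRUE_DETECTION_RATES}   # total detections per region

def choose_region(t: int) -> str:
    """Return the region to patrol at round t using UCB1."""
    for r, n in counts.items():
        if n == 0:          # patrol every region at least once
            return r
    return max(
        counts,
        key=lambda r: detections[r] / counts[r] + math.sqrt(2 * math.log(t) / counts[r]),
    )

for t in range(1, 201):     # 200 simulated patrol slots
    region = choose_region(t)
    found = random.random() < TRUE_DETECTION_RATES[region]  # stand-in for field reports
    counts[region] += 1
    detections[region] += float(found)

print(counts)  # patrols should concentrate on the highest-risk region over time
```

In the real setting each “round” would be a month-long patrol cycle rather than a fast simulated step, which is exactly the feedback-speed gap discussed later in the interview.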

Daly: That’s amazing and super important work. There’s so much work that needs to be kind of done in sustainability, protecting the environment and wildlife, and all of that really important stuff. How did you kind of get started in AI?

Xu: My path to AI research was very nontraditional. I came into grad school straight out of undergrad, where I studied computer science and Spanish. At that time I did not have a direct research interest; I just thought, I really enjoyed the computer science classes that I’d taken, I feel like this is my relative strong suit, and I found it really interesting. But I didn’t really do research in undergrad. I also wasn’t thinking “oh, I want to do AI” — four years ago AI was big, but not as dominant as it is today. So even going into AI was not obvious.

Then when I was looking into grad schools to decide what I wanted to spend four to six years working on, I came across a number of researchers — including my current advisor — who were working on computational approaches to sustainability challenges. I thought that was so inspiring and interesting. I personally believe that environmental sustainability is the most pressing issue of our generation, but I always thought that was something I would just care about in my personal life — reduce my personal environmental impact; encourage other people that vegetarian food tastes great and we don’t have to buy these things new; try to move away from car-centric lifestyles and all of that. I didn’t realize that it was something I could actually use my technical skills for; I found that really inspiring. I decided I wanted to try to make a career out of working on environmental challenges through this computational lens. And that’s how I stumbled into AI — because most of the techniques that people are applying were computer vision and optimization, machine learning and planning. So I fortunately stumbled into this very hot topic of AI through this backdoor.

Daly: Yeah, phrases like “important challenges for a generation” already make the importance of this kind of work very clear. Could you tell us a little bit more about some of the implications of your research, and what about it specifically makes it really interesting as an area of study for you?

Xu: There are a lot of different challenges in this space of environmental conservation that really highlight to us the important gaps in computer science that we need to fill. As everyone in this AIhub community probably knows, AI has really been developed and implemented and deployed and has revolutionized a lot of digital spaces. It’s really revolutionized Silicon Valley, and a large fraction of Fortune 500 companies claim to be using AI in some way. Many big tech companies are making their profits in advertising and whatnot through AI.

But there’s a huge cavern between the AI that has been developed for Silicon Valley and what AI can do for these real-world physical problems. Because in these real-world challenges such as environmental conservation, we don’t have all of the resources that we have in these industry spaces. We don’t have access to a billion users that we can collect data from. We don’t have very fast time steps. Suppose we’re Netflix and we’re trying to figure out whether an audience would like this movie or not, so let’s just show it to 1000 people and then we’ll see in 10 minutes whether they click on those movies. They are able to collect information really, really quickly; they’re able to iterate and deploy. There’s sort of no bottleneck there.

Whereas in on-the-ground challenges, if we want to collect more data to understand what poaching patterns are like, we have to wait for a call with a park manager. We have to talk to them and say “hey, this is an area where we want to go”, send them GPS coordinates. They have to wait until the next ranger planning meeting, which only happens once a month — on the 11th of the month in Cambodia, for example, in Srepok Wildlife Sanctuary. And then after the 11th of that month, OK, this is where we’re going to go on patrol. Then they’ll go on four-day long patrols. It’ll take another three weeks before the data gets uploaded. And then it takes however long for them to send us that data. So the process of collecting and updating data is very, very slow.

For these really pressing real-world challenges, we don’t have the luxury of really short time steps; we don’t have the luxury of a lot of data. I think there’s a lot of room for identifying the barriers between the AI techniques that we’ve developed, then adapting those algorithms to work in these low-data settings and these short-timescale, short-horizon environments with a lot of uncertainty. There are also biases in the data, and there are changes like non-stationarity in the environment, with exogenous factors that we aren’t able to account for. How do we actually come close to understanding what this environment space even looks like? Then what do we need to care about, and what flexibility do we have that we don’t have to worry too much about, when it comes to actually planning and implementing something for decision making?

Daly: Yeah, the issue of having the kind of data resources to do what you need to do. It’s something that we’ve definitely heard before in things like natural language processing for low-resource languages. So much of the time, it’s not so much the data issue, it’s kind of making the most of the data that you do have. Just making that link, it sounds like there could be lots of implications for these methods already.

Xu: Yeah, similar to NLP for low-resource languages, a lot of these challenges are really a challenge of justice. Right now, the benefits of AI include better healthcare decisions, better recommendations, more interesting music, faster systems, better infrastructure. All of these kinds of advances that AI is enabling are, for the most part, being diverted to very wealthy countries and very wealthy subcommunities within those countries.

These disparities will further broaden the gap between the world’s rich and privileged and the world’s poor and low-resourced. Similar to NLP: how can we develop these kinds of techniques? How do we develop assistive tools and text-to-speech systems for low-resource languages so that those language speakers can actually benefit from all of the things that NLP enables? How can we also translate that to environmental and healthcare challenges in places where we don’t have a lot of access to data all the time? We need to close this gap so that we are not just broadening the world’s inequality.

Daly: Trying to avoid perpetuating existing imbalances as much as possible. Looking at all these different areas, what is something that really excites you about your area of work?

Xu: Sometimes when I tell people that I’m doing research in computer science, they think I just spend my whole day programming, working on code, and building systems. I would say I probably spend maybe 10% of my time actually coding, and the rest of it is spent thinking, in meetings, talking to people, managing teams, deploying systems, and mentoring undergraduates.

But one thing that I am very, very lucky to be able to do is spend a good fraction of my time thinking about challenges with people who are not in computer science. Several of my projects have been working with people in the conservation world, either from the academic side as ecologists or conservation biologists, or from the practitioner side. Working with conservation managers in protected areas in Belize and Cambodia and Uganda, and also talking with folks at conservation NGOs such as WWF, the Wildlife Conservation Society, and the Zoological Society of London, trying to learn what their world looks like, understand what their priorities are, what they think to be the most important conservation challenges, and what they think the biggest gaps are, such as focusing on a specific species. What are the resources that we really need to invest more time in to make sure are best allocated? How do we deal with changing impacts across seasons? Really just trying to understand how landscapes work.

I have no formal background in ecology or conservation. But it’s been a real joy to spend a lot of my time reading academic papers in conservation biology. I’ve also had the really good fortune of spending a week each at two different protected areas that we’ve worked with. Back in 2019 I was in Cambodia at Srepok Wildlife Sanctuary, visiting WWF and talking to rangers. Then just last month, at the beginning of August, I spent a week in Belize at Rio Bravo Conservation Management Area. That was a real treat. This is the first protected area that we’re working with in Latin America, and we got to go on three different patrols with the ranger teams there. We saw leopard tracks, parrots, all sorts of birds, monkeys, turtles. I really just got to see the landscape and be immersed: wow, this is rainy season and we’re trudging through 6 inches of water and mud.

I did not expect to have these experiences as a computer scientist. I feel very lucky to be in this interdisciplinary space where not only do I get to learn about what these other areas are focused on, but I also get to be immersed in these settings and be able to learn from them and understand directly what their day-to-day experiences are like. These are the resources they have… there’s this whole wealth of challenges where AI is not a solution. Sometimes AI really can’t play any role; they need more funding, they need better equipment, they need more training. But also, from that kind of immersive experience, I try to understand and isolate: what are the things that AI could actually help with? What are the parts of this landscape where they don’t really know what the illegal hunting or wildlife poaching patterns are like? And then I’m like a puzzle maker trying to figure out where the gaps are and what the correct piece is that might fit in here. Whatever systems we create need to work well with existing systems so that they are not a burden, but rather fit in nicely with their existing priorities.

Daly: Yeah, that sounds like there’s some really, really incredible experiences there as well. All these different organizations and traveling around, is there anything that’s like particularly memorable? I mean the six inches of rain that’s quite memorable in and of itself. Any other standout moments in your research experience so far?

Xu: That’s a fantastic question. When we were working with Srepok Wildlife Sanctuary in Cambodia, we made these predictions of where we think high-risk poaching areas are and sent those locations over. The rangers went on patrol, then reported to us “this is what we’re finding.” But when we were first doing this, we sent them 15 different areas in the park, they went on patrol for a month, came back, and sent us their patrol data. We looked at this and said “hey, a lot of the areas that we sent, you didn’t end up patrolling at all”. We put up a map and said, “here’s the area that we pointed to, and here’s the area where you walked around, and they’re not the same. So what’s going on there? Did we send you an area that was not good?” And their response was “We’ve tried to go there, but there’s no waterhole. So when we go on this four-day patrol, we’re camping out. We need water access so we can drink and cook food and whatnot.”

Daly: Oh, of course!

Xu: This was a huge “duh” moment which really emphasized to us that the experience we’re having — sitting at our desks looking at digital maps on this big macro scale — is really not the experience of what it’s like to be on the ground. We realized these conversations are absolutely essential for us to do anything that is useful. But then when we actually visited and accompanied them on patrol, we realized, wow, these are actually the constraints that we have. And we’re experiencing them first hand.

Similarly, we had another experience in Belize. Right now it is rainy season; they experience very strong seasonality where the rivers flood over, and the ground gets very waterlogged and muddy and marshy. During the dry season that all dries out, everything is bone dry, you can walk through really easily. But since it is very intense rainy season right now, we were considering: here’s a place that we think might be at high risk of illegal hunting. And then we tried to go there, and we got a kilometre in from the main road, but it was just waterlogged and we were bushwhacking everything. There were obviously no traps there. Nobody had been there in at least several months.

So we’re recognizing, OK, this prediction is not relevant right now, but it might be relevant in the dry season when access really changes. Right now it is not. So we don’t need to worry about the entire park. We just need to worry about areas that are accessible, that are closer to a main road. These physical constraints are just kind of a shock every time. And then afterwards it just seems like “oh duh” — these people have to sleep, and drink water, these people have to move through a space. This is not like going on a nice hike where we can just walk around wherever; this is dense jungle.

Daly: Yeah, it’s kind of like when you’re building models of things, there’s so much simplification that you have to do, and then there’s the expectation versus reality of the actual places; that’s quite a big challenge to try and address, I imagine. So that’s one challenge, but what do you think could be some of the big sort of challenges and opportunities in AI?

Xu: Looking more broadly, I think there are a lot of really interesting challenges in the environmental space, and in other socially relevant, socially urgent problems. Related to things I was talking about before: how do we take approaches that are used in these high-resource, high-data settings and translate them to be useful in other very important settings? I think that in the environmental world specifically, there is a whole world of optimization and planning that is not yet being considered by AI researchers or by conservationists.

In the theoretical ecology world, for example, what they’ve been doing for several decades now, in planning and what they call adaptive management, is looking at Markov decision processes to model how environments change over time. If an animal population has 100 individuals, and a year passes and there’s no hunting, then the population will increase by some amount. But if there’s hunting, or if there’s a drought or something like that, then the population will decrease by some amount. Those are probabilistic systems that you can model. Markov decision processes have been used in computer science for several decades, and it’s awesome that this model has been useful in ecology as well. But then in the past 10 or so years, there have been a lot of new advances in computer science for planning using reinforcement learning, enabling us to model these systems more effectively, account for uncertainty, and do robust planning.

A lot of these techniques have been pioneered in computer science, mostly for the robotics world. That’s another case in which we have very high resources, very good funding, very good simulators. All these Amazon warehouses have, like, 1000 robotic arms moving around, so you can get new samples very quickly. In the environmental space, we don’t have any of that.

The healthcare world, for example, has been more at the forefront of taking advantage of these new advances in computer science. There are some faculty here at Harvard who are doing really fantastic work. Susan Murphy and Finale Doshi-Velez are trying to bring reinforcement learning techniques to clinical healthcare decision making and figure out: “based on a patient’s diagnosis, what treatment regimen should we prescribe, based on their vitals, how they’re responding, and so on?” So doing individualized treatment planning with scarce data and with a lot of uncertainty, but using reinforcement learning techniques and also developing new techniques, like RL safety and interpretable machine learning, that are really custom-designed for the healthcare space. A lot of those techniques would also carry over to the environmental space, for the kind of problems that conservation biologists and landscape ecologists are thinking about. But that translation has not been done yet. The papers and the textbooks and the talks have not been communicated across those spaces.
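As a rough sketch of the adaptive-management idea described above, the following models a tiny animal-population Markov decision process and solves it with value iteration. The states, actions, transition probabilities, rewards, and patrol costs are all invented for illustration and are not drawn from any real conservation model.

```python
# Illustrative sketch only: a tiny Markov decision process for adaptive
# management of an animal population. States are coarse population levels,
# actions are management choices, and the transition probabilities, rewards,
# and costs are invented numbers.
STATES = ["extinct", "low", "healthy"]
ACTIONS = ["no_patrol", "patrol"]

# P[action][state] -> list of (next_state, probability) pairs
P = {
    "no_patrol": {
        "extinct": [("extinct", 1.0)],
        "low":     [("extinct", 0.3), ("low", 0.5), ("healthy", 0.2)],
        "healthy": [("low", 0.4), ("healthy", 0.6)],
    },
    "patrol": {  # patrolling reduces hunting pressure, so decline is less likely
        "extinct": [("extinct", 1.0)],
        "low":     [("extinct", 0.1), ("low", 0.5), ("healthy", 0.4)],
        "healthy": [("low", 0.1), ("healthy", 0.9)],
    },
}
REWARD = {"extinct": 0.0, "low": 1.0, "healthy": 5.0}  # value of each population level
COST = {"no_patrol": 0.0, "patrol": 0.5}               # cost of deploying rangers
GAMMA = 0.95                                           # discount factor

def value_iteration(iters: int = 500):
    """Compute state values and a greedy policy for the toy MDP."""
    V = {s: 0.0 for s in STATES}
    for _ in range(iters):
        V = {
            s: max(
                REWARD[s] - COST[a] + GAMMA * sum(p * V[s2] for s2, p in P[a][s])
                for a in ACTIONS
            )
            for s in STATES
        }
    policy = {
        s: max(
            ACTIONS,
            key=lambda a: REWARD[s] - COST[a] + GAMMA * sum(p * V[s2] for s2, p in P[a][s]),
        )
        for s in STATES
    }
    return V, policy

values, policy = value_iteration()
print(policy)  # e.g. it pays to patrol when the population is low or healthy
```

Reinforcement learning approaches come in when these transition probabilities are unknown or non-stationary and have to be estimated from sparse field data rather than written down by hand, which is the gap between the robotics and healthcare settings and the conservation setting discussed here.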

You know, all the fields are boring into the ground, drilling down to these great levels of depth. We have all these tunnels going down, and we just have to carve the little, you know, naked mole-rat passages in between those tunnels so that people can actually go from one to another. Say, here’s the problem we have in healthcare, and we have the techniques in computer science, so let’s put these together. And then these neuroscientists have figured this other stuff out, so let’s adapt those techniques in computer science; and then those computer science techniques get used by economists, and then get used by ecologists to model predator-prey interactions. A lot of academic research is new for the field, but not new to the entire world. I think that those are the exciting kinds of questions — and very feasible to do, because the techniques exist. We just have to develop the shared language and communication skills.

Daly: Yeah, this does actually link very well with the question from the previous New Voice in AI. Dimitri Coelho Mollo asked about how you see the relationships between existing approaches in AI. He was talking about symbolic AI and neural networks, embodied robotics, and things like reinforcement learning. What are your thoughts on whether you see them as completely independent approaches in competition with each other, or whether they can be brought together in ways that could be quite promising and lead to new discoveries? And it sounds like you’ve already mentioned a little bit of this need to combine things.

Xu: I completely agree. I think that so many of the ideas that are innovative in this space are really just borrowed ideas, and it’s about trying to figure out where those connections can be drawn. I think one thing that is a huge shame in computer science is that there’s so much new research being developed; there are thousands of papers at a single AI conference nowadays. AAAI, NeurIPS, ICML, … it’s really hard to keep track of even what’s new, so people spend too much time reading papers that have only come out in the past two to five years, and we’re missing out on all of this foundational stuff and core ideas: foundational techniques that were developed in the 80s in computer science and economics and psychology and all these related spheres. I think that is a huge loss and we really should spend more time going back to basics so that we can develop this really strong foundation, and then be able to make those connections. As Dimitri pointed out, a lot of these techniques aren’t new, they’re just new to some very localized space.

Daly: Yeah, actually this brings me onto the penultimate question. What is your question for the next person in the series?

Xu: Oh, interesting. OK, so I think because the AIhub audience is a little bit more broad, perhaps not everyone is an academic. I’ll ask this person: what do you think is the biggest thing, or most exciting opportunity, that somebody who is not in the academic world can get into, related to AI research or development? Are there cool labs? Are there cool other ways to just get involved and learn about these problems and start to delve in?

Daly: Great question. It’s actually really nice to consider people that aren’t necessarily in the depths of academia. And yeah, just sort of see what their recommendations are for getting involved in some way.
And very, very finally: where can we find out more about your work, online or otherwise?

Xu: So I have a website, https://lily-x.github.io/, or if you search my name I would probably come up somewhere. I have links to papers that I’ve written that are meant for an AI audience. I also have a paper that was written with a philosophy PhD student, who is now an ethics fellow at Stanford, that’s thinking more broadly about these questions. That paper asks what it means to do, quote, “AI for social good”: how do we take ideas from development economics and philosophy and bring them into the AI community, to help us decide which problems to work on and how to actually go about designing and deploying an effective system? And then there are also a few other general-audience pieces, including the AIhub interview with Lucy from a few months ago.

Daly: Yeah, as always, we will have links to all of that down somewhere on the page. But yeah, that was brilliant; thank you so much for joining us and sharing your amazing work with us.

Xu: Thank you so much, Joe. It’s been really fun chatting with you and I’m very honored to be involved in this.

Daly: And finally, thank you for joining us today. If you would like to find out more about anything mentioned or about the series, do join us on AIhub.org. Until next time, goodbye for now.






Joe Daly, Engagement Manager for AIhub



