
AIhub coffee corner: Open vs closed science


26 April 2024




The AIhub coffee corner captures the musings of AI experts over a short conversation. This month, we consider the debate around open vs closed science. Joining the conversation this time are: Joydeep Biswas (The University of Texas at Austin), Sanmay Das (George Mason University), Tom Dietterich (Oregon State University), Sabine Hauert (University of Bristol) and Sarit Kraus (Bar-Ilan University).

Sabine Hauert: There have been many discussions online recently about the topic of open vs closed science. We’ve seen a lot of people advocating for open AI (not the company, but openness in general, just to clarify!). I was at an event recently in preparation for the AI summit in the UK. At my table were proponents of the existential threat view, saying we need to close down AI models and code because that’s the only way we can protect people from the technology. Then there were others, often the academics, saying that it needs to be open, because we need to be able to scrutinize it, refine it, share it, and be sure that it benefits everyone. So, I’ve seen this open vs closed discussion at that table, and also a lot on social media in the past month or so, with open letters for example. I’d love to hear your thoughts.

Tom Dietterich: I’ve been very vocal about openness. I don’t see how we can actually even have a useful technology unless it’s open. For instance, the problem of hallucination in LLMs is completely unsolved by the big companies, but there is some exciting work happening in the academic setting. More generally, when LLaMA was released by Meta there was an immediate explosion of exciting things happening in the academic and hobbyist communities. There has also been a big uptake of LoRA, the low-rank, efficient way of doing fine-tuning. That has probably saved a lot of carbon worldwide. So, there are a whole bunch of reasons why we want this to be open, in addition to scrutinizing and understanding the technology. We know that the current technology is not what we want ultimately, so we need to understand its strengths and weaknesses. Of course, I’m not a doomer; I think the doomers are tilting at windmills that they’ve imagined for themselves. I mean, there are clear risks, which I think are mostly on the side of cybersecurity and social media. That said, I think the risk analysis needs to ask not “is it possible for AI to make something worse?”, but “how much headroom is there to actually make anything worse than it already is?”, because in an awful lot of cases it’s already really bad, and the additional impact of AI is actually pretty small.

Joydeep Biswas: I agree that we need the results of training, for example, to be open, and I also want to advocate for the datasets that these models have been trained on to be open as well. The reason for that is very simple: we still don’t have a good scientific understanding of some simple phenomena, like whether distillation is even doing anything, or whether we are just using distillation as a way of leaking the training data. We won’t be able to answer these questions unless we have access to both the trained model and the data it was trained on.

Sanmay Das: I completely agree with everything that’s been said. I’ll add a couple of other things. In one of our previous Coffee Corner discussions, we talked about the existential risk debate, and once again you have to think about the incentives. People who are arguing for closed models claim that it is because of existential risk; however, maybe they just want to maintain their positions in the space. Maybe some of them are genuinely worried, but it’s also really easy to delude yourself into believing things that are financially advantageous for you. I think it’s really important for as many people as possible to be able to do things with these models. The scale of them is so large that you can’t do this in a university setting, but if you had the ability to look at LLaMA, then you could start looking at things. For example, you could look at how semantically grounded the model is. If you look at NSF’s current call for AI Institute proposals, they’re really interested in ideas around grounding and alignment, for example, but these have to be explored with modern techniques. If you want to do that with modern techniques and actually make progress in this space, I think it would be a real problem not to have access to state-of-the-art models, or at least close to state-of-the-art models. I’m not saying OpenAI as a company needs to make its model public; it’s perfectly valid for them as a commercial entity to keep their model however they want. I actually think that this is a public good question. There should be a model developed in the public sector that is able to achieve this sort of thing and that is completely open in terms of its development, the data it’s trained on, and so on. It would be out there as a research good, available for scientific discovery and progress that is publicly reported.

Sarit Kraus: I agree with Sanmay: for social good, we need everything to be open. I mean, this is clear; this is good for research and for humanity. However, unfortunately, the resources to do AI are now concentrated in companies. If we’re looking at countries, some of them don’t have the money or the other resources to do research on LLMs, for example. When students graduate, if they really want to research these areas, they will go to a company. This is very frightening because they [the companies] are doing it for the money. That’s fine, companies’ main goal is to make money, but this is what really bothers me, and I don’t have a good solution. One idea is that a few countries get together and form some public institutions that will do research at this large scale. There is Horizon Europe, but I get the impression that that is not going too well. I am worried that if this is taken on by a government institution, it will lead to bureaucracy. Who can do this type of research? I would say companies, governments, and maybe some non-profit organizations. That was OpenAI to start with, but now they are part of Microsoft, in some sense. We had Waze, which also started as a non-profit organization, and then became part of Google. So, I’m really pessimistic about the ability to have open source and AI for social good, unless there is some change.

Tom: In terms of other open projects, Allen AI is leading an open LLM effort. We will have the National Artificial Intelligence Research Resource (NAIRR) in the US. Meta and IBM just announced the AI Alliance with the stated goal of promoting open AI research, although it isn’t clear exactly what this will mean.

Sabine: That’s a good point, Sarit. We all want open, but will it actually be open, given forces outside of our control?





AIhub is dedicated to free high-quality information about AI.