As part of its second anniversary activities, CLAIRE hosted a webinar presenting the progress and future plans of its COVID-19 taskforce. Entitled “CLAIRE taskforce for AI and COVID-19: results and next steps”, the webinar was held on 15 July 2020, with a focus on the three-month research outcomes in the areas of AI for bioinformatics, drug repurposing, and medical image analysis.
By Mike Williams
When you take a medication, you want to know precisely what it does. Pharmaceutical companies go through extensive testing to ensure that you do. With a new deep learning-based technique created at Rice University’s Brown School of Engineering, they may soon get a better handle on how drugs in development will perform in the human body.
The International Conference on Machine Learning (ICML) is a flagship machine learning conference that in 2020 received 4,990 submissions and managed a pool of 3,931 reviewers and area chairs. Given that the stakes in the review process are high — the careers of researchers are often significantly affected by the publications in top venues — we decided to scrutinize several components of the peer-review process in a series of experiments. Specifically, in conjunction with the ICML 2020 conference, we performed three experiments that target: resubmission policies, management of reviewer discussions, and reviewer recruiting. In this post, we summarize the results of these studies.
By Gianluca Bontempi, Ricardo Chavarriaga, Hans de Canck, Emanuela Girardi, Holger Hoos and Iarla Kilbane-Dawe
CLAIRE, the Confederation of Laboratories for AI Research in Europe, launched its COVID-19 initiative in March 2020 as the first wave of the pandemic hit the continent. Its objective is to coordinate the volunteer efforts of its members in tackling the effects of the disease. The taskforce quickly gathered a group of about 150 researchers, scientists and experts in AI, organized into seven topic groups: epidemiological data analysis, mobility data analysis, bioinformatics, medical imaging, social dynamics monitoring, robotics, and scheduling and resource management.
Accurately predicting how an individual’s chronic illness is going to progress is critical to delivering personalised, precision medicine. Only with such insight can a clinician and patient plan optimal treatment strategies for intervention and mitigation. Yet there is an enormous challenge in accurately predicting the clinical trajectories of people with chronic health conditions such as cystic fibrosis (CF), cancer, cardiovascular disease and Alzheimer’s disease.
Current machine learning methods provide unprecedented accuracy across a range of domains, from computer vision to natural language processing. However, in many important high-stakes applications, such as medical diagnosis or autonomous driving, rare mistakes can be extremely costly, and thus effective deployment of learned models requires not only high accuracy, but also a way to measure the certainty in a model’s predictions. Reliable uncertainty quantification is especially important when faced with out-of-distribution inputs, as model accuracy tends to degrade heavily on inputs that differ significantly from those seen during training. In this blog post, we will discuss how we can get reliable uncertainty estimation with a strategy that does not simply rely on a learned model to extrapolate to out-of-distribution inputs, but instead asks: “given my training data, which labels would make sense for this input?”.
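The question “given my training data, which labels would make sense for this input?” can be illustrated with a small sketch in the spirit of normalized maximum likelihood: for each candidate label, we pretend the new input carries that label, refit the model on the augmented data, and record how plausible the augmented model finds that label; the scores are then normalized into a distribution. This is a hypothetical toy example, not the authors’ method — the kernel classifier, function names and bandwidth are illustrative assumptions chosen to keep the code self-contained.

```python
import numpy as np

def kernel_prob(X, y, x, label, bandwidth=1.0):
    # Nadaraya-Watson estimate of P(label | x) using a Gaussian kernel:
    # each training point votes for its label, weighted by proximity to x.
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * bandwidth ** 2))
    return np.sum(w * (y == label)) / np.sum(w)

def nml_distribution(X_train, y_train, x_new, labels, bandwidth=1.0):
    # For each candidate label: augment the training set with (x_new, label),
    # "refit" (here the kernel estimator just absorbs the new point), and
    # score how plausible that label now looks at x_new.
    scores = []
    for label in labels:
        X_aug = np.vstack([X_train, x_new])
        y_aug = np.append(y_train, label)
        scores.append(kernel_prob(X_aug, y_aug, x_new, label, bandwidth))
    scores = np.array(scores)
    return scores / scores.sum()  # normalize over candidate labels
```

On an input near the training data, one label dominates and the distribution is confident; on a far-away, out-of-distribution input, no training point lends support, every candidate label fits the augmented data equally well, and the distribution collapses towards uniform — exactly the high-uncertainty signal one wants on unfamiliar inputs.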
There were seven interesting and varied invited talks at NeurIPS this year. Here, we summarise the first three, which were given by Charles Isbell (Georgia Tech), Jeff Shamma (King Abdullah University of Science and Technology) and Shafi Goldwasser (UC Berkeley, MIT and Weizmann Institute of Science).