The Conference on Neural Information Processing Systems (NeurIPS 2020) kicked off on Sunday 6th December and will run until Saturday 12th December. Here, we give a brief summary of many of the planned sessions and events for the week ahead.
We are excited to announce that next month we will be launching the AIhub focus issue on “AI for Good”, which will concentrate specifically on the UN Sustainable Development Goals (SDGs). Each month we will pick a different goal and highlight work in that area.
Last week saw the virtual running of the 12th Asian Conference on Machine Learning (ACML). The event had been due to be held in Thailand, but instead went online and the organisers decided to make all content freely available. You can watch all of the invited talks, tutorials, workshops, and video presentations of the contributed papers. Also, find out who won the conference awards.
By Benjamin Cramet and Sylvie Grand-Perret
The first Fly AI report provides an overview of the many ways that artificial intelligence is already applied in the aviation industry and assesses its potential to transform the sector.
Last week the Open Data Institute (ODI) hosted their annual summit. This year the event was held virtually and included keynote talks, panel discussions and expo booths. The summit brought together people from a range of sectors to discuss the future of data. Topics covered included the use of data in innovation, climate studies, health, policy making, and more.

A stellar flare is a sudden flash of increased brightness on a star. Young stars are prone to these flares, which can incinerate everything around them, including the atmospheres of nearby planets that are starting to form.
Finding out how often young stars erupt can help scientists understand where to look for habitable planets. But, until now, searching for these flares has involved poring by eye over thousands of measurements of star brightness variations, called ‘light curves’.

Researchers from TU Wien, IST Austria and MIT have developed a recurrent neural network (RNN) method for specific tasks within an autonomous vehicle control system. Notably, the architecture uses just a small number of neurons. This smaller scale allows for greater generalization and interpretability compared with systems containing orders of magnitude more neurons.
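To give a rough flavour of the idea, the sketch below wires up a compact recurrent controller that maps a sequence of perception features to a steering command. This is a minimal illustration only, not the published architecture: the layer sizes, the feature dimension and the `TinyRNNController` name are all assumptions for the example.

```python
# Minimal sketch (hypothetical sizes, not the published architecture): a tiny
# recurrent network maps perception features to a steering command.
import torch
import torch.nn as nn

class TinyRNNController(nn.Module):
    def __init__(self, feature_dim=32, hidden_units=19):
        super().__init__()
        # A single recurrent layer with only a handful of hidden units.
        self.rnn = nn.RNN(feature_dim, hidden_units, batch_first=True)
        # One linear read-out neuron produces the steering command.
        self.readout = nn.Linear(hidden_units, 1)

    def forward(self, features, hidden=None):
        # features: (batch, time, feature_dim) sequence of perception features
        out, hidden = self.rnn(features, hidden)
        steering = torch.tanh(self.readout(out))  # bounded steering output
        return steering, hidden

# Example: run the controller on a dummy 10-step feature sequence.
controller = TinyRNNController()
dummy_features = torch.randn(1, 10, 32)
steering, _ = controller(dummy_features)
print(steering.shape)  # torch.Size([1, 10, 1])
```

With so few hidden units, the recurrent state is small enough that each neuron's activity over time can be inspected directly, which is where the interpretability claim comes from.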
