February 26, 2021

[Image: orchestra]

Researchers in EPFL’s Digital and Cognitive Musicology Lab used an unsupervised machine learning model to “listen to” and categorize more than 13,000 pieces of Western classical music, revealing how modes – such as major and minor – have changed throughout history.
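
The lab's actual model isn't described in this summary; as a rough sketch of the general idea, the snippet below clusters pieces by their twelve-dimensional pitch-class distributions with a Gaussian mixture model. The random histograms and the choice of two components (loosely mirroring major and minor) are assumptions for illustration only.

```python
# Illustrative sketch only (not the EPFL lab's model): cluster pieces by
# their pitch-class distributions using an unsupervised mixture model.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Stand-in data: each row is a piece's normalised pitch-class histogram
# (proportion of notes on C, C#, ..., B); real input would come from scores.
pieces = rng.dirichlet(np.ones(12), size=500)

# Two components loosely mirror major/minor; the study instead asks how
# many modes the data itself suggests in each historical period.
gmm = GaussianMixture(n_components=2, random_state=0).fit(pieces)
modes = gmm.predict(pieces)        # cluster label ("mode") per piece
print(np.bincount(modes))          # number of pieces assigned to each mode
```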

February 26, 2021

[Image: Bulgarian flag]

The Bulgarian government has adopted a “Concept for the Development of Artificial Intelligence”, a strategy running until 2030. It is in line with European Commission documents, which regard AI as one of the main drivers of digital transformation in Europe and a significant factor in ensuring the competitiveness of the European economy and a high quality of life.

February 25, 2021

[Image: Eleni Vasilaki]

Eleni Vasilaki is Professor of Computational Neuroscience and Neural Engineering and Head of the Machine Learning Group in the Department of Computer Science, University of Sheffield. Eleni has extensive cross-disciplinary experience in understanding how brains learn, developing novel machine learning techniques and assisting in designing brain-like computation devices. In this interview, we talk about bio-inspired machine learning and artificial intelligence.

February 24, 2021

[Image: Amy McGovern]

Dr Amy McGovern leads the NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography (AI2ES), and is based at the University of Oklahoma. We spoke about her research, setting up the Institute, and some of the exciting projects and collaborations on the horizon.

February 23, 2021
Understanding how artificial intelligence algorithms solve problems like the Rubik’s Cube makes AI more useful.

By Forest Agostinelli

The field of artificial intelligence (AI) has created computers that can drive cars, synthesize chemical compounds, fold proteins and detect high-energy particles at a superhuman level.

However, these AI algorithms cannot explain the thought processes behind their decisions. A computer that masters protein folding and also tells researchers more about the rules of biology is much more useful than a computer that folds proteins without explanation.

February 23, 2021

[Image: Anna Lenhart]

Hosted by Dylan Doyle-Burke and Jessie J Smith, Radical AI is a podcast featuring the voices of the future in the field of artificial intelligence ethics. In this episode, Jess and Dylan chat to Anna Lenhart about Congress and the tech lobby.

February 22, 2021

[Image: equitable climate mitigation]

In this webinar from Climate Change AI you can hear from panellists in industry and academia as they discuss climate change mitigation. They consider how we can tackle climate change while addressing social inequalities, and investigate whether AI could help.

[Image: Marco Maratea]

In the fifth interview in this series of Meet the Team Leaders from the CLAIRE COVID-19 Initiative, we hear from Marco Maratea of the Dipartimento di Informatica, Bioingegneria, Robotica e Ingegneria dei Sistemi, University of Genoa, Italy.

February 19, 2021

[Image: Earth with superimposed digital network]

In December 2020, the Royal Society published a report on Digital Technology and the Planet: Harnessing computing to achieve net zero. In his foreword, Professor Andy Hopper, Vice President of the Royal Society and Professor of Computer Technology at the University of Cambridge, writes: “Nearly a third of the 50% carbon emissions reductions the UK needs to make by 2030 could be achieved through existing digital technology.”

February 18, 2021

[Image: world map showing temperature]

The Intergovernmental Panel on Climate Change (IPCC) fifth assessment report states that warming of the climate system is unequivocal, and notes that each of the last three decades has been successively warmer at the Earth’s surface than any preceding decade since 1850. The report’s projections of future global temperature change range from 1.1 to 4°C, but temperature increases of more than 6°C cannot be ruled out [1]. This wide range of values reflects our limited ability to make accurate projections of the future climate change produced by different potential pathways of greenhouse gas (GHG) emissions. The sources of uncertainty that prevent us from obtaining better precision are diverse. One of them relates to the computer models used to project future climate change. The global climate is a highly complex system, owing to the many physical, chemical, and biological processes that take place among its subsystems over a wide range of space and time scales.

February 17, 2021

[Image: figure from the BAIR article on GPT-2]
By Eric Wallace, Florian Tramèr, Matthew Jagielski, and Ariel Herbert-Voss

Does GPT-2 know your phone number? Most likely not.

Yet, OpenAI’s GPT-2 language model does know how to reach a certain Peter W (name redacted for privacy). When prompted with a short snippet of Internet text, the model accurately generates Peter’s contact information, including his work address, email, phone, and fax numbers.
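
As a rough illustration of the sampling setup (not the authors' actual attack), the snippet below draws a completion from the public GPT-2 model using the Hugging Face transformers library; the prompt is a placeholder, since the real snippet is redacted.

```python
# Illustrative only: sample a completion from the public GPT-2 model.
# The prompt is a stand-in; the authors' actual prompt is redacted.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "Contact: Peter W"  # placeholder prefix, not the real snippet
out = generator(prompt, max_new_tokens=40, do_sample=True, top_k=40)
print(out[0]["generated_text"])
```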

[Image: building model structure]

Energy-efficient buildings are one of the top priorities for sustainably meeting global energy demand and reducing CO2 emissions. Advanced control strategies for buildings have been identified as a potential solution, with a projected energy-saving potential of up to 28%. However, the main bottleneck of model-free methods such as reinforcement learning (RL) is sample inefficiency, and thus the need for large datasets, which are costly to obtain and often unavailable in engineering practice. On the other hand, model-based methods such as model predictive control (MPC) suffer from the large cost associated with developing a physics-based model of the building’s thermal dynamics.
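
Neither method is spelled out in this summary; as a toy illustration of the model-based side, here is a receding-horizon (MPC-style) controller for a single-zone linear thermal model. All coefficients, the comfort setpoint, and the brute-force optimizer are invented for the example; a real MPC would use an identified or physics-based model and a proper solver.

```python
# Toy MPC sketch for a one-state building thermal model (all numbers are
# made up for illustration).
import numpy as np
from itertools import product

a, b, c = 0.9, 0.05, 0.1          # state, heating-input, outdoor-air weights
T_out, T_set = 5.0, 21.0          # outdoor temperature, comfort setpoint (C)
horizon = 3                       # look-ahead steps
u_levels = np.linspace(0.0, 10.0, 11)   # candidate heating powers (kW)

def rollout_cost(T, u_seq):
    """Simulate the model forward, accumulating comfort + energy cost."""
    cost = 0.0
    for u in u_seq:
        T = a * T + b * u + c * T_out
        cost += (T - T_set) ** 2 + 0.01 * u
    return cost

T = 15.0
for hour in range(24):
    # Receding horizon: search short input sequences, apply the first input.
    best = min(product(u_levels, repeat=horizon),
               key=lambda seq: rollout_cost(T, seq))
    u = best[0]
    T = a * T + b * u + c * T_out
    print(f"hour {hour:2d}: u = {u:4.1f} kW, T = {T:5.2f} C")
```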

February 16, 2021
[Image: t-SNE plots for different ATAC-seq data]
Clustering performance comparison when different thresholds and parameters are changed. Figure taken from Predicting transcription factor binding in single cells through deep learning, published under a CC BY-NC 4.0 licence.

Scientists at the University of California, Irvine have developed a new deep-learning framework that predicts gene regulation at the single-cell level. In a study published recently in Science Advances, UCI researchers describe how their deep-learning technique can also be successfully used to observe gene regulation at the cellular level. Until now, that process had been limited to tissue-level analysis.
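
The UCI framework itself isn't reproduced here; as a generic sketch of the sequence-based approach such methods build on, the snippet below defines a small 1-D convolutional network that maps one-hot-encoded DNA windows to a binding probability. All layer sizes are arbitrary placeholders.

```python
# Generic sketch (not the UCI model): a 1-D CNN scoring DNA windows for
# transcription factor binding. Layer sizes are arbitrary.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TFBindingCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(4, 32, kernel_size=8),   # 4 input channels: A, C, G, T
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),           # pool over sequence positions
            nn.Flatten(),
            nn.Linear(32, 1),
        )

    def forward(self, x):                      # x: (batch, 4, seq_len)
        return torch.sigmoid(self.net(x))      # binding probability

model = TFBindingCNN()
# Stand-in batch: 8 random DNA sequences of length 200, one-hot encoded.
dna = F.one_hot(torch.randint(0, 4, (8, 200)), 4).transpose(1, 2).float()
print(model(dna).shape)                        # torch.Size([8, 1])
```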

February 15, 2021

[Image: AI seminars]

This post contains a list of the AI-related seminars that are scheduled to take place between now and the end of March 2021. We’ve also listed recent past seminars that are available for you to watch. All events detailed here are free and open for anyone to attend virtually.

February 15, 2021
[Image: checkerboard partitioning scheme]
Top and Bottom Right: RealNVP [3] uses checkerboard and channel-wise partitioning schemes in order to factor out parameters and ensure that there aren’t redundant partitions from previous layers. GLOW [4] uses an invertible 1×1 convolution, which allows the partition to be ‘learned’ by a linear layer. We show that arbitrary partitions can be simulated in a constant number of layers with a fixed partition, implying that these ideas increase representational power by at most a constant factor. Bottom Left: Random points are well-separated with high probability on a high-dimensional sphere, which allows us to construct a distribution that is challenging for flows.

By Viraj Mehta and Andrej Risteski

The promise of unsupervised learning lies in its potential to take advantage of cheap and plentiful unlabeled data to learn useful representations or generate high-quality samples. For the latter task, neural network-based generative models have recently enjoyed a lot of success in producing realistic images and text. Two major paradigms in deep generative modeling are generative adversarial networks (GANs) and normalizing flows. When successfully scaled up and trained, both can generate high-quality and diverse samples from high-dimensional distributions. The training procedure for GANs involves min-max (saddle-point) optimization, which is considerably more difficult than standard loss minimization, leading to problems like mode dropping.
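
To make the partitioning idea in the figure caption concrete, here is a minimal sketch of a RealNVP-style affine coupling layer with a fixed half-split partition; the small MLP producing the scale and shift is a toy stand-in for the networks used in practice.

```python
# Minimal affine coupling layer sketch (RealNVP-style, fixed half split).
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.half = dim // 2
        # Toy network producing log-scale and shift for the second half,
        # conditioned on the (unchanged) first half.
        self.net = nn.Sequential(
            nn.Linear(self.half, 64), nn.ReLU(),
            nn.Linear(64, 2 * (dim - self.half)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        log_s, t = self.net(x1).chunk(2, dim=1)
        y2 = x2 * torch.exp(log_s) + t       # invertible affine transform
        log_det = log_s.sum(dim=1)           # log|det Jacobian| for likelihood
        return torch.cat([x1, y2], dim=1), log_det

    def inverse(self, y):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        log_s, t = self.net(y1).chunk(2, dim=1)
        x2 = (y2 - t) * torch.exp(-log_s)    # exact inverse, given y1 = x1
        return torch.cat([y1, x2], dim=1)

layer = AffineCoupling(dim=8)
x = torch.randn(4, 8)
y, log_det = layer(x)
print(torch.allclose(layer.inverse(y), x, atol=1e-5))  # True
```

Stacking such layers with alternating partitions (checkerboard, channel-wise, or learned via 1×1 convolutions) is what the constant-factor simulation result described in the caption above concerns.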

