Researchers in EPFL’s Digital and Cognitive Musicology Lab used an unsupervised machine learning model to “listen to” and categorize more than 13,000 pieces of Western classical music, revealing how modes – such as major and minor – have changed throughout history.
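For readers curious what unsupervised categorisation of modes might look like mechanically, here is a hypothetical sketch (not the lab's actual model): each piece is summarised as a 12-bin pitch-class histogram and k-means groups pieces with similar tonal profiles. The EPFL study uses a more sophisticated model and a real corpus of over 13,000 pieces; the toy data here is random.

```python
# Hypothetical sketch (not the EPFL model): cluster pieces by their
# pitch-class distributions. Each piece is a normalised 12-bin histogram
# of note occurrences; k-means then groups similar tonal profiles.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy stand-in data: 100 "pieces", each a 12-dimensional pitch-class histogram.
histograms = rng.random((100, 12))
histograms /= histograms.sum(axis=1, keepdims=True)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(histograms)
print("cluster assignment per piece:", kmeans.labels_)
```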
The Bulgarian government has adopted a “Concept for the Development of Artificial Intelligence”, with a planning horizon to 2030. The strategy is aligned with European Commission policy documents, which regard AI as one of the main drivers of digital transformation in Europe and a significant factor in ensuring the competitiveness of the European economy and a high quality of life.
Eleni Vasilaki is Professor of Computational Neuroscience and Neural Engineering and Head of the Machine Learning Group in the Department of Computer Science, University of Sheffield. Eleni has extensive cross-disciplinary experience in understanding how brains learn, developing novel machine learning techniques, and assisting in the design of brain-inspired computing devices. In this interview, we talk about bio-inspired machine learning and artificial intelligence.
However, these AI algorithms cannot explain the thought processes behind their decisions. A computer that masters protein folding and also tells researchers more about the rules of biology is much more useful than a computer that folds proteins without explanation.
Hosted by Dylan Doyle-Burke and Jessie J Smith, Radical AI is a podcast featuring the voices of the future in the field of artificial intelligence ethics. In this episode Jess and Dylan chat to Anna Lenhart about Congress and the tech lobby.
In this webinar from Climate Change AI you can hear from panellists in industry and academia as they discuss climate change mitigation. They consider how we can tackle climate change while addressing social inequalities, and investigate whether AI could help.
In the fifth interview in this Meet the Team Leaders series from the CLAIRE COVID-19 Initiative, we hear from Marco Maratea, Dipartimento di Informatica, Bioingegneria, Robotica e Ingegneria dei Sistemi, University of Genova, Italy.
In December 2020, the Royal Society published a report on Digital Technology and the Planet: Harnessing computing to achieve net zero. In his foreword, Professor Andy Hopper, Vice President of the Royal Society and Professor of Computer Technology, University of Cambridge, writes: “Nearly a third of the 50% carbon emissions reductions the UK needs to make by 2030 could be achieved through existing digital technology.”
The Intergovernmental Panel on Climate Change (IPCC) fifth assessment report states that warming of the climate system is unequivocal and notes that each of the last three decades has been successively warmer at the Earth’s surface than any preceding decade since 1850. The report’s projections of future global temperature change range from 1.1 to 4°C, but temperature increases of more than 6°C cannot be ruled out. This wide range of values reflects our limited ability to accurately project the future climate change produced by different potential pathways of greenhouse gas (GHG) emissions. The sources of the uncertainty that prevent greater precision are diverse. One of them relates to the computer models used to project future climate change: the global climate is a highly complex system, owing to the many physical, chemical, and biological processes that take place among its subsystems across a wide range of space and time scales.
Yet, OpenAI’s GPT-2 language model does know how to reach a certain Peter W— (name redacted for privacy). When prompted with a short snippet of Internet text, the model accurately generates Peter’s contact information, including his work address, email, phone, and fax.
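The kind of prompt-completion probe described above can be sketched in a few lines with the Hugging Face transformers library. This is a minimal illustration only: the prompt string is a placeholder, and the actual extraction attack in the paper involves large-scale sampling and filtering rather than a single greedy completion.

```python
# Minimal sketch: probing GPT-2 for memorized continuations.
# The prompt below is a placeholder, not the actual snippet used in the paper.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Contact: Peter W"  # placeholder snippet of Internet text
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding tends to surface memorized sequences verbatim.
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```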
Energy-efficient buildings are one of the top priorities for sustainably addressing global energy demand and reducing CO2 emissions. Advanced control strategies for buildings have been identified as a potential solution, with a projected energy-saving potential of up to 28%. However, the main bottleneck of model-free methods such as reinforcement learning (RL) is their sample inefficiency, and thus their need for large datasets, which are costly to obtain and often unavailable in engineering practice. On the other hand, model-based methods such as model predictive control (MPC) suffer from the large cost associated with developing a physics-based model of the building's thermal dynamics.
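To make the sample-inefficiency point concrete, here is a hypothetical toy example (not the paper's method): a tabular Q-learning agent controlling a one-zone heater against a crude linear thermal model. Even this tiny problem needs thousands of interaction steps before the Q-table settles, which hints at why model-free RL is so data-hungry on real buildings.

```python
# Toy illustration (hypothetical, not the paper's method): tabular Q-learning
# on a crude one-zone building thermal model. Model-free RL must learn the
# dynamics implicitly from many interaction samples.
import numpy as np

rng = np.random.default_rng(0)

n_temp_bins, actions = 20, [0.0, 1.0]      # discretised temperature; heater off/on
Q = np.zeros((n_temp_bins, len(actions)))  # Q-table
alpha, gamma, eps = 0.1, 0.95, 0.1         # learning rate, discount, exploration

def step(temp, heat):
    """Crude linear dynamics: drift towards 10 C outdoors; heater adds heat."""
    new_temp = temp + 0.1 * (10.0 - temp) + 3.0 * heat + rng.normal(0, 0.2)
    comfort = -abs(new_temp - 21.0)        # penalty for deviating from 21 C setpoint
    energy = -0.5 * heat                   # penalty for energy use
    return new_temp, comfort + energy

def bin_of(temp):
    return int(np.clip((temp - 5.0) / 20.0 * n_temp_bins, 0, n_temp_bins - 1))

temp = 15.0
for t in range(20000):                     # note how many samples even a toy task needs
    s = bin_of(temp)
    a = rng.integers(len(actions)) if rng.random() < eps else int(Q[s].argmax())
    temp, reward = step(temp, actions[a])
    s2 = bin_of(temp)
    Q[s, a] += alpha * (reward + gamma * Q[s2].max() - Q[s, a])

print("Learned policy (0=off, 1=on) per temperature bin:", Q.argmax(axis=1))
```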
Scientists at the University of California, Irvine have developed a new deep-learning framework that predicts gene regulation at the single-cell level. In a study published recently in Science Advances, the UCI researchers describe how their technique can be used to observe gene regulation at the level of individual cells, a process that until now had been limited to tissue-level analysis.
This post contains a list of the AI-related seminars that are scheduled to take place between now and the end of March 2021. We’ve also listed recent past seminars that are available for you to watch. All events detailed here are free and open for anyone to attend virtually.
The promise of unsupervised learning lies in its potential to take advantage of cheap and plentiful unlabeled data to learn useful representations or generate high-quality samples. For the latter task, neural network-based generative models have recently enjoyed a lot of success in producing realistic images and text. Two major paradigms in deep generative modeling are generative adversarial networks (GANs) and normalizing flows. When successfully scaled up and trained, both can generate high-quality and diverse samples from high-dimensional distributions. The training procedure for GANs involves min-max (saddle-point) optimization, which is considerably more difficult than standard loss minimization, leading to problems like mode dropping.
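As a reminder of what that min-max structure looks like in practice, here is a minimal, self-contained GAN training sketch in PyTorch on a toy task of matching a 1-D Gaussian. It illustrates the general paradigm only, not the specific models discussed above: the discriminator and generator are updated in alternation, each optimising against the other, which is exactly the saddle-point structure that makes training fragile.

```python
# Minimal GAN sketch (illustrative only): alternating min-max updates on a
# toy task of matching a 1-D Gaussian. Real image/text GANs differ in scale,
# architecture, and tricks, but the saddle-point structure is the same.
import torch
import torch.nn as nn

torch.manual_seed(0)

G = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0   # samples from the target N(2, 0.5)
    fake = G(torch.randn(64, 4))            # generator samples from noise

    # Discriminator step: learn to distinguish real from fake.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: fool the current discriminator.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print("generated mean/std:", fake.mean().item(), fake.std().item())
```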