AIhub.org
 

#ICML2020 invited talk: Iordanis Kerenidis – “Quantum machine learning: prospects and challenges”

by Lucy Smith
07 August 2020




The third and final ICML2020 invited talk covered the topic of quantum machine learning (QML) and was given by Iordanis Kerenidis. He took us on a tour of the quantum world, detailing the tools needed for quantum machine learning, some of the first applications, and challenges faced by the field.

Why quantum machine learning?

Iordanis started his talk with some background on quantum computing and why we should be interested in it. He stressed that we should not think of quantum computers simply as faster processors providing a blanket speed-up. Crucially, quantum computing is a fundamentally different way of performing computation; it could be much faster for certain tasks, but not all. He does not expect quantum computing to replace classical computers, but rather to remove bottlenecks and open the door to new applications. Because quantum computing requires a completely different way of thinking, we will need to rethink and invent new algorithmic solutions for the quantum domain.

With a large enough quantum computer, applications that could become more efficient include classification, recommendation systems, q-means clustering, boosting and expectation maximisation. There are several avenues of investigation for realising this potential:

  1. Reduce the resource requirements of QML algorithms – namely, ensure that they can work with thousands of qubits of modest quality, rather than needing a million qubits of perfect quality.
  2. The field is in an interesting position where the hardware and software are being developed at the same time. This means that QML researchers can work with hardware developers to construct specific architectures for overcoming bottlenecks.
  3. For the next few years quantum computers will be “noisy” machines. That means that there will be an error associated with the results of calculations. However, Iordanis noted that this is not something that would kill the possibility of quantum computing for applications, because ML already has to deal with a lot of noise (either from data or artificially added to make systems more robust).

Supervised learning

Classification

To give the audience an idea of how quantum plays a role in classification, Iordanis picked one of the most straightforward examples: the nearest centroid classifier. This is a classification model that assigns to an observation the label of the class of training samples whose mean (centroid) is closest to that observation. To add some quantum power to the algorithm, one can compute the distance between the observation and each centroid not classically, but quantumly. This method can actually be run today on small quantum devices for low-dimensional data. The results presented showed that the labelling produced by the quantum method matched that of the classical algorithm, although the quantum algorithm sometimes makes more mistakes due to the noisiness of the hardware.
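To make the idea concrete, here is a minimal classical sketch of a nearest centroid classifier (an illustrative example, not Kerenidis’ implementation). The distance computation in `predict` is the subroutine that the quantum version estimates on a quantum device; everything else stays the same.

```python
# Minimal classical sketch of a nearest centroid classifier.
# In the quantum version described in the talk, the distance between an
# observation and each class centroid is estimated on a quantum device.
import numpy as np

def fit_centroids(X, y):
    """Compute one centroid (mean vector) per class label."""
    labels = np.unique(y)
    return {label: X[y == label].mean(axis=0) for label in labels}

def predict(x, centroids):
    """Assign x the label of the closest centroid (Euclidean distance)."""
    # This distance estimation is the step a quantum subroutine can replace.
    distances = {label: np.linalg.norm(x - c) for label, c in centroids.items()}
    return min(distances, key=distances.get)

# Toy usage with two clusters in 2D.
X = np.array([[0.0, 0.1], [0.2, -0.1], [2.0, 2.1], [1.9, 1.8]])
y = np.array([0, 0, 1, 1])
centroids = fit_centroids(X, y)
print(predict(np.array([0.1, 0.0]), centroids))  # -> 0
print(predict(np.array([2.0, 2.0]), centroids))  # -> 1
```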

Dimensionality reduction

This is an area where quantum can yield good performance. Many classical techniques exist in this space, such as principal component analysis (PCA), linear discriminant analysis (LDA), and slow feature analysis (SFA). These methods use linear algebra to map from one space to a space of lower dimension, and classification is then carried out in the reduced space. It is possible to create quantum analogues of these methods, using quantum linear algebra and a quantum classifier. See, for example, this paper on Quantum classification of the MNIST dataset via Slow Feature Analysis.
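As an illustration of the classical pipeline being “quantised”, here is a small NumPy sketch of PCA-based dimensionality reduction (a simplified, assumed setup, not the SFA pipeline from the paper). The quantum analogue replaces the linear-algebra steps, and the classifier applied afterwards, with quantum procedures.

```python
# Classical sketch: reduce dimension with PCA, then classify in the smaller space.
import numpy as np

def pca_project(X, n_components):
    """Project data onto the top principal components."""
    X_centered = X - X.mean(axis=0)
    cov = np.cov(X_centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    top = eigvecs[:, -n_components:][:, ::-1]     # top components, largest first
    return X_centered @ top

# Toy usage: 100 samples in 10 dimensions reduced to 2, ready for a classifier.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
X_reduced = pca_project(X, n_components=2)
print(X_reduced.shape)  # (100, 2)
```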

Recommendation systems

This was one of the first end-to-end applications in quantum machine learning. Recommendation systems use a preference matrix covering many users and products; some information exists about user preferences for certain products, but much of it is missing. The task is to find out which missing entries would have a high value (i.e. which products would be of great interest to a given user). Iordanis and Anupam Prakash constructed a quantum algorithm for this problem, which you can read about in detail in their paper: Quantum recommendation systems. Iordanis noted that it is unlikely that the quantum algorithm will become faster than the best classical algorithms for this application.
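The sketch below illustrates the underlying low-rank model on a tiny classical example (assumed for illustration only; the quantum algorithm samples from a low-rank approximation of the preference matrix rather than computing it explicitly).

```python
# Classical sketch of the low-rank idea behind recommendation systems:
# approximate the partially observed preference matrix by a small-rank matrix
# and treat the reconstructed entries as predicted preferences.
import numpy as np

def low_rank_predict(P, rank):
    """Rank-k approximation of a preference matrix via truncated SVD."""
    U, s, Vt = np.linalg.svd(P, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

# Toy usage: 4 users x 5 products, zeros standing in for unknown preferences.
P = np.array([[5, 4, 0, 1, 0],
              [4, 0, 5, 1, 1],
              [1, 1, 0, 5, 4],
              [0, 1, 1, 4, 5]], dtype=float)
P_hat = low_rank_predict(P, rank=2)
print(np.round(P_hat, 1))  # large reconstructed entries suggest recommendations
```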

Quantum neural networks

There have been many different proposals for quantum neural network architectures. Some examples include: “Simulating a perceptron on a quantum computer”, “Quantum Convolutional Neural Networks”, “Quantum Neuron: an elementary building block for machine learning on quantum computers”, and “Continuous-variable quantum neural networks”.

A quantum neural network can be described as a quantum circuit with parameterised gates. For example, a gate could take a quantum state and rotate the state vector by an angle theta. The network is trained by feeding in an input, running it through the circuit, and measuring a qubit at the end to produce a label; training examples are used to figure out the parameters of the gates. A lot of care needs to be taken in choosing the architecture of the system. Researchers have already produced results from such networks, albeit with small systems and datasets.
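The toy NumPy simulation below illustrates this idea (a sketch under assumed choices of encoding and loss, not any of the specific architectures above): a single-qubit circuit of parameterised rotation gates, with a measurement at the end supplying the label and the gate angle learned from training examples.

```python
# Toy simulation (plain NumPy, no quantum hardware) of a parameterised circuit:
# encode the input as a rotation, apply a trainable rotation gate, and read the
# probability of measuring |1> as the label score.
import numpy as np

def ry(theta):
    """Single-qubit rotation gate RY(theta)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def predict_prob(x, theta):
    """Start in |0>, encode x as RY(x), apply trainable RY(theta), measure |1>."""
    state = ry(theta) @ ry(x) @ np.array([1.0, 0.0])
    return state[1] ** 2

def loss(theta, xs, ys):
    return np.mean([(predict_prob(x, theta) - y) ** 2 for x, y in zip(xs, ys)])

# Toy training set: small input angles -> label 0, larger input angles -> label 1.
xs = np.array([0.1, 0.2, 2.9, 3.0])
ys = np.array([0, 0, 1, 1])

theta, lr, eps = 0.5, 0.5, 1e-4
for _ in range(200):  # gradient descent with a finite-difference gradient
    grad = (loss(theta + eps, xs, ys) - loss(theta - eps, xs, ys)) / (2 * eps)
    theta -= lr * grad

print(round(theta, 3), [round(predict_prob(x, theta), 2) for x in xs])
```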

Iordanis and his colleagues have taken a different approach. Instead of trying to define a quantum neural network, they wondered whether they could train classical networks faster by using a quantum computer. They did find some speed-ups, but they were not substantial.

Unsupervised learning

The presentation briefly touched on unsupervised learning, specifically k-means clustering. Iordanis and colleagues have developed a quantum analogue (q-means clustering). They used the same methodology as the classical technique but employed quantum procedures wherever possible. The team have extended their method to expectation maximisation for Gaussian mixture models and to spectral clustering.
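For reference, here is the classical k-means loop written so that the step replaced in q-means stands out (an illustrative sketch, not the authors’ code): the distance estimation between points and the current centroids is the part performed with quantum procedures.

```python
# Classical k-means sketch, highlighting the distance-estimation step that
# q-means carries out with a quantum subroutine.
import numpy as np

def kmeans(X, k, n_iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Distance estimation: in q-means this is done quantumly.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: recompute each centroid as the mean of its assigned points.
        centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                              else centroids[j] for j in range(k)])
    return centroids, labels

# Toy usage: two well-separated blobs in 2D.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.2, (20, 2)), rng.normal(3, 0.2, (20, 2))])
centroids, labels = kmeans(X, k=2)
print(np.round(centroids, 2))
```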

Conclusions

In closing, Iordanis remarked that the focus for QML needs to be on working out practical solutions to real-world problems. Finding the first QML applications will be a formidable challenge, but it is certainly worth pursuing. He was keen to stress that the field should not be over-hyped: quantum machine learning is not going to solve all our problems. He believes that success in this field will certainly be aided by collaboration between classical and quantum ML researchers.

About Iordanis Kerenidis

Iordanis Kerenidis (CNRS and QC Ware) received his PhD from the Computer Science Department at the University of California, Berkeley, in 2004. After a two-year postdoctoral position at the Massachusetts Institute of Technology, he joined the Centre National de la Recherche Scientifique in Paris as a permanent researcher. He has been the coordinator of a number of EU-funded projects including an ERC Grant, and is the founder and director of the Paris Centre for Quantum Computing. His research is focused on quantum algorithms for machine learning and optimization, including work on recommendation systems, classification and clustering. He is currently working as the Head of Quantum Algorithms Int. at QC Ware Corp.





Lucy Smith, Managing Editor for AIhub.





