AIhub.org

#ICML2020 invited talk: Iordanis Kerenidis – “Quantum machine learning: prospects and challenges”


by Lucy Smith
07 August 2020




The third and final ICML2020 invited talk covered the topic of quantum machine learning (QML) and was given by Iordanis Kerenidis. He took us on a tour of the quantum world, detailing the tools needed for quantum machine learning, some of the first applications, and challenges faced by the field.

Why quantum machine learning?

Iordanis started his talk with some background on quantum computing and why we should be interested in it. He stressed that we should not think of a quantum computer as simply a faster processor providing a blanket speed-up. Crucially, quantum computation is a fundamentally different way of performing computation; it could be much faster for certain tasks, but not all. He doesn’t expect quantum computing to replace classical computers, but rather to remove bottlenecks and open the door to new applications. Because quantum computing requires a completely different way of thinking, we will need to rethink our approaches and invent new algorithmic solutions for the quantum domain.

With a large enough quantum computer, applications that could become more efficient include classification, recommendation systems, q-means clustering, boosting and expectation maximisation. There are several avenues of investigation for realising this potential:

  1. Reduce the resource requirements of QML algorithms – namely, ensure that they can work with thousands of not-so-high-quality qubits, rather than needing a million qubits of perfect quality.
  2. The field is in an interesting position where the hardware and software are being developed at the same time. This means that QML researchers can work with hardware developers to construct specific architectures for overcoming bottlenecks.
  3. For the next few years quantum computers will be “noisy” machines. That means that there will be an error associated with the results of calculations. However, Iordanis noted that this is not something that would kill the possibility of quantum computing for applications, because ML already has to deal with a lot of noise (either from data or artificially added to make systems more robust).

Supervised learning

Classification

To give the audience an idea of how quantum computing plays a role in classification, Iordanis picked one of the most straightforward examples: the nearest centroid classifier. This model assigns to each observation the label of the class whose training-sample mean (centroid) is closest to it. To add some quantum power to the algorithm, one can compute the distance between the observation and each centroid quantumly rather than classically. This method can actually be applied today for data of small dimension. The results presented showed that the labels produced by the quantum method matched those of the classical algorithm, although the quantum algorithm sometimes makes more mistakes due to the noisiness of the hardware.
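To make the structure concrete, here is a minimal classical sketch of a nearest centroid classifier (the data, function names and NumPy implementation are illustrative, not taken from the talk). The `distance` routine is the piece that the quantum version replaces with a quantum distance estimate, which may carry some noise.

```python
# Minimal classical sketch of a nearest centroid classifier (illustrative only).
# The distance subroutine is what a quantum computer would estimate, with noise,
# instead of computing exactly.
import numpy as np

def fit_centroids(X, y):
    """Compute one centroid (mean vector) per class from the training data."""
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def distance(a, b):
    """Euclidean distance; a quantum subroutine would return a noisy estimate."""
    return np.linalg.norm(a - b)

def predict(x, centroids):
    """Assign the label of the closest centroid."""
    return min(centroids, key=lambda label: distance(x, centroids[label]))

# Toy usage: two well-separated clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(5, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
centroids = fit_centroids(X, y)
print(predict(np.array([4.5, 5.2]), centroids))  # expected: 1
```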

Dimensionality reduction

This is an area where quantum can yield good performance. There are many existing classical techniques in this space, such as principal component analysis (PCA), linear discriminant analysis (LDA), and slow feature analysis (SFA). These methods use linear algebra to map data from one space to a space of lower dimension. From there, classification is carried out. It is possible to create quantum analogues of these methods, using quantum linear algebra and a quantum classifier. See, for example, this paper on Quantum classification of the MNIST dataset via Slow Feature Analysis.
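As a rough illustration of the pipeline being described (classical linear algebra for the projection, followed by a classifier), the sketch below performs PCA via an SVD in NumPy. In the quantum analogues, both the linear-algebra step and the downstream classifier are carried out with quantum procedures; the data and rank here are made up for illustration.

```python
# Classical PCA via SVD, used here only to show the "project, then classify" pipeline.
import numpy as np

def pca_project(X, k):
    """Project data onto its top-k principal components."""
    Xc = X - X.mean(axis=0)                       # centre the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]                  # reduced data and components

# Toy usage: 50 samples in 10 dimensions reduced to 2, ready for a classifier.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 10))
X_reduced, components = pca_project(X, k=2)
print(X_reduced.shape)  # (50, 2)
```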

Recommendation systems

This was one of the first end-to-end applications in quantum machine learning. Recommendation systems use a preference matrix covering many users and products; some information exists about user preferences for certain products, but much of it is missing. The task is to find out which missing entries would have a high value (i.e. indicate a product that would be of great interest to the user). Iordanis and Anupam Prakash constructed a quantum algorithm for this problem, which you can read about in detail in their paper: Quantum recommendation systems. Iordanis noted that it is unlikely that the quantum algorithm will become faster than the best classical algorithms for this application.
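The classical intuition behind this setting is that the preference matrix is approximately low rank, and good recommendations correspond to large entries of a low-rank approximation. The toy sketch below builds such an approximation with a truncated SVD and suggests high-scoring unseen products; the quantum algorithm instead samples such entries directly from a quantum state. The matrix, rank and helper names here are purely illustrative, not from the paper.

```python
# Toy low-rank recommendation: rank-k reconstruction of a partly known matrix.
import numpy as np

def low_rank_recommend(P, user, k=2, top_n=3):
    """Recommend the top_n unrated products for `user` from a rank-k model."""
    U, s, Vt = np.linalg.svd(np.nan_to_num(P), full_matrices=False)
    approx = (U[:, :k] * s[:k]) @ Vt[:k]           # rank-k reconstruction
    unseen = np.isnan(P[user])                     # products with no rating yet
    ranked = np.argsort(-approx[user])             # best reconstructed scores first
    return [j for j in ranked if unseen[j]][:top_n]

# Toy preference matrix: rows = users, columns = products, NaN = unknown.
P = np.array([[5, 4, np.nan, 1],
              [4, np.nan, 5, 1],
              [1, 1, np.nan, 5]], dtype=float)
print(low_rank_recommend(P, user=0, k=2, top_n=1))
```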

Quantum neural networks

There have been many different proposals for quantum neural network architectures. Some examples include: “Simulating a perceptron on a quantum computer”, “Quantum Convolutional Neural Networks”, “Quantum Neuron: an elementary building block for machine learning on quantum computers”, and “Continuous-variable quantum neural networks”.

A quantum neural network can be described as a quantum circuit with parameterised gates. For example, a gate could take a quantum state and rotate it by an angle theta. The network is trained by feeding in an input, running it through the circuit, and measuring a qubit at the end to produce a label. Training examples are used to fit the parameters of the gates. A lot of care needs to be taken in choosing the architecture of the system. Researchers have already produced results from such networks, albeit with small systems and datasets.
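As a toy illustration of the parameterised-gate idea, the following NumPy snippet simulates a single-qubit circuit: an input feature is encoded as a rotation, a trainable rotation with angle theta follows, and the expectation value of a measurement is used as the prediction. This is a classical simulation for intuition only, with made-up data and training details; it is not one of the specific architectures cited above.

```python
# Classical simulation of a one-parameter "quantum neural network" on one qubit.
import numpy as np

def ry(angle):
    """Single-qubit rotation gate about the Y axis."""
    c, s = np.cos(angle / 2), np.sin(angle / 2)
    return np.array([[c, -s], [s, c]])

def circuit(x, theta):
    """Encode x as a rotation, apply the trainable gate, return <Z> of the output."""
    state = ry(theta) @ ry(x) @ np.array([1.0, 0.0])   # start in |0>
    return state[0] ** 2 - state[1] ** 2               # expectation of Pauli-Z

def train(xs, ys, steps=200, lr=0.1, eps=1e-4):
    """Fit theta by gradient descent on the squared error (finite differences)."""
    theta = 1.0
    for _ in range(steps):
        loss = lambda t: np.mean([(circuit(x, t) - y) ** 2 for x, y in zip(xs, ys)])
        grad = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)
        theta -= lr * grad
    return theta

# Toy data: labels +1 / -1 depending on the input angle.
xs = np.array([0.1, 0.2, 2.9, 3.0])
ys = np.array([1.0, 1.0, -1.0, -1.0])
theta = train(xs, ys)
print([round(circuit(x, theta), 2) for x in xs])
```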

Iordanis and his colleagues have taken a different approach. Instead of trying to define a quantum neural network, they asked whether they could train classical networks faster by using a quantum computer. They did find some speed-ups, but they were not substantial.

Unsupervised learning

The presentation briefly touched on unsupervised learning, specifically k-means clustering. Iordanis and colleagues have developed a quantum analogue (q-means clustering). They used the same methodology as the classical technique but utilised quantum procedures wherever possible. The team have extended their method to expectation maximisation for Gaussian mixture models and to spectral clustering.
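For orientation, the sketch below runs the classical k-means (Lloyd's) iterations in NumPy and marks the steps that, following the approach described, q-means carries out with quantum procedures (the distance estimation and the centroid update). The data and parameters are illustrative.

```python
# Classical k-means (Lloyd's algorithm); q-means keeps this loop structure but
# performs the marked steps with quantum procedures.
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Step 1: distances of each point to each centroid (estimated quantumly in q-means).
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 2: recompute centroids as cluster means (also done quantumly in q-means).
        centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                              else centroids[j] for j in range(k)])
    return labels, centroids

# Toy usage: two Gaussian blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (30, 2)), rng.normal(4, 0.5, (30, 2))])
labels, centroids = kmeans(X, k=2)
print(centroids.round(1))
```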

Conclusions

In closing, Iordanis remarked that the focus for QML needs to be on working out practical solutions to real-world problems. Finding the first QML applications will be a formidable challenge, but it is certainly worth pursuing. He was keen to stress that the field should not be over-hyped: quantum machine learning is not going to solve all our problems. He believes that success in this field will certainly be aided by collaboration between classical and quantum ML researchers.

About Iordanis Kerenidis

Iordanis Kerenidis (CNRS and QC Ware) received his PhD from the Computer Science Department at the University of California, Berkeley, in 2004. After a two-year postdoctoral position at the Massachusetts Institute of Technology, he joined the Centre National de Recherche Scientifique in Paris as a permanent researcher. He has been the coordinator of a number of EU-funded projects including an ERC Grant, and is the founder and director of the Paris Centre for Quantum Computing. His research is focused on quantum algorithms for machine learning and optimization, including work on recommendation systems, classification and clustering. He is currently working as the Head of Quantum Algorithms Int. at QC Ware Corp.





Lucy Smith is Senior Managing Editor for AIhub.



