
#NeurIPS2020 invited talks round-up: part two – the real AI revolution, and the future for the invisible workers in AI


by Lucy Smith
22 January 2021




In this post we continue our summaries of the NeurIPS invited talks from the 2020 meeting. Here, we cover the talks by Chris Bishop (Microsoft Research) and Saiph Savage (Carnegie Mellon University).

Chris Bishop: The real AI revolution

Chris began his talk by suggesting that now is a particularly exciting time to be involved in AI. What he termed “the real AI revolution” has nothing to do with artificial general intelligence (AGI); rather, it is driven by a change in the way we create software, and hence new technology. Machine learning is becoming ubiquitous and can be used to solve many problems that cannot yet be solved by other methods.

One exciting project that Chris talked about was work carried out in his lab to provide a radically new way of storing data. He and his team are using overlapping holograms stored within a crystal. The aim is to provide the best of both worlds, combining the cost-effectiveness of traditional hard disk drives with the performance of the more expensive solid-state drives. Machine learning, in the form of a convolutional neural network (CNN), is used to recover the stored data from the images produced when the holograms are read out of the crystal using a reference beam.
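As a rough illustration of that decoding step (the talk did not give the details of the Microsoft system, so the architecture, image size and bits-per-hologram below are purely hypothetical), a CNN for this task could map a camera image of a reconstructed hologram to the block of bits it encodes:

```python
# Purely illustrative sketch, not the system described in the talk: a small CNN
# that maps a hologram readout image to the block of bits stored in that hologram.
import torch
import torch.nn as nn

class ReadoutDecoder(nn.Module):
    def __init__(self, bits_per_hologram: int = 1024):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        self.head = nn.Linear(32 * 8 * 8, bits_per_hologram)

    def forward(self, readout_image: torch.Tensor) -> torch.Tensor:
        # readout_image: (batch, 1, H, W) camera image of the reconstructed hologram
        x = self.features(readout_image)
        return torch.sigmoid(self.head(x.flatten(start_dim=1)))  # per-bit probabilities

decoder = ReadoutDecoder()
page = torch.rand(1, 1, 128, 128)              # stand-in for one readout image
recovered_bits = (decoder(page) > 0.5).int()   # recovered block of bits
```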

Chris also talked about medical diagnosis and the integration of AI systems to assist healthcare professionals. Specifically, he spoke about the field of radiation oncology, where the goal is to use radiation to treat tumours. Large CNNs can be used to mark the boundaries of the tumour on the many image slices of a 3D computerised tomography (CT) scan. The clinicians then check the segmentation produced by the CNN system and make adjustments as needed. The CNN acts as a tool to speed up the process, rather than replacing the clinician.
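A minimal sketch of that assist-then-review workflow is shown below. The `segment_slice` function is a hypothetical stand-in for the large CNN (here just a crude intensity threshold) and does not reflect the actual clinical system; the point is simply that the model proposes a contour for every slice, which the clinician then reviews and edits.

```python
# Minimal sketch of the assist-then-review workflow; segment_slice is a
# hypothetical stand-in for the CNN, not the clinical model.
import numpy as np

def segment_slice(ct_slice: np.ndarray) -> np.ndarray:
    """Placeholder for the CNN: returns a binary tumour mask for one CT slice."""
    return (ct_slice > ct_slice.mean() + 2 * ct_slice.std()).astype(np.uint8)

def propose_contours(ct_volume: np.ndarray) -> list:
    # The model proposes a mask for every slice; each mask is then checked,
    # and adjusted where necessary, by the clinician before treatment planning.
    return [segment_slice(ct_volume[i]) for i in range(ct_volume.shape[0])]

volume = np.random.rand(64, 512, 512)   # stand-in for a 3D CT scan (slices, H, W)
proposed = propose_contours(volume)
print(f"{len(proposed)} slice masks ready for clinician review")
```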

To find out more about these projects, and others that Chris is involved in, you can watch the talk here.


Saiph Savage: A future of work for the invisible workers in AI

Saiph’s talk focused on the “invisible workers” of AI. The AI industry has created new jobs that have been essential to the development and deployment of intelligent systems. These jobs typically involve labelling data for machine learning models by, for example, categorising content or transcribing audio. This human labour, working alongside AI, has powered the rapid development of now-commonplace technologies such as voice assistants. However, the workers powering the AI industry are often invisible to consumers.

Saiph presented ideas for how we can design a future of work that empowers the invisible workers behind our AI. She proposed a framework that transforms invisible AI labour by providing opportunities for skills growth and hourly wage increases, and by facilitating transitions to new creative jobs that are unlikely to be automated in the future. She talked about a tool she has developed, called Crowd Coach, where workers share strategies that they have used to enhance their skills and wages. An AI element of the tool helps to pick out the most pertinent pieces of information, which can then be shared with other workers. Saiph proposed that the tool be integrated into existing labour platforms as a web plugin, to guide workers to success.
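The talk did not detail how Crowd Coach selects which strategies to surface, so the snippet below is only a hedged sketch of the general idea: ranking worker-shared tips by their textual similarity to the task a worker is about to start. The tips and task strings are invented for illustration.

```python
# Illustrative sketch only (not the actual Crowd Coach method): rank worker-shared
# strategies by their relevance to the task a worker is about to start.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

tips = [
    "Skim the whole image batch first so you can calibrate the label categories.",
    "Learn the keyboard shortcuts for audio transcription to cut per-task time.",
    "Check the requester's rating before accepting a long labelling batch.",
]
current_task = "transcribe short audio clips of customer calls"

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(tips + [current_task])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

best = scores.argmax()  # surface the most pertinent strategy for this task
print(f"Suggested tip (score {scores[best]:.2f}): {tips[best]}")
```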

The presentation was followed by an interesting question and answer session featuring an “invisible” AI worker, who talked about her experiences working for a number of companies. The tasks she has worked on have included classifying videos, verifying websites, and coding to train robots.

Watch the talk and the Q&A session here.






Lucy Smith is Senior Managing Editor for AIhub.



