AIhub coffee corner: Rethinking AI education

18 November 2020




The AIhub coffee corner captures the musings of AI experts over a 30-minute conversation. This month, we discuss AI education.

Joining the discussion this time are: Tom Dietterich (Oregon State University), Sabine Hauert (University of Bristol), Holger Hoos (Leiden University) and Oskar von Stryk (Technische Universität Darmstadt).

Sabine Hauert: As we are starting the new term, the question is: how should we do AI education, and what should students be learning? Thinking more broadly, how should we rethink AI education for the general population? There will be huge swaths of the public who will need to gain an understanding of AI, or be trained in its use. Let’s start with a focus on the university side. Any insight into what we should be doing differently with AI education?

Sabine: I can start by saying that I think we need more of it. It is clear that there is a skills gap: there is a lot of demand for AI skills and not enough training to meet it, and I wonder why that is. Is it due to a lack of programmes? Is it due to a lack of people who see this as a career choice? How do we fill this gap at the upper level, where we are looking for students to become AI practitioners?

Oskar von Stryk: I see two different sides of AI education. One is educating the next generation of scientists and scientific leaders. The other is educating people in other disciplines in how to use AI tools and technologies, and how to better understand them. At my university, the discussion is around who is going to teach AI to non-computer-science students (at present, the core AI teaching all sits in computer science). The question is: do those disciplines teach it themselves, or do they need AI experts to do it?

Holger Hoos: Yes, that’s the discussion at my university as well, and it’s a tricky one, actually, because you could argue it both ways. In Europe, the ICT-48 networks of centres of excellence are getting underway, and part of their work is to think about AI education.

Holger: Currently, we teach people how to plug things together, and we know it’s quite easy to download various programmes, run them on some training data and be amazed at how well they do. However, I think what’s missing, at least in the AI courses I’m familiar with, is showing people what happens when things fail. We need systematic exposure, in a very hands-on way, to the ways in which AI systems can go wrong. What does it look like when a machine learning system starts doing unreasonable things? I’m sure there are courses that do a good job of this, but my feeling is that there is a lot of work to be done here, especially when educating people who don’t have a computer science background to be sensitive to the failure modes. They should be using these systems, which must seem like complex black boxes, in ways that are not grounded in a false sense of trust. So, I think that sensitising people in that way should be a major goal of AI education at all levels.

Sabine: I think that’s a great point – it’s not just about teaching them how it works, it’s about teaching them how it doesn’t work.

Tom Dietterich: I was reading an article yesterday about the successful deployment of a system for sepsis detection in hospitals, and it reminded me of a long-running theme in all of computer science: successful application (especially of new technology) requires a deep understanding of the organisation and the social context, and engaging the people there in the design process. I still feel that we don’t teach AI and software engineering for end users – the HCI [human-computer interaction] side of software engineering. Of course, a lot of the most recent work in the bias and fairness area is also about the analysis of power relationships. That came up in the sepsis article too, because successful deployment of the system required that nurses call doctors and tell them there was a problem, which is the reverse of the standard power order. They had to figure out how to make that work. In my graduate work, I was at Stanford during the early days of AI in medicine, and this sounded totally familiar, 40 years later.

Tom: There is a related, more internal topic that goes under the name of MLOps [machine learning operations], which is like DevOps for machine learning systems. That’s not an introductory topic, but it gets at one of the big skills gaps: people who are able to keep a system running after it has been deployed. Currently, someone with a PhD in the field is needed to keep a system running. Technology maturity will be when an undergraduate with an AI engineering degree can keep that system running instead. Google has some great courses that they’ve developed for this kind of MLOps work. There is even a conference (mlsys.org) that looks at some of these issues, and companies are developing expertise. However, I don’t know of any university that teaches a class in this.

Sabine: There’s also a lot of gatekeeping from some parts of the community: the idea that you can’t do machine learning if you don’t have the maths background and don’t understand exactly how the black box works. That really prevents potential users of the technology from even trying to develop it themselves within their own application areas. So, there is also a need to train people to become machine learning practitioners in a way that doesn’t require them to know everything about the fundamental mathematics; instead, they would know how to use the systems they work on really well. I think a lot of people might try that if they were given the opportunity, and I think that bit is missing in the current education set-up.

Oskar: This came up through the democratisation of AI: tools and methods have become available, together with large amounts of computing power, and this has made it easy to develop complex applications. Again, there are two sides to education here. One is how to use and apply AI methods and tools across the many different disciplines and areas. The other is how to educate the next generation of scientists. AI can only move forward as a field if it has a strong research base, which goes beyond the application of deep learning technologies. The next generation of methods is needed, and we need to provide the education that will enable students to develop them.

Tom: Regarding the question of representation, one of the big limitations of current systems is the representations that are being learned. We don’t have any good tools for understanding those representations, and they need to be more carefully engineered. I would stress knowledge representation in an AI class.

Oskar: The question is when to start the education. I remember that in Germany there was a long discussion about when to start computer science in schools. I’ve heard that in China they are trying to include some AI education at the elementary school level. There are also prerequisites, such as core principles from mathematics and computer science, at least if you want to be a developer.

Sabine: There is a third category of AI education, which is simply AI literacy. This is something you can pick up at school, or later in life through courses such as the Finnish Elements of AI. We’ve been thinking about when to train students in AI in the UK, and our feeling is that, rather than having a specific course, it would be great if they encountered it in geography and history – as if it were part of a toolbox that gets deployed across the curriculum, rather than as a standalone segment. But maybe that’s something that comes once they’ve wrapped their heads around AI.

Sabine: As part of some AI education research I’m involved in, we spoke to members of the public, and one of the things we found is that they said they didn’t want to learn about AI because they thought it wasn’t for them. However, after chatting with them for a little while, if the question was asked differently (e.g. “do you want to know how Netflix works?”, “do you want to know how your phone answers your questions?”), then they actually did want to learn a little bit more. We need to consider how AI education is presented to make it more appealing to a broader population.

Oskar: That’s right: if you just put a buzzword out there, it has no direct link to people’s lives and experiences. Making that link explicit will have greater impact.

Oskar: In Germany, we have several master’s courses that combine different disciplines and have a focus on computer science. I’ve noticed, over the past 12 months, that people from other disciplines, such as mechanical engineering, are trying to move into this field. We have a number of students who are willing to do one semester of qualification courses before entering the full master’s course.

Sabine: Maybe we need to rethink the way we do education. I guess COVID is putting all of that in the spotlight, with blended learning combining a lot of online learning with some in-person, hands-on learning. I wonder if there is a way we could reach more people? Online learning helps but, as we discussed, we definitely also need to include hands-on experiments, getting students to try things.

Oskar: That leads to a general question about how you could do education better across the board.


