The AIhub coffee corner captures the musings of AI experts over a short conversation. This month we tackle the topic of young people and what AI tools mean for their future. Joining the conversation this time are: Sanmay Das (Virginia Tech), Tom Dietterich (Oregon State University), Sabine Hauert (University of Bristol), Michael Littman (Brown University), and Ella Scallan (AIhub).
As AI tools have become ubiquitous, we’ve seen growing concern and increasing coverage about how the use of such tools from a formative age might affect children. What do you think the impact will be and what skills might young people need to navigate this AI world?
Sabine Hauert: I met up with a bunch of high school friends when I was last in Switzerland and they were all wondering what their kids should study. They wondered whether their kids should do social science, seeing as AI tools have become adept at many tasks, such as coding, writing, and art. I think that we need social sciences, but that we also need people who know the technology and who can continue developing it. I say the kids should continue doing whatever they’re interested in; those jobs will evolve and look different, but there will still be a whole wealth of different types of jobs.
Ella Scallan: I was having a similar conversation with my friend the other day, reminiscing about all the careers advice we received when we were making our choices for university, and how that advice was often lacking, because no one could have predicted the current jobs market. Even before AI, the rise of social media had already changed the whole economic landscape. We’re now in a very different world to the one we were trained for. So I wonder if AI will be similarly impossible to predict, and if people should just follow the things they enjoy, as you say. It’s hard to regret those kinds of choices.
Tom Dietterich: In a highly uncertain future, it seems like you should know the fundamentals really well. So mathematics, and understanding different approaches to modeling the world, whether it’s physics or economics or psychology. I think you’ve got to get those basic skills down, along with the ability to write and think, obviously. That is more useful than learning to use the latest LLM system with the latest GitHub tool or whatever, because that will surely change. But, as a computer scientist, I would say learning about abstraction, and how that has helped in the past, is very important. Omar Khattab is a new faculty member at MIT, and he’s doing very interesting things trying to raise the level of abstraction of programming for systems that have LLMs as components. These tend to be called agentic systems, but they’re just LLMs as subroutines in some sense. He gives these components type signatures and then builds a compiler that automatically optimizes your prompts, and this kind of thing, so he’s trying to raise the level of software engineering. I think that in computer science, with every change we always have to figure out the right abstractions to be using. And I think that’s going to be true with LLMs too. There will be programming; it will be at a different level of abstraction than we’ve used in the past, but it will still be programming, it will still be system building, it will still be maintenance.
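[Editor’s note: the project Tom describes appears to be Omar Khattab’s DSPy framework. As a minimal sketch, and assuming DSPy’s recent Python API (exact calls may vary by version), declaring an LLM component by a type-signature-like spec looks roughly like this:]

```python
# Sketch of signature-level LLM programming, assuming the DSPy library.
# The model name below is a placeholder, not a recommendation.
import dspy

# Configure the framework with a language model backend.
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

# Declare WHAT the component does via a signature ("question -> answer");
# the framework compiles this into a concrete prompt, which its optimizers
# can later tune automatically, rather than the programmer hand-editing it.
qa = dspy.ChainOfThought("question -> answer")

result = qa(question="Why do rising levels of abstraction matter in software?")
print(result.answer)
```

[The signature acts as the abstraction boundary Tom mentions: the prompt becomes a compiled, optimizable artifact rather than hand-maintained text.]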
Sabine: So does that mean school shouldn’t change? Because those fundamentals are what school is all about.
Tom: I think so, although obviously the pedagogy has to change because LLMs are undermining a lot of our techniques – those that have worked in the past don’t work now. Actually, they didn’t really work that well in the past, in my opinion.
Sanmay Das: I agree with Tom largely. Somebody was telling me recently about the observation that we always overestimate the effects of a technology in the short run and underestimate the effects in the long run. And so I think people are being very reactive and thinking about what they need to do in the next few years. I’ll have conversations with random kids who are in high school and going to college, and they’ll be worried that there won’t be any jobs for them in four years because the LLMs are going to take over. And I think that’s unfortunate, that we’ve led people to be worried about things in this particular manner. It’s a complex set of reasons as to why that’s happened. When I was in college, they really drilled into us that which programming language you learn doesn’t really matter, and I still firmly believe it. You need to be learning how to think about things. I feel like we have become much more reactive to industry demands, in the sense of “we want people who are trained in doing this”, which I don’t think is a good thing, because I don’t think that’s the purpose of higher education.
With respect to the original question, something that we’ve been thinking about for a few years now is what AI is going to mean for skills and knowledge. And on that, I’m actually genuinely worried. People used to make a huge deal ten years ago about not trusting Wikipedia. Kids would be told that they couldn’t cite or trust Wikipedia. I have two kids, one in middle school, one in primary school, and people are not drilling the same caution about AI into them. They say “oh, you know, you can ask ChatGPT”. And this is absolutely crazy, because Wikipedia at least has human mechanisms that are pretty darn good at making sure that most of the stuff on there, especially the high-traffic stuff, is reasonably accurate. And you have absolutely nothing like that with LLMs, at least in terms of verifiable knowledge. So I’m almost a little bit of a fundamentalist at this point myself. I’m trying not to use LLMs unless it’s something routine that would take a lot of effort, like producing something in a particular format. I really try not to use them for the more creative parts of what I’m doing. I’m personally worried that my skills will atrophy in ways that I don’t want them to. And so there’s a real worry that how kids use these tools might very much affect their abilities down the line. They’re great tools, don’t get me wrong; I think it’s really cool that they can do all the things that they can do. But I do think that we need to be very careful in how they get used, especially by kids, because that’s really going to affect how they grow up.
Michael Littman: One thing I wanted to flag is that this is a topic, at least at the university level, that I’m engaged with very much day-to-day because of my current role as Associate Provost for Artificial Intelligence. This “what are we teaching people, and how are we teaching them wrong?” topic is probably the broadest one. AI is touching everything in a bunch of different ways, but this is the issue that impacts everybody in education, and so it ends up occupying a lot of my attention. One of the things that we’re seeing on campus, and I’m sure this is happening in other places as well, is the rise of what I’ve been calling AI vegans. I guess Sanmay is an AI flexitarian, but the AI vegans are the ones who will not be part of this game at all. And I get that. I kind of like the flexitarian position better, one that says there’s a time and a place for it and you have to be really thoughtful. And this is something that I think makes a lot of sense at the collegiate level. It’s at the K-12 (primary and secondary) level where it’s not as easy. The kids are not able to make those kinds of judgments on the fly, and they can’t really be held responsible for their own education in the same way. It’s much trickier.
I very much agree with Tom in the sense that fundamentals are still fundamentals and teaching people to think is still important. In many ways, not to throw shade on my own sector of the economy, the things that we teach are not really the things that matter. We teach the things that we teach because we have to teach something, when what we’re really teaching is how to learn: how to actually engage with new material, ask good questions, and try to understand something that’s complicated. We don’t know what the actual things are that people are going to need to be working on when they get out into the world. But having practiced those skills of getting on top of new information is essential. I don’t think any of us believe that turning any of that over to chatbots is a good or smart idea. So I think we should keep teaching the things that we teach, because part of the reason that we teach them is to teach people how to learn, and the content is less significant.
But there is something that we don’t tend to teach that I think becomes even more relevant in this AI era, and that is teaching people how to work together to get things done. This is a skill that I watched my daughter deploy when she was in college. She happened to be at the college where I was teaching, so I got to see some of this firsthand. She organized a group to put on a musical, and there were probably 100 different people contributing in various ways to this thing that was being created. Watching her coordinate those teams, getting people the right information in the right way so that they could do their part of this much bigger thing, I was blown away, because this is a skill that a lot of us use and a lot of us need, but I don’t think many of us are ever taught it. And I think it’s so important. Because at the end of the day, what are we trying to do? We’re trying to get important and relevant things done. And that involves some combination of technology and working with other people. So being more explicit about that as a skill would be incredibly valuable, not just because it’s obviously a really important skill, but because something about the way that we’re dealing with these chatbots also has this feeling of organizing a weird, disparate group to get a thing done. I have to express myself really clearly, and I have to divide things up in a reasonable way. It’s not about balancing parentheses; it’s about getting those pieces to fit together properly. Tom called it orchestration, which I think is a good way to put it.
Sanmay: So Michael, I have a question. What do you see in your position at the university? Is there a lot of pressure to make sure that students are AI-ready? How should higher education respond in that kind of situation? Because obviously we have this push and pull: we still want students to be successful in the job market but, at the same time, we want them to learn. So I’m curious what your thoughts are.
Michael: It’s super interesting. What I’m seeing is, to a first approximation, that the alumni are very unified in their voice that we need to be AI first, that we need to be teaching AI and basically nothing else. Maybe that’s just the wealthy alumni; those are the ones I get to hear from, and they’re the ones who are probably in the tech sector one way or another, or just deeply involved in business. On campus, the faculty are much more skeptical of that as an idea. Some of them are interested and experimenting, but some of them are not, and they really think that this is capitulation to forces that are ultimately going to destroy society as we know it. I occasionally get lectured in my office: a professor will just unleash this litany of concerns about where humanity is going, and how this is somehow my responsibility to prevent and fix. I see where that’s coming from, but where I end up landing is very much on the flexitarian side, which is to say that we need to get a lot more information about the impacts that these things have on people, we need to be doing a lot more experimentation about what’s good and what’s bad, and we need to be open-minded, thoughtful, and reflective. We’re supposed to be very good at critical thinking as academics; we need to make sure we’re applying those skills and coming to conclusions that make sense.
Sabine: I think the challenge is that however much we ban it or restrict how it can be used, students will use it in whatever way they think is most useful to them, again with this addictive element to it. I think I told this story previously: I recently heard about a class where 30% of the students cheated with AI when it was explicitly banned. The teaching team provided more training at the start of the year to try to prevent this, yet nearly the same proportion cheated again, even with the preemptive training. And so there’s this sense that you can’t actually fight it. In one of my recent classes, however, where students have to build robots – orchestrating lots of different hardware and software, and testing the thing – I let them use AI as much as they wanted, and they made beautiful robots. So, however much you want them to learn the fundamentals, they’re learning the fundamentals with AI, and they’re using it in lots of ways over which we have no control. The worry, however, is that they lose the ability to think critically, to think for themselves. And I think that’s the tricky part.
I also saw a recent story about a guy whose job (copy editor) was being replaced by AI. So he asked an AI what job he should do instead, and it told him he should be a tree cutter. He went out and cut a bunch of trees in his neighborhood and made a bunch of money, but then his back hurt, because he’s a human, and so that’s not actually a very good job for him. I think it’ll be interesting to see how we figure out how students should best learn and all of that, but they will also be asking AI to help them figure it out for themselves.
Tom: Maybe the analogy is physical fitness and exercise: there’s no point in bringing a forklift to do your weightlifting for you. You need to do that yourself, because you’re trying to build your own muscles. A professor was recently talking about a midterm exam in her class, which she set in such a way that people couldn’t use AI to solve it, and they all did very badly. She gave the same exam to ChatGPT and it got a B+. The students were all doing very well on their homework, where they were using LLMs, but they couldn’t actually do the things themselves. So how do we get our students into the weightlifting and exercise-class mentality?
The other thing is that, just in the last couple of days, I realized I have now fallen off the leading edge. I can’t keep up with all these people who are using all of these orchestration tools and have a dozen agents running 24/7. I’m just going to have to wait for the technology to settle down to the point where it’s worth learning. And that’s why I’m very hesitant to have students do anything except maybe learn how to use VS Code with Copilot, because the high-end tools are changing on a weekly basis, and I just don’t see how we can teach them what that is. It’s probably a couple more years before it really settles down and there’s a well-established way of actually doing software engineering with these tools.
Michael: I wanted to address the question of how teaching needs to change to really reflect the moment that we’re in. I think one of the important things that I’ve been seeing is shifting the focus from product to process. To some extent the message that we’re sending the students is “we need you to write this paper, we need you to answer this question”. But we never really needed the answer to that question. That was the thing that really irritated me about grad school, because I was a researcher between undergrad and grad school, and as a researcher you spend your time answering the questions that people don’t have the answer to. Then I got back to school and people were asking me questions that they already knew the answer to. So the focus on the product is really problematic. The focus on the process is really important, because people need to learn how to get things done. If we’re evaluating people on that, and we send that message really clearly, I think that undermines a whole bunch of these problems with the current way that we do teaching. And I think Tom was kind of hinting at that when he was saying maybe we never really were teaching the right way. That is maybe true, but product and process were conflated before. Now that they’ve been decoupled, we have to really pay attention to the fact that if the thing we’re trying to teach is the process, we need to evaluate the process, not the end product.
Ella: Just to play devil’s advocate, there is the argument that when a new technology emerges, there’s always Luddism and doom-mongering. For example, Plato was very against mass literacy. He said everyone would become less intelligent because they wouldn’t have to remember everything. And sure, maybe Plato had a better working memory than any of us today, but we don’t live in a world where that confers any benefit, like it would have done in Ancient Greece. And the written word opened up myriad possibilities that nobody then could have imagined. So I wonder if we’re now in a tricky transitional period where we experience a lot of the negatives and haven’t really worked out the way forward. Perhaps in the future we’ll live in a very different world that requires very different types of cognitive skills, and the people of the future will be able to work out how to navigate that – as human beings have always done.
Sanmay: If I run with the exercise analogy: running and jogging are relatively new things. Nobody used to run for fitness 60 years ago, and people didn’t lift weights. We took these things up because our everyday lives no longer automatically kept us fit. So at some level, I feel like the point is completely valid; this could open up a whole bunch of possibilities. There was a similar panic around calculators, but that doesn’t mean that my eight-year-old is allowed to use a calculator when he’s learning arithmetic. So I think that’s one thing to keep in mind.
One thing I always used to tell my students is that I love group work. But when I gave them a problem set (and these were hard, long problems), I always told them to spend at least an hour thinking about it individually, without asking anybody else for the answer. The big problem ends up being that you sit in a group, you think you understand how to solve the problem because somebody else solved it, and you never struggled with it. So when it shows up on an exam you don’t know how to go about doing it; you fooled yourself into thinking that you knew. And this was before LLMs, but I think it’s almost exactly the same thing now, with the added issue of trustworthiness and not knowing whether something is correct. I think that’s the other key point there.
Sabine: I think there’s something quite positive about just seeing where it goes and riding these changes. There’s maybe something calming about that.