Team formation techniques in education


by Carles Sierra
04 March 2021



Photo by Brooke Cagle on Unsplash

Collaborative learning is gaining acceptance as one of the most successful educational approaches. The basic idea is to organise learners in groups to work together to solve problems or complete tasks. There is ample evidence that when learners actively engage in discussions, listen to different viewpoints, and defend their positions, they understand new concepts better and learn faster. A particular case of collaborative learning is co-operative learning, where each student is responsible for at least one specific aspect or competence needed to solve the problem jointly. Each student improves her understanding through collaboration with others and is also responsible for the group's success in the aspect she is in charge of.

AIhub focus issue on quality education

A practical problem faced by teachers applying this learning methodology is how to form good teams efficiently. What makes a good team is a subject of debate. For many, a good team shows a balance of personalities and competencies: harmony comes from group members with similar personalities, while productive creativity comes from people with opposing personalities. A diversity of competencies ensures that teams can complete complex tasks. Altogether, whether a team is good can be assessed by particular combinations of several criteria, including personality, competencies, social relations and gender.

At IIIA-CSIC, we have tackled this problem using combinatorial optimisation techniques. Eduteams is a free team formation system that partitions a classroom into student teams of the same size. Partitioning classrooms is a prevalent need in co-operative learning and sometimes a barrier to a teacher adopting this educational approach. Through the system, students fill in questionnaires that determine their personality (using a post-Jungian approach) and intelligences (following Gardner); we use the term competence to refer to intelligence in this article. The team-scoring heuristic function used by the system rewards personality diversity, teams with at least one student who is extroverted, thinking and judging, and teams with at least one introverted student. This heuristic was inspired by earlier work of professor Douglass Wilde of Stanford University. Teachers, in turn, provide requirements on the competencies needed to complete the students' shared task (the level and importance of each competence).
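The article does not publish Eduteams' exact scoring function, but a minimal sketch of a heuristic in the spirit described — rewarding personality diversity plus the presence of an extroverted-thinking-judging member and an introverted member — might look as follows. The `Student` fields, the diversity measure and the weights are illustrative assumptions, not the system's actual formula.

```python
# Hedged sketch of a team-scoring heuristic in the spirit of Eduteams.
# Field names and weights are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Student:
    name: str
    extrovert: bool   # E vs I on the post-Jungian axis
    thinking: bool    # T vs F
    judging: bool     # J vs P

def team_score(team):
    # Personality diversity: a mix of extroverts and introverts scores
    # highest (fraction of extroverts closest to 0.5).
    e = sum(s.extrovert for s in team) / len(team)
    diversity = 1.0 - abs(e - 0.5) * 2

    # Bonus if at least one member is extroverted, thinking and judging...
    has_etj = any(s.extrovert and s.thinking and s.judging for s in team)
    # ...and at least one member is introverted.
    has_introvert = any(not s.extrovert for s in team)

    return diversity + 0.5 * has_etj + 0.5 * has_introvert

team = [Student("A", True, True, True),
        Student("B", False, True, False),
        Student("C", False, False, True)]
print(round(team_score(team), 2))
```

A scalar score like this lets any search procedure compare candidate partitions of the classroom directly, which is what the optimiser below exploits.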

The AI system tries to guarantee that, in each team, the student in charge of each required competence (needed, as mentioned, in a co-operative learning setting) has the level of competence the teacher requires. We use minimum-cost-flow methods to assign competencies to students: each competence has at least one student responsible for it, and each student is responsible for at least one competence. The system then uses local search, swapping team members and competencies, to improve the initial solution. Simulated data show that the algorithm's solutions are very close to the optimum for medium-sized problems, e.g. partitioning 200 students into teams of size 10, whereas classical optimisation techniques fail to provide a solution in reasonable time even for small problems, like partitioning a classroom of 30 students into groups of 3. Experimental results in real settings showed that teams formed by Eduteams perform up to 25% better than teams created by teachers.
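To make the competence-assignment step concrete, here is a hedged sketch of the square case (one competence per student). The article uses minimum-cost-flow methods; for a tiny team, brute force over permutations finds the same optimal matching, minimising the total shortfall against the teacher's required levels. The data layout and cost function are assumptions for illustration.

```python
# Hedged sketch: assign each required competence to exactly one team
# member so the total shortfall against the teacher's required levels
# is minimised. Eduteams uses minimum-cost-flow methods; this brute
# force over permutations is equivalent for very small teams.
from itertools import permutations

def assign_competences(levels, required):
    """levels[s][c] = student s's level in competence c;
    required[c] = level the teacher demands for competence c.
    Returns (assignment: tuple where entry c is the student for
    competence c, total shortfall cost)."""
    n = len(required)  # square case: one competence per student
    best, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        cost = sum(max(0, required[c] - levels[perm[c]][c])
                   for c in range(n))
        if cost < best_cost:
            best, best_cost = perm, cost
    return best, best_cost

levels = [[3, 1, 0],   # student 0
          [0, 2, 3],   # student 1
          [2, 3, 1]]   # student 2
required = [2, 2, 2]
assignment, cost = assign_competences(levels, required)
print(assignment, cost)
```

In this toy instance the optimal matching gives competence 0 to student 0, competence 1 to student 2 and competence 2 to student 1, with zero shortfall; min-cost flow scales this idea to realistic classroom sizes.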

A recent line of work, called Edu2Com, aims at building teams of students to match job descriptions. Given a set of job descriptions and a set of students, the system forms the most appropriate team for each job. Again, the optimisation aims at making the worst team-job match as good as possible, with the objective of maximising employability and the average satisfaction of students and employers. We use semantic similarity techniques to determine the ontological alignment between student competencies and job descriptions, and anytime local-search hill-climbing algorithms to optimise the solution. As in Eduteams, classical (non-AI) techniques like linear programming fail to solve even small problem instances, and current practice, in which student tutors assign students to jobs, produces highly suboptimal solutions.
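The anytime hill-climbing idea can be sketched as a loop that keeps swapping assignments while a swap improves the worst match. This is a minimal illustration, not Edu2Com's implementation: `match[s][j]` stands in for a precomputed semantic similarity between student `s` and job `j`, and the swap neighbourhood is the simplest possible one.

```python
# Hedged sketch of anytime local-search hill-climbing: improve the
# worst student-job match by pairwise swaps. match[s][j] is an assumed
# precomputed semantic similarity score.
def worst_match(assignment, match):
    # assignment[j] = student assigned to job j
    return min(match[s][j] for j, s in enumerate(assignment))

def hill_climb(assignment, match):
    assignment = list(assignment)
    improved = True
    while improved:  # anytime: can be interrupted after any pass
        improved = False
        for a in range(len(assignment)):
            for b in range(a + 1, len(assignment)):
                trial = assignment[:]
                trial[a], trial[b] = trial[b], trial[a]
                if worst_match(trial, match) > worst_match(assignment, match):
                    assignment, improved = trial, True
    return assignment

match = [[0.9, 0.2],   # student 0 vs jobs 0, 1
         [0.4, 0.8]]   # student 1 vs jobs 0, 1
print(hill_climb([1, 0], match))
```

Because the current assignment is always valid, the search can be stopped at any time and still return a usable (if not yet locally optimal) solution — the defining property of an anytime algorithm.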

Team formation is essential in the physical classroom but even more critical in online education, where teachers have very little knowledge of each student's profile and forming teams by hand is almost impossible. AI can enable online co-operative learning and broaden students' access to quality education, as pursued by the United Nations' fourth Sustainable Development Goal.

Detailed information, researchers involved, papers and software can be found here.





Carles Sierra is Research Professor and the Director of the IIIA-CSIC
