 

Interview with Tianfu Wang: A reinforcement learning framework for network resource allocation


12 June 2024





In their work FlagVNE: A Flexible and Generalizable Reinforcement Learning Framework for Network Resource Allocation, accepted at IJCAI 2024, Tianfu Wang, Qilin Fan, Chao Wang, Long Yang, Leilei Ding, Nicholas Jing Yuan and Hui Xiong introduce a framework for addressing resource allocation problems. In this interview, Tianfu Wang tells us more about their framework, the implications of their research, and what they are planning next.

What is the topic of the research in your paper?

Our paper focuses on addressing resource allocation problems using a reinforcement learning (RL) framework, specifically in the domain of network virtualization, where the task is known as virtual network embedding (VNE). VNE involves efficiently mapping virtual network requests onto physical infrastructure. However, existing RL-based VNE methods are limited by a unidirectional action design and a one-size-fits-all training strategy, which restrict their searchability and generalizability. In this paper, we propose a flexible and generalizable RL framework, named FlagVNE, to enhance network management efficiency and improve Internet service providers’ revenue.
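To make the problem concrete, the sketch below shows a toy VNE instance: a small virtual network request with CPU and bandwidth demands, a physical network with capacities, and one feasible embedding. This is an illustrative, hypothetical example; the data structures and the is_feasible helper are not taken from the paper or its code.

```python
# Toy VNE instance: a physical network with capacities and a virtual
# network request (VNR) with demands. Hypothetical data, for illustration only.
physical = {
    "nodes": {"A": {"cpu": 100}, "B": {"cpu": 80}, "C": {"cpu": 60}},
    "links": {("A", "B"): {"bw": 50}, ("B", "C"): {"bw": 40}},
}

vnr = {
    "nodes": {"v1": {"cpu": 20}, "v2": {"cpu": 10}},
    "links": {("v1", "v2"): {"bw": 15}},
}

# One candidate embedding: each virtual node is placed on a distinct physical
# node, and each virtual link is routed over a physical path.
embedding = {
    "node_map": {"v1": "A", "v2": "B"},
    "link_map": {("v1", "v2"): [("A", "B")]},
}

def is_feasible(physical, vnr, embedding):
    """Check CPU and bandwidth constraints for the toy instance above."""
    for v, p in embedding["node_map"].items():
        if vnr["nodes"][v]["cpu"] > physical["nodes"][p]["cpu"]:
            return False
    for vlink, path in embedding["link_map"].items():
        demand = vnr["links"][vlink]["bw"]
        if any(physical["links"][edge]["bw"] < demand for edge in path):
            return False
    return True

print(is_feasible(physical, vnr, embedding))  # True
```

A real system must make such placement decisions for a stream of arriving requests under capacity constraints, which is the sequential decision problem that FlagVNE's RL policy learns to solve.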

Could you tell us about the implications of your research and why it is an interesting area for study?

Our research has significant implications for network management, cloud computing, and 5G networks, where efficient resource allocation is critical for meeting user demands cost-effectively. This area is both promising and challenging because VNE is an NP-hard combinatorial optimization problem with high practical impact. With an RL framework that can learn effective solving strategies, we aim to enhance the flexibility, efficiency, and generalizability of VNE solutions, which can lead to improved service quality and resource utilization for Internet service providers.

Could you explain your methodology?

Our methodology introduces several key innovations. Firstly, we propose a bidirectional action-based Markov decision process (MDP) model that allows for the joint selection of virtual and physical nodes, enhancing the exploration flexibility of the solution space. Secondly, to manage the large and dynamic action space, we introduce a hierarchical decoder to generate adaptive action probability distributions, ensuring high training efficiency. Thirdly, we employ a meta-RL-based training method with a curriculum scheduling strategy to facilitate specialized policy training for varying virtual network request (VNR) sizes, which helps overcome generalization issues.
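To illustrate the bidirectional action and the hierarchical decoder described above, here is a minimal, hypothetical sketch (plain NumPy, not the authors' implementation): the joint choice of a virtual node and a physical host is factorized into two conditional steps, one way of keeping a large, dynamic joint action space tractable.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    z = np.exp(x - np.max(x))
    return z / z.sum()

def sample_bidirectional_action(state, virtual_nodes, physical_nodes, scorer):
    """Sample a joint (virtual node, physical node) action in two stages."""
    # High-level head: which virtual node to place next.
    v_logits = np.array([scorer(state, v=v) for v in virtual_nodes])
    v_idx = rng.choice(len(virtual_nodes), p=softmax(v_logits))
    v = virtual_nodes[v_idx]

    # Low-level head: which physical host to use, conditioned on that virtual node.
    p_logits = np.array([scorer(state, v=v, p=p) for p in physical_nodes])
    p_idx = rng.choice(len(physical_nodes), p=softmax(p_logits))
    return v, physical_nodes[p_idx]

# Stand-in for a learned neural scorer; random scores, purely for illustration.
def toy_scorer(state, v=None, p=None):
    return rng.normal()

print(sample_bidirectional_action(None, ["v1", "v2"], ["A", "B", "C"], toy_scorer))
```

In a unidirectional design, the virtual node would be fixed by a predetermined order and only the physical host would be chosen; sampling both jointly is what gives the agent the extra exploration flexibility mentioned above.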

What were your main findings?

Our main findings demonstrate the effectiveness and versatility of the FlagVNE framework in optimizing network resource allocation. Experimental results show that FlagVNE outperforms state-of-the-art methods in terms of request acceptance rate, long-term average revenue, and revenue-to-cost ratio. We also observe that the bidirectional action design and the meta-RL training approach both contribute to this superior performance. Furthermore, our results showcase FlagVNE's adaptability to diverse network scenarios and its ability to generalize across different network architectures, sizes, and traffic patterns.

What further work are you planning in this area?

Moving forward, we are working on handling the multi-faceted, hard constraints of VNE more effectively, aiming for better constraint awareness. Additionally, we aim to explore the application of FlagVNE in other network domains such as cloud computing and edge computing. We also intend to collaborate with industry partners to deploy and evaluate FlagVNE in real-world network infrastructures, focusing on usability, scalability, and integration with existing network management systems.

About Tianfu

Tianfu Wang is a Master’s student at the School of Computer Science and Technology, University of Science and Technology of China, supervised by Professor Hui Xiong (AAAS & IEEE Fellow). He received his B.E. degree from the School of Big Data and Software Engineering, Chongqing University, in 2022. His research interests include data mining, network optimization, and large language models. He has published several papers in top conferences and journals, including KDD, IJCAI, MM, and TSC.

Read the work in full

FlagVNE: A Flexible and Generalizable Reinforcement Learning Framework for Network Resource Allocation, Tianfu Wang, Qilin Fan, Chao Wang, Long Yang, Leilei Ding, Nicholas Jing Yuan, Hui Xiong.





Lucy Smith is Senior Managing Editor for AIhub.



