AIhub.org
AI regulations are a global necessity, panelists say


22 June 2022




By Megan DeMint

In a Cornell China Center (CCC) webinar held on May 27, legal scholars based in China, Switzerland, and the United States surveyed artificial intelligence (AI) regulation across the world, identifying strategic similarities and local distinctions. The event brought together more than 150 attendees across time zones for a conversation spanning intellectual property, disability rights, and global regulation benchmarks.

“We all face some common challenges,” said Rui Guo (Renmin University of China). Guo, a law professor whose research focuses on stereotypes and AI fairness, was one of four panelists addressing the complex challenges that AI introduces within societies at both the local and global levels.

“Some of the more local problems, like stereotypes in one society, may be intensified in a new technological context that may need the local to be responding more actively,” said Guo. “I think both local and global regulation are needed to deal with the bigger challenges.”

Rostam Neuwirth (University of Macau) analyzed the local versus global implications of European Union regulations. The EU’s General Data Protection Regulation, for instance, has set world standards, while a proposed artificial intelligence act introduces a new set of regulatory challenges.

The act would limit AI from deploying “subliminal techniques” that subtly influence or manipulate users. But where, Neuwirth asked, do we draw the line? There is no absolute threshold that can be universally recommended for regulating AI, he argued, because each individual is unique in how they perceive information or may be influenced.

Xiaoping Wu (World Trade Organization) and Linghan Zhang (visiting scholar, Cornell Law School) presented additional perspectives on AI regulation, speaking to issues of intellectual property and algorithm supervision in China.

“The Cornell China Center serves as a bridge between Cornell and China,” said Ying Hua, director of the China Center and associate professor in the College of Human Ecology. “We facilitate research collaborations with the goal to bring the best minds together to work on significant challenges that face the world.”

CCC offers grants to support research led by faculty based at Cornell and major university partners in China, including an annual seed fund and China Innovation Grants.

Hua co-moderated the panel discussion with Xingzhong Yu, the Anthony W. and Lulu C. Wang Professor of Chinese Law at Cornell Law School.
