PitcherNet helps researchers throw strikes with AI analysis


21 May 2025




Image credit: University of Waterloo.

University of Waterloo researchers have developed new artificial intelligence (AI) technology that can accurately analyze pitcher performance and mechanics using low-resolution video of baseball games.

The system, developed for the Baltimore Orioles by the Waterloo team, plugs holes left by much more elaborate and expensive technology already installed in most Major League Baseball (MLB) stadiums. MLB teams have increasingly tapped into data analytics in recent years.

Waterloo researchers convert video of a pitcher’s performance into a two-dimensional model that PitcherNet’s AI algorithm can later analyze. (Credit: University of Waterloo)
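The article does not spell out PitcherNet's pose pipeline, but the step this image describes, turning raw footage into per-frame 2D body keypoints, can be sketched with off-the-shelf tools. In the minimal Python snippet below, MediaPipe stands in for the team's own pose model and "pitch_clip.mp4" is a placeholder input:

import cv2
import mediapipe as mp

# Stand-in for PitcherNet's own 2D pose model: MediaPipe's pose tracker.
mp_pose = mp.solutions.pose

keypoints_per_frame = []
cap = cv2.VideoCapture("pitch_clip.mp4")  # hypothetical clip of one pitch
with mp_pose.Pose(static_image_mode=False) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV decodes frames as BGR.
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            h, w = frame.shape[:2]
            # Convert normalized landmark coordinates to pixels.
            keypoints_per_frame.append(
                [(lm.x * w, lm.y * h) for lm in results.pose_landmarks.landmark]
            )
cap.release()
print(f"Tracked 2D keypoints in {len(keypoints_per_frame)} frames")

The per-frame keypoints are the raw material for everything downstream: a 2D skeleton per frame, then a 3D model, then biomechanics metrics.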

Those systems, produced by a company called Hawk-Eye Innovations, use multiple specialized cameras in each park to capture players in action, but the data they yield is typically available only to the home team whose stadium the games are played in.

To add away games to their analytics operation, as well as to use smartphone video taken by scouts at minor league and college games, the Orioles asked video and AI experts at Waterloo for help about three years ago.

The result is a comparatively simple system called PitcherNet, which overcomes challenges such as motion blur to track the movements of pitchers on the mound, then yields data on metrics including pitch velocity and release point from standard broadcast and smartphone video.
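How such metrics fall out of tracked poses is not detailed in the article, but the arithmetic can be illustrated. Assuming the throwing-hand wrist position per frame is already available in metres (from the fitted 3D model) along with the clip's frame rate, a crude heuristic treats the frame of peak wrist speed as the release frame. The file name, frame rate, and heuristic below are illustrative assumptions, not PitcherNet's actual method:

import numpy as np

FPS = 30.0  # assumed broadcast frame rate

# Hypothetical input: (num_frames, 3) wrist positions in metres, taken
# from the fitted 3D human model in each frame.
wrist_xyz = np.load("wrist_trajectory.npy")

# Finite-difference velocity between consecutive frames (m/s).
velocity = np.diff(wrist_xyz, axis=0) * FPS
speed = np.linalg.norm(velocity, axis=1)

# Crude heuristic: the ball leaves the hand near peak wrist speed.
release_idx = int(np.argmax(speed))
release_point = wrist_xyz[release_idx + 1]

print(f"release frame ~{release_idx + 1}, "
      f"hand speed {speed[release_idx]:.1f} m/s "
      f"({speed[release_idx] * 2.23694:.1f} mph)")
print("release point (x, y, z):", np.round(release_point, 2))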

Waterloo researchers used images generated during the training process to help build the PitcherNet AI technology. (University of Waterloo)

“The Orioles approached us with a problem because they weren’t able to analyze pose positions and, subsequently, the biomechanics of their pitchers at games that may not have access to high-resolution cameras,” said Dr. John Zelek, a professor of systems design engineering and co-director of the Vision and Image Processing (VIP) Lab at Waterloo.

“The goal of our project was to try to duplicate Hawk-Eye technology and go beyond it by producing similar output from broadcast video or a smartphone camera used by a scout sitting somewhere in the stands.”

To help train AI algorithms at the heart of the technology, researchers created three-dimensional avatars of pitchers so their movements could be viewed from numerous vantage points.
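The article does not describe how the avatar footage was produced, but the underlying trick, rendering one 3D motion from many synthetic cameras to obtain multi-view training labels, comes down to projecting 3D joints through sampled camera poses. A minimal pinhole-camera sketch, with all geometry and numbers illustrative rather than taken from the paper:

import numpy as np

def look_at_extrinsics(cam_pos, target):
    # World-to-camera rotation looking from cam_pos toward target (z up).
    forward = target - cam_pos
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, np.array([0.0, 0.0, 1.0]))
    right /= np.linalg.norm(right)
    down = np.cross(forward, right)
    R = np.stack([right, down, forward])  # rows: camera x, y, z axes
    return R, -R @ cam_pos

f = 1000.0                     # focal length in pixels (assumed)
cx, cy = 640.0, 360.0          # principal point for a 1280x720 image
K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1]])

joints_3d = np.random.rand(17, 3)   # stand-in for one avatar pose (metres)
center = joints_3d.mean(axis=0)

labels = {}
for angle in np.linspace(0.0, 2 * np.pi, 8, endpoint=False):
    # Place a virtual camera 5 m away on a circle around the pitcher.
    cam_pos = center + 5.0 * np.array([np.cos(angle), np.sin(angle), 0.3])
    R, t = look_at_extrinsics(cam_pos, center)
    cam_pts = joints_3d @ R.T + t          # world -> camera coordinates
    uv = cam_pts @ K.T                     # camera -> homogeneous pixels
    labels[int(round(np.degrees(angle)))] = uv[:, :2] / uv[:, 2:3]

print(f"2D keypoint labels generated for {len(labels)} viewpoints")

Each viewpoint shows the same pose with a different 2D appearance, which is exactly the variety a pose estimator needs to cope with wherever a scout happens to be sitting.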

Broadcast video taken from centre field is used to create a three-dimensional human model by the PitcherNet system. (University of Waterloo)

Information from video processed by the system is provided to biomechanics analysts for the Orioles, who have committed to jointly funding the project for another year.

That data can be used to adjust how pitchers throw the ball, whether to improve performance or avoid injuries, and to assess the future success and durability of pitching prospects.

“Existing technology has already improved baseball analytics,” said Jerrin Bright, a PhD student who had a leading role in the project. “Since it’s limited to home games, however, there is a real need for solutions that work in any setting, especially for scouting. That’s where our system comes in.”

Researchers are now exploring the application of the underlying idea – AI analysis of player poses using standard broadcast and smartphone video – to other professional sports, including hockey and basketball, in addition to other aspects of baseball, such as batting.

A paper on the project, PitcherNet: Powering the Moneyball Evolution in Baseball Video Analytics, was presented at the 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition.




University of Waterloo



