AIhub.org
 

Identifying light sources using machine learning


By Lucy Smith
26 May 2020



Artistic impression of the schematic experimental set-up for photon counting. The section in the light grey box corresponds to the thermal light part of the experiment, and the section in the dark grey box to the coherent light part.

The identification of light sources is important for the development of photonic technologies such as light detection and ranging (LiDAR) and microscopy. Typically, a large number of measurements is needed to classify light sources such as sunlight, laser radiation, and molecular fluorescence, with identification requiring the collection of photon statistics or quantum state tomography. In recently published work, researchers have used a neural network to dramatically reduce the number of measurements required to discriminate thermal light from coherent light at the single-photon level.

In their paper, authors from Louisiana State University, Universidad Nacional Autónoma de México and Max-Born-Institut describe their experimental and theoretical techniques. They demonstrate the potential of machine learning to perform discrimination of light sources at extremely low light levels. This is achieved by training single artificial neurons with the statistical fluctuations that characterize coherent and thermal states of light.

The experimental set-up involves a continuous-wave (CW) laser that is divided by a 50:50 beam splitter. Half of the beam passes through optical components that generate pseudo-thermal light, and the emerging photons are counted by a superconducting nanowire single-photon detector (SNSPD). The other half of the beam is used as a coherent light source and is detected by a second SNSPD. The data are divided into time bins of 1µs, corresponding to the coherence time of the laser, and the equipment is tuned so that the mean number of photons counted in each bin is below one.
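The two sources are distinguishable because, even at the same mean photon number, their counting statistics differ: coherent light follows a Poisson distribution, while single-mode thermal light follows the broader Bose-Einstein distribution. A minimal simulation sketch of the per-bin counts (the mean of 0.5 photons per bin and the bin total are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 0.5          # assumed mean photons per 1 µs bin (below one, as in the experiment)
n_bins = 100_000  # assumed number of simulated time bins

# Coherent light: Poissonian statistics, P(n) = e^{-mu} mu^n / n!
coherent = rng.poisson(mu, n_bins)

# Single-mode thermal light: Bose-Einstein statistics,
# P(n) = mu^n / (1 + mu)^(n+1), i.e. a geometric law with p = 1/(1 + mu)
thermal = rng.negative_binomial(1, 1.0 / (1.0 + mu), n_bins)

# Same mean, but thermal light is super-Poissonian:
# Var = mu for coherent light vs Var = mu + mu^2 for thermal light.
print(coherent.mean(), coherent.var())   # ≈ 0.5, ≈ 0.5
print(thermal.mean(), thermal.var())     # ≈ 0.5, ≈ 0.75
```

It is these excess fluctuations of the thermal source, visible even when most bins contain zero or one photon, that the classifier exploits.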

The distributions of photon counts obtained from repeated runs of this experiment were used to train and test an ADALINE neuron and, for comparison, a naive Bayes classifier. ADALINE (adaptive linear element), proposed by Bernard Widrow, is a single-layer neural network for binary classification based on a linear processing element. It has no hidden layers, consisting simply of the inputs and a single output neuron.
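The idea can be sketched as follows, with the caveat that the feature choice, photon numbers, and training parameters below are illustrative assumptions rather than the paper's actual configuration. Each example is an empirical photon-number histogram built from a small number of measurements, and a single ADALINE unit is trained with the Widrow-Hoff (LMS) rule to separate thermal from coherent samples:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, n_meas, max_n = 0.5, 40, 5   # assumed mean, measurements per example, histogram size

def sample_hist(thermal: bool) -> np.ndarray:
    """Empirical photon-number histogram from n_meas simulated time bins."""
    if thermal:
        counts = rng.negative_binomial(1, 1 / (1 + mu), n_meas)  # Bose-Einstein
    else:
        counts = rng.poisson(mu, n_meas)                         # Poissonian
    return np.bincount(np.clip(counts, 0, max_n - 1), minlength=max_n) / n_meas

# Training set: label +1 for thermal light, -1 for coherent light
X = np.array([sample_hist(t) for t in [True] * 500 + [False] * 500])
y = np.array([1.0] * 500 + [-1.0] * 500)

# ADALINE: one linear unit, weights updated with the LMS (Widrow-Hoff) rule,
# using the error on the *linear* output rather than the thresholded one
w, b, eta = np.zeros(max_n), 0.0, 0.05
for _ in range(50):
    for i in rng.permutation(len(X)):
        err = y[i] - (w @ X[i] + b)
        w += eta * err * X[i]
        b += eta * err

# Evaluate on fresh samples; the prediction is the sign of the linear output
Xt = np.array([sample_hist(t) for t in [True] * 200 + [False] * 200])
yt = np.array([1.0] * 200 + [-1.0] * 200)
acc = np.mean(np.sign(Xt @ w + b) == yt)
print(f"test accuracy: {acc:.2f}")
```

With so few measurements per example the two histograms overlap, so accuracy is well below perfect, but a single linear neuron already separates the sources far better than chance, which is the central point of the work.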

The researchers also tested more sophisticated machine learning methods: (a) a one-dimensional convolutional neural network (1D CNN) and (b) a multilayer neural network (MNN). Interestingly, they found that these did not significantly improve classification performance, and concluded that a simple ADALINE offers a good balance between accuracy and simplicity.

The team believe that their work has important implications for multiple photonic technologies, such as LiDAR and the microscopy of biological materials. Because their method needs fewer measurements for classification, researchers can identify light sources much more quickly. In applications such as microscopy, this also limits light damage, since the sample does not have to be illuminated nearly as many times when taking measurements.

Read the research in full

Identification of light sources using machine learning
Chenglong You, Mario A. Quiroz-Juárez, Aidan Lambert, Narayan Bhusal, Chao Dong, Armando Perez-Leija, Amir Javaid, Roberto de J. León-Montiel, and Omar S. Magaña-Loaiza
Applied Physics Reviews (2020)

The work is also posted on arXiv.




Lucy Smith is Senior Managing Editor for AIhub.



©2026 - Association for the Understanding of Artificial Intelligence