Professor Esteban Vera, Optolab’s director, was invited to the University of Arizona in the United States to give a seminar about our research field.
On Tuesday, April 9th, he presented the seminar titled “Computational Imaging for Space Surveillance and Adaptive Optics” at the University’s James C. Wyant College of Optical Sciences in Tucson, Arizona.
In this talk, Dr. Vera addressed how the proliferation of satellites in low-Earth orbits is transforming space surveillance and communication, and how we are tackling this new challenge at Optolab.
He explained how adaptive optics and deep learning can enhance satellite communications and imaging, a line of research we are pursuing and implementing at the Optoelectronics Laboratory PUCV.
We present the abstract of the seminar:
The proliferation of satellites at low-Earth orbits is happening at an unprecedented pace and will demand outstanding surveillance, imaging, and communication capabilities. For instance, ubiquitous space surveillance to optically detect functioning or defunct satellites and space debris may require a terapixel-scale all-sky camera. If ever built, a terapixel camera will not be scalable in SWaP-C (size, weight, power, and cost) and bandwidth.
In the first part of the talk, I will describe our developments in optoelectronic coding and the use of event-based sensors linked with neuromorphic computing to achieve scalable compressive temporal imaging solutions, which will enable ultra-high-resolution, wide-field optical monitoring capabilities for space situational awareness (SSA).
Nevertheless, light traveling through the atmosphere is also heavily affected by atmospheric turbulence, which hampers any attempt at high-resolution imaging. Adaptive optics, a technique already used in astronomy, can correct atmospheric turbulence to deliver sharp images. However, adaptive optics relies on traditional wavefront sensing technology with intrinsic limitations in terms of speed, resolution, sensitivity, linearity, calibration, and cost. In the second part of the talk, I will introduce our computational imaging approach to wavefront sensing, which exploits deep learning.
By training digital and optical neural network layers in an end-to-end fashion, we show how we are crafting novel AI-driven wavefront sensors that not only may enable low-cost solutions for high-speed satellite communications and enhanced satellite imaging, but will certainly boost the adaptive optics performance for the next generation of extremely large telescopes.
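For readers curious about what training “digital and optical neural network layers” end to end can look like in practice, the sketch below jointly trains a learnable pupil-plane phase mask (a stand-in for the optical layer) and a small CNN (the digital layers) to regress wavefront coefficients from a single detected intensity image. This is our own minimal PyTorch illustration, not Optolab’s actual implementation; the random modal basis, network sizes, and idealized Fraunhofer propagation model are hypothetical simplifications.

```python
import torch
import torch.nn as nn

class LearnedWavefrontSensor(nn.Module):
    """Toy end-to-end wavefront sensor: a learnable pupil-plane phase mask
    (the "optical layer") followed by a small CNN (the "digital layers")."""

    def __init__(self, n=64, n_modes=10):
        super().__init__()
        # Learnable phase mask standing in for the optical encoding element.
        self.phase_mask = nn.Parameter(torch.zeros(n, n))
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * (n // 2) ** 2, n_modes),
        )

    def forward(self, phase):
        # phase: (B, n, n) wavefront aberration in radians.
        field = torch.exp(1j * (phase + self.phase_mask))
        # Idealized Fraunhofer propagation to the detector: FFT, then intensity.
        psf = torch.fft.fftshift(torch.fft.fft2(field), dim=(-2, -1))
        img = (psf.abs() ** 2).unsqueeze(1)
        img = img / img.amax(dim=(-2, -1), keepdim=True)  # per-sample normalization
        return self.cnn(img)

# Toy training loop: random modes stand in for real Zernike polynomials.
torch.manual_seed(0)
n, n_modes = 64, 10
basis = torch.randn(n_modes, n, n)  # hypothetical modal basis
model = LearnedWavefrontSensor(n, n_modes)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    coeffs = 0.5 * torch.randn(16, n_modes)           # ground-truth coefficients
    phase = torch.einsum("bk,kij->bij", coeffs, basis)
    loss = nn.functional.mse_loss(model(phase), coeffs)
    opt.zero_grad()
    loss.backward()  # gradients flow through both digital and optical layers
    opt.step()
```

In a real system, calibrated Zernike modes and a measured optical model would replace these stand-ins, and the trained phase mask would be realized with a fabricated element or a spatial light modulator.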
We are thankful for the invitation and the opportunity to share the work that Professor Vera and our researchers are doing. We also congratulate Dr. Vera on his excellent seminar.