Single-mask sphere-packing with implicit neural representation reconstruction for ultrahigh-speed imaging
Published in Optics Express, 2025
Recommended citation: N. Diaz, M. Beniwal, M. Marquez, F. Guzmán, C. Jiang, J. Liang, and E. Vera, “Single-mask sphere-packing with implicit neural representation reconstruction for ultrahigh-speed imaging,” Optics Express 33(11), pp. 24027–24038, 2025. [DOI](https://doi.org/10.1364/OE.561323), [Paper](https://opg.optica.org/oe/abstract.cfm?URI=oe-33-11-24027).
Single-shot, high-speed 2D optical imaging is essential for studying transient phenomena in various research fields. Among existing techniques, compressed optical-streaking ultra-high-speed photography (COSUP) uses a coded aperture and a galvanometer scanner to capture non-repeatable time-evolving events at the 1.5 million-frame-per-second level. However, the use of a random coded aperture complicates the reconstruction process and introduces artifacts in the recovered videos. In contrast, non-multiplexing coded apertures simplify the reconstruction algorithm, allowing the recovery of longer videos from a snapshot. In this work, we design a non-multiplexing coded aperture for COSUP by exploiting the properties of congruent sphere packing (SP), which enables uniform space-time sampling given by the synergy between the galvanometer linear scanning and the optimal SP encoding patterns. We also develop an implicit neural representation, which can be self-trained from a single measurement, to not only largely reduce the training time and eliminate the need for training datasets, but also reconstruct far more ultra-high-speed frames from a single measurement. The advantages of the proposed encoding and reconstruction scheme are verified by simulations and experimental results in a COSUP system.
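To make the measurement and reconstruction ideas concrete, the sketch below models a COSUP-style acquisition, where each frame is encoded by a static coded aperture, shifted along the streaking axis by the galvanometer, and integrated into one snapshot, and then fits a small coordinate-MLP implicit neural representation to that single snapshot. It is a minimal illustration under assumed details (frame count, a random binary mask standing in for the sphere-packing pattern, a one-pixel-per-frame shift, and a toy network and training schedule), not the paper's implementation.

```python
# Minimal sketch of a COSUP-style forward model and a self-supervised INR fit.
# All sizes, the random mask (stand-in for the sphere-packing pattern), the
# one-pixel-per-frame streak shift, and the MLP are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

T, H, W = 16, 64, 64                           # frames, height, width of the video cube
mask = (torch.rand(H, W) > 0.5).float()        # stand-in for the optimized coded aperture

def forward_model(video):
    """Encode each frame with the static mask, shift it along the streak axis,
    and integrate all frames into a single 2D snapshot."""
    cols = []
    for t in range(T):
        frame = mask * video[t]                      # spatial encoding by the coded aperture
        cols.append(F.pad(frame, (t, T - 1 - t)))    # galvanometer streak: shift by t pixels
    return torch.stack(cols).sum(dim=0)              # temporal integration on the sensor

class INR(nn.Module):
    """Tiny coordinate MLP mapping normalized (t, y, x) to intensity."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, coords):
        return self.net(coords)

# Normalized coordinate grid covering the whole (T, H, W) video cube.
grid = torch.meshgrid(torch.linspace(0, 1, T),
                      torch.linspace(0, 1, H),
                      torch.linspace(0, 1, W), indexing="ij")
coords = torch.stack(grid, dim=-1).reshape(-1, 3)

# Placeholder snapshot; in practice this is the single coded measurement.
y_meas = forward_model(torch.rand(T, H, W))

model = INR()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(500):
    video_hat = model(coords).reshape(T, H, W)                 # query the INR on the full cube
    loss = ((forward_model(video_hat) - y_meas) ** 2).mean()   # data-fidelity term only
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the only supervision is the single coded snapshot pushed through the forward operator, no external training dataset is needed, and the fitted network can in principle be queried at denser time coordinates than the shifts used in the measurement, which is one way an INR can return additional frames.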
Cite
@article{Diaz:25,
author = {Nelson Diaz and Madhu Beniwal and Miguel Marquez and Felipe Guzman and Cheng Jiang and Jinyang Liang and Esteban Vera},
journal = {Opt. Express},
keywords = {Genetic algorithms; Imaging systems; Inverse design; Optical imaging; Spatial light modulators; Streak cameras},
number = {11},
pages = {24027--24038},
publisher = {Optica Publishing Group},
title = {Single-mask sphere-packing with implicit neural representation reconstruction for ultrahigh-speed imaging},
volume = {33},
month = {Jun},
year = {2025},
url = {https://opg.optica.org/oe/abstract.cfm?URI=oe-33-11-24027},
doi = {10.1364/OE.561323},
abstract = {Single-shot, high-speed 2D optical imaging is essential for studying transient phenomena in various research fields. Among existing techniques, compressed optical-streaking ultra-high-speed photography (COSUP) uses a coded aperture and a galvanometer scanner to capture non-repeatable time-evolving events at the 1.5 million-frame-per-second level. However, the use of a randomly coded aperture complicates the reconstruction process and introduces artifacts in the recovered videos. In contrast, non-multiplexing coded apertures simplify the reconstruction algorithm, allowing the recovery of longer videos from a snapshot. In this work, we design a non-multiplexing coded aperture for COSUP by exploiting the properties of congruent sphere packing (SP), which enables uniform space-time sampling given by the synergy between the galvanometer linear scanning and the optimal SP encoding patterns. We also develop an implicit neural representation---which can be self-trained from a single measurement---to not only largely reduce the training time and eliminate the need for training datasets but also reconstruct far more ultra-high-speed frames from a single measurement. The advantages of this proposed encoding and reconstruction scheme are verified by simulations and experimental results in a COSUP system.},
}