Retrieval of optical phase information from intensity measurements is of high interest, because it would enable simple and cost-efficient techniques and devices. In scientific and industrial applications that exploit multi-mode fibers, prior knowledge of the spatial mode structure of the fiber makes it possible, in principle, to recover phases from a measured intensity distribution. However, current mode decomposition algorithms based on analysis of the intensity distribution at the output of a few-mode fiber, such as optimization methods or neural networks, still have high computational costs and high latency, which is a serious impediment for applications such as telecommunications. The speed of signal processing is one of the key challenges in this approach. We present a high-performance mode decomposition algorithm with a processing time of tens of microseconds. The proposed mathematical algorithm, which does not use any machine learning techniques, is several orders of magnitude faster than state-of-the-art deep-learning-based methods. We anticipate that our results can stimulate further research on algorithms beyond popular machine learning methods, and that they can lead to the development of low-cost phase retrieval receivers for various applications of few-mode fibers, ranging from imaging to telecommunications.
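To make the underlying problem concrete, the sketch below is a minimal, generic illustration of mode decomposition from intensity alone, not the paper's fast algorithm: given two assumed real mode profiles (a Gaussian LP01-like mode and an LP11-like lobe, both hypothetical shapes chosen for illustration), it recovers the amplitude split and the relative phase by brute-force search over a model intensity. Note the inherent sign ambiguity of the relative phase when only intensity is measured.

```python
import numpy as np

# Illustrative two-mode example on a small grid (assumed mode shapes,
# NOT the algorithm from the paper).
x = np.linspace(-3, 3, 64)
X, Y = np.meshgrid(x, x)
r2 = X**2 + Y**2
psi0 = np.exp(-r2 / 2)          # LP01-like fundamental mode (assumption)
psi1 = X * np.exp(-r2 / 2)      # LP11-like mode with one lobe pair (assumption)
psi0 /= np.linalg.norm(psi0)    # normalize so amplitudes are comparable
psi1 /= np.linalg.norm(psi1)

# Ground-truth coefficients to be recovered from intensity only
a_true, phi_true = 0.8, 1.1
b_true = np.sqrt(1 - a_true**2)
field = a_true * psi0 + b_true * np.exp(1j * phi_true) * psi1
intensity = np.abs(field) ** 2  # only this "measurement" is used below

# Brute-force decomposition: scan the amplitude split and relative phase,
# minimizing the L2 error between the model and measured intensity.
best = (np.inf, None, None)
for a in np.linspace(0, 1, 101):
    b = np.sqrt(1 - a**2)
    for phi in np.linspace(0, 2 * np.pi, 361):
        model = np.abs(a * psi0 + b * np.exp(1j * phi) * psi1) ** 2
        err = np.sum((model - intensity) ** 2)
        if err < best[0]:
            best = (err, a, phi)

_, a_rec, phi_rec = best
print(a_rec, phi_rec)
```

Because the modes here are real, the intensity depends on the relative phase only through cos(phi), so phi and 2*pi - phi are indistinguishable; the search returns one of the two. Practical methods replace this exhaustive scan with fast deterministic computation or learned models, which is exactly the performance gap the abstract addresses.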
Bibliographical note: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Funding: This work was supported by the EPSRC Programme Grant TRANSNET (EP/R035342/1). E.S.M. acknowledges support from the H2020 MSCA COFUND Program MULTIPLY; V.V.D. and S.K.T. acknowledge support from the Russian Science Foundation (Grant no. 17-72-30006).