BaseBoostDepth: Exploiting Larger Baselines For Self-supervised Monocular Depth Estimation

Kieran Saunders*, Luis J. Manso, George Vogiatzis

*Corresponding author for this work

Research output: Preprint or Working paper › Preprint

Abstract

In the domain of multi-baseline stereo, the conventional understanding is that, in general, increasing baseline separation substantially enhances the accuracy of depth estimation. However, prevailing self-supervised depth estimation architectures primarily use minimal frame separation and a constrained stereo baseline. Larger frame separations can be employed; however, we show that this diminishes depth quality due to various factors, including significant changes in brightness and increased areas of occlusion. In response to these challenges, our proposed method, BaseBoostDepth, incorporates a curriculum learning-inspired optimization strategy to effectively leverage larger frame separations. However, we show that this curriculum learning-inspired strategy alone does not suffice, as larger baselines still cause pose estimation drift. Therefore, we introduce incremental pose estimation to improve the accuracy of pose estimates, resulting in significant improvements across all depth metrics. Additionally, to improve the robustness of the model, we introduce error-induced reconstructions, which optimize reconstructions generated from pose estimates with added error. Ultimately, our final depth network achieves state-of-the-art performance on the KITTI and SYNS-patches datasets across image-based, edge-based, and point cloud-based metrics, without increasing computational complexity at test time. The project website can be found at https://kieran514.github.io/BaseBoostDepth-Project.
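The sketch below is not the authors' implementation; it only illustrates, under assumed 4x4 homogeneous camera poses, the two ideas named in the abstract: composing adjacent-frame pose estimates into one long-baseline pose (incremental pose estimation) and perturbing that pose with small random error before view synthesis (error-induced reconstructions). The helper names and noise scales are illustrative assumptions.

```python
# Minimal sketch, assuming 4x4 homogeneous poses; not the paper's code.
import numpy as np

def axis_angle_to_matrix(axis_angle):
    """Rodrigues' formula: 3-vector axis-angle -> 3x3 rotation matrix."""
    theta = np.linalg.norm(axis_angle)
    if theta < 1e-8:
        return np.eye(3)
    k = axis_angle / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def to_homogeneous(R, t):
    """Pack rotation R and translation t into a 4x4 rigid transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def incremental_pose(adjacent_poses):
    """Compose adjacent-frame poses T_{i->i+1} into one long-baseline pose."""
    T = np.eye(4)
    for T_step in adjacent_poses:
        T = T_step @ T
    return T

def perturb_pose(T, rot_std=0.01, trans_std=0.01, rng=None):
    """Error-induced pose: left-multiply by a small random rigid transform."""
    rng = rng or np.random.default_rng()
    dR = axis_angle_to_matrix(rng.normal(0.0, rot_std, 3))
    dt = rng.normal(0.0, trans_std, 3)
    return to_homogeneous(dR, dt) @ T

if __name__ == "__main__":
    # Stand-in for adjacent-frame pose estimates from a pose network.
    steps = [to_homogeneous(axis_angle_to_matrix([0, 0.02, 0]), [0.1, 0, 0.5])
             for _ in range(4)]
    T_long = incremental_pose(steps)   # pose across a 4-frame baseline
    T_noisy = perturb_pose(T_long)     # would feed an extra reconstruction loss
    print(T_long[:3, 3], T_noisy[:3, 3])
```

In this reading, the perturbed pose is only used to synthesize an additional reconstruction for the training loss; the depth network itself is unchanged, which is consistent with the abstract's claim of no added test-time cost.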
Original language: English
Number of pages: 22
DOIs
Publication status: Published - Jul 2024

Keywords

  • monocular responses
