Multi-view stereo via volumetric graph-cuts

G. Vogiatzis*, P. H. S. Torr, R. Cipolla

*Corresponding author for this work

    Research output: Chapter in Book/Published conference output › Conference publication

    Abstract

    This paper presents a novel formulation of the multi-view scene reconstruction problem. While this formulation benefits from a volumetric scene representation, it remains amenable to computationally tractable global optimisation using graph cuts. The proposed algorithm uses the visual hull of the scene to infer occlusions and to constrain the topology of the scene. A photo-consistency-based surface cost functional is defined and discretised with a weighted graph. The optimal surface under this discretised functional is obtained as the minimum-cut solution of the weighted graph. Our method provides viewpoint-independent surface regularisation, approximate handling of occlusions, and a tractable optimisation scheme. Promising experimental results on real scenes, as well as a quantitative evaluation on a synthetic scene, are presented.
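    The minimum-cut step described in the abstract can be illustrated with a toy sketch. The graph below is an illustrative assumption, not the paper's actual construction: a single 1-D chain of voxels between the visual-hull interior (source) and exterior (sink), with edge weights standing in for discretised photo-consistency costs. A pure-Python Edmonds-Karp max-flow then recovers the minimum cut, which places the surface at the cheapest (most photo-consistent) link.

    ```python
    from collections import deque

    def add_edge(g, u, v, cap):
        """Insert a directed edge u -> v plus a zero-capacity reverse edge for the residual graph."""
        g.setdefault(u, {})[v] = cap
        g.setdefault(v, {}).setdefault(u, 0.0)

    def max_flow(capacity, source, sink):
        """Edmonds-Karp max-flow on a dict-of-dicts capacity graph."""
        flow = {u: {v: 0.0 for v in nbrs} for u, nbrs in capacity.items()}
        total = 0.0
        while True:
            # BFS for a shortest augmenting path in the residual graph.
            parent = {source: None}
            queue = deque([source])
            while queue and sink not in parent:
                u = queue.popleft()
                for v, cap in capacity[u].items():
                    if v not in parent and cap - flow[u][v] > 0:
                        parent[v] = u
                        queue.append(v)
            if sink not in parent:
                return total, flow
            # Trace the path back and push the bottleneck amount of flow.
            path, v = [], sink
            while v != source:
                path.append((parent[v], v))
                v = parent[v]
            bottleneck = min(capacity[u][v] - flow[u][v] for u, v in path)
            for u, v in path:
                flow[u][v] += bottleneck
                flow[v][u] -= bottleneck
            total += bottleneck

    def source_side(capacity, flow, source):
        """Nodes still reachable from the source in the residual graph: the interior side of the min cut."""
        seen, queue = {source}, deque([source])
        while queue:
            u = queue.popleft()
            for v, cap in capacity[u].items():
                if v not in seen and cap - flow[u][v] > 0:
                    seen.add(v)
                    queue.append(v)
        return seen

    # Toy 1-D "column" of voxels between the hull interior (src) and exterior (snk).
    # The costs are made-up photo-consistency values; a low cost marks a likely surface.
    g = {}
    for u, v, cost in [("src", "v0", 5.0), ("v0", "v1", 1.0),
                       ("v1", "v2", 4.0), ("v2", "snk", 6.0)]:
        add_edge(g, u, v, cost)

    val, flow = max_flow(g, "src", "snk")
    inside = source_side(g, flow, "src")
    # The cut severs the cheapest link (v0, v1): val == 1.0, inside == {"src", "v0"},
    # i.e. the optimal surface passes between v0 and v1.
    ```

    In the paper's full construction the graph is three-dimensional, the source side is seeded by the visual hull, and the min cut is computed over millions of voxel edges; this sketch only shows why a min cut selects the photo-consistent surface.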

    Original language: English
    Title of host publication: Proceedings - 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005
    Publisher: IEEE
    Pages: 391-398
    Number of pages: 8
    ISBN (Print): 0769523722, 9780769523729
    DOIs
    Publication status: Published - 25 Jul 2005
    Event: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005 - San Diego, CA, United States
    Duration: 20 Jun 2005 - 25 Jun 2005

    Publication series

    Name: Proceedings - 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005
    Volume: II

    Conference

    Conference: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005
    Country/Territory: United States
    City: San Diego, CA
    Period: 20/06/05 - 25/06/05
