Dress Like a Star: Retrieving Fashion Products from Videos

Noa Garcia, George Vogiatzis

Research output: Chapter in Book/Published conference output › Conference publication

Abstract

This work proposes a system for retrieving clothing and fashion products from video content. Although films and television are the perfect showcase for fashion brands to promote their products, viewers are not always aware of where to buy the latest trends they see on screen. Here, a framework for bridging the gap between fashion products shown in videos and users is presented. By relating clothing items and video frames in an indexed database and performing frame retrieval with temporal aggregation and fast indexing techniques, fashion products can be found from videos in a simple and non-intrusive way. Experiments conducted on a large-scale dataset show that, by using the proposed framework, memory requirements can be reduced by 42.5x with respect to linear search, while accuracy is maintained at around 90%.
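The abstract's core idea, indexing temporally aggregated frame representations so that a query frame can be matched without a linear scan over every stored frame, can be illustrated with a minimal sketch. All names, dimensions, and the binarized mean-pooling scheme below are illustrative assumptions, not the authors' actual implementation:

```python
import numpy as np

def temporal_aggregate(frame_descriptors):
    """Mean-pool per-frame descriptors over a temporal window, then binarize.

    Storing one compact binary signature per window, instead of one float
    vector per frame, is the kind of aggregation that cuts memory use
    relative to per-frame linear search. (Illustrative scheme only.)
    """
    mean_desc = frame_descriptors.mean(axis=0)
    return (mean_desc > mean_desc.mean()).astype(np.uint8)

def hamming_search(query_sig, index_sigs):
    """Return the row index of the stored signature nearest in Hamming distance."""
    dists = (index_sigs != query_sig).sum(axis=1)
    return int(np.argmin(dists))

# Toy example: 3 temporal windows of 5 frames each, 64-D frame descriptors.
rng = np.random.default_rng(0)
windows = [rng.random((5, 64)) for _ in range(3)]
index = np.stack([temporal_aggregate(w) for w in windows])

# Query built from a slightly perturbed copy of window 1's frames;
# it should match window 1 in the index.
query = temporal_aggregate(windows[1] + 0.01 * rng.random((5, 64)))
match = hamming_search(query, index)
```

In a real system each indexed window would also be linked to the clothing items visible in it, so the matched window directly yields the products to show the user.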

Original language: English
Title of host publication: Proceedings - 2017 IEEE International Conference on Computer Vision Workshops, ICCVW 2017
Publisher: IEEE
Pages: 2293-2299
Number of pages: 7
Volume: 2018-January
ISBN (Electronic): 9781538610343
Publication status: Published - 19 Jan 2018
Event: 16th IEEE International Conference on Computer Vision Workshops, ICCVW 2017 - Venice, Italy
Duration: 22 Oct 2017 - 29 Oct 2017

Conference

Conference: 16th IEEE International Conference on Computer Vision Workshops, ICCVW 2017
Country/Territory: Italy
City: Venice
Period: 22/10/17 - 29/10/17
