Repurposing a deep learning network to filter and classify volunteered photographs for land cover and land use characterization

Lukasz Tracewski, Lucy Bastin*, Cidalia C. Fonte

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

This paper extends recent research into the usefulness of volunteered photos for land cover extraction, and investigates whether this usefulness can be automatically assessed by an easily accessible, off-the-shelf neural network pre-trained on a variety of scene characteristics. Geo-tagged photographs are sometimes presented to volunteers as part of a game which requires them to extract relevant facts about land use. The challenge is to select the most relevant photographs in order to extract the useful information as efficiently as possible while maintaining the engagement and interest of volunteers. By repurposing an existing network which had been trained on an extensive library of potentially relevant features, we can quickly carry out initial assessments of the general value of this approach, pick out especially salient features, and identify focus areas for future neural network training and development. We compare two approaches to extract land cover information from the network: a simple post hoc weighting approach accessible to non-technical audiences and a more complex decision tree approach that involves training on domain-specific features of interest. Both approaches had reasonable success in characterizing human influence within a scene when identifying the land use types (as classified by Urban Atlas) present within a buffer around the photograph’s location. This work identifies important limitations and opportunities for using volunteered photographs as follows: (1) the false precision of a photograph’s location is less useful for identifying on-the-spot land cover than the information it can give on neighbouring combinations of land cover; (2) ground-acquired photographs, interpreted by a neural network, can supplement plan view imagery by identifying features which will never be discernible from above; (3) when dealing with contexts where there are very few exemplars of particular classes, an independent a posteriori weighting of existing scene attributes and categories can buffer against over-specificity.
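To make the simpler of the two approaches concrete, the sketch below shows how an off-the-shelf pre-trained network's scene and object labels could be re-weighted a posteriori into coarse land cover classes. The specific network (torchvision's ImageNet ResNet-18 is used here purely as a stand-in), the LABEL_TO_LANDCOVER weight table, and the classify_photo() helper are illustrative assumptions, not the configuration used in the paper.

```python
# Illustrative sketch of a post hoc weighting scheme: the pre-trained network's
# top label probabilities are redistributed onto coarse land cover classes.
# Network choice, weight table and helper names are assumptions for this demo.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet18_Weights.IMAGENET1K_V1   # stand-in pre-trained network
model = models.resnet18(weights=weights)
model.eval()
labels = weights.meta["categories"]               # human-readable class names
preprocess = weights.transforms()                 # matching resize/crop/normalise

# Hypothetical a posteriori weights linking network labels to land cover classes.
LABEL_TO_LANDCOVER = {
    "lakeside": {"water": 1.0},
    "seashore": {"water": 0.8, "bare": 0.2},
    "alp": {"natural": 1.0},
    "castle": {"built-up": 1.0},
    "streetcar": {"built-up": 0.9},
}

def classify_photo(path, top_k=5):
    """Score one geo-tagged photograph against coarse land cover classes."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(img), dim=1)[0]
    top_p, top_i = probs.topk(top_k)
    scores = {}
    for p, i in zip(top_p.tolist(), top_i.tolist()):
        # Distribute each label's probability across its mapped land cover classes.
        for lc, w in LABEL_TO_LANDCOVER.get(labels[i], {}).items():
            scores[lc] = scores.get(lc, 0.0) + p * w
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

The decision tree approach mentioned in the abstract would instead treat the network's label scores as input features and train a classifier on domain-specific land cover targets; the weighting table above is simply the hand-specified, non-technical alternative to that training step.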

Original language: English
Pages (from-to): 252-268
Number of pages: 17
Journal: Geo-Spatial Information Science
Volume: 20
Issue number: 3
DOIs
Publication status: Published - 18 Sept 2017

Bibliographical note

© 2017 Wuhan University. Published by Taylor & Francis Group.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Funding: COST Action [grant number TD1202] ‘Mapping and the Citizen Sensor’.

Keywords

  • convolutional neural network
  • land cover
  • land use
  • machine learning
  • photograph
  • volunteered geographic information (VGI)
