TY - CHAP
T1 - Volunteered Metadata, and Metadata on VGI: Challenges and Current Practices
AU - Bastin, Lucy
AU - Schade, Sven
AU - Mooney, Peter
PY - 2018/1/22
Y1 - 2018/1/22
N2 - In the face of an exploding range of volunteered data initiatives, it is important to maintain good metadata and quality information in order to ensure the appropriate combination and reuse of the resulting data sets. At the same time, there is increasing evidence that validation and quality assessment of data (whether that data be volunteered or ‘official’) can sometimes be usefully crowdsourced, i.e. the required efforts can be distributed to a large number of people. However, as with VGI itself, maintaining the consistency, semantics and reliability of volunteered metadata presents a number of challenges. Initiatives which archive the history of features and tags (e.g. OpenStreetMap) lend themselves to some mapping of disputed features, but among citizen science projects in general there is often limited scope for users to comment on their own or others’ submissions in a consistent way which may be translated to any of the currently accepted geospatial metadata standards. At the same time, platforms which allow the publication of more ‘authoritative’ data sets (e.g. GeoNode and ArcGIS Online) have introduced the option of user comments and ratings. Volunteered metadata (on both authoritative and VG information) is potentially of huge value in assessing fitness for purpose, but some form of standardization is required in order to aggregate diverse ‘opinions’ on the content and quality of data sets and extract the maximum value from this potentially vital resource. We discuss major challenges and present a set of examples of current practice which may assist in this aggregation.
UR - https://link.springer.com/chapter/10.1007/978-3-319-70878-2_8
UR - https://publons.com/wos-op/publon/41508389/
U2 - 10.1007/978-3-319-70878-2_8
DO - 10.1007/978-3-319-70878-2_8
M3 - Chapter
SN - 9783319708775
SN - 9783319890029
T3 - Earth Systems Data and Models
SP - 151
EP - 172
BT - Mobile Information Systems Leveraging Volunteered Geographic Information for Earth Observation
A2 - Bordogna, Gloria
A2 - Carrara, Paola
PB - Springer
ER -