Abstract
Very large spatially-referenced datasets, for example those derived from satellite-based sensors that sample across the globe, or from large monitoring networks of individual sensors, are becoming increasingly common and more widely available for use in environmental decision making. In large or dense sensor networks, huge quantities of data can be collected over short time periods. In many applications the generation of maps, or predictions at specific locations, from the data in (near) real-time is crucial. Geostatistical operations such as interpolation are vital in this map-generation process, and in emergency situations the resulting predictions need to be available almost instantly so that decision makers can make informed decisions and define risk and evacuation zones. In less time-critical applications, for example when interacting directly with the data for exploratory analysis, it is also helpful if the algorithms respond within a reasonable time frame.
Performing geostatistical analysis on such large spatial datasets can present a number of problems, particularly where maximum likelihood methods are used. Although the storage requirements for the raw observations scale only linearly with the number of observations in the dataset, the computational complexity of maximum likelihood estimation scales quadratically in memory and cubically in speed. Most modern commodity hardware has at least two processor cores, and often more, and other mechanisms for parallel computation, such as Grid-based systems, are also increasingly available. However, there currently seems to be little interest in exploiting this extra processing power within the context of geostatistics.
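To make the scaling concrete, the sketch below (not taken from the paper) shows where these costs arise in an exact Gaussian likelihood evaluation: the n × n covariance matrix dominates memory, and its Cholesky factorisation dominates time. The exponential covariance model and the parameter names sill, rng and nugget are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.spatial.distance import cdist

def neg_log_likelihood(coords, z, sill, rng, nugget):
    """Exact Gaussian negative log-likelihood of observations z at coords.

    Building K needs O(n^2) memory; the Cholesky factorisation costs
    O(n^3) time -- the bottlenecks that motivate approximate methods.
    """
    n = len(z)
    dists = cdist(coords, coords)                         # n x n distances
    K = sill * np.exp(-dists / rng) + nugget * np.eye(n)  # exponential model
    chol = cho_factor(K, lower=True)                      # O(n^3) step
    alpha = cho_solve(chol, z)                            # K^{-1} z
    log_det = 2.0 * np.sum(np.log(np.diag(chol[0])))      # log |K|
    return 0.5 * (z @ alpha + log_det + n * np.log(2.0 * np.pi))
```

Maximum likelihood estimation repeats this evaluation at every step of a numerical optimiser, so the cubic cost is paid many times over, which is why exact inference quickly becomes infeasible as the number of observations grows.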
In this paper we review the existing parallel approaches for geostatistics. Recognising that different natural parallelisms exist and can be exploited depending on whether the dataset is sparsely or densely sampled with respect to the range of variation, we introduce two contrasting novel implementations of parallel algorithms based on approximating the data likelihood, extending the methods of Vecchia [1988] and Tresp [2000]. Using parallel maximum likelihood variogram estimation and parallel prediction algorithms we show that computational time can be significantly reduced. We demonstrate this with both sparsely and densely sampled data on a variety of architectures, ranging from the dual-core processors found in many modern desktop computers to large multi-node supercomputers. To highlight the strengths and weaknesses of the different methods we employ synthetic datasets, and go on to show how the methods allow maximum likelihood based inference on the exhaustive Walker Lake data set.
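The abstract does not give the algorithms themselves, but a minimal sketch of a Vecchia-style factorisation, under assumed names and a deliberately simplified conditioning scheme (each point conditioned on the m points preceding it in an arbitrary ordering, with the unconditional first term omitted), illustrates the natural parallelism being exploited: the approximate log-likelihood is a sum of small, mutually independent conditional terms that can be mapped across processor cores.

```python
import numpy as np
from multiprocessing import Pool
from scipy.linalg import cho_factor, cho_solve
from scipy.spatial.distance import cdist

def exp_cov(a, b, sill=1.0, rng=0.3):
    """Illustrative exponential covariance between two coordinate sets."""
    return sill * np.exp(-cdist(a, b) / rng)

def conditional_term(args):
    """Log-density of one observation given its m preceding neighbours.

    Each term solves only a small m x m system, and the terms are
    independent of one another, which is what makes the sum parallel.
    """
    coords, z, i, cond = args
    K_cc = exp_cov(coords[cond], coords[cond]) + 1e-8 * np.eye(len(cond))
    k_ic = exp_cov(coords[[i]], coords[cond]).ravel()
    w = cho_solve(cho_factor(K_cc, lower=True), k_ic)
    mu = w @ z[cond]                                  # conditional mean
    var = exp_cov(coords[[i]], coords[[i]])[0, 0] - k_ic @ w
    return -0.5 * (np.log(2.0 * np.pi * var) + (z[i] - mu) ** 2 / var)

def vecchia_log_lik(coords, z, m=10, processes=4):
    """Sum the conditional terms, evaluated in parallel across cores.

    Note: call under `if __name__ == "__main__":` when run as a script.
    """
    jobs = [(coords, z, i, list(range(max(0, i - m), i)))
            for i in range(1, len(z))]
    with Pool(processes) as pool:
        return sum(pool.map(conditional_term, jobs))
```

A Tresp-style approach would instead partition the observations into blocks, perform inference within each block independently, and combine the block results, giving a coarser-grained alternative form of parallelism.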
| Original language | English |
| --- | --- |
| Title of host publication | geoENV VII – Geostatistics for Environmental Applications |
| Publisher | Springer |
| Pages | 371-381 |
| Number of pages | 11 |
| Volume | 16 |
| ISBN (Print) | 9789048123216 |
| DOIs | |
| Publication status | Published - 2008 |
Bibliographical note
geoENV 2008, 8-10 September 2008, Southampton (UK). The original publication is available at www.springerlink.com

Keywords
- spatially-referenced datasets
- satellite-based sensors
- monitoring networks
- individual sensors
- environmental decision making
- generation of maps
- specific locations
- real-time data
- geostatistical operations
- interpolation
- map-generation
- emergency
- risk
- evacuation
- exploratory analysis
- grid based systems
- data likelihood
- parallel maximum likelihood variogram estimation
- parallel prediction algorithms
- Walker Lake data set