Landslides triggered by large earthquakes in mountainous regions contribute
significantly to overall earthquake losses and pose a major secondary hazard
that can persist for months or years. While scientific investigations of
coseismic landsliding are increasingly common, there is no protocol for rapid
(hours-to-days) humanitarian-facing landslide assessment, and no published
account of what is possible and useful to compile immediately after an event.
Drawing on the 2015 Gorkha earthquake in Nepal, we reflect on one such rapid
assessment and on what proved feasible and useful within humanitarian timescales.
Landsliding is a significant secondary earthquake hazard that can account for up to 25 % of earthquake fatalities in mountainous regions (Yin et al., 2009; Budimir et al., 2014). In addition, the collateral damage and disruption caused by landslides substantially inhibit short- and medium-term relief efforts by blocking or destroying transport corridors and communications (Bird and Bommer, 2004; Pellicani et al., 2014; Robinson et al., 2015). The assessment of landslide extent and impacts, beyond direct observations on the ground (Collins and Jibson, 2015; Tiwari et al., 2017), relies on the following three approaches: (1) empirical modeling, which uses a combination of pre-earthquake topographic data and information on ground motion and shaking intensity; (2) manual landslide mapping; and (3) automated landslide mapping. The last two use post-earthquake airborne or satellite remote sensing. The main outputs from these assessments are maps of landslide locations, extents, and densities, the humanitarian value of which is widely recognized (e.g., Goodchild, 2007).
Each approach has specific data requirements, with the capture and appraisal of those data resulting in an inevitable latency between the event and the release of information (UN-SPIDER, 2017; Fleischhauer et al., 2017). For manual mapping, the speed of information production is influenced by the nature of the landslides themselves, the data quality, and choices about what and how to map (Joyce et al., 2009). Although critical for defining the speed of the assessment, those choices have not previously been described or evaluated with respect to the timescales of the information needs of those on the ground. However, the potential value is clear: if available within a very short time frame (hours to days), information on landsliding can be highly beneficial.
Recently, considerable gains have been made in the capture of satellite imagery used for landslide assessment, particularly in terms of (1) the resolution and bandwidth of the sensors used, (2) the spatial and temporal coverage, and (3) the ease of access via online repositories (Voigt et al., 2016). However, no single automated method exists to map landslides in rapid response assessments due to the complexities and variability between earthquakes in different locations (Casagli et al., 2016), resulting in uncertainty regarding the type and timeliness of information that is useful to produce. Standards or guidelines for satellite-based emergency mapping (SEM) have been developed for some hazards, such as flooding (UN-SPIDER, 2017; Voigt et al., 2016), and mechanisms such as the EU Copernicus Emergency Management Service have provided specifications for the creation of rapid mapping products after disasters, including landslides. Despite these advances, clear and widely accepted guidelines for humanitarian-facing landslide assessments have not yet been developed, yet are essential for defining open, constructive, and ethical approaches to SEM.
While many satellite operators have tasked rapid image capture of earthquake-affected areas, either on humanitarian grounds via established international frameworks (e.g., the International Charter on Space and Major Disasters) or for commercial ends (Joyce et al., 2009), the use of these data is not necessarily coordinated. For example, commercial satellite imagery at submeter resolution was released for the benefit of the response to the 2010 Haiti earthquake (Harp et al., 2011). Over 300 map products were created within 2 weeks by a plethora of agencies, each using different procedures and standards (Duda and Jones, 2011; UN-SPIDER, 2017; Voigt et al., 2016). Uncoordinated mapping efforts undertaken with different objectives, and for different end users, can result in a duplication of effort and may cause confusion and data saturation amongst the humanitarian response community. This has the potential to produce an incomplete and inconsistent assessment of humanitarian need (IASC, 2012). In the longer term, these initiatives can result in multiple inventories for the same event, further adding to the confusion. For example, Xu (2015) described eight separate landslide inventories compiled after the 2008 Wenchuan earthquake in China. After the 2015 Nepal earthquakes, there was a 5-fold increase in landslide numbers between the inventories reported by Kargel et al. (2016; 4312), Martha et al. (2016; 15 551), Roback et al. (2017; 24 915), and Tiwari et al. (2017; 14 670). While some of these inventories were created in the immediate aftermath of the disaster, their use for scientific purposes nevertheless assumes complete coverage of the affected area. The resolution of mapping and the approach taken should therefore be stated clearly alongside the purpose of the inventory.
Previous research has defined appropriate scientific methods for coseismic landslide mapping (e.g., Gorum et al., 2011; Harp et al., 2011; Wasowski et al., 2011; Guzzetti et al., 2012), with some organizations, such as UNITAR/UNOSAT and EU Copernicus, requesting feedback from end users. However, there remains an absence of readily available information on what is actually useful for decision makers who are tasked with dealing with an earthquake and its cascading hazards, particularly where rapid response times are key. Underpinning the effort we describe below is the broad time frame of a humanitarian disaster response, based upon United Nations disaster response protocols. Central to this is the Humanitarian Needs Assessment, which aims to “provide fundamental information on the needs of affected populations and to support the identification of strategic humanitarian priorities” (IASC, 2012, p. 4). This approach to disaster response starts immediately after an earthquake and comprises a Situation Analysis (completed within 72 h) and a Multi-Sector/Cluster Initial Rapid Assessment (MIRA) report (completed within 2 weeks; IASC, 2015). During the first phase, emphasis is placed on obtaining pre- and post-crisis data to determine the disaster extent and scale. This phase “balances the need for accuracy and detail with the need for speed and timeliness” (OCHA, 2013) and forms the basis of the mapping approach described below. The UN approach emphasizes the need for information that is fit for purpose, such that superfluous detail and precision are actively discouraged (OCHA, 2013).
While coseismic landslide inventories created for academic research are slowly and painstakingly collected, this approach is likely to be inconsistent with the requirements for rapid, widespread coverage and the identification of broad areas of concern. The need is therefore to identify the areal extent and location of landsliding (scale and intensity), assess how landsliding intersects with the location of people and infrastructure (impacts), and appraise the residual risks from induced hazards (priorities), such as existing or potential landslide dams. These needs must be balanced against the type and timeliness of information that can be produced. Post-earthquake end users of landslide information can be numerous, with complex responsibilities, requirements, and information needs. These requirements are also highly dynamic, often shifting from a broad-scale impact assessment to increasingly local-level detail over a matter of days, and are therefore challenging to satisfy through SEM (Voigt et al., 2016). As a consequence, the utility of particular forms of information evolves from the initial response to the early recovery. Importantly, the time necessary to produce some forms of information may render them redundant in the context of the initial response and therefore unnecessary to produce rapidly.
Here we examine these general issues by focusing on the case of the 2015 Gorkha earthquake and its aftershocks, which triggered thousands of landslides in Nepal. Given the steep terrain, the large rural population, and reported initial shaking intensities in Nepal, the potential for landslide-induced losses as a result of the 2015 earthquakes was quickly recognized (e.g., Gallen et al., 2016; Robinson et al., 2017). We reflect upon a rapid landslide assessment that was undertaken over the first 2 months after the earthquake and efforts to disseminate the findings to potential end users in Nepal and elsewhere. We consider the benefits and time needed for various assessments of landsliding that range from rapid appraisal to a full inventory, enabling an evaluation of the approaches that can effectively inform critical decision-making. We also consider the methods that we applied to expedite the generation of usable outputs, which were often at odds with the practices associated with collating a formal scientific landslide inventory. We close by offering recommendations for conducting future humanitarian need-driven rapid landslide assessments following a large earthquake.
Our mapping efforts were undertaken by a group of five analysts from Durham
University and three from the British Geological Survey (BGS), with
experience of conducting landslide research in Nepal or similar terrains.
The assessments fed information to, and were guided by, the needs of
humanitarian actors in Nepal, including the UN Resident Coordinator's Office
in Kathmandu and members of the Nepal Risk Reduction Consortium (NRRC), as
well as the Cabinet Office Briefing Room (COBR), the Scientific Advisory
Group for Emergencies (SAGE), the Foreign and Commonwealth Office (FCO), and
DFID (Department for International Development) in the UK. Contacts in Nepal
were well established because of a long-term collaborative project,
Earthquakes without Frontiers.
The mainshock, which generated the majority of landslides (Martha et al.,
2016; Roback et al., 2017), occurred on the Main Himalayan Thrust (MHT) with
a moment magnitude of Mw 7.8.
Decision tree for prioritizing imagery used by Durham University for landslide mapping after the 2015 Gorkha earthquake. The relative importance of criteria decreases from left to right. Datasets were prioritized if they were efficient to pre-process and provided high-resolution data optimal for mapping. Imagery with large swath widths and acceptable off-nadir angles may be difficult to acquire in mountainous terrain. These criteria were therefore prioritized to reduce the time spent georeferencing and the number of images required. Given the submeter resolution of VHR imagery and the ability to pan-sharpen multispectral imagery, most image resolutions are now sufficient to map landslides with the potential to cause significant damage. Spectral resolution was therefore considered as a more useful criterion for distinguishing landslides of this type than spatial resolution. This decision tree may also be applied to image selection for automated landslide mapping.
Landslides are most identifiable in optical satellite images under daytime conditions with minimal shadow and cloud, captured at a time of year when vegetation and landslides produce a sharp radiometric contrast. In practice, such conditions rarely coincide. Given that landslides typically occur in steep and mountainous regions, often following prolonged rainfall, the potential for cloud cover in imagery is a key consideration for associated SEM. The Nepal Himalayas, for example, are obscured by cloud between mid-June and mid-September each year, during which time an estimated 90 % of annual fatal landsliding occurs (Petley et al., 2007). Landslide inventories conventionally draw on a full catalogue of imagery compiled before mapping begins to ensure consistent coverage of the entire area (Harp et al., 2011). Ideally, all images are collected by a single sensor, providing consistent spatial, spectral, and radiometric resolution appropriate for the type of landsliding under investigation. A key challenge of time-critical SEM responses is the selection of the most effective imagery for mapping. This selection must be made before complete knowledge of post-earthquake imagery can be acquired and usually before the general spatial distribution of landsliding is known. Most commonly, imagery from a variety of sensors is captured iteratively and is distributed across multiple on- and offline repositories and platforms. Efficient mapping from these data requires a method for selecting the most “useful” images, which demands that attributes such as the minimum swath width, maximum topographic distortion, and desired spatial, spectral, and radiometric resolutions are defined. The nature of the terrain, the ground cover, and the style of landsliding therefore holds considerable influence over the necessary requirements of imagery that is useful for mapping.
Consequently, as part of our effort, a protocol for prioritizing imagery from which to map was developed (Fig. 1). It quickly became apparent that, given the number and spatial extent of landslides and the need for mapping consistency, beginning to map from a new image committed one mapper to a considerable amount of time. During this time, it was increasingly probable that better imagery of the same area would become available. Imagery was therefore prioritized by three criteria: (1) the platform and hence speed with which the imagery could be handled and analyzed; (2) characteristics of the imagery, including cloud cover and geometric distortion; and (3) the spatial and spectral resolution, as well as the swath width. These criteria were used to develop a decision-tree structure for efficient image selection that is described in Fig. 1.
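For illustration, the three prioritization criteria above can be encoded as a sortable key, so that candidate images fall into the order implied by the decision tree. The field names, thresholds, and example values below are hypothetical and are not part of the protocol in Fig. 1:

```python
from dataclasses import dataclass

@dataclass
class Image:
    """Candidate satellite image; field names are illustrative."""
    preprocessed: bool      # served via an online platform, no local georeferencing
    cloud_fraction: float   # 0.0 (clear) to 1.0 (fully obscured)
    off_nadir_deg: float    # sensor incidence angle off-nadir, degrees
    has_nir: bool           # near-infrared band available
    gsd_m: float            # ground sample distance (spatial resolution), metres

def priority(img, max_cloud=0.5, max_off_nadir=30.0):
    """Sortable key: lower tuples sort first (higher priority).
    Criteria are ordered as in the text: platform handling speed,
    image quality (cloud and distortion), then spectral before spatial."""
    usable = img.cloud_fraction <= max_cloud and img.off_nadir_deg <= max_off_nadir
    return (
        0 if img.preprocessed else 1,   # criterion 1: speed of handling
        0 if usable else 1,             # criterion 2: cloud cover and geometry
        0 if img.has_nir else 1,        # criterion 3a: spectral resolution first
        img.gsd_m,                      # criterion 3b: finer resolution breaks ties
    )

candidates = [
    Image(False, 0.1, 10, True, 0.5),   # clear VHR scene, but raw download
    Image(True, 0.3, 20, True, 2.0),    # coarser but served pre-processed
    Image(True, 0.7, 35, False, 0.5),   # cloudy, high off-nadir, no NIR
]
best_first = sorted(candidates, key=priority)
```

Under this ordering a pre-processed, moderately cloudy image outranks a clearer scene that would first need downloading and georeferencing, which matches the rationale given above.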
Efficient mapping requires a platform for quick navigation and mapping of large quantities of images or a way of bypassing the need for georeferencing. The image source, and hence the platform, influenced which images were prioritized due to the relative ease with which mapping could be conducted as compared to downloading, pre-processing, and mapping from raw imagery. While this made the mapping more fragmented, the mapping time was substantially reduced.
Two platforms were employed for image interpretation: ESRI's ArcMap and
Google Earth. To reduce georeferencing times, we used imagery served
directly through DigitalGlobe's online platform, avoiding the need to
download and pre-process raw files. The use of both ArcMap and Google Earth
allowed each image to be mapped on whichever platform gave the most
efficient access to it.
The second criterion related to the quality of imagery and was determined primarily by the degree of cloud cover as well as the sensor incidence angle off-nadir. Imagery with minimal cloud cover was prioritized in order to observe as much of the ground as possible within a short period of time and to minimize the time spent on georeferencing. None of the post-earthquake images were completely cloud-free and so mapping was undertaken from multiple images wherever practicable in order to develop a mosaic of coverage. It was especially important to distinguish unmapped areas obscured by cloud cover from mapped areas with no landslides. The angle off-nadir was considered because georeferencing time increased (and accuracy decreased) with increasing angle. Critically for earthquake-triggered landslides, initial data acquisition is commonly focused at the published epicenter rather than across the full extent of ground shaking. During the initial phases of the response, satellites were tasked to capture images centered on the epicentral region that lay south and west of the most intensive areas of landsliding further to the north. Images to the north and east were therefore captured with relatively high incidence angle off-nadir. This resulted in significant topographic occlusion and image distortion, exacerbated by the steep topography (Roback et al., 2017).
Given the prevalence of cloud cover and off-nadir viewing angles, imagery
was drawn upon from a wide range of sensors, including Cartosat, DMCii,
EO-1, GeoEye, Landsat, Pléiades, RapidEye, SPOT, and WorldView. Based
upon the mountainous areas of Nepal that experienced moderate to severe
shaking, as estimated by ShakeMap, the area of shaking sufficient to trigger
landslides was approximated at 35 000 km².
Spectral resolution and contrast were also used in selecting suitable images. Given our observation that most landslides were shallow and comprised rockfalls and shallow rockslides, spectral resolution and, in particular, the presence of a near-infrared (NIR) band were of considerable importance in landslide mapping. These were prioritized over spatial resolution as long as the latter remained commensurate with the size of landslides. In the case of WorldView-2 and WorldView-3, although panchromatic imagery provides greater spatial resolution, the ability to distinguish vegetation from freshly exposed bedrock and regolith in landslide scars was reduced due to the lack of multispectral imagery.
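The value of an NIR band for separating vegetation from fresh landslide scars rests on the normalized difference vegetation index (NDVI): healthy vegetation reflects strongly in the NIR, while bare rock and regolith do not. A minimal sketch, using illustrative reflectance values rather than measurements from any real scene:

```python
def ndvi(nir, red):
    """Normalized difference vegetation index for a single pixel.
    Dense vegetation gives NDVI of roughly +0.6 to +0.9; freshly exposed
    bedrock or regolith in a landslide scar gives values near zero."""
    total = nir + red
    return (nir - red) / total if total else 0.0

# Assumed reflectances, chosen only to illustrate the contrast:
vegetated = ndvi(nir=0.45, red=0.06)   # forest canopy
scar      = ndvi(nir=0.25, red=0.22)   # fresh landslide scar
```

The sharp difference between the two values is what makes a multispectral image with an NIR band more useful for this style of mapping than a finer-resolution panchromatic one.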
The final criterion was the spatial resolution of imagery. Most
large landslides were identifiable across the range of available spatial
resolutions, making this the least restrictive of the three criteria (Fig. 1).
For consistency, most landslide inventories adopt a single method of
landslide delineation (i.e., as points, polylines, or polygons), depending
upon the type of output and the scale of the event. It is also common to
identify individual landslides rather than delineate areas impacted by
multiple landslides (Guzzetti et al., 2012; Marc and Hovius, 2015). In
global landslide databases (e.g., Kirschbaum et al., 2010; Petley, 2012) and
many coseismic landslide inventories, landslides are specified as point
features as an efficient means to locate and count large numbers of
landslides (Kargel et al., 2016; Tiwari et al., 2017). Regional- to
local-scale landslide inventory maps tend to document landslides as
polygons, which can be used to understand impact zones or to separate source
from deposit (Guzzetti, 2004; Guzzetti et al., 2012). Polygons are required
where assessments of landslide area and volume, sediment yield, or
connectivity of landslide deposits to the fluvial network are needed
(e.g., Roback et al., 2017). The focus at the BGS was on mapping polygons, while
the initial focus of the Durham effort was the collection of point data,
which were subsequently expanded to polylines. The decision to collect point
data at Durham was based on the need for rapid analysis and the large
numbers of landslides involved.
Timeline of image acquisition, mapping, disaster reports, and other earthquake damage assessments from 25 April 2015. Earthquake timing is also added alongside the approximate onset of the monsoon on 10 June (46 days after the Gorkha earthquake). The timing of the OCHA On-site Operations Coordination Centre (OSOCC) Situation Analysis reports and the Nepal Government's Post Disaster Needs Assessment (PDNA) is added alongside the proposed timings of the Situation Analysis and MIRA report as defined by IASC (2015). No MIRA report was created following the Nepal earthquakes due to logistical difficulties in organizing its creation and physical access constraints (ECHO, 2015). The timeline is nonlinear, with each vertical line representing 1 day.
The chronology of selected image release, cloud cover, mapping, and released reports is provided in Fig. 2. Within 48 h of the 25 April mainshock, initial estimates of the likely geographical distribution of landslides were based upon the outputs of the USGS ShakeMap and a limited number of reports from the ground (e.g., via social media). Although this provided a first-order approximation of potential landslide locations, coseismic landsliding is determined by the interactions between topography, ground shaking, and local site geology (Meunier et al., 2008; Parker et al., 2015; Marc et al., 2016). Empirical landslide susceptibility models (Gallen et al., 2016; Parker et al., 2017; Robinson et al., 2017) provided probabilistic estimates of the likelihood of a landslide at any point in space within the affected area. These models predicted that landslide probabilities were high but also variable across the affected districts, especially in the middle to high Himalayas north and east of the epicenter where topographic relief increases, but where population densities remain high. Estimates provided by the USGS ShakeMap, upon which such models rely, underwent several refinements within the first 48 h, resulting in minor alterations to model predictions, but the overall spatial distribution of relative landslide density remained unchanged. Comparisons between predicted landslide density and observed landslide density have since highlighted some important discrepancies (Gallen et al., 2016), including an overestimation of landsliding to the south of Kathmandu in the Sivalik Hills.
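Empirical susceptibility models of the kind cited above combine topographic and shaking predictors into a probability per cell. The logistic sketch below is a caricature for illustration only; the predictors and coefficients are placeholders, not the fitted values of Gallen et al. (2016) or Robinson et al. (2017):

```python
import math

def landslide_probability(slope_deg, pga_g, b0=-7.5, b_slope=0.12, b_pga=6.0):
    """Toy logistic susceptibility model: probability of landsliding in a
    grid cell given hillslope angle (degrees) and peak ground acceleration
    (g, e.g. from ShakeMap). All coefficients are illustrative placeholders."""
    z = b0 + b_slope * slope_deg + b_pga * pga_g
    return 1.0 / (1.0 + math.exp(-z))

# Steep, strongly shaken slope vs. gentle, weakly shaken slope
p_high = landslide_probability(slope_deg=40, pga_g=0.6)
p_low  = landslide_probability(slope_deg=10, pga_g=0.1)
```

Because the model is driven by the ShakeMap input, refinements to the shaking estimate propagate directly into the predicted probabilities, which is why the early revisions noted above altered the model outputs only modestly.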
Prior to 2 May, cloud cover limited the availability of useable imagery
across the entire affected area. During this period, two approaches were
undertaken to locate landslides and to prioritize areas for mapping once
cloud-free imagery became available. Estimates of landslide location and
qualitative size (small, medium, large) were collated from photographs and
footage posted on social media and, later, from airborne video from the news
media. Although only a small fraction of landslides could be located in this
way, these observations gave an early indication of landslide style and
intensity and helped to prioritize areas for subsequent mapping.
From 2 May onwards, more frequent small breaks in cloud cover provided useful image coverage in a limited but increasing number of locations. Cloud cover was often concentrated around high elevation topography, leaving valley bottoms visible. Mapping of individual landslides therefore focused in areas proximal to the channel network and lower elevation slopes to survey for landslide dams, similar to those triggered by the 2008 Wenchuan earthquake (Cui et al., 2009; Xu et al., 2014).
Extract from landslide impacts map released on 4 May 2015, 9 days after the Gorkha earthquake and 2 days after cloud cover recession. Orange dots represent the location of observed individual landslides, at the point at which they reached the valley base. Red dots represent potential valley blocking landslides that had the potential to inhibit river flow, posing a future breach risk downstream. Red lines represent valleys identified as having experienced very intense landsliding, predominantly rockfall and dry debris flows. The black line delimits the southern limit of the area of intense landsliding; it is drawn solid where this limit was observed and dashed where it was inferred, as it was not visible in imagery. Both the 25 April (Gorkha) and 12 May (Dolakha) epicenters are added to this map for reference, despite its release prior to the Dolakha earthquake.
In order to rapidly map as large an area as possible, and due to cloud cover on higher ground, each landslide was initially marked as a single point at the toe, where the risk to infrastructure and likelihood of valley blocking was greatest. The imagery that was available during this phase had generally high off-nadir viewing angles and so geolocation errors after orthorectification were lower close to valley bottoms. In instances where the landslide toe ran out to but did not block the channel network, a “yes/no” attribute was added describing the potential for the deposit to block the valley. In instances where upstream pooling of water and a restricted flow downstream were identified, indicating blockage, a separate valley-blocking marker was created (Fig. 3). These locations were fed to the USGS for visual inspection as part of their assessment of present and future landslide hazards (Collins and Jibson, 2015).
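Records of this kind might be structured, for example, as GeoJSON point features carrying the blocking attributes as properties. The attribute names and coordinates below are illustrative, not the schema actually used in the response:

```python
import json

def landslide_point(lon, lat, reaches_channel, blocks_valley):
    """Build a GeoJSON feature for one landslide, marked at its toe.
    Property names are hypothetical examples, not a published schema."""
    return {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [lon, lat]},
        "properties": {
            "reaches_channel": "yes" if reaches_channel else "no",
            "blocks_valley": "yes" if blocks_valley else "no",
        },
    }

# Hypothetical coordinates within the affected region
feature = landslide_point(85.3, 28.1, reaches_channel=True, blocks_valley=False)
serialized = json.dumps(feature)  # ready to share as a .geojson record
```

A structure like this keeps the rapid point data machine-readable, so downstream users (for example, agencies screening for dam hazards) can filter on the blocking attributes directly.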
Valleys with particularly intense landsliding were recorded with a polyline
running up river from the southernmost visible extent of landsliding (Fig. 3).
The aim of this was to delineate the southernmost limit of major
landslide disruption, and hence the likely northern limit of unimpeded road
access, using the predominantly north–south-oriented drainage network.
This was mapped as a solid line where the limit was observed and a dashed
line where the limit was inferred in the absence of imagery. Subsequent
mapping showed this line to be an accurate estimate, with the area of
intense landsliding remaining to the north of it.
As increasingly cloud-free imagery became available, manual mapping speeds
increased. Landslides were subsequently identified with polylines to provide
an attribute of scale and to define where landslides intersected
infrastructure, such as roads. A record of areas mapped and areas obscured
by cloud was maintained. Mapping using VHR imagery identified that the
majority of coseismic landslides were narrow relative to their length, such
that a polyline traced from crown to toe captured both location and scale.
Our accompanying notes (an example of which is provided in Table 1)
summarized the key observations, the methods used, and key messages about
the intensity, locations, and general risks posed by these landslides. The
maps and underpinning data were disseminated as Google Earth (KML) files.
Approximately 5600 coseismic landslides were identified in the affected
area by 18 June, 42 days after the earthquake, recorded as a mixture of
points and polylines.
Extract from map released on 7 May 2015, 12 days after the Gorkha
earthquake. Colored zone shows landslide distribution and relative intensity
(number of landslides per km²).
Extract from map released on 21 May, 9 days after the Dolakha earthquake. Due to cloud cover and image acquisition, this map did not include landslides that occurred following the Dolakha earthquake.
Example of notes that accompanied the map released on 18 June, an extract from which is presented in Fig. 6.
Extract from map released on 19 June 2015 containing landslide data
from both earthquakes, comprising the approximately 5600 landslides
identified by that date.
Comparing our rapidly derived inventory with subsequent, independently
collated inventories (Martha et al., 2016; Roback et al., 2017; Tiwari et al.,
2017) shows that our inventory underestimated the total number of landslides
by up to a factor of about 4.5 relative to the most complete inventory
(Roback et al., 2017), reflecting our emphasis on speed and broad coverage
over completeness.
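The scale of this underestimate follows directly from the inventory counts already cited in this paper (the rapid count of roughly 5600 landslides against the four later inventories):

```python
# Completeness of the rapid inventory relative to the later scientific
# inventories cited in the text. All counts are taken from this paper.
rapid = 5600
later = {
    "Kargel et al. (2016)": 4312,
    "Martha et al. (2016)": 15551,
    "Roback et al. (2017)": 24915,
    "Tiwari et al. (2017)": 14670,
}
completeness = {name: rapid / n for name, n in later.items()}
# Relative to Roback et al. (2017): 5600 / 24915 is roughly 0.22, i.e. a
# ~4.5-fold undercount; note the rapid count actually exceeded the total
# reported by Kargel et al. (2016).
```

The spread of these ratios also illustrates the broader point made earlier: even "complete" inventories of the same event differ several-fold, so completeness can only be stated relative to a named inventory.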
The approach was heavily determined by the scale of the rupture and the
presence of cloud cover in the run up to the South Asian monsoon, both of
which necessitated the collection of a considerable number of images and a
means of prioritizing them. In drier regions, or following earthquakes or
rainfall that affect a much smaller area, the chronological order of outputs
is unlikely to change. However, the offset in timing between initial
landslide models and the mapping of landslides using either radar or optical
satellite imagery is likely to decrease. The 2016 Kaikoura earthquake, New
Zealand, ruptured a zone approximately 200 km in length and provides a
useful point of comparison in this respect.
Generating a useful assessment of landsliding immediately after an earthquake remains challenging due to a lack of clarity around what information is possible to acquire under severe time constraints and what information is actually useful (Robinson et al., 2017). Our mapping effort showed that delays in information production can occur due to image availability, image quality, cloud cover, and the time taken to handle and map from imagery once it became available. While some clarity on increasing the speed of these processes can be provided via reflections such as this, pertinent information is inevitably unique to each earthquake and its sociopolitical context. At the highest level, information on landsliding within the first 72 h can help to define the scale, extent, and distribution of landslide impacts across the entire affected area, particularly if this area is otherwise inaccessible. Given the delays in image capture and mapping, full landslide mapping for an event on the scale of the Gorkha earthquake or larger is impossible to achieve within this 72 h time frame. However, as the number and exact location of all landslides is not important to disaster managers at this stage of a response (OCHA, 2013; IASC, 2015), a faster approach is preferable.
Robinson et al. (2017) explored the merits of seeding an empirical landslide
model with the initial outputs from rapid post-earthquake mapping efforts,
such as our initial attempts (Fig. 3). They found that small numbers
of confirmed landslide observations can markedly improve the modeled
spatial distribution of landsliding.
A clear exception to this finding is in assessing the imminent potential for secondary hazards posed by landslide dams (e.g., Cui et al., 2009; Kargel et al., 2016). It is widely recognized that landslide dams typically fail soon after formation, with 41 % failing within 1 week (Costa and Schuster, 1987). Rapid assessment to inform the management of this risk is therefore vital. However, features indicative of progressive failure, such as widening tension cracks, are too small to be visible in even the highest-resolution satellite imagery, and so SEM is mostly valuable for locating and low-resolution monitoring of landslide dams. An appraisal of the risk that they pose is best undertaken on the ground.
Our findings suggest that there is potential additional value in informing post-earthquake landslide mapping efforts to target medium- to longer-term information needs, as well as the immediate response. The transition from disaster response to recovery can occur over a matter of days, and while some information gathered in the immediate earthquake aftermath may not be instantly useful, it may become valuable for later decisions. For example, given that earthquakes elevate landslide hazard for sustained periods of time (e.g., Marc et al., 2015), continually updating coseismic landslide maps to assess how the hazard evolves is potentially of great value, yet is rarely undertaken. In the aftermath of the Nepal earthquake, there were 46 days between the mainshock and the first rainfall-induced fatal landslide of the monsoon. Detailed mapping that describes individual landslides and the potential for remobilization is invaluable in assessing risks during future monsoons. However, as such uses require a high level of local detail and precision, mapping must be accurate, which can be difficult to achieve within limited time frames. Defining the aim and output of responsive mapping is therefore vital to establish the data that must be collected.
It is equally clear that there is no need to wait until an earthquake occurs to begin defining, with those charged with managing the response, what information could be useful. Scenarios or planning exercises are widely used to prepare those involved in disaster response (Davies et al., 2015) and could be extended to consider coseismic landslide hazard assessments to define what information can be provided and when. This process would be of value to end users, but also to those producing landslide assessments to ensure that aims are realistic and defined by needs. Similar discussions for other forms of geohazard have benefitted from protocols and guidelines that aim to standardize approaches, outputs, and procedures (UN-SPIDER, 2017). Groups such as the CEOS Working Group on Disasters and the UN-SPIDER IWG-SME are vital frameworks for establishing these technical, practical, and ethical guidelines on SEM for coseismic landslide assessment.
In circumstances where mapping individual landslides is of value, the choice of whether to digitize points, polylines, or polygons is an important consideration. The choice must be based on the extent of the mapping area, the time available for mapping, and the number of landslides to map. However, estimating the number and extent of landslides in the immediate aftermath of a disaster is complex, and the choice of digitization technique must be open to change in response to reasonable assumptions about the nature of the event. This decision is also based on the desired outputs and the scale on which they will be used. The reliability of the geometrical data provided by polygons, while beneficial, is highly sensitive to the accuracy and consistency of image orthorectification, which are challenging in steep terrain. We observed that, where a landslide spanned an altitudinal range of more than several hundred meters, the accuracy of results generated strongly depended upon the spatial resolution of the imagery and the sensor incidence angle. As a result, where multiple data sources are used and image resolution varies across the affected area, the number and size distributions of polygons also vary, leading to systematic inconsistencies in mapping. Coarser, and hence more rapid, methods of mapping are valuable for a rapid assessment of landslide impact across the whole earthquake-affected area but are less useful for understanding individual landslides. We found that polylines offered a compromise that retains some of the speed of mapping points but also enables an assessment of landslide size and intersection with features of interest, such as roads, buildings, or rivers.
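The scale attribute that a polyline adds over a point is, at its simplest, the digitized length. A stdlib-only sketch, using hypothetical vertices in a projected (metre-based) coordinate system:

```python
import math

def polyline_length_m(points):
    """Planar length of a digitized landslide polyline, given (x, y) vertices
    in a projected coordinate system with units of metres.
    The coordinates used below are illustrative, not mapped data."""
    return sum(
        math.hypot(x2 - x1, y2 - y1)
        for (x1, y1), (x2, y2) in zip(points, points[1:])
    )

# A hypothetical landslide digitized crown-to-toe as three segments
track = [(0, 0), (30, 120), (45, 260), (50, 400)]
length = polyline_length_m(track)  # a scale attribute a point cannot provide
```

The same vertex list can then be tested for intersection with road or river geometries, which is the second advantage of polylines noted above.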
Semiautomated and automated approaches to image segmentation hold potential for more time-efficient landslide mapping, with considerable success reported outside immediate post-disaster contexts (e.g., Tsai et al., 2010). However, discernible spectral changes across a landscape, upon which pixel-based segmentation depends, may only occur for failures within densely vegetated areas that have the potential to revegetate over short periods. A reliance upon spectral responses can also result in the misclassification of channel bank erosion and fluvial sedimentation, the misidentification of reactivations, and the division of large landslides into multiple fractions. While the increasing availability of VHR imagery directly enhances the accuracy of manual landslide mapping, the results of automated and semiautomated pixel-based methods applied to VHR imagery are susceptible to large spectral variance between pixels, creating intra-class variability, and are more sensitive to coregistration errors (Moine et al., 2009; Martha et al., 2010; Mondini et al., 2011). Object-based image analysis overcomes many of these issues by accounting for additional metrics such as color, texture, shape, and topography (Stumpf and Kerle, 2011), though the selection of useful object metrics is time intensive and varies from case to case. Both approaches are likely to benefit from the rich spectral information gathered by medium-resolution sensors, such as Sentinel-2, and from short revisit periods that provide access to pre-event datasets. However, while the speed gain of (semi-)automated methods over manual methods increases with the area to be mapped, larger areas also increase the reliance upon imagery from a variety of sensors. The application of semiautomated and automated mapping to imagery of variable characteristics and quality has yet to be reported.
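As a minimal illustration of the pixel-based spectral-change principle, the sketch below differences pre- and post-event NDVI on toy reflectance tiles. The band pairing follows Sentinel-2 conventions (B4 red, B8 NIR), but the values and the change threshold are assumptions for illustration and would need site-specific calibration; note that the pixel that was already bare before the event produces no signal, echoing the vegetation-dependence discussed above.

```python
import numpy as np

# Toy pre-/post-event surface reflectance tiles (rows x cols), values in [0, 1].
# Assumed band layout follows Sentinel-2: B4 = red, B8 = NIR.
red_pre  = np.array([[0.05, 0.05], [0.05, 0.30]])
nir_pre  = np.array([[0.45, 0.45], [0.45, 0.35]])
red_post = np.array([[0.05, 0.25], [0.05, 0.30]])
nir_post = np.array([[0.45, 0.30], [0.45, 0.35]])

def ndvi(nir, red):
    """Normalized difference vegetation index; eps avoids divide-by-zero."""
    return (nir - red) / (nir + red + 1e-9)

# Vegetation loss appears as a strong NDVI drop; -0.3 is an illustrative
# threshold, not a calibrated value.
d_ndvi = ndvi(nir_post, red_post) - ndvi(nir_pre, red_pre)
candidate = d_ndvi < -0.3   # boolean mask of candidate landslide pixels
```

In this toy case only the vegetated pixel that lost canopy is flagged; the already-bare pixel (bottom right) is invisible to the method, which is one reason purely spectral pixel-based approaches misclassify bank erosion, reactivations, and sediment deposits.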
Future research into the use of Sentinel-2 imagery is therefore required (Voigt et al., 2016), and these approaches may yield an important assessment that sits between landslide probability models and manual landslide mapping from optical imagery in the aftermath of a trigger event (e.g., Stumpf et al., 2017).
In instances where cloud cover is persistent, satellite-borne radar also has the potential to provide an assessment of large landslides prior to mapping from optical imagery. Large failures may be rapidly identified by significant morphological changes, such as shifts in the channel network. Alternatively, a large-scale shift in the dielectric constant of the slope, as vegetation is removed, may be detected through changes in the amplitude of the backscattered waves (Jin and Wang, 2009; Mondini, 2017). In this manner, SAR amplitude and intensity images have been used to map single landslides on the slope scale (Raspini et al., 2015; Plank et al., 2016) and, more recently, on the catchment scale following triggering events (Casagli et al., 2016; Mondini, 2017). However, SAR imagery requires considerable and complex pre-processing, and the accuracy of change detection is highly sensitive to the image acquisition geometry, which can be suboptimal in mountainous regions.
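A minimal sketch of the backscatter-change idea, assuming co-registered and speckle-filtered intensity tiles and an illustrative 3 dB threshold (neither the values nor the threshold are drawn from the cited studies):

```python
import numpy as np

# Toy co-registered SAR backscatter intensity tiles (linear units).
int_pre  = np.array([[0.8, 0.8], [0.8, 0.2]])
int_post = np.array([[0.8, 0.2], [0.8, 0.2]])

# Log-ratio change index in dB: large magnitudes flag strong backscatter
# change, e.g. where vegetation removal has altered a slope's dielectric
# properties. The 3 dB threshold is illustrative only.
log_ratio = 10.0 * np.log10(int_post / int_pre)
changed = np.abs(log_ratio) > 3.0
```

Only the pixel whose backscatter dropped sharply is flagged; in real data this simple log-ratio would additionally be confounded by speckle, layover, and shadow, which is why acquisition geometry matters so much in steep terrain.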
The time taken to produce outputs from our mapping campaign was most influenced by image availability, specifically that which was cloud-free over the area of interest. For this earthquake, the workload of five analysts appeared to yield a suitable balance between capacity, shared learning, and consistency, given the time frames to produce outputs. It was beneficial for all mappers to be in one laboratory, enabling easy coordination and communication to ensure coverage and consistency and to avoid replication. We were able to partition the earthquake-affected area into regions of interest for each mapper, and these regions were dynamically updated in response to the availability of high(er)-quality imagery. Given the increased capacity of the SEM community to develop map products in recent years, this partitioning represents an important phase in the coordination of multiple groups, thereby avoiding repetition and increasing the consistency of outputs (Voigt et al., 2016).
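The partitioning we describe can be as simple as tiling the affected area and distributing tiles across mappers. The sketch below (hypothetical tile names and a naive round-robin rule, not our actual scheme) shows one way to guarantee full coverage without replication; in practice the assignment would be re-run as new imagery arrives.

```python
# Illustrative tile assignment: split the affected area into tiles and
# round-robin them across mappers. Mapper initials and tile identifiers
# are hypothetical.
mappers = ["A", "B", "C", "D", "E"]
tiles = [f"tile_{i:02d}" for i in range(12)]

# Each mapper takes every len(mappers)-th tile, so every tile is assigned
# exactly once and workloads differ by at most one tile.
assignment = {m: tiles[i::len(mappers)] for i, m in enumerate(mappers)}
```

Re-running the comprehension with an updated `tiles` list (e.g. reordered by imagery quality) dynamically rebalances the regions of interest, mirroring how we updated mapper regions as higher-quality imagery became available.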
The introduction of larger satellite constellations with more advanced sensors also expedites the availability of imagery for future mapping campaigns, increasing the efficiency of post-disaster mapping (Voigt et al., 2016). For example, Sentinel-2 combines a large swath width (290 km) with a moderately high spatial resolution (10 m in the visible and NIR), which reduces the number of images, and thus the processing time, required to cover large areas. In addition, the shorter return period (5 days for Sentinel-2a and 2b combined, compared to 16 days for Landsat 8) increases the probability of observing the ground through gaps in any cloud cover, reducing the time needed to produce outputs. Our effort demonstrated that once imagery is available, mapping can be rapid (2 to 3 days), given suitable capacity. However, we have also found that it cannot be assumed that a landslide inventory or assessment can be generated as soon as an image is captured; assuming so raises unrealistic expectations among both those producing landslide assessments and those who could use them.
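The benefit of the shorter return period can be illustrated with a back-of-envelope calculation: if each overpass independently yields a usable cloud-free view with probability p (an idealizing assumption, since cloud cover is strongly autocorrelated in time), a 5-day revisit gives six chances within a 30-day window against one chance for a 16-day revisit.

```python
# Back-of-envelope illustration (not a result from this study): probability
# of at least one cloud-free acquisition within a time window, assuming
# independent overpasses.
def p_cloud_free(p_per_pass, revisit_days, window_days):
    passes = window_days // revisit_days
    return 1.0 - (1.0 - p_per_pass) ** passes

p = 0.3  # assumed per-pass probability of a usable cloud-free scene
sentinel2 = p_cloud_free(p, 5, 30)   # 6 passes -> ~0.88
landsat8  = p_cloud_free(p, 16, 30)  # 1 pass  -> 0.30
```

Even under this crude independence assumption the contrast is stark, which is consistent with our experience that image availability, not mapping speed, dominated the time to first outputs.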
The timeliness of an SEM landslide assessment must be considered relative to alternative sources of information. While each earthquake is different, multiple sources of information will become available to decision makers, primarily based upon networks collating human intelligence from those on the ground. In Nepal, nationwide systems capable of rapidly assessing the earthquake impacts included the networks of the military, the Red Cross, and local government. Such approaches can, however, be subjective, incomplete, inconsistent in coverage, and cumbersome to administer (OCHA, 2013; Datta et al., 2018). Inevitably, such assessments are also restricted to areas with functioning communications or to accessible parts of the road network, at least until systematic reconnaissance can be undertaken, which is itself highly contingent upon favorable weather and available resources. Consequently, some areas can remain isolated for days or weeks. For example, the Jhelum Valley in Pakistan after the 2005 Kashmir earthquake (Petley et al., 2006; Owen et al., 2008; Mahmood et al., 2015) and the Rasuwa and Upper Bhote Kosi valleys after the 2015 Nepal earthquakes were left isolated by landsliding, leaving the status of thousands of households largely unknown as the wider response effort gained pace.
Through the proliferation of mobile technologies, open-source mapping, and online GIS, an increasingly important role for social media and crowd-sourced data in disaster response is emerging (e.g., Zook et al., 2010; Fleischhauer et al., 2017). Following the Gorkha earthquake, crowd-sourced mapping campaigns, such as those initiated by Tomnod (with imagery from DigitalGlobe), emerged rapidly.
To date, however, crowd sourcing has not been employed to map coseismic landslides reliably. Landslide mapping requires pre- and post-earthquake datasets, knowledge of failure processes and mechanics, and an understanding of what can be observed given the spectral characteristics of the imagery. Research is needed into how best to support crowd-sourced mapping so that it generates reliable landslide maps and inventories, and into how lessons from compiling science-focused landslide inventories can feed into this process. In our campaign, we also benefited from insights from social media to identify and locate landslides in areas with persistent cloud cover. A combination of archived pre-earthquake imagery and reported locations allowed us to identify the exact hillslope that had failed at 20 locations, which were later verified by our formal mapping. A platform that permits this combination of data with more conventional mapping therefore offers an attractive means of collating and verifying landslide data.
Advances in collating landslide inventories, including crowd sourcing, and the key messages that can be distilled from their analysis are valuable for disaster response. However, key messages need to be articulated quickly and clearly, along with any associated limitations or uncertainties. The various means of landslide assessment discussed above are summarized in Table 2, which provides a chronology of outputs clarifying what we have found possible to achieve within the time frames of the UN Situation Analysis and MIRA report. The timescales of what is possible will vary between events, predominantly as a function of cloud cover for landslide mapping, but the suggested timescales in Table 2 are broadly independent of this. For example, following the first cloud-free imagery after the Gorkha earthquake, an initial landslide assessment and inventory was available within approximately 5 days, as reflected in the description of a full point inventory. The benefits and limitations of each output are included to detail what is and is not possible to conclude. Importantly, once a dataset is made available online, it remains publicly accessible for the foreseeable future. While this provides a good base for others to work from, care is needed in how and where data are shared and in how caveats and uncertainties, in particular the method used to generate the dataset, are communicated.
Based on our experience of communicating landslide assessments, each published output requires the following accompanying information: (1) a supporting narrative that describes the aims, assumptions, methods, and limitations of the data; (2) a high-level analysis of the key messages or conclusions that can and cannot be reached on the basis of the mapping; (3) a statement of intent for further work, so that end users can see how the work will evolve; and (4) a mechanism for feedback or exchange between mappers and end users. Unless these elements are made available, the output is likely to be either overlooked or used in ways that were not intended.
Timescales, benefits, and limitations of landslide-related outputs, based on response to a large continental earthquake in a mountainous region. Approximate timings described are based on experience of undertaking landslide assessment after the 2015 Gorkha earthquakes, and related studies, but will inevitably vary between events.
Based on our experiences of the 2015 Nepal earthquakes, we provide the following recommended approach to the manual mapping of large numbers of landslides:
(1) Choosing the best imagery, with sufficient spectral and spatial resolution, minimal topographic distortion, and continuous spatial coverage, is the key primary consideration prior to mapping. The area that has suffered shaking sufficient to trigger landsliding defines the extent over which imagery must be sought.
(2) There are significant gains to be made by combining manual mapping and empirical modeling of coseismic landsliding. Within the first 24 h, the outputs from empirical models are likely to provide a useful indication of the area impacted by landsliding, which can be used to guide subsequent mapping efforts. Such models can be verified relatively quickly by manually delimiting the area impacted by landsliding, without a need to map each individual failure. These should be examined alongside Copernicus Emergency Management Service reference maps. An online search for documented landsliding on the ground also provides useful information for targeting individual slopes. These information sources are particularly useful in locations where the mappers have no background knowledge of landsliding or baseline datasets and should be examined within 48 h of the earthquake.
(3) Pre-event imagery must be sought to ensure that only landslides triggered by the event, or those remobilized by it, are mapped. Medium-resolution (Sentinel-2 or Landsat 8) imagery is sufficient as a baseline dataset. High-resolution imagery made available in Google Earth may also prove useful, as long as the most recent image acquisition occurred after previous regional meteorological events, such as the South Asian monsoon.
(4) Preliminary outputs, which precede a full inventory and can be produced much more quickly, can be of value to disaster managers on the ground. These include the locations of valley-blocking events, areas of severe landsliding, and other general observations. Where available, high-resolution imagery from tasked sensors should be used in the first instance to identify valley-blocking events as each image tile is made available. However, given that the initial focus of such imagery is likely to be on urban centers and the epicenter, it cannot be assumed that these first datasets will cover the total area affected by landsliding. Once medium-resolution imagery covering a larger area is made available, it can be used to manually identify valley-blocking events over the entire area within hours. Only once a valley-blocking event has been breached should its monitoring be discontinued. Areas of severe landsliding should be noted during searches for valley-blocking events. Details of the most severely affected valleys and the approximate region affected by landsliding should be disseminated quickly; this need not constitute a formal map product.
(5) Selecting the most suitable mapping platform requires weighing the speed of access to data against the ease with which mapping can be undertaken. Once the above stages are complete, formal mapping of individual landslides can begin. The available mapping platforms should be assessed and a consistent protocol established amongst those involved. If imagery is available through platforms such as Google Crisis, these have the advantage of removing the need for imagery download and processing, but they can entail delays in obtaining access to the latest imagery. Such platforms also allow pre- and post-event imagery to be compared and then overlaid on a terrain model.
(6) The chosen mapping method has a significant impact on the time needed to map large numbers of landslides. If time is limited, mapping landslides as points is advantageous. A map of landslide points, significantly affected valleys, and the area within which points are found should be possible within 1 to 3 days of the first medium-resolution imagery. This is equivalent to the creation of Copernicus Emergency Management Service delineation maps, which provide an assessment of the extent of the event. The highest-resolution data may not always be the most appropriate for wide-area mapping of landslides. From our experience, medium-resolution imagery such as Sentinel-2 and Landsat 8 currently provides a good balance between image footprint size and coverage and spatial and spectral resolution. Sentinel-2 imagery has a frequent return interval (5 days), increasing the probability of image availability in the days after an event and providing recent pre-event imagery. High-resolution imagery, which tends to have a smaller footprint, is best used where incidence angles are close to nadir.
(7) Outputs should be open access and clearly explained. Maps should be made available in open formats, alongside a description of the methods, limitations, and key messages. Accompanying vector data should also be provided, given that much of their value lies in the ability to overlay them with other data, such as assets or infrastructure. Where possible, feedback from those using the data on the ground is valuable.
(8) If there is a continued need for more granular detail, landslides should be individually delineated using polylines, as a compromise between speed and detail relative to points and polygons. Polylines enable the magnitude of events to be approximated and can be combined with infrastructure data to identify events that may have blocked or damaged highways. In some developing regions, vector data are likely to improve with time following the event due to crowd-sourced mapping initiatives. Polyline mapping of the area can potentially be completed, and a map product provided, within approximately 1 week. Maps of the number of landslides per unit area (density of landsliding) are useful indicators of the extent and spatial distribution of relative landslide intensity, and any accompanying landslide vector data should be made available.
(9) Polygons are recommended for mapping landslides only if capacity permits and where imagery is suitable. Where imagery is subject to high levels of topographic distortion and therefore poor registration, there is little gain, from either a scientific or a risk reduction perspective, in meticulously mapping landslide extents with polygons. The time required to produce these data is also highly likely to exceed the time frame within which they are needed to inform the initial disaster response. Small numbers of landslides mapped with polygons, distributed across the area delineated in the initial point-based mapping, could be useful as training datasets for landslide probability models and automated mapping (e.g., Stumpf et al., 2017). In such instances, this mapping should occur in parallel with all other mapping.
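The landslide-density maps recommended above can be derived directly from a point inventory. The sketch below (hypothetical point coordinates and an arbitrary 500 m cell size) bins points into a regular grid using only the Python standard library:

```python
from collections import Counter

# Hypothetical landslide points (x, y) in projected metres.
points = [(120, 80), (140, 60), (900, 950), (130, 75), (910, 940)]
cell = 500.0  # grid cell size in metres; choice depends on mapping scale

# Count landslides per grid cell; keys are (col, row) cell indices.
density = Counter((int(x // cell), int(y // cell)) for x, y in points)
```

The resulting counts (here three landslides in one cell, two in another) can be rasterized and symbolized as a density map, an output that communicates relative landslide intensity without requiring the completeness of a polygon inventory.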
In this paper, we have reflected on our experience of creating an inventory of coseismic landslides rapidly after the 2015 Nepal earthquakes. While scientific efforts to map coseismic landslides may aim to assess the hazard in an urgent manner to inform the humanitarian response, they are rarely completed rapidly enough to do so. As such, scientific efforts to generate useful information require recognition of what is both useful and practicable within the available time frame. We have demonstrated what can realistically be achieved, including the time critical decisions that need to be taken to expedite the mapping process. While any increase in the rate of image availability increases the likelihood of producing useful landslide assessments, the consideration of what is possible (given handling and processing constraints on mapping) and what is useful (given the priorities of end users responding to humanitarian crises) remains pertinent for other future events.
Our lessons can and should inform the approach and expectations of those who seek to produce rapid (days to months) coseismic landslide assessments and those who would benefit from using this information. There is clearly no requirement to wait until an earthquake occurs to begin conversations around what is or could be useful, and these conversations should involve scientists, government representatives, and humanitarian response teams. The efforts of UN-SPIDER and the CEOS Disaster Working Group are vital for ensuring coherence in the response to future earthquakes. With rapid advances in social media and accessible geospatial data, it is likely that future post-earthquake assessment will benefit from more systematic crowd-sourced data collection and integration.
The final landslide dataset is freely available as a shapefile
of polylines on the Humanitarian Data Exchange (
The authors declare that they have no conflict of interest.
The research was funded by NERC Urgency Grant NE/N007689/1, the NERC-ESRC Earthquakes without Frontiers project (NE/J01995X/1), GCRF grant NE/P016014/1, and the UK Department for International Development (DFID) as part of the Science for Humanitarian Emergencies and Resilience (SHEAR) program. This study has also been supported in part by the DIFeREns2 (2014-2019) COFUND scheme under the European Union's Seventh Framework Programme (no. 609412).
We are grateful to H. Bell and S. Whadcoat from Durham University, who provided assistance during the mapping. We thank the various agencies that made satellite imagery freely available through the International Charter on Space and Major Disasters. We also thank the United Nations Office for the Coordination of Humanitarian Affairs (UN-OCHA), the UN Resident Coordinator's Office in Kathmandu, and the DFID offices in London and Nepal, whose input in helping to define the timelines of useful information for emergency response greatly informed this study. Colm A. Jordan and Tom A. Dijkstra publish with the permission of the Executive Director of the British Geological Survey.
Edited by: Jean-Philippe Malet
Reviewed by: Odin Marc and one anonymous referee