Potential slab avalanche release area identification from estimated winter terrain: a multi-scale, fuzzy logic approach

Avalanche hazard assessment requires a very precise estimation of the release area, which still depends, to a large extent, on the expert judgement of avalanche specialists. Therefore, a new algorithm for the automated identification of potential avalanche release areas was developed. It overcomes some of the limitations of previous tools, which are currently not often applied in hazard mitigation practice. By introducing a multi-scale roughness parameter, fine-scale topography and its attenuation under snow influence are captured. This allows the assessment of snow influence on terrain morphology and, consequently, on potential release area size and location. The integration of a wind shelter index enables the user to define release area scenarios as a function of the prevailing wind direction or of single storm events. A case study illustrates the practical usefulness of this approach for the definition of release area scenarios under varying snow cover and wind conditions. A validation with historical data demonstrated an improved estimation of avalanche release areas. Our method outperforms a slope-based approach, in particular for more frequent avalanches; however, the application of the algorithm as a forecasting tool remains limited, as snowpack stability is not integrated. Future research should therefore focus on coupling the algorithm with snowpack conditions.


Introduction
Location and extent of avalanche starting zones are of crucial importance in order to correctly estimate the potential danger that avalanches pose to roads, railways and other infrastructure. In current engineering practice for hazard mapping and the planning of long-term mitigation measures, release area size is a key input in numerical models of avalanche dynamics such as RAMMS (Christen et al., 2010) or SAMOS-AT (Sampl and Granig, 2009; Sampl and Zwinger, 2004). These models allow the assessment of run-out, velocity, flow height or impact pressure of avalanches, and are especially important when historical data are sparse or completely lacking. Studies modelling extreme avalanches (Barbolini et al., 2000), as well as small and frequent avalanches (Dreier et al., 2014), have revealed high model sensitivity to changes in release area size and location. Therefore, a very precise definition of potential release areas is not only crucial for hazard mapping or long-term mitigation purposes, but is also a precondition for the successful integration of avalanche dynamics simulations in the planning of more short-term mitigation measures, such as road closures or ski resort safety, where small and frequent avalanches also pose a threat.

Modelling potential release areas
The complex task of release area estimation remains subject to the individual judgement of avalanche experts. It is based on terrain analysis, historical data, field visits and, for example, aerial photography of the winter surface. Nonetheless, with the increasing availability of geographic information systems (GISs) and digital elevation models (DEMs), the development of algorithms aimed at automating the process of release area definition has been attempted. Such algorithms are mainly based on terrain parameters, such as slope, aspect and curvature, derived from digital terrain models (DTMs) (Maggioni and Gruber, 2003; Bühler et al., 2012). Some authors use additional snow meteorological parameters such as the duration of snow cover (Pistocchi and Notarnicola, 2013) or the elevation of isotherms (Cia et al., 2014), or they introduce a qualitative roughness measure derived from aerial photographs (Ghinoi and Chung, 2005). An overview of the existing approaches and their corresponding terrain parameters is shown in Table 1.
These methods, however, have some severe shortcomings, which limit their use in practical applications.
- In general, coarse-scale elevation models (> 10 m) are used, limiting the application to the definition of extreme avalanche scenarios with very high return periods. One reason is that such coarse resolutions cannot capture surface roughness, such as rocks, smaller ridges or gullies. Whereas surface roughness is negligible in extreme situations, it is highly relevant for smaller, more frequent avalanches, which cause the vast majority of casualties in Switzerland today and threaten mountain transport routes and ski runs (Maggioni et al., 2012). Small-scale terrain features often delineate release areas with a shorter return period, where only parts of the whole potential starting zone are released. The algorithm of Bühler et al. (2013) partly overcomes this limitation by using a DTM resolution of 5 m, allowing finer scale terrain features to be captured.
- Small-scale terrain features are smoothed out with increasing snow accumulation, which can change the potential size and location of avalanche release areas. However, the influence of snow depth on terrain morphology is not taken into account by existing methods.
- Release area locations often vary depending on wind directions and their corresponding snow deposition patterns. Existing applications do not account for varying accumulation patterns due to variable storm directions.

Multi-scale terrain modelling
Generally, terrain parameters derived from gridded elevation models inherently depend on scale (scale tendency; Goodchild and Quattrochi, 1997). In a raster (pixel)-based geometry, scale incorporates two components: the measurement resolution (resolution of the DTM) and the measurement extent (window size over which a terrain parameter is calculated). One technique to calculate terrain parameters is to derive first- and second-order derivatives from a locally fitted surface around a point (or grid cell) of interest. For example, slope and aspect belong to the group of first-order derivatives, whereas plan and profile curvature correspond to second-order derivatives.
A widely adopted method for approximating the surface is the use of biquadratic polynomials of the form

z = ax^2 + by^2 + cxy + dx + ey + f, (1)

where z corresponds to the elevation estimate at a point (x, y) and a-f are the coefficients that define the quadratic surface (Evans, 1980). Originally, the six unknown parameters a-f are determined taking into account all grid values within a 3 × 3 window around the central cell. Wood (1996) expanded this concept to what he refers to as a "multi-scale quadratic parametrisation", by fitting the trend surface over any arbitrarily sized window. By varying the extent of the window size, a surface parameter can be calculated for varying scales without changing the resolution of the elevation model.
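As an illustration, the multi-scale fit of the quadratic trend surface z = ax^2 + by^2 + cxy + dx + ey + f over an arbitrarily sized window can be sketched with a least-squares fit (a minimal Python sketch for illustration only, not the R/RSAGA implementation used in the study; function and variable names are ours):

```python
import numpy as np

def fit_quadratic_surface(window, cellsize=1.0):
    """Least-squares fit of z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f
    over a square window of elevations centred on the cell of interest
    (Wood's multi-scale generalisation of Evans' 3x3 method).
    Returns the coefficients (a, b, c, d, e, f)."""
    n = window.shape[0]                       # window width in cells (odd)
    half = (n - 1) // 2
    coords = (np.arange(n) - half) * cellsize # local coordinates, origin at centre
    x, y = np.meshgrid(coords, coords)
    # design matrix for the six unknown coefficients
    A = np.column_stack([x.ravel()**2, y.ravel()**2, (x * y).ravel(),
                         x.ravel(), y.ravel(), np.ones(n * n)])
    coeffs, *_ = np.linalg.lstsq(A, window.ravel(), rcond=None)
    return coeffs
```

Varying the window width changes the scale of the fitted surface without resampling the DEM, which is the essence of the multi-scale parametrisation.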

Terrain-snow cover interaction
The multi-scale concept is especially useful in terms of characterising terrain-snow cover interaction. Mountain snow cover is generally characterised by a high spatial variability over a wide range of scales, from millimetres to kilometres (Blöschl, 1999). Due to the interaction of the wind field with topographical features, such as ridges or gullies, but also with finer-scale features such as single rocks or small depressions, snow is accumulated in depressions or on the leeward side of terrain features (Lehning et al., 2011). So far, the influence of wind on the deposition patterns of snow, which defines potential locations where slab avalanches may preferentially form, has been accounted for by using a curvature measure.
A parameter capturing the exposure and sheltering effects of terrain relative to a given wind direction could, however, be very beneficial for formulating release area scenarios. Terrain-based parameters with the aim of capturing wind shelter and exposure have been developed in the past (Winstral et al., 2002; Purves et al., 1998) and have been shown to capture wind sheltering effects and realistically reproduce accumulation patterns of snow (Schirmer et al., 2011; Erickson et al., 2005). They could hence easily be integrated into release area estimation procedures. Another result of irregular snow accumulation is a progressive cancelling out of surface roughness, leading to increasingly homogeneous snow deposition patterns (Mott et al., 2010). It is assumed that this may lead to a more uniform slab thickness and, in combination with reduced support from the bed surface, to potentially larger release area sizes, in particular for surface slabs (McClung, 2001; Simenhois and Birkeland, 2008). Further, Veitinger et al. (2014) showed that the progressive smoothing of surface roughness can be captured by a multi-scale roughness parameter. So far, roughness is only incorporated in some approaches as a single-scale terrain parameter, often not reflecting the adequate scale of a given snow scenario.

Fuzzy logic modelling for natural hazards
Natural processes can rarely be described or modelled as the result of sharply defined criteria or variables. Expert decision making therefore contains significant degrees of uncertainty owing to the complex nature of the process and the parameters involved. Current algorithms often do not consider this uncertainty in input parameters as they incorporate binary classification schemes, where the result strongly depends on the definition of thresholds. As a result, such algorithms can only distinguish between areas where avalanches are possible and where they are not. Consequently, existing release area algorithms mainly produce worst-case scenarios, including all possible areas that can potentially release avalanches. Whilst this may be appropriate for an extreme situation, it is not suited to delineating smaller release areas, where only parts of the potential area release avalanches. Such issues cannot be solved through a classical logic approach; with this in mind, Zadeh (1965) introduced the fuzzy logic approach in an effort to deal with such imprecise data or diffuse rules. This approach overcomes the concept of sharp (so-called crisp) borders by introducing the membership concept. Every element, as opposed to belonging (or not) to a class, is attributed a degree of membership to that class (referred to as a set in fuzzy set theory). This concept is very appealing for natural hazard applications as it allows the integration of human reasoning capabilities into knowledge-based expert systems. Moreover, it has already been successfully applied to landslide susceptibility mapping (Schernthanner, 2007), risk modelling of wet snow avalanches (Zischg et al., 2005) and avalanche release area estimation (Ghinoi and Chung, 2005). Such an approach would be particularly helpful for the definition of more frequent avalanches, where a differentiation between areas with different likelihoods of releasing avalanches is necessary.
Therefore, in this paper we present a new algorithm to define potential release areas, which improves on existing approaches by taking into account the effects of snow cover and wind transport on the summer terrain. Specifically, we introduce a multi-scale roughness parameter, where scale is adjusted as a function of snow depth, and we include a wind shelter index to define release area scenarios as a function of varying snow accumulation due to wind. This allows the calculation of potential release area size with a snow-distribution-dependent parameter. By using fuzzy logic to combine information, we can further distinguish between different grades of release propensity. Thus, our approach allows for a definition of potential release areas which goes beyond purely terrain-based parameters. Such approaches are particularly important for frequent avalanches and for avalanche dynamics simulations of short-term hazard, for example in the case of road closures.

Overview
The release area algorithm developed in this paper is based on three parameters: slope, a wind shelter parameter and roughness, which is derived as a function of snow depth, accounting for variable winter topography with increasing snow accumulation.A fuzzy logic approach is then applied to combine and weight the variables to derive an indexed map for release area possibility (Fig. 1).
Generally, simple and computationally efficient membership functions (e.g. triangular functions) are favoured in fuzzy logic implementations. However, they only comprise linear segments and introduce very sharp changes at the corner points. In an effort to overcome these drawbacks, generalised bell functions (also referred to as Cauchy membership functions; Jang and Sun, 1997) are implemented in the algorithm. They are characterised by smooth outlines, where the sharpness of the transition can be set by a parameter. They are defined by

µ(x) = 1 / (1 + |(x - c) / a|^(2b)), (2)

where c defines the centre, a the width and b the sharpness of transition of the function.

The algorithm is programmed in the free software programming language R (version 3.0.3, http://www.r-project.org); in particular, the geocomputing and terrain analysis functions are accessed through the RSAGA package (Brenning, 2008), related to the open source desktop GIS SAGA (http://www.saga-gis.org). A Python interface (https://www.python.org/) was implemented to use the script as a tool in ESRI ArcGIS (ArcGIS 10.2 for Desktop). The model code of the release area algorithm can be found in Veitinger et al. (2016).
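For reference, the generalised bell function can be written down in a few lines (Python sketch for illustration; the study's own implementation is in R):

```python
def bell(x, a, b, c):
    """Generalised bell (Cauchy) membership function:
    mu(x) = 1 / (1 + |(x - c) / a|**(2 * b)),
    where c is the centre, a the width and b the sharpness
    of the transition."""
    return 1.0 / (1.0 + abs((x - c) / a) ** (2 * b))
```

With the slope parameters a = 8, b = 3 and c = 40 used later in the text, bell(40, 8, 3, 40) returns 1 and bell(32, 8, 3, 40) returns 0.5, i.e. full membership at 40° and half membership at 32°.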
In the next sections, the choice of the parameters, as well as the selection of value ranges to define the membership functions, is explained.

Slope
One of the most relevant terrain variables for the definition of avalanche starting zones is slope. Slope maps are regularly consulted in current avalanche hazard mapping practice as a basis for release area assessment. Generally, slab avalanches release on slopes between 28 and 55° (Perla, 1977; Schweizer and Lütschg, 2001; Schweizer and Jamieson, 2001). An increased release probability is generally observed above 35° (Stoffel and Margreth, 2012). With respect to size, avalanches occurring below 35° are often large. In very steep regions (> 45°), the release probability for large avalanches decreases because of constant sluffing and avalanching, which prevent the formation of a continuous weak layer; there, mostly small, frequent avalanches are observed. Above 55°, sluffing hinders the formation of slabs (McClung and Schaerer, 2002), and only loose snow avalanches or sluffs are observed.
To calculate slope using the quadratic parametrisation proposed by Wood (1996), the magnitude (slope) of the steepest gradient at the central grid cell of the fitted surface needs to be determined. To derive this, the rate of change in the x and y directions is calculated from the magnitude of the gradient,

tan α = sqrt( (dz/dx)^2 + (dz/dy)^2 ). (3)

The partial derivatives for x and y are

dz/dx = 2ax + cy + d (4)

and

dz/dy = 2by + cx + e. (5)

In order to obtain the parameter at the central point of the surface (x = y = 0), Eqs. (4) and (5) are inserted into Eq. (3), and note that at this point dz/dx = d and dz/dy = e. Slope (α) is thus given as

α = arctan( sqrt(d^2 + e^2) ). (6)

Likewise, aspect is defined as

aspect = arctan(e / d). (7)

The aforementioned definitions are consistent with others reported in the literature (Zevenbergen and Thorne, 1987). Based on multi-scale slope and aspect, several other terrain parameters such as roughness and wind shelter can be derived (see following sections).
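At the window centre the derivation above reduces to the two coefficients d and e, which is easy to verify numerically (illustrative Python; the aspect orientation convention used here is our assumption):

```python
import math

def slope_aspect(d, e):
    """Slope and aspect at the central point (x = y = 0) of the fitted
    surface, where dz/dx = d and dz/dy = e.
    Returns both angles in degrees; atan2 is used for aspect so that
    all four quadrants are handled (the zero direction and sense of
    rotation are conventions, assumed here for illustration)."""
    slope = math.degrees(math.atan(math.hypot(d, e)))   # arctan(sqrt(d^2 + e^2))
    aspect = math.degrees(math.atan2(e, d)) % 360.0     # arctan(e/d), quadrant-safe
    return slope, aspect
```

For example, a surface rising with unit gradient in a single direction yields a 45° slope.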
In our algorithm, slope values between 28 and 60° are considered to be potential release areas. The degree of membership µ_s(x) to the class PRA of slope is modelled using a generalised bell membership function with the parameters a = 8, b = 3 and c = 40 (Fig. 2).
This function assigns the largest membership degrees to slopes between 35 and 45°. Slopes less steep than 30° and steeper than 55° are assigned low membership degrees, as slab avalanches become increasingly unlikely on such slopes.

Wind shelter
Wind-terrain interaction is often mentioned in the literature as an important factor explaining the location of slab avalanches. Concave areas are generally considered to be more prone to avalanche release than convex terrain (Maggioni and Gruber, 2003; Vontobel, 2011). One possible reason is that snow under wind influence is irregularly distributed and deposited, favouring leeward slopes, gullies or downslope terrain steepening. At the slope scale, wind drift (Gauer, 2001) and preferential deposition of precipitation (Lehning et al., 2008) are the main processes influencing snow redistribution. Based on the importance of wind-terrain interaction for snow distribution, we consider a wind shelter parameter to be superior to a regular curvature measure for two reasons.
1. A wind shelter parameter accounts for sheltering effects independently of direction, whereas a curvature measure must be computed in a specific direction (downslope or across-slope).
2. Sheltering effects can be derived as a function of wind direction.This allows refined calculations, taking into account local particularities such as the main wind direction.
In this study, the wind shelter parameter of Plattner et al. (2006) is utilised, based on the wind shelter parameter of Winstral et al. (2002) of the form

Sx = max_{x ∈ S} [ arctan( (z(x) - z(x_0)) / |x - x_0| ) ], (8)

where S = S(x_0, a, δa, d) is the subset of grid cells within a distance d and a range of directions a ± δa from the central cell x_0. In our implementation we replaced the MAX function with the quantile function, using the third quartile instead of the MAX. This accounts for the fact that locally very large sheltering effects might be outweighed if the surrounding area is open to wind influence (e.g. large rocks in an otherwise open slope). The wind shelter index varies between −1.5 and 1.5 for complex alpine terrain. Negative values correspond to wind-exposed terrain and positive values to wind-sheltered terrain; therefore, the degree of membership µ_w(x) to the class PRA of wind shelter is modelled using a generalised bell membership function with the parameters a = 2, b = 3 and c = 2 (Fig. 3).
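A minimal sketch of this quartile-based shelter index might look as follows (Python for illustration; the grid orientation, distance cut-off and handling of empty upwind sectors are our assumptions, not the paper's implementation):

```python
import numpy as np

def wind_shelter(dem, i0, j0, wind_dir_deg, tol_deg=30.0,
                 dmax=10.0, cellsize=1.0):
    """Shelter index for cell (i0, j0) after Winstral et al. (2002),
    with the maximum replaced by the third quartile as in Plattner
    et al. (2006): elevation angles to all cells within distance dmax
    and within +/- tol_deg of the upwind direction wind_dir_deg are
    collected, and their 75th percentile is returned.
    Positive values: sheltered; negative values: exposed."""
    angles = []
    rows, cols = dem.shape
    for i in range(rows):
        for j in range(cols):
            if (i, j) == (i0, j0):
                continue
            dy, dx = i0 - i, j - j0           # assume row 0 is north
            dist = np.hypot(dx, dy) * cellsize
            if dist > dmax:
                continue
            bearing = np.degrees(np.arctan2(dx, dy)) % 360.0
            diff = abs((bearing - wind_dir_deg + 180.0) % 360.0 - 180.0)
            if diff > tol_deg:
                continue
            angles.append(np.arctan((dem[i, j] - dem[i0, j0]) / dist))
    return float(np.quantile(angles, 0.75)) if angles else 0.0
```

Using the upper quartile rather than the maximum prevents a single tall obstacle in an otherwise open slope from dominating the index.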

Roughness
The importance of surface roughness was mentioned as early as 1959, when it was recognised as an important parameter for avalanche release (Peev, 1959). In a shallow snowpack, terrain roughness can have a stabilising function, hindering the formation of continuous weak layers (Schweizer et al., 2003) as well as providing mechanical support to the snowpack (McClung, 2001). However, increasing snow accumulation is known to smooth out surface roughness (Veitinger et al., 2014), reducing snowpack variability in the surface layers (Mott et al., 2010) and the mechanical support of a slab (van Herwijnen and Heierli, 2009). In this case, the stabilising effects of terrain roughness disappear or even reverse (McClung and Schaerer, 2002), and the formation of continuous weak layers and slabs, which favours fracture propagation (Simenhois and Birkeland, 2008), is facilitated. This affects potential release area size and location, especially in the case of surface slab avalanches. Therefore, it seems critical to consider roughness together with its alteration due to snow accumulation.
For geomorphological applications, different approaches to measuring surface roughness exist. Essentially, we can distinguish area ratio, vector dispersion and variability-of-surface-elevation methods (Grohmann et al., 2011). Other methods, such as fractal dimension or wavelets, are not directly linked to measurable geomorphological variables and are therefore difficult to interpret, limiting their use for practical applications.
Vector dispersion methods basically compare the orientation of the normal vector of a given patch of surface with the orientations of neighbouring surface patches. The degree of variability in the orientations serves as a measure of the irregularity of the topographical surface. Figure 5 illustrates the principle of vector dispersion methods and, in particular, highlights the dependence on scale, namely the resolution of the DEM and the moving window size.
One particularly appealing vector dispersion method is the vector ruggedness measure developed by Sappington et al. (2007), based on the vector approach proposed by Hobson (1972). The measure is well suited to capturing the process of terrain smoothing by snow (Veitinger et al., 2014). Based on the slope and aspect definitions, the normal unit vector of every grid cell of a digital elevation model (DEM) is decomposed into x, y and z components (Fig. 4a):

x = sin(α) sin(β), y = sin(α) cos(β), z = cos(α), (9)

with slope α and aspect β. A resultant vector r is then obtained for every pixel by summing up the single components of the centre pixel and its eight neighbours using a moving window technique:

|r| = sqrt( (Σ x)^2 + (Σ y)^2 + (Σ z)^2 ), (10)

as shown in Fig. 4b. The magnitude of the resultant vector is then normalised by the number of grid cells n and subtracted from 1:

R = 1 - |r| / n, (11)

where R is the vector ruggedness measure.
In contrast to the original method of Sappington et al. (2007), a constant 3 × 3 kernel window for calculating roughness is preserved. Scale is instead accounted for by the size of the neighbourhood window used to compute slope and aspect; scale is thus defined as the width of this neighbourhood window.
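The three steps above (decompose the normals, sum over the 3 × 3 kernel, normalise the resultant) can be sketched compactly (illustrative Python; array conventions are ours):

```python
import numpy as np

def vector_ruggedness(slope_deg, aspect_deg):
    """Vector ruggedness measure (Sappington et al., 2007) on a fixed
    3x3 kernel. slope_deg/aspect_deg are 2-D arrays, which may
    themselves have been computed over a larger neighbourhood window
    (the multi-scale variant described in the text). Returns R for
    the interior cells: 0 for perfectly even terrain, larger values
    for more irregular terrain."""
    s = np.radians(slope_deg)
    a = np.radians(aspect_deg)
    # x, y, z components of the unit normal of every cell
    nx, ny, nz = np.sin(s) * np.sin(a), np.sin(s) * np.cos(a), np.cos(s)

    def ksum(c):
        # sum of a component over the 3x3 neighbourhood (interior cells)
        h, w = c.shape
        return sum(c[i:i + h - 2, j:j + w - 2]
                   for i in range(3) for j in range(3))

    r = np.sqrt(ksum(nx) ** 2 + ksum(ny) ** 2 + ksum(nz) ** 2)
    return 1.0 - r / 9.0   # n = 9 cells in the kernel
```

When all nine normals point the same way, the resultant has length 9 and R is exactly 0; the more the normals fan out, the shorter the resultant and the larger R.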
The degree of membership µ_r(x) to the class PRA of roughness is modelled using a generalised bell membership function with the parameters a = 0.01, b = 2 and c = −0.005 (Fig. 6).
According to the roughness values found in avalanche experiments (Veitinger and Sovilla, 2016) and to expert interpretation, the roughness membership function assigns high membership values to even and smooth terrain. Membership values strongly decrease for roughness between 0 and 0.01. Between 0.01 and 0.02, roughness values are assigned low membership degrees, accounting for the fact that avalanches are unlikely but still possible in unfavourable conditions. Above 0.02, avalanches are considered not to occur.

Integration of terrain smoothing
In most cases, elevation models of the winter terrain, in particular for varying snow cover scenarios, are not available. Therefore, in this section we present a simple procedure to approximate winter terrain roughness from a DTM. In order to do so, we use the earlier findings of Veitinger et al. (2014), who showed that terrain smoothing processes are related to scale as a function of snow depth and snow depth variability. This relationship is used to evaluate whether we can relate the scale of terrain roughness to given snow cover scenarios; in other words, the following question is posed: is there an optimal scale related to snow depth and its variability? Further, it has been shown that patterns of snow depth, at least at peak accumulation, are highly persistent between winter seasons (Schirmer et al., 2011; Veitinger et al., 2014), suggesting that the existence of site-specific winter terrain surfaces can be assumed.
Three snow distributions from three distinct winter seasons, with mean snow depths ranging from 1 to 4 m, measured by airborne laser scanning at the Vallée de la Sionne field site, are used. Details about the field site and the snow distributions can be found in Veitinger et al. (2014). Terrain roughness, for scales ranging from 6 to 25 m, is compared to snow surface roughness calculated at a scale of 6 m. The underlying surface models have a resolution of 2 m, the same as that used in the release area algorithm.
Further, the squared correlation coefficient R^2 between the multi-scale terrain roughness surfaces and the snow surface roughness is calculated. In addition, the ratio between the terrain roughness at the scale of peak correlation and the measured snow surface roughness is computed to determine whether the two measures produce similar absolute values.
Figure 7 shows the correlation coefficient R^2 and the ratio between multi-scale terrain roughness and snow surface roughness for a rough terrain (CB1) and a smooth terrain (CB2). We observe that, for every snow depth distribution, a maximum correlation exists. Moreover, the scale of maximum correlation (characteristic scale) increases with greater snow depth. In smooth terrain (CB2), the characteristic scale is generally larger (10-18 m) than in rough terrain (CB1, 6-10 m). Further, R^2 is generally lower in smooth terrain and decreases with increasing snow depth. The roughness ratio at the characteristic scale is close to 1 (0.8-1.2) for all snow distributions, indicating that the modelled winter terrain roughness produces reasonable approximations of the measured snow surfaces. One exception is the snow distribution of 2.1 m in CB1, which shows a greater correlation with terrain than the snow distribution of 0.9 m; in terms of the roughness ratio, however, the two produce a very similar degree of smoothing. One possible explanation could be that, in very steep and rough terrain such as CB1, the smoothing processes are limited to a certain level. Sluffing and avalanching might prevent the formation of a thicker snowpack and continuously redistribute snow from the steep flanks of gullies down into the gully bottoms, preserving terrain roughness as long as the gullies are not completely filled with snow.
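The scale selection described above amounts to a simple search for the scale of maximum R², which can be sketched as follows (Python illustration with made-up data structures; the study works on full roughness rasters):

```python
import numpy as np

def characteristic_scale(terrain_roughness_by_scale, snow_roughness):
    """For each candidate scale, compute the squared Pearson
    correlation (R^2) between the multi-scale terrain roughness map
    and the observed snow-surface roughness map; return the scale of
    maximum R^2 (the characteristic scale) together with that R^2 and
    the roughness ratio at that scale. Inputs are a {scale: 2-D array}
    dict and a 2-D array (illustrative structures, assumed here)."""
    best_scale, best_r2 = None, -1.0
    for scale, rough in terrain_roughness_by_scale.items():
        r = np.corrcoef(rough.ravel(), snow_roughness.ravel())[0, 1]
        if r ** 2 > best_r2:
            best_scale, best_r2 = scale, r ** 2
    ratio = (terrain_roughness_by_scale[best_scale].mean()
             / snow_roughness.mean())
    return best_scale, best_r2, ratio
```

A ratio close to 1 at the characteristic scale indicates that the smoothed terrain roughness reproduces the absolute level of the measured snow-surface roughness, as reported in the text.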
Based on these results, an empirical formula was derived linking the kernel window size S for computing roughness (measured in number of grid cells) to the snow distribution, characterised by the mean snow depth, HS, and the coefficient of variation, C_v, defined as the standard deviation divided by the mean snow depth and accounting for the degree of terrain smoothing. This formula is representative of medium terrain roughness. Very rough terrain may require smaller window sizes for similar snow distributions.

Fuzzy logic modelling of potential release areas
In fuzzy set theory, three basic operators were originally defined (Zadeh, 1965): the union (OR), the intersection (AND) and the complement (negation). These operators are so-called non-compensatory. Compensation in the context of set theory signifies that a low value of one fuzzy set can be compensated by a high value of another, or vice versa. No compensation (or trade-off) is commonly inadequate for modelling purposes when aggregating human reasoning, in particular when conflicting variables are aggregated; therefore, the family of fuzzy operators was extended by so-called averaging operators (for example, the arithmetic mean), which allow full compensation between variables (Zimmermann, 1987). As a result, the "fuzzy AND" operator, as defined by Werners (1988), is used in the algorithm. The degree of membership to the class PRA is therefore defined by

µ_PRA(x) = γ min(µ_s(x), µ_r(x), µ_w(x)) + (1 − γ) (µ_s(x) + µ_r(x) + µ_w(x)) / 3, (12)

with the three fuzzy sets slope µ_s(x), roughness µ_r(x) and wind shelter µ_w(x), and with γ defined as

γ = 1 − min(µ_s(x), µ_r(x), µ_w(x)). (13)

γ thus depends on the smallest membership value of the three fuzzy sets. For γ values approaching 1, the operator behaves as a minimum operator; if γ tends towards 0, the operator resembles an averaging operator. The behaviour of this operator, and in particular the role of γ, can best be illustrated using an example. Imagine a medium steep slope of 32° (µ_s(x) = 0.5), once characterised by medium terrain roughness (µ_r(x) = 0.5) and once by very low terrain roughness (µ_r(x) = 0.8). With a classic minimum operator, both slopes would be assigned the same membership value to the class PRA; however, by individual judgement, a generally higher release probability would be assigned to the slope characterised by low surface roughness. The "fuzzy AND" operator, in contrast, increases the membership value for the slope of low roughness compared to the rough slope, as a value of γ = 0.5 allows partial compensation. If, instead, the steepness of the two slopes is assumed to be 28 instead of 32°, yielding a µ_s(x) of 0.08 and a γ of 0.92, a membership value to the class PRA of approximately 0.1 (0.13 and 0.11 respectively) is assigned to both slopes. In this case, the operator resembles the minimum operator, reducing compensation to a minimum. This behaviour is, in our opinion, reasonable, as a 28° slope rarely produces avalanches, even when perfectly smooth.
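The worked example above can be reproduced in a few lines (Python sketch; we assume Werners' operator with γ tied to the smallest membership, and a wind shelter membership of 1.0 for the unstated third input):

```python
def fuzzy_and(mu_s, mu_r, mu_w):
    """'Fuzzy AND' after Werners (1988): a weighted mix of the minimum
    operator and the arithmetic mean, with the weight gamma tied to
    the weakest membership (gamma = 1 - min, an assumption matching
    the example in the text). A very low membership in one set (e.g.
    a 28 deg slope) then suppresses compensation, while mid-range
    values allow partial compensation."""
    mu = (mu_s, mu_r, mu_w)
    gamma = 1.0 - min(mu)
    return gamma * min(mu) + (1.0 - gamma) * sum(mu) / 3.0
```

With µ_s = 0.08 (a 28° slope), the result stays near 0.1 regardless of roughness, while with µ_s = 0.5 the smoother slope (µ_r = 0.8) scores clearly higher than the rougher one (µ_r = 0.5), matching the behaviour described in the text.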

Input parameters
To perform a calculation of release areas, two mandatory inputs have to be provided: (1) a digital terrain model (DTM) and (2) a mean snow depth in the release area. Optionally, a main wind direction and a forest mask can be provided to exclude forested areas from potential release areas. The development of this algorithm is based on a DTM resolution of 2 m, as elevation models of this resolution are now available nationwide in Switzerland. If only coarser DEMs are available, resampling to a 2 m resolution is recommended. However, terrain features smaller than the initial DEM resolution will not be represented, leading to erroneous calculations in shallow snow depth scenarios, where small-scale terrain features are highly relevant.
Further, an estimated mean snow depth HS in the area under examination must be provided by the user. Snow depth can be derived from a local weather station or, more rarely, from remotely sensed values. Together with the degree of terrain smoothing, the input snow depth defines the scale of surface roughness. Two options are available: a regular degree of terrain smoothing (default) and low terrain smoothing. The latter accounts for scenarios without significant snow redistribution (e.g. little wind influence). Based on measured snow distributions (Veitinger et al., 2014), a coefficient of variation C_v of 0.35 is selected for regular smoothing and a value of 0.2 for low terrain smoothing.
In addition, a default wind direction of 0° with a tolerance of 180° is set if not explicitly specified by the user. This setting results in a release area definition independent of a specific wind direction. To account for release area scenarios originating from weather systems of different directions, the wind direction can be set. It is recommended that a tolerance from the main wind direction of at least ±15° is permitted, as local winds often deviate strongly from the main wind direction. Alternatively, if the local wind regime is understood, this should be used as the input direction.

Verification and validation
Model validation is a very important step throughout the course of model development. Commonly, validation refers to evaluating the predictive power of a model, meaning a comparison of simulated and measured data, the so-called "operative validation" (Caswell, 1976). However, techniques such as sensitivity analysis of model parameters, comparison to other model results or, as commonly applied in the context of natural hazards, comparison of model output with a database of past observed events of the analysed hazard (event validation) can also be considered proper validation techniques (Rykiel Jr., 1996).
Further, in the case of a continuous model output, a division into several hazard classes is commonly requested by decision makers for practical reasons (such as the planning of mitigation measures). The selection of such decision thresholds depends on the context of application of the hazard model and the costs associated with the two error types (false positives/negatives), rather than on the characteristics of the model itself. In this vein, Beguería (2006) proposes conducting model validation independently of the final thresholds, and using the validation output to propose decision thresholds, including associated confidence measures, for the end users of hazard models.
One way to perform a threshold-independent validation is to plot performance measures (e.g. true/false positives) for all possible thresholds. Such a plot is referred to as a Receiver Operating Characteristic (ROC) plot (Fawcett, 2006), as shown in Fig. 8.
The area under the curve (AUC) is commonly used as a measure of the overall accuracy of the model, independent of a certain threshold. It can be interpreted in the sense that the diagonal line corresponds to a random guess, i.e. a 50 % chance of correct classification; any curve situated further towards the upper left corner describes an improvement over the random guess.
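Such a threshold sweep and its AUC can be computed directly from a continuous model output and binary observations (minimal Python sketch; ties between scores are handled per-sample here, which is sufficient for illustration):

```python
import numpy as np

def roc_auc(scores, labels):
    """Sweep all decision thresholds over a continuous output:
    sort by descending score, accumulate true/false positives to get
    the ROC curve, and integrate it with the trapezoidal rule.
    AUC = 0.5 corresponds to the diagonal (random guess)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    order = np.argsort(-scores)
    lab = labels[order]
    tp = np.cumsum(lab)            # true positives at each threshold
    fp = np.cumsum(1 - lab)        # false positives at each threshold
    tpr = np.concatenate(([0.0], tp / tp[-1]))
    fpr = np.concatenate(([0.0], fp / fp[-1]))
    # trapezoidal integration of TPR over FPR
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0))
```

A model whose scores perfectly separate release from non-release pixels yields an AUC of 1, while scores unrelated to the labels yield values near 0.5.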
In this study, we perform event validation using, on the one hand, a long-term (30-year) avalanche record database; on the other hand, we apply the algorithm to a single event from short-term hazard mitigation practice.

Long-term validation
To validate the algorithm, the long-term avalanche database of Zuoz was utilised. Since the winter of 1982/1983, avalanches have been continuously observed and documented in the region of Zuoz (1716 m a.s.l.), a village in the Engadine valley in southeastern Switzerland. The area is characterised by a continental climate, with a mean winter precipitation of 270 mm and a mean winter temperature of −8 °C (Stoffel et al., 1998). Overall, more than 2200 avalanches have been manually documented by the local snow and avalanche observer of the village of Zuoz.
The area in the southern part of the village was selected, as no artificial releases are performed in these avalanche paths. The observer normally maps the entire avalanche perimeter without specifying the release zone. Based on all avalanches, we built several reference datasets. The first reference dataset comprises all avalanches, corresponding to the area that was affected by an avalanche at least once within the 30-year observation period. The selected area is shown in Fig. 9a. For the second reference dataset, we extracted the avalanche release areas from the first dataset. To do so, we identified the uppermost 80 m in height of every avalanche polygon and designated this as the release area. This is quite a conservative estimate of release area size; however, as avalanche size varied greatly between small and very large events, we preferred to exclude parts of the release area rather than include significant parts of the avalanche track. The result of the release area extraction is shown in Fig. 9b.
Further, we selected two sub-datasets from the extracted release areas, depending on how often a given area released avalanches. To do so, we calculated the overlap of all avalanche release areas and assigned each pixel the number of overlapping polygons, which corresponds to the frequency with which that pixel was part of an avalanche release area within the 30 years of observation. The third dataset is defined as the area that was included in an avalanche release area at least twice (Fig. 9c). The fourth dataset is defined as the area that released avalanches at least five times within the 30 years of observation (Fig. 9d).
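The overlap counting used to build the frequency-based datasets can be sketched as below. The masks here are synthetic; a real workflow would first rasterise the mapped release polygons onto the DTM grid:

```python
import numpy as np

# Hypothetical stack of rasterised release-area masks, one boolean
# layer per documented avalanche (True = pixel inside the mapped
# release polygon). Shape: (n_avalanches, rows, cols).
release_masks = np.zeros((6, 4, 4), dtype=bool)
release_masks[0, 0:2, 0:2] = True
release_masks[1, 0:3, 0:2] = True
release_masks[2, 1:3, 1:3] = True
release_masks[3:, 0:2, 0:1] = True  # three more events on the same strip

# Per-pixel release frequency: the number of overlapping polygons
frequency = release_masks.sum(axis=0)

# Reference datasets as in the text: released at least twice / five times
at_least_twice = frequency >= 2
at_least_five = frequency >= 5

print(frequency)
print(at_least_twice.sum(), at_least_five.sum())  # 5 pixels vs. 2 pixels
```

Thresholding the per-pixel frequency at 2 and 5 then yields the third and fourth reference datasets directly.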
To evaluate the performance of the algorithm, we compared it to a simple slope approach, which is often difficult to beat. The slope classifier assigned all slopes between 28 and 60° as potential release area. Forested areas were excluded from being classified as release areas in both approaches. Further, we did not consider different snow depth scenarios for the validation, as this information is not available from the reference data. Instead, we used a snow cover scenario with an input snow depth of 3 m to account for all possible situations, from shallow to deep snow covers.
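A minimal version of this baseline slope classifier might look as follows; the slope and forest rasters are synthetic placeholders standing in for the real DTM-derived layers:

```python
import numpy as np

# Baseline slope classifier used for comparison: every non-forested
# pixel with a slope between 28° and 60° counts as potential release area.
slope_deg = np.array([[10.0, 30.0, 45.0],
                      [28.0, 61.0, 55.0],
                      [27.9, 35.0, 60.0]])
forest = np.array([[False, False, True],
                   [False, False, False],
                   [False, False, False]])

pra_slope = (slope_deg >= 28.0) & (slope_deg <= 60.0) & ~forest
print(pra_slope.sum())  # 5: the steep forested pixel is excluded
```

Unlike the fuzzy PRA output, this classifier is binary, so it contributes only a single point to a ROC plot rather than a full curve.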
Figure 10 shows the ROC curves for the algorithm and the slope approach using, as reference, both the entire avalanche perimeter and the extracted release areas.
We observe that the PRA algorithm provides only a small improvement over the slope approach, with a slightly better result when the entire avalanche perimeter is used as reference. The true positive rate, even for the slope classifier, is quite low: around 60 % for the entire avalanche perimeter and around 80 % when the extracted release areas are taken as reference. While the former can be explained by the significant proportion of avalanche track and run-out zone erroneously included in the reference dataset when the full avalanche outline is used, this does not apply to the extracted release areas. In fact, a true positive rate of 80 % for the slope classifier indicates that one-fifth of the avalanches were mapped in areas flatter than 28° or steeper than 60°. As avalanches are very unlikely in such terrain, the deviations must be attributed to errors in the mapping process. As avalanches were mostly mapped from the road in the valley bottom on a 1:25 000 map, deviations of up to several tens of metres in avalanche location are likely, explaining the erroneously mapped avalanches.
Figure 11 shows the classification result for the release areas that occurred at least two and at least five times, respectively.
In this case, we observe that the PRA algorithm shows a more pronounced improvement over the slope classifier, suggesting that frequent avalanches in particular are detected more reliably by the PRA algorithm. The fact that the true positive rate increases to around 90 % for areas that released avalanches at least five times shows that the mapping error is reduced by taking the overlay of several avalanches; approximately 10 % of the avalanches still remain erroneously classified. To quantify the quality of the classification results, the area under the curve (AUC) was calculated for all classifications (Table 2).
The AUC calculation confirms that the performance of the PRA algorithm increases with increasing release frequency. The difference between the PRA and slope approaches increases from 0.6 for all avalanches to 3 for zones releasing at least twice and 3.7 for zones releasing at least five times. This suggests that, despite the aforementioned limitations of the reference data, the PRA algorithm can be considered more powerful than a classic slope approach, in particular for the definition of more frequent avalanches.

Single event validation
In this section, an avalanche event from current avalanche mitigation practice in Switzerland is presented. It concerns a mountain road in the Canton of Uri, in central Switzerland, close to the Gotthard Pass. The road is oriented northeast-southwest and is threatened by different paths of the Böschen avalanche (Fig. 12). The avalanche path is oriented northwest and characterised by a lower release zone (red circle) and an upper, alpine release zone, separated by a flatter section in between. The lower release zone is heavily vegetated with low bushes and characterised by many small gullies, which produce the majority of the avalanches. The entire lower release zone is steeper than 30°, qualifying it as potential release area. Owing to its relative proximity to the main Alpine ridge, the slope is exposed to storm events from a northwesterly direction, as well as to Föhn storms from a southerly direction, often in combination with significant snowfall. Generally, storm events from a southerly or southwesterly direction are the most critical and are responsible for the majority of avalanche events affecting the road. One typical avalanche cycle was observed on 2 February 2014. After significant snowfall of around 40 cm, 11 small avalanches released naturally in the lower release zone (black outlines in Fig. 13) and reached the road. Snow depth measured at a nearby weather station was 120 cm at the time of release, and wind influence was rather low.
To adjust the algorithm to the corresponding wind and snow situation, an input snow depth of 120 cm and no specific wind direction were set; beyond that, no changes were applied to the algorithm code. The resulting release area calculation for the same area as shown in Fig. 13 is presented in Fig. 14. The small channels in the lower release area are well detected and are mostly assigned a high possibility (red colour) of being a release area, in good agreement with the observed avalanches, which released in the small gullies. Further, the algorithm assigns a medium-low or low possibility (yellow and light green colours) to most of the area between the gullies, indicating that large, continuous avalanche releases are not to be expected under this snow scenario. However, if a snow depth of 2.5 m instead of 1.2 m is assumed (Fig. 14b), the areas of high possibility expand, suggesting potentially larger release areas with increasing snow depth.
Further, release area scenarios for different wind directions with an input snow depth of 1.2 m were computed (Fig. 15). Main wind directions from southeast to southwest produce the largest possibility of avalanche release, indicating that these are the most critical scenarios for this slope.
In contrast, a northwesterly scenario yields only low to medium-low possibility in the entire release zone, since the release area is then completely wind-exposed, which strongly reduces the possibility of avalanche formation. For a westerly wind direction, the deeper gullies and the more north-oriented slopes are assigned a medium-high possibility of being a release area. This reflects the loading conditions occurring under cross-slope winds, which can potentially lead to avalanching.

Discussion
The validation of the algorithm clearly demonstrated the improved release area identification, in particular for frequent release areas. However, the improvement over a slope approach is relatively small with respect to the entire release area observed within 30 years. This behaviour can be explained by several factors. As we used a rather large snow depth scenario, the effect of roughness is lower than for shallower snow depth scenarios, where roughness has more influence on avalanche release; this explains why the algorithm produces only a slightly better result than a slope approach. The result also indicates that for extreme avalanches, where most of the potential starting zone releases, slope is still the most relevant parameter to consider. Further, the role of the roughness parameter is rather to identify potential borders between neighbouring release areas, such as ridges. However, the proportion of such features compared to the overall PRA zone is relatively low, at least in our study area. The effect of roughness on a ROC curve is therefore only slightly visible, as the relative contribution of such zones is small. Nevertheless, the identification of such features is crucial, in particular for the demarcation of single release areas, and is one of the main improvements of the algorithm.
Despite the aforementioned improvements, some limitations are associated with the new model design. The implementation of a fuzzy logic approach requires a defuzzification of the model output. This task consists of defining adequate thresholds in view of the purpose of the model, which remains an unsolved task in the model design and would require calibrating the algorithm in different topographies and snow climates. Further, modelling terrain smoothing by adapting the scale cannot capture several effects present in winter terrain. For example, modifications of the winter terrain by large avalanches, or the extensive formation of drift features beyond the normal filling of terrain depressions, such as cornices, are not included in our approach. Such effects are strongly site-dependent and further vary from winter season to winter season; they still need to be assessed by experts knowledgeable in the local conditions of the study area. Moreover, the function used to adjust terrain smoothing is mainly based on measurements from predominantly leeward slopes. Whereas the effect of wind on release area possibility is taken into account by a shelter index, the same underlying terrain smoothing is applied to wind-exposed and wind-sheltered slopes. This might not always hold, especially for consistently wind-exposed terrain, which might show a smoothing behaviour different from that found in our measurements. More studies in wind-exposed terrain would be necessary to understand terrain smoothing there. Still, the practical example clearly demonstrates that terrain roughness, when adjusted as a function of snow distribution, allows the assessment of the influence of terrain smoothing on potential release area size and location.
It was further demonstrated that the integration of a wind shelter parameter allows the definition of release area scenarios as a function of a main wind direction. Yet it is evident that wind influence is strongly variable and often deviates from the main wind direction (Mott and Lehning, 2011). Local winds strongly affect precipitation patterns near the ground (Mott et al., 2014). In addition, wind speeds and coarse-scale sheltering effects of neighbouring mountain ridges also affect snow distribution. Such effects cannot be captured by a geomorphometric wind shelter index; this would require physical modelling of the snow cover. Nevertheless, local experts aware of local wind effects and the loading behaviour of avalanche-prone slopes may integrate this knowledge through adequate settings of the wind shelter parameter.
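For illustration only, one common family of geomorphometric shelter indices, the Winstral-style maximum upwind slope angle, can be sketched as below. This is an assumption for demonstration purposes and not a reproduction of the exact wind shelter formulation used by the algorithm; real implementations also average over a direction sector and handle grid geometry more carefully:

```python
import numpy as np

def wind_shelter(dem, cell_size, wind_dir_deg, max_dist):
    """Maximum upwind slope angle (radians) to any cell within max_dist
    along the wind direction. Positive = sheltered (higher terrain
    upwind, leeward deposition likely), negative = exposed.
    """
    rows, cols = dem.shape
    # Unit step pointing upwind. Meteorological convention: the
    # direction the wind comes FROM; 0° = N, 90° = E; row axis = south.
    theta = np.deg2rad(wind_dir_deg)
    drow, dcol = -np.cos(theta), np.sin(theta)
    n_steps = int(max_dist / cell_size)

    shelter = np.full_like(dem, -np.inf, dtype=float)
    for i in range(rows):
        for j in range(cols):
            for k in range(1, n_steps + 1):
                r = int(round(i + k * drow))
                c = int(round(j + k * dcol))
                if not (0 <= r < rows and 0 <= c < cols):
                    break  # left the grid; no further upwind terrain
                angle = np.arctan((dem[r, c] - dem[i, j]) / (k * cell_size))
                shelter[i, j] = max(shelter[i, j], angle)
    return shelter

# Tilted plane rising northwards: under a northerly wind, every interior
# cell sees higher terrain upwind and is therefore sheltered.
dem = np.arange(5)[::-1, None] * np.ones((1, 5)) * 10.0  # row 0 highest
sx = wind_shelter(dem, cell_size=10.0, wind_dir_deg=0.0, max_dist=30.0)
print(sx[4, 2] > 0)  # True: the southernmost cell is on the lee side
```

In a PRA-style workflow, such an index would then be mapped through a fuzzy membership function (cf. Fig. 3) so that sheltered cells receive a higher release possibility.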

Conclusions
We presented a new GIS tool for release area definition based on a fuzzy logic classification approach. We implemented a roughness parameter in combination with a simple procedure to approximate winter terrain roughness from a summer DTM. We showed that, for given snow cover scenarios, a characteristic scale exists with maximum correlation between terrain roughness and snow surface roughness. This finding allows us to adapt the scale of the roughness parameter as a function of snow depth. In combination with a continuous classification output, the algorithm improves the definition of release areas, in particular for frequent avalanches.
Further, the algorithm can be applied in any mountainous area where slab avalanches may occur. By modifying the fuzzy membership functions in Sect. 2.4, the algorithm could easily be adapted to other snow climates, such as coastal or Himalayan regions, where avalanches may occur in steeper terrain than in the Alps or the Rocky Mountains.
With regard to hazard mapping, the algorithm provides additional criteria for partitioning the whole potential release area into adequate sub-basins, by detecting fine-scale terrain features that serve as delimiting borders for less extreme avalanches. The algorithm further allows the assessment of snow influence on terrain morphology and, consequently, of potential release area size and location for different snow cover scenarios. Both factors provide additional information for an avalanche expert in charge of defining design events with different return periods, and further enable avalanche experts to objectivise and justify their decisions. However, the challenge of release area partitioning in very homogeneous terrain remains, as the algorithm can only consider terrain changes as possible separation borders.
With respect to short-term hazard assessment, we could show that the new tool is able to potentially reproduce avalanche release areas, where microtopography plays a de-

Figure 1. Schematic overview of the release area algorithm showing the input data (blue), processing steps (yellow) and output raster data (green).

Figure 3. Fuzzy membership function for wind shelter.

Figure 4. Calculation of vector ruggedness measure R. (a) Decomposition of normal unit vectors of a DTM grid cell into x, y and z components using slope α and aspect β. (b) The resultant vector r is obtained by summing the x, y and z components of all n pixels within the neighbourhood window. Graphics from Sappington et al. (2007).

Figure 5. Vector dispersion method used to calculate surface roughness at different scales for a topographical surface. Graphic from Grohmann et al. (2011).

Figure 7. Correlation of multi-scale terrain roughness with winter terrain roughness, and ratio of mean winter terrain roughness to mean summer terrain roughness, in the basins CB1 (a, b) and CB2 (c, d) for three measured snow distributions. Mean snow depth in the basin is scaled by its standard deviation to account for the variability of the snow distribution, which influences its smoothing potential.

Figure 9. Maps of observed avalanches within 30 years of observation south of Zuoz, showing in red (a) the entire perimeter of all avalanches, (b) the extracted release areas of all avalanches, (c) the extracted release area that released avalanches at least twice and (d) the extracted release area that released avalanches at least five times.

Figure 10. ROC curves for the PRA algorithm and slope for (a) the whole avalanche perimeter and (b) the extracted release areas.

Figure 11. ROC curves for the PRA algorithm and slope for areas in which release occurred at least (a) twice and (b) five times.

Figure 12. Avalanche path of the Böschen avalanche. The red line demarcates the lower release zone, which produces frequent avalanches affecting the road.

Figure 13. Observed avalanches on 2 February 2014, as mapped by the local avalanche service (black boundaries). Stars represent areas where avalanche control is normally performed. The red boundaries delineate avalanches that were artificially released the day after the natural avalanches occurred.

Figure 14. PRA possibility for a snow depth of (a) 1.2 m and (b) 2.5 m. No specific wind direction is set.

Figure 15. PRA possibility for (a) a southerly, (b) a southwesterly, (c) a westerly and (d) a northwesterly wind direction. Snow depth is set to 1.2 m.

Table 2. Area under the curve (AUC) for the PRA algorithm and the slope classifier.