Introduction
Wildfires are important hazards and a major ecological disturbance. In
southern France, about 2500 fires have been reported each year over the
recent period (1994 to 2016), burning an average of approximately
12 000 ha. The burned area (BA) distribution is highly asymmetric, with a
large number of small fires and rare large ones. Although the largest fires
(>100 ha) represent only 1 % of the total number of fires, they account
for approximately 70 % of the total BA and consume two-thirds of the total
annual budget dedicated to civil protection against fire risk. In addition,
such large and destructive wildfires are likely to increase in southern
Europe due to changes in climate and landscape. As a consequence,
establishing new tactics and strategies for better prevention of and
preparedness for large fires has become a centerpiece of European fire
policy.
Indeed, wildfire hazards are unique because they are fought and suppressed
in real time by firemen, at least in Europe, where almost no fire propagates
freely. Hence, the final BA depends upon the balance between the
environmental and socioeconomic drivers that favor fire enlargement (remote
terrain, strong wind, fuel connectivity, and the management of fuels and
landscapes by humans) and the efficacy of suppression tactics.
For many geophysical variables and/or hazards such as rainfall, snowfall, or
river discharge, protective measures are designed to withstand an event with
a given small exceedance probability, i.e., an event that is generally much
rarer than those already observed. As a
consequence, a robust evaluation of the return period of these extreme events
is of the utmost importance to risk mitigation. Concerning fires, this
information can help each year to predetermine the size of the fire crews
and of fire tactical means such as airplanes and trucks in each region, in
order to support ground forces if extreme fire events occur. It may also
help governmental agencies
and reinsurance companies to evaluate the cost of future fires in the current
context of ongoing changes in climate and landscapes.
Yet, to date, estimating the return levels (e.g., 100 ha, 1000 ha) of fires
corresponding to large return periods (e.g., >10 years) is rarely carried
out, and few dedicated studies are available. For instance, no such study
exists for southern France, although this region is fire-prone and comprises
a vast array of human, economic, and ecological assets at risk. In the
literature, wildfire quantiles corresponding to large return periods have
been fitted using power-law distributions.
A truncated exponential distribution has also been applied, as well as a
stochastic model for the spread and extinguishment of fires, used to
illustrate the deviation of the upper tail from a simple power-law
distribution. However, as for other geophysical variables, extreme value
theory (EVT) should be preferred due to its strong mathematical groundings.
Specifically, in the univariate case, the generalized extreme value (GEV)
distribution, the generalized Pareto distribution (GPD), or the combination
of the GEV distribution with a Poisson point process (PP) are the
representations of interest.
Hence, different extreme value distributions have been applied to the BA of
Canadian forests, differentiating wildfires according to the type of forest
(boreal vs. taiga) and the source of ignition (human-caused vs. lightning).
The GEV distribution has also been applied to BA and fire radiative power
obtained using remote-sensing techniques over a large box covering the
Mediterranean Basin.
This work aims at computing the return periods of wildfire BA in southern
France using EVT. Specifically, we question the efficiency of the new fire
policy set up in 1994 to reduce the likelihood of very large wildfires. This
policy has been based on improved prevention of fires, increased surveillance
of forests, and a massive attack on all ignitions in order to prevent fire
enlargement. Indeed, even if this policy is probably efficient for reducing
the mean BA, its real effect on the largest fires is still unknown. To this
aim, our EVT framework is adapted to our case study: it is nonstationary in
time, i.e., it considers two distinct periods (1973 to 1994 and 1995 to
2016), and nonstationary in space, accounting for three distinct
pyroclimatic subregions. In addition, a Bayesian approach is used to assess
uncertainties in return levels. The comparison of the results between the
different time periods and regions is used to assess the significance of the
differences (e.g., how significant are the changes in return levels between
1973–1994 and 1995–2016). We use BA from the Prométhée fire database as a
surrogate for fire risk, based on the assumption that the largest fires
generate the highest impacts and costs for suppression and are among the most
devastating. Our final goal is to provide risk
managers with crucial information on the future likelihood of very large
fires in southern France.
Materials and methods
Study area and pyroclimatic regions
This study covers an area of 80 500 km2 located in the southeast of
France (Fig. ). Figure a presents the
main geographical regions, which can be classified as Mediterranean lowlands
(Provence, Languedoc-Roussillon, Maritime Alps), hinterlands and foothills
(Southern Alps, Cévennes), and mountainous areas (the Alps, Corsica,
Massif Central, and eastern Pyrénées), which correspond to high
elevations (>1000 m a.s.l.). Mediterranean areas have a fairly dry and
warm climate (mean rainfall <700 mm yr-1 and mean temperature >13 °C) and are therefore conducive to fire activity.
The frequency of wildfires is smaller in hinterland and mountain areas due to
higher mean rainfall (>700 mm yr-1) and cooler temperatures
(<13 °C). Strong and dry winds also increase summer fire
activity along the Mediterranean coastline (through fuel dryness and a
larger fire rate of spread).
This study area encompasses different bioclimates, different vegetation
fuels, and different levels of human activity (which generate about 95 %
of ignitions), leading to different fire activities. For this reason we
define three main homogeneous subregions based on fire activity and
bioclimatic variables, so-called pyroclimatic regions (PCr); see Fig. c.
PCr-1 contains most of the fire hot spots, with maximal fire activity, many
fires (notably the largest ones), and large amounts of BA. It includes Corsica,
Provence, and the Maritime Alps and concerns 64 % of all fires and
67 % of the total BA in southern France. In addition, it contains
63 % of the largest fires (>100 ha) and 74 % of the total BA of
these fires. This is due to the combination of high fire weather danger,
medium to high fuel amount, and connectivity in the landscape
as well as a very high density of human-caused
ignitions by negligence, by accident, or by intent. This region is
undoubtedly prone to large fires,
which occur mostly in summer. PCr-2 has medium to high fire activity and
covers the Mediterranean hinterland and mid-elevation mountains. PCr-3
has low fire activity and few large fires. It corresponds to high mountains
with a wet and cool climate.
Maps of the study area. (a) Geographical map.
(b) Occurrence of very large fires (BA > 1000 ha).
(c) Density of the number of fires on a 2×2 km grid and
location of the three pyroclimatic regions (numbered circles in yellow).
Fire database and BA thresholds
We use the national georeferenced fire database called Prométhée,
which contains information on more than
106 000 wildfires. It corresponds to wildfire characteristics recorded by
fire crews and foresters after each intervention from 1973 to 2016.
Prométhée gives standardized information on each fire, including the
day and hour of ignition, its location on a 2×2 km grid (with a
total of 20 142 pixels), and the final BA in hectares.
Figure presents the temporal evolution of the number
of large fires (>100 ha) for each pyroclimatic region. Clearly, the fire
activity is the highest in PCr-1, and PCr-3 has a limited number of large
fires in comparison. From these time series, it is also evident that the fire
policy set up in 1994 reduced the number of large fires in all regions.
Number of large fires (>100 ha) for each pyroclimatic region,
as a function of time.
Extreme value theory
Extreme value theory indicates that, under some mild conditions, the GEV
distribution is the limiting distribution of maxima computed over very large
blocks. In our case, the blocks correspond to the wildfire samples for each
year. We assume that these yearly samples are sufficiently large that
annual maxima of BA can be considered to follow a
GEV distribution. This has already been shown to be adequate in a very large
number of applications, for example in hydrology, finance and insurance,
physics, mountain hazards, or coastal engineering, to cite a few. The
cumulative distribution function (cdf) of the GEV is
F(x|θ) = exp{−(1 + ξ(x−μ)/σ)^(−1/ξ)}   if ξ ≠ 0 and 1 + ξ(x−μ)/σ > 0,
F(x|θ) = exp{−exp(−(x−μ)/σ)}            if ξ = 0, x ∈ ℝ,
where θ = {μ, σ, ξ} is the set of parameters of the GEV distribution, with
μ ∈ ℝ the location parameter, σ > 0 the scale parameter, and ξ ∈ ℝ the
shape parameter. When ξ < 0, the distribution has an upper bound
(x < μ − σ/ξ). When ξ = 0, the distribution is unbounded. When ξ > 0, the
distribution has a lower bound (x > μ − σ/ξ), and as ξ increases, the
distribution tail becomes heavier and very large values of x (here the BA)
are more likely. The corresponding density function is
f(x|θ) = (1/σ)(1 + ξ(x−μ)/σ)^(−1/ξ−1) exp{−(1 + ξ(x−μ)/σ)^(−1/ξ)}   if ξ ≠ 0,
f(x|θ) = (1/σ) exp(−(x−μ)/σ) exp{−exp(−(x−μ)/σ)}                      if ξ = 0.
For the GEV distribution, the quantile Q(p|θ)
corresponding to a probability p and a set of parameters
θ is easily obtained:
Q(p|θ) = μ + σ{(log(1/p))^(−ξ) − 1}/ξ   if ξ ≠ 0,
Q(p|θ) = μ + σ log(1/log(1/p))           if ξ = 0.
For a return period T (e.g., T=20 years), the return level (e.g., 100 ha,
1000 ha) is thus expressed by Q(p=1-1/T|θ).
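As an illustration, the quantile and return level formulas above translate directly into code. The following sketch is ours, not the study's; the parameter values at the end are arbitrary and only exercise the functions.

```python
import math

def gev_quantile(p, mu, sigma, xi):
    """GEV quantile Q(p | mu, sigma, xi) for a non-exceedance probability p."""
    if xi != 0.0:
        return mu + sigma * ((math.log(1.0 / p)) ** (-xi) - 1.0) / xi
    # Gumbel case (xi = 0)
    return mu + sigma * math.log(1.0 / math.log(1.0 / p))

def return_level(T, mu, sigma, xi):
    """Return level for a T-year return period: the quantile at p = 1 - 1/T."""
    return gev_quantile(1.0 - 1.0 / T, mu, sigma, xi)

# arbitrary illustrative parameters on the log-BA scale
rl20 = return_level(20, 5.0, 1.0, 0.1)
rl50 = return_level(50, 5.0, 1.0, 0.1)
```

As expected from the heavier upper tail, the 50-year return level exceeds the 20-year one for a positive shape parameter.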
Bayesian inference and Monte Carlo simulation
In this study, statistical inference is performed using Bayesian methods.
A Bayesian framework has
several advantages compared to standard frequentist approaches. First,
explicit a priori assumptions can be made about the parameters to
estimate. Second, Bayesian methods provide a direct assessment of the
uncertainty related to the parameter estimation. Generally, frequentist
methods only provide confidence intervals based on theoretical results that
hold true for very large samples (i.e., asymptotically). Third, in Bayesian
methods, the uncertainty of a quantity of interest can be quantified by means
of the predictive distribution. Here as well, frequentist methods provide
similar theoretical results, which hold true under some restrictive
conditions.
A Bayesian analysis combines the information in the data represented by the
likelihood function with prior knowledge about the parameter. Parameter
estimation is made through the posterior distribution, which is computed using
Bayes' theorem
Π(θ|D) ∝ Π(D|θ) × Π(θ),
where Π(θ|D) is the posterior distribution of the
GEV parameters θ, Π(D|θ) is
the likelihood function, and Π(θ) is the prior
distribution of θ. In this expression, the normalizing constant
of the posterior distribution, which is generally intractable, is omitted.
The likelihood function represents the conditional density of the data
D, here the logarithms of the maximum BA, given the parameters
θ, and describes the plausibility of observing the data
D given θ. Here, we assume that the logarithm of
these maxima are independently distributed (i.e., the maximum BA in a year y
does not influence the maximum of the year y+1) and that they follow a GEV
distribution. The likelihood is thus given by
Π(D|θ) = ∏_{i=1}^{N} f(x_i|μ,σ,ξ),
where x_i, i = 1, …, N, are the logarithms of the maximum BA observed
during N years. Here, we
choose to work with the logarithm of the maximum BA in order to solve scale
issues. Indeed, raw BAs are measures of surface, i.e., two-dimensional data,
and very large BA values (e.g., 1000 ha) are disproportionately larger than
large BA values (e.g., 100 ha). This scale issue becomes obvious when the
distribution of the raw BA is plotted: it is extremely skewed. As a
consequence, when extreme value distributions are fitted to raw BA,
distributions with an infinite variance (i.e., when ξ>0.5) are often
obtained. Applying a log transformation to maximum BA is a convenient way to
avoid this problem.
The prior distribution Π(θ) represents our a
priori knowledge about the parameters θ. Here, we
choose the following.
Location parameter μ∼N(0,100000): vague prior.
Scale parameter log(σ)∼N(0,100000): vague prior.
Shape parameter (ξ+0.5)∼Beta(6,9). With this prior, ξ lies
within [-0.5,0.5] and values greater than 0.3 are considered unlikely.
This choice is motivated by several practical
reasons. First, the variance of the GEV distribution is infinite when ξ>0.5
and the skewness coefficient is infinite when ξ>1/3. Second, numerous
geophysical applications show that, generally, ξ lies within this interval.
The joint prior distribution is Π(θ) = Π(μ) × Π(log σ) × Π(ξ); i.e., the prior distributions are considered
independent, which is usual in the absence of a priori information
about dependence among the GEV parameters.
The posterior distribution Π(θ|D) can be sampled
using a Markov chain Monte Carlo (MCMC) algorithm. Here, we choose to apply the
Metropolis–Hastings algorithm using the
function MCMCmetrop1R from the MCMCpack package in
R software. In this algorithm, candidates are
randomly generated from a proposal distribution and accepted as a new sample
according to an acceptance rate, which is computed using the posterior
distribution Π(θ|D). The main difficulty with the
Metropolis–Hastings algorithm is to find an adequate proposal distribution,
leading to a reasonable acceptance rate. In this work, the multivariate
normal proposal distribution is scaled using the covariance matrix of the
parameter estimates when the maximum-likelihood method is applied, using the
function fgev from the evd package in R. For each
estimation, we discard a burn-in sample of size 100 000; the retained
sample used to represent the posterior distribution has size M = 10 000,
with a thinning interval of 10 applied in order to reduce the
autocorrelation inherent in MCMC chains produced with the Metropolis–Hastings
algorithm.
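The study uses MCMCmetrop1R from the R package MCMCpack; purely to make the algorithm concrete, here is a minimal random-walk Metropolis sketch in Python for the GEV posterior with the priors described above. All function names, the proposal covariance, and the chain length are our own illustrative choices.

```python
import numpy as np
from scipy import stats

def log_posterior(theta, x):
    """Unnormalized log-posterior for theta = (mu, log_sigma, xi), combining
    the GEV log-likelihood with the priors of the text: vague normals on mu
    and log(sigma), and Beta(6, 9) on xi + 0.5."""
    mu, log_sigma, xi = theta
    sigma = np.exp(log_sigma)
    if not -0.5 < xi < 0.5:          # prior support for the shape parameter
        return -np.inf
    if abs(xi) > 1e-8:
        z = 1.0 + xi * (x - mu) / sigma
        if np.any(z <= 0.0):         # observation outside the GEV support
            return -np.inf
        ll = -np.sum(np.log(sigma) + (1.0 + 1.0 / xi) * np.log(z) + z ** (-1.0 / xi))
    else:                            # Gumbel limit (xi = 0)
        t = (x - mu) / sigma
        ll = -np.sum(np.log(sigma) + t + np.exp(-t))
    lp = stats.norm.logpdf(mu, 0.0, np.sqrt(1e5))
    lp += stats.norm.logpdf(log_sigma, 0.0, np.sqrt(1e5))
    lp += stats.beta.logpdf(xi + 0.5, 6.0, 9.0)
    return ll + lp

def metropolis(x, theta0, prop_cov, n_iter=20000, seed=0):
    """Random-walk Metropolis with a multivariate normal proposal."""
    rng = np.random.default_rng(seed)
    chol = np.linalg.cholesky(prop_cov)
    theta = np.asarray(theta0, dtype=float)
    lpost = log_posterior(theta, x)
    chain = np.empty((n_iter, 3))
    for i in range(n_iter):
        cand = theta + chol @ rng.standard_normal(3)
        lcand = log_posterior(cand, x)
        if np.log(rng.uniform()) < lcand - lpost:   # accept/reject step
            theta, lpost = cand, lcand
        chain[i] = theta
    return chain
```

In practice the chain would be run with a long burn-in and thinning, as in the text, and the proposal covariance would be scaled from the maximum-likelihood covariance matrix, mirroring the fgev-based scaling described above.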
Draws from the posterior distribution Π(θ|D) can
be used to estimate other unknown quantities. Specifically, the predictive
distribution of a quantity y=g(⋅|θ), function of some
arguments and depending on the GEV parameters θ,
is expressed as
Π(y|D) = ∫ g(⋅|θ) Π(θ|D) dθ.
Hence, it fairly propagates the uncertainty related to parameter estimation
on the quantities of interest. However, a closed expression of the predictive
distribution can rarely be obtained, and it is often estimated using the
draws from the posterior distribution:
Π̂(y|D) = (1/M) ∑_{i=1}^{M} g(⋅|θ^(i)),
where θ(i) is the ith draw from the posterior
distribution. For example, the predictive distribution of a quantile
q=Q(p|θ) (see Eq. ) can be estimated by
applying Eq. () with
g(⋅|θ(i))=Q(p|θ(i)), i.e.,
π̂(q) = (1/M) ∑_{i=1}^{M} Q(p|θ^(i)).
Likewise, the predictive density of a maximum log-BA x̃ can be
estimated with the expression π̂(x̃) = (1/M) ∑_i f(x̃|θ^(i)), which can be used in turn to
estimate specific probabilities, such as the exceedance probability of some
critical log-BA xc. For example, the probability of having a BA x
exceeding 10000 ha in a year is obtained as
Pr(x>10000) = ∫_{log(10000)}^{∞} π̂(x̃) dx̃,
where x̃ is integrated over all values exceeding 10000 ha, on
the logarithm scale.
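As a sketch (with our own helper names, not the study's code), the Monte Carlo estimate of such an exceedance probability from posterior draws can be written as follows; since the model is fitted to log-BA, the critical BA is log-transformed first.

```python
import math
import numpy as np

def gev_cdf(x, mu, sigma, xi):
    """GEV cumulative distribution function F(x | mu, sigma, xi)."""
    if abs(xi) > 1e-8:
        z = 1.0 + xi * (x - mu) / sigma
        if z <= 0.0:
            # below the lower bound (xi > 0) or above the upper bound (xi < 0)
            return 0.0 if xi > 0 else 1.0
        return math.exp(-z ** (-1.0 / xi))
    return math.exp(-math.exp(-(x - mu) / sigma))

def predictive_exceedance(ba_crit, draws):
    """Pr(BA > ba_crit), averaged over an M x 3 array of posterior
    draws (mu, sigma, xi) of a GEV fitted to log-BA."""
    xc = math.log(ba_crit)
    return float(np.mean([1.0 - gev_cdf(xc, m, s, k) for m, s, k in draws]))
```

With posterior draws from the MCMC sampler, predictive_exceedance(10000, draws) would estimate the annual probability of a fire larger than 10 000 ha.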
Measures of similarity between two distributions
Numerous statistical tests aim at testing an assumption of
equality between two distributions (for example the Kolmogorov–Smirnov
test). These tests are adequate when two samples are compared
and when we want to test if the sampling distributions can be considered
equivalent. However, it must be noted that these statistical tests become
very powerful with large samples, which means that, in this case, even a
small difference will be considered highly significant. As a result, comparing
posterior distributions with these statistical tests generally results in a
rejection of the assumption of equality, even when the distributions are very
similar.
As an alternative, different measures have been proposed by statisticians in
order to quantify the similarity between two distributions, or, in other
words, how much two distributions overlap (e.g., Mahalanobis distance,
Kullback–Leibler divergence). In this paper, we apply the Bhattacharyya
coefficient. This coefficient is easily
normalized between 0 and 1, which facilitates its interpretation. Its
expression is given by
BC(p,q) = ∫ √(p(x) q(x)) dx,
where p(x) and q(x) are the probability density functions to be compared.
We have 0≤BC≤1, with BC=0 indicating that the
distributions do not overlap and BC=1 indicating a complete
similarity.
For the sake of comparison with statistical tests, when this coefficient is
applied to two normal distributions N(0,1) and N(2,1) (shift of 2
standard deviations), it has a value of 0.61. Thus, two distributions can be considered similar when the Bhattacharyya coefficient exceeds
0.61. This reference value, while subjective, gives a more precise idea of
the interpretation of the Bhattacharyya coefficient.
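The 0.61 reference value is easy to reproduce numerically. The sketch below (our own code, not the study's) approximates the coefficient by summation on a dense grid using scipy's normal densities.

```python
import numpy as np
from scipy import stats

def bhattacharyya(p_pdf, q_pdf, grid):
    """Bhattacharyya coefficient BC = integral of sqrt(p(x) q(x)),
    approximated on an evenly spaced grid covering both densities."""
    dx = grid[1] - grid[0]
    return float(np.sum(np.sqrt(p_pdf(grid) * q_pdf(grid))) * dx)

grid = np.linspace(-10.0, 12.0, 20001)
# two unit-variance normals shifted by two standard deviations:
bc = bhattacharyya(stats.norm(0, 1).pdf, stats.norm(2, 1).pdf, grid)
# analytically, BC = exp(-0.5) ≈ 0.61 for this shift
```

For two normals with common variance σ², the coefficient has the closed form exp(−(μ1−μ2)²/(8σ²)), which the grid approximation matches closely.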
Results
Relevance of the GEV distribution
For each of the three regions PCr-1, PCr-2, and PCr-3, we collect the annual
maxima of BA. Following the methodology described in
Sect. , we fit a GEV distribution to the logarithm of
annual maxima of BA for the periods 1973–1994 and 1995–2016 in order to
assess the impact of the fire policy established in 1994.
Table presents the acceptance rates of the
Metropolis–Hastings algorithm, which lie between 0.32 and 0.42. These
acceptance rates are reasonable and correspond to common requirements.
Indeed, acceptance rates between 0.23 and 0.5 are usually advised in order to
obtain a fast convergence of the algorithm. A
visual assessment of the traces of the sampled parameters did not reveal any
convergence issue (results not shown).
Acceptance rates from the Metropolis–Hastings algorithm.
Zone     1973–1994   1995–2016
PCr-1    0.42        0.34
PCr-2    0.35        0.32
PCr-3    0.33        0.40
Taking the mean of the posterior distributions as point estimates, Fig.
shows the QQ plots of the fitted GEV distributions for
each region and each period. This figure compares observed maxima and
quantiles from the fitted GEV distribution corresponding to empirical
probabilities of observed maxima. Points lie close to the 1:1 line, which
indicates that these fits are reasonable.
QQ plot of the fitted GEV distributions for each region (PCr-1, PCr-2, and PCr-3)
and each period (1973–1994 and 1995–2016). Point estimates of the GEV
parameters are obtained as the mean of the posterior distributions. The BA
(ha) is indicated on a log-scale.
Change in fire extremes before and after 1994
Posterior distribution estimates for the extreme value parameters μ,
σ, and ξ, before and after 1994 and for each region, are shown in
Fig. and provide a direct assessment of the parameter
uncertainty. Large differences are obtained for some parameters and some
regions. For example, for region PCr-1, the location parameter μ has
clearly decreased between the two periods, indicating that the fitted
distribution has globally shifted toward smaller BAs after 1994. The scale
parameter has decreased in regions PCr-2 and PCr-3 and slightly increased in
region PCr-1. The shape parameter exhibits a slight increase in all regions.
However, parameter uncertainty is large and the posterior distributions
before and after 1994 overlap, which limits the interpretation of this increase.
Comparison of posterior distribution of the GEV parameters, before
(dashed line) and after 1994 (solid line): location parameter μ (log ha),
scale σ parameter (log ha), and shape parameter ξ (dimensionless)
for each region.
Table quantifies these differences using the Bhattacharyya
coefficient. As indicated above, this coefficient measures the similarity
between two distributions. It is equal to 0 when the distributions do not
overlap and 1 if they are identical. Table indicates that,
for region PCr-1, posterior distributions for μ before and after 1994
are clearly different, whereas they are roughly similar for the parameters
σ and ξ, confirming the global shift of the distribution mentioned
above. For region PCr-2, only σ has a significant change
before and after 1994, while no difference is revealed for region PCr-3.
Bhattacharyya coefficients applied to posterior densities of the GEV
parameters, before and after 1994, for each region. Asterisks indicate
coefficients below the reference value of 0.61.

Zone     μ       σ       ξ
PCr-1    0.11*   0.89    0.98
PCr-2    0.79    0.36*   0.87
PCr-3    0.97    0.91    0.98
For a return period T (e.g., T=100 years), the corresponding BA estimate
is directly obtained, for a set of parameters θ, using
Eq. . With the Bayesian approach, we can obtain
the predictive distribution estimate of these return levels by applying
Eq. () to all MCMC samples of parameters
θ(i),i=1,…,M drawn from the posterior distribution
(see Eq. ).
Figure compares the predictive
distributions of BA corresponding to return periods ranging from 2 years to
50 years, for periods before (in red) and after 1994 (in blue), for the three
regions. For all regions, and particularly for region PCr-1, BAs corresponding
to the smallest return periods (e.g., 2 years) have decreased, which
demonstrates the efficiency of the fire policy established in 1994. However,
for region PCr-1, the 90 % credible intervals overlap for higher return
periods (e.g., 50 years). For this critical region, considering the large
uncertainties associated with high return periods, no clear conclusion can be
drawn in terms of decrease or increase in extreme return levels. The
corresponding Bhattacharyya coefficients are given in
Table and indeed show that return levels have changed significantly only
for return periods smaller than 10 years. For region PCr-2, the
decrease in the return levels is larger and persists even for higher return
periods (see Table ). Finally, there is no noticeable change
for region PCr-3.
Comparison of return levels, before and after 1994, for each region.
The 90 % credible intervals are shown and median return levels are indicated
in plain lines. BA (ha) is indicated on a log scale.
Bhattacharyya coefficients applied to different return levels,
before and after 1994, for each region. Asterisks indicate coefficients
below the reference value of 0.61.

Zone     2 years   5 years   10 years   20 years   50 years
PCr-1    0.16*     0.46*     0.66       0.80       0.90
PCr-2    0.65      0.45*     0.49*      0.57*      0.68
PCr-3    0.95      0.92      0.92       0.94       0.96
The probability of having very large fires (>10000 ha) in a year is
estimated using Eq. () and is given in
Table . These estimates indicate a clear decrease in this probability
between the two periods in regions PCr-1 (from 0.076 to 0.037) and PCr-2
(from 0.040 to 0.008). In PCr-3, the probability of having such large fires
remains small (from 0.009 to 0.006).
Estimated probability of having very large fires (>10000 ha)
in a year, by area and for periods 1973–1994 and 1995–2016.
Zone     1973–1994   1995–2016
PCr-1    0.076       0.037
PCr-2    0.040       0.008
PCr-3    0.009       0.006
Return levels for the recent period (1995–2016)
Table reports the BA corresponding to high return periods
(20 and 50 years), for the recent period 1995–2016. As an indicator of the
central tendency of the predictive distributions of return levels,
Table reports the medians, i.e., the quantile 0.5. Median
return levels correspond to BAs smaller than 5000 ha in regions PCr-2 and
PCr-3. This is not the case in region PCr-1, for which BAs corresponding to
20- and 50-year return periods are very large (respectively 7050 and
12 510 ha). It can be seen that the 90 % credible interval
corresponding to the 50-year return level of 12 510 ha in PCr-1 is very
wide (from 5400 to 68 000 ha), which limits the interpretation of this
estimate.
BA (ha) corresponding to different return periods (years), by area
and for the recent period (1995–2016). Intervals between brackets indicate
90 % credible intervals of the corresponding return levels.
Zone     20 years                50 years
PCr-1    7050 [3560; 22 910]     12 510 [5400; 68 000]
PCr-2    2420 [1380; 6490]       4130 [2020; 17 160]
PCr-3    1070 [520; 3700]        2170 [890; 12 620]
Discussion: using return levels to manage fire risk
This study provides, for the first time, a model of fire return periods in
southern France, taking into account the nonstationary nature of fire activity in
space and time. EVT provides strong theoretical grounds for the application
of extreme value distributions. We show that the GEV distribution adequately
fits the annual maxima of BA for each region and each period. In addition,
we properly evaluate the uncertainty related to high return levels using a
Bayesian approach to estimate the parameters of the GEV distribution. As a
result, the significance of potential changes can be assessed.
It appears that very large fires (>10000 ha) are currently rare but not
improbable in region PCr-1 (probability of 0.037) and less likely in regions
PCr-2 (probability of 0.008) and PCr-3 (probability of 0.006); see
Table . Hence, a major finding of this study is that,
despite the new suppression-oriented policy set up in 1994, very large fires
can still occur in southern France, and especially in the regional hot spots
of Corsica and Provence. Indeed, in region PCr-1, the distribution of maximum
BA has globally shifted toward smaller values between the periods 1973–1994
and 1995–2016 (see Fig. ), but the current
distribution is also more skewed. As a consequence, the BA corresponding to a
return period of 50 years has not significantly decreased.
These findings provide information for mitigating fire risk. In the most
fire-prone areas of France (Corsica and Provence), it is increasingly clear
that several ongoing changes will create opportunities for extreme fire
events, as in many European countries.
Indeed, a combination of climate change, fuel accumulation, and increasing
human pressure on ignitions in urbanized areas is the root cause for a new
generation of wildfires characterized by extreme behavior
. These new types of devastating fires are
increasingly observed in southern Europe, such as in Portugal in 2017 (the
Pedrógão Grande megafire burned 45 000 ha in June, 2017, and
resulted in 66 fatalities) or in France in 2016 and 2017. For example, the
Rognac-Vitrolles fire burned 2700 ha (August 2016) and destroyed 39
dwellings and a school. A total of 560 trucks and 1800 firemen were
necessary to suppress the fire. Many of those fires are beyond the present
suppression capacity. They often display eruptive or
erratic behaviors, and they are able to propagate through landscape mixes in
the wildland–urban interfaces and in cultivated areas. Recent studies in
southern France have shown that new combinations of prolonged droughts, heat
waves, and windy periods lead to a higher frequency of such extreme fire
events
.
In areas prone to extreme wildfire events, maintaining the pre-positioning of
firemen in the field and massive aerial and ground forces for suppression is
necessary every year. However, this may not be sufficient to control all
fires during certain periods of very high hazard. In this context, managing
landscapes and fuels, limiting unwanted ignitions, and involving the public
in self-protection (i.e., tackling the root causes of fire risk)
are key to reducing fire risk in the long term.
Concerning other pyroclimatic regions where fire activity is lower, the
probability of very large fires is more limited. In mid-elevation hinterland
(PCr-2) and in mountains (PCr-3), only about a dozen fires larger than
100 ha have been recorded each year on average since the 1990s. The ongoing climate
changes favor an extension of areas with high fire weather danger towards the
north and in the mountains, and the season
favorable to fire activity becomes longer. However, ignitions remain limited
and fuels are rarely fully dry. This limits the probability of having very
large fires. In these regions, aerial surveillance by air tankers has
been increased in order to detect fires as soon as possible.
Finally, it must be remembered that, in this study, we use the occurrence and
BA of very large fires as a surrogate for extreme fire danger because no
reliable and exhaustive database provides information on the real impacts of
fire. This deserves discussion. On average, the largest fires actually have
huge impacts on humans (including fire crews) and the environment because
they are likely to affect a large number of assets. This is especially true
along the
Mediterranean coastline, which is densely populated, densely urbanized, and
very attractive for tourists. In addition, the largest fires require high
expenditure to be suppressed. In this context, BA is
likely the most appropriate surrogate for extreme fire events. However, a
perspective is to better estimate their impacts by comparing the pre- and
post-fire biomass in ecosystems and by building databases dedicated to
impacts on people and infrastructures.