Wildfires are dangerous and incredibly destructive events – difficult to fight and difficult to predict. Yet accurate predictions of wildfire behaviour are required for planning, early warning and operational emergency management.
The difficulty in prediction arises from a number of factors. Wildfires can be large enough to change the local atmosphere, causing feedback effects that are currently not well understood. The plume of a wildfire lofts flaming material, or firebrands, many kilometres downwind. These can start downwind spot fires in random, and essentially unpredictable, locations.
The fuel for a wildfire is highly variable – for example, a wildfire moving through a forest consumes leaf litter, live shrubs and leaves as well as dead material. This variability affects the spread of the fire and must somehow be accounted for in predictive models.
This complexity hasn’t deterred fire scientists, who have been experimenting for many years to understand how wildfires spread. The first Australian experiments were carried out by McArthur in the 1950s and ’60s. Later, large-scale experiments were carried out in eucalypt forest for Project Aquarius in the 1980s. Modern fire science uses state-of-the-art facilities, such as the CSIRO Pyrotron – a dedicated wind tunnel for experimental studies of wildfire spread.
These studies have resulted in a set of tables and mathematical formulas, known as empirical models, for predicting how fast a wildfire can spread under a given set of weather conditions and fuel characteristics. Other models have also been developed for the effect of terrain on the rate of spread, as fires travel faster uphill than downhill.
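To give a flavour of what these empirical models look like, here is a sketch based on the widely cited fit of the McArthur Mk5 forest fire danger meter by Noble, Bary and Gill (1980), together with the rule of thumb that spread roughly doubles for every 10 degrees of upslope. The coefficients are indicative only, not something to use operationally.

```python
import math

def ffdi(temp_c, rh_pct, wind_kmh, drought_factor):
    """Forest fire danger index, using the Noble, Bary and Gill (1980)
    exponential fit of the McArthur Mk5 meter."""
    return 2.0 * math.exp(-0.450 + 0.987 * math.log(drought_factor)
                          - 0.0345 * rh_pct + 0.0338 * temp_c
                          + 0.0234 * wind_kmh)

def rate_of_spread(fdi, fuel_t_ha, slope_deg=0.0):
    """Flat-ground forward spread rate in km/h, scaled by fuel load (t/ha),
    with an exponential slope correction: spread roughly doubles for
    every 10 degrees of upslope."""
    flat = 0.0012 * fdi * fuel_t_ha
    return flat * math.exp(0.069 * slope_deg)
```

A hot, dry, windy day (32 °C, low humidity, fresh winds) pushes the index into the "very high" range, and the spread rate then scales directly with how much fuel is on the ground.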
These empirical models have a long history of operational use, in areas such as emergency management and planned burning. The first ‘computers’ were simple yet ingenious hand-held cardboard meters. These meters had a number of logarithmic dials which could be turned to input the current wind speed and fuel conditions, and the resulting rate of spread could be read from a scale on the outer dial.
The advent of computers revolutionised our ability to model the world, and fire science was no exception. The first hand-held computing devices in the 1980s were readily adopted and programmed to provide rate-of-spread predictions for users in the field. Likewise, as the power of computers grew, the complexity and predictive ability of the models increased.
The first modern end-to-end fire prediction system was the US system BEHAVE. One of the first map-based systems was SiroFire, developed by CSIRO in the 1990s for Australian wildfires. Its interface was similar to the familiar Google Maps-based systems seen today. SiroFire was later re-developed into Phoenix, the operational fire prediction system currently used around Australia.
There are currently two broad categories of computer models for predicting the spread of wildfire: physical models, which aim to model every detail of the fire, and propagation models, which model the evolution of the fire perimeter. The mathematics behind physical models is extremely complex and the equations can take a very long time to calculate, even on modern supercomputers.
Propagation models, on the other hand, are much faster to compute and can be run on a laptop in only a few minutes. These are based on empirical models, in which the complex physics of the fire is encoded into a set of relatively simple mathematical equations.
The empirical formulas give the forward rate of spread of the fire in the direction of the wind, but predictions normally need the entire fire perimeter to be modelled. The development of the shape of a wildfire depends on the rate of spread against the wind, or backing rate of spread, and the rate of spread at the sides of the fire, or flanking rate of spread. These have been studied less than the forward rate of spread, but a few key observations have been made.
Early studies revealed that the perimeter of a wildfire tends to become elliptical, aligned with the direction of the prevailing wind. Many current wildfire prediction systems therefore model the growth of the fire as a set of overlapping ellipses. However, recent and more detailed large scale experiments have revealed fire growth is much more complex, especially where there is significant variation in fuel, wind and topography.
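The ellipse-template idea can be sketched in a few lines of code. This is a naive point-advection version of the Huygens-style construction: each perimeter point spawns a small wind-aligned ellipse with the ignition point at a focus, so spread is fastest directly downwind and slowest directly upwind. A production system would take the envelope of all the wavelets; here each point is simply pushed outward, which is only a sketch.

```python
import math

def elliptical_step(perimeter, wind_dir, head_ros, back_ros, dt):
    """Advance each perimeter point by one elliptical 'wavelet'.

    The ellipse has the ignition point at a focus: spread equals head_ros
    directly downwind, back_ros directly upwind, and varies smoothly in
    between. The outward direction of each point is approximated from the
    perimeter centroid."""
    cx = sum(x for x, _ in perimeter) / len(perimeter)
    cy = sum(y for _, y in perimeter) / len(perimeter)
    a = (head_ros + back_ros) / 2                       # semi-major growth rate
    e = (head_ros - back_ros) / (head_ros + back_ros)   # eccentricity
    new_perimeter = []
    for x, y in perimeter:
        out = math.atan2(y - cy, x - cx)    # approximate outward direction
        rel = out - wind_dir                # travel angle relative to wind
        r = a * (1 - e * e) / (1 - e * math.cos(rel))   # ellipse radius from focus
        new_perimeter.append((x + r * dt * math.cos(out),
                              y + r * dt * math.sin(out)))
    return new_perimeter
```

With a head rate of 2 and a backing rate of 0.5, the downwind point of a small perimeter moves four times further per step than the upwind point, and the fire stretches into the familiar wind-aligned ellipse.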
The next generation of fire propagation models use newer computer algorithms to simulate the spread of a fire. These don’t rely on pre-defined shapes, such as ellipses, for determining fire growth. These newer models can also handle any number of individual fires, allowing them to model complex phenomena such as the merging of multiple spot fires.
Our system, Spark, has been developed by CSIRO using the latest fire behaviour knowledge and computer science. This system allows the user to directly input a rate of spread formula for any fire model. This gives researchers the ability to test new fire growth patterns directly against experimental data, as well as develop new empirical models. The system was also designed with the future in mind – it is specially written to take advantage of the new graphics processing capability found in modern computers.
To run a fire simulation, you need a few ingredients. These include where the fire currently is, or where and when it started – either direct observations from units in the field or an infra-red line scan of a current fire. Perhaps the most important ingredient is the weather, as fires are strongly affected by the wind and by the moisture content of the fuel. Generally, the better the quality of the data provided to a predictive model, the more accurate the results.
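As a sketch, those ingredients might be gathered into a single input bundle like the one below. Every field name here is hypothetical, chosen for illustration – this is not Spark's actual input format.

```python
from dataclasses import dataclass

@dataclass
class SimulationInputs:
    """Hypothetical bundle of the ingredients a fire simulation needs.
    Field names are illustrative, not Spark's actual input format."""
    ignition_points: list        # (x, y, time) tuples from field reports or a line scan
    wind_speed_kmh: float
    wind_direction_deg: float
    temperature_c: float
    relative_humidity_pct: float
    fuel_load_t_ha: float
    fuel_moisture_pct: float
```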
One of the most interesting future directions for fire prediction is the assimilation of very recent data into the model. Satellite coverage is becoming both more frequent and of increasingly higher resolution. This data has multiple applications for fires – for example, the distribution and amount of fuel, water content and land cover can be directly processed from satellite data sets. Satellites equipped with infrared sensors can also directly detect live fires.
Another promising future direction is the combination of wildfire models with atmospheric weather models. Currently, weather data can be sourced from forecasts provided by national weather agencies. However, these forecasts do not include any effect of the fire on the local atmosphere, which can be very significant. These combined models are helping researchers to understand the complex feedback processes between the fire and the atmosphere.
One major difficulty in prediction is spot fires. Firebrands lofted in the fire plume are jostled by the turbulent motion of uplifting hot gases, and their paths are essentially random. Because of this, we can never predict where a particular firebrand might land and start a fire. Another difficulty is the sheer number of firebrands produced in a large fire, any of which may go on to start a downwind spot fire.
Researchers in the meteorological community are, however, making progress with this seemingly intractable problem. The trick is one well known to weather forecasters: for random events, a number of forecasts can be computed and combined into an overall forecast, which gives the probability of an event occurring.
Something very similar can be applied to fire prediction. The random path of firebrands in the atmosphere can be computed to see where they land and start spot fires. This can be done multiple times to produce different sets of spot fire positions. By combining the results, an overall chance of a firebrand igniting a particular location can be calculated.
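A toy Monte Carlo version of this idea is sketched below. The landing distance of each firebrand is drawn from a lognormal distribution – an assumption made purely for illustration, not a published firebrand model – and the fraction of runs in which each downwind cell receives at least one brand gives the chance of ignition there.

```python
import random
from collections import Counter

def spot_fire_probability(n_runs, n_brands, mean_dist, spread, grid=1.0, seed=0):
    """Monte Carlo sketch of downwind spot-fire likelihood.

    Each run lofts n_brands firebrands with random landing distances
    (assumed lognormal). Counting the runs in which each grid cell
    received at least one brand gives a per-cell ignition probability."""
    rng = random.Random(seed)
    hits = Counter()
    for _ in range(n_runs):
        # cells hit by at least one brand in this run
        cells = {int(rng.lognormvariate(mean_dist, spread) // grid)
                 for _ in range(n_brands)}
        for c in cells:
            hits[c] += 1
    return {c: n / n_runs for c, n in hits.items()}
```

The output is exactly the kind of map forecasters want: not "a spot fire will start here", but "there is an 80 per cent chance of ignition in this cell".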
A similar method can be applied to variations in the wind for a particular weather forecast. Although most weather forecasts are largely accurate, there are error bounds in any prediction. Multiple fire predictions can be run within these bounds and the results can be combined. The same method can also be applied to other factors that affect the fire, such as fuel amounts or moisture levels.
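Sketched in code, a wind ensemble might look like the following. The spread model here is a deliberately trivial stand-in (head speed taken as proportional to wind speed, with an assumed 0.05 factor), and the Gaussian error model is likewise an assumption for illustration.

```python
import random

def ensemble_reach_probability(base_wind, wind_error, hours, target_km,
                               n_members=500, seed=1):
    """Ensemble sketch: perturb the forecast wind within its error bounds,
    run a trivial spread model for each member, and report the fraction
    of members in which the fire head reaches target_km."""
    rng = random.Random(seed)
    reached = 0
    for _ in range(n_members):
        wind = rng.gauss(base_wind, wind_error)          # perturbed forecast
        head_speed_kmh = max(0.0, 0.05 * wind)           # toy spread model
        if head_speed_kmh * hours >= target_km:
            reached += 1
    return reached / n_members
```

Nearby locations come out with a probability close to one, distant ones close to zero, and the interesting band in between is where the ensemble earns its keep.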
The main benefit of increasing computing power is the ability to run multiple fire simulations to form such forecasts. The more individual predictions that can be computed, the better the forecast becomes, giving better information for decision making. The resolution and detail of the models can also be improved as computing power increases.
Due to the consequences of an incorrect prediction, the limitations of any wildfire model need to be clearly understood and tested. We’re working towards integrating tests in our system to automatically check the predictive ability of the model against real fire events.
Managers and planners for wildfires need the best possible tools for decision making. Our goal is to give them a reliable and provably accurate system that delivers the best possible predictions. With the power of modern computer hardware, such a system is finally within reach.
For more information, go to research.csiro.au/spark/