I have just stepped away from looking at Sentinel Hotspots, a service run by Geoscience Australia (www.hotspots.dea.ga.gov.au), and the data being collected covers the area where most of this magazine's readership lives. Then I looked at a new app called floodMapps and all the information it is pulling in about floods and flood predictions. If I walked through any one of a dozen control centres, I’d see staff vacuuming up data to be used in short-term decision making, but I’d also see megabytes, no, terabytes, of information going onto hard drives to be stored for later use. This occurs year in, year out, for every disaster and every agency.
It has been pointed out to me that the data stored during and after a major disaster is nothing compared with the data collected daily in the normal running of the many emergency services around the world. Every time you see a fire truck travelling down the road, a report is generated. Where is the black hole gobbling up all this information? Unfortunately, I know where it is all hiding.
It’s in ever-increasing data-storage bunkers run by each agency, only occasionally shared with government departments or other services, or released to meet legislative or legal requirements. Making matters more difficult, the information is stored on paper files, microfiche, and all manner of floppy discs and hard drives, using various archiving platforms.
And this is not even the main problem: many services have a set storage lifetime, so at the end of that period the data is destroyed, never to be used by any researcher, historian or investigator.
It is a disgrace to see this data gathering dust. Just look at what is happening in the world around you now: data miners are sucking up tonnes of data, analysing it and putting the information to use for commercial purposes. Your data footprint is such that your life is an open book – no sooner do you buy a bottle of wine at the supermarket than your phone tells you where to pick up the same wine at a discount. It also calculates the best route to the shop from where you are standing.
The world is using data, so why is ours hiding? Europe and the USA are attempting to publish meaningful statistics through the NFPA, CTIF and the World Fire Statistics Centre, but they all suffer the same problem: they only receive what they ask for, and even that is often sanitised. To gain maximum benefit, the ideal would be to open up the data sets for artificial intelligence (AI) bots to mine for correlations, trends, comparisons and so on.
One task would be to find a formula for the most effective standard of fire cover to ensure equity; another would be to optimise the number and size of fire appliances, or to move resources to the areas of calculated risk.
How about buildings: are the billions of dollars' worth of fire equipment built into buildings cost-effective? Do we need as many fire stations in new estates? Then there are the risk equations that insurance companies could use to set premiums.
Is there an arsonist working in a particular area, or using the same or similar methods over a decade? Is there a consumer product causing fires?
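Questions like these are exactly where open, deidentified incident data would earn its keep. As a purely illustrative sketch (the file name, column names and thresholds below are my own assumptions, not any agency's actual schema), a few lines of Python could surface both kinds of pattern from a shared incident table:

```python
import pandas as pd

# Illustrative only: scan a deidentified incident table for recurring
# ignition products and for clusters of similar deliberate fires.
# The file and columns (incident_id, year, suburb, cause,
# ignition_product, method) are hypothetical.
incidents = pd.read_csv("deidentified_incidents.csv")

# Flag consumer products that appear as the ignition source far more
# often than the typical product does.
product_counts = incidents["ignition_product"].value_counts()
threshold = product_counts.mean() + 3 * product_counts.std()
suspect_products = product_counts[product_counts > threshold]
print("Products with unusually high fire counts:")
print(suspect_products)

# Look for possible serial arson: the same method recurring in the
# same suburb across several years of deliberately lit fires.
deliberate = incidents[incidents["cause"] == "deliberate"]
clusters = (
    deliberate.groupby(["suburb", "method"])
    .agg(fires=("incident_id", "count"), years=("year", "nunique"))
    .query("fires >= 5 and years >= 3")
)
print("Recurring deliberate-fire patterns worth a closer look:")
print(clusters)
```

Nothing in that sketch is clever; the point is that none of it is possible while the records sit in separate bunkers.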
I could go on listing applications forever; some may come to light by accident, some through deliberate research and some through custom and practice.
The fact is that without full and open access to the (deidentified) data held across many agencies, we will not get the benefits on offer in this AI world. Some are nervous that the data will expose flaws in an agency. Yes, this is true, but are we not better off finding the issues that need addressing early?
I would like to dedicate this editorial to Tom Wilmot of the World Fire Statistics Centre, who left us in 2007 aged 93 after dedicating his life’s work to solving this problem.
For more information, email neil.bibby@mdmpublishing.com