The economic and human consequences of natural disasters are enormous, and, due in part to global climate change, they are increasing dramatically.
It makes sense, therefore, to apply predictive simulations so people can better plan and prepare for these events. The goal is not to predict the event itself, but to predict its consequences.
In the United States, the Federal Emergency Management Agency (FEMA) created a multipurpose hazard modeling system (HAZUS) with the following objective: to predict the potential loss of life, financial cost and structural damage that could occur from an earthquake, major landslide, hurricane or flood.
Of course, running hazard-simulation models is quite different from using their results for emergency planning and response. Simulation modelers are technical specialists with scientific expertise in hazard events, typically concentrating on a particular type of event.
The people who use the results have different priorities and a different focus: their concern is with the consequences of the events they plan for.
“Is there an event that would kill more than 500 people and cost more than $2 billion in damage?” “How would that event’s impact be distributed?” “Which buildings would most likely suffer the greatest damage?” “Which bridges or roadways might be impaired?” The simulation modeler focuses on events, while the emergency planner focuses on probable consequences.
A recent project in Canada shows how these two worlds can be brought together in a practical manner using rapidly deployable Web-service technology. Canada has a nascent national Multi-Agency Situational Awareness System (MASAS), which now enables simulators to run their models, automatically extract indicators of interest and distribute those indicators to emergency planners through the MASAS network.
Of course, such information is voluminous, so a Web-based capability was needed to enable rapid querying and map visualization of simulation results. In addition, the results have a lifecycle that must be managed (e.g., results need to be verified before they’re released, may be found faulty after being in use, may be slated for revision, etc.).
The solution was quickly built on an open-standard Web-service platform that provides a set of standard features, many of which were used to meet the project’s requirements. An information model for simulation results was created and deployed, incorporating the event description and the various indicator parameters available to emergency planners.
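An information model of this kind can be pictured as an event description paired with a set of indicator parameters. The sketch below is purely illustrative; the field names and types are assumptions, not the project's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    """One indicator parameter exposed to emergency planners (names illustrative)."""
    name: str    # e.g. "estimated_fatalities"
    value: float
    unit: str    # e.g. "persons", "USD"

@dataclass
class SimulationResult:
    """Event description plus its extracted indicators (hypothetical model)."""
    event_type: str    # e.g. "earthquake"
    scenario_id: str
    indicators: list[Indicator] = field(default_factory=list)

# Example: an earthquake scenario with two indicators of interest
result = SimulationResult("earthquake", "scenario-042")
result.indicators.append(Indicator("estimated_fatalities", 512, "persons"))
result.indicators.append(Indicator("economic_loss", 2.1e9, "USD"))
```

In a deployed system, a record like this would be what the harvest step populates and what planners later query.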
Extraction and ingestion of the indicator parameters were automated, a task made straightforward by a simple harvest plug-in that mapped the extracted data directly into the model. No additional work was required for lifecycle management, because every object in the platform has built-in lifecycle support; users only need to specify which lifecycle states to use and which state transitions to allow.
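Declaring states and allowed transitions, with the platform enforcing them, amounts to a small state machine. Here is a minimal sketch of that idea; the state names (draft, verified, released, faulty, revision) echo the lifecycle described above but are assumptions, not the platform's actual vocabulary.

```python
# Allowed lifecycle transitions, as the user might configure them.
# State names are illustrative, inferred from the lifecycle described in the text.
ALLOWED_TRANSITIONS = {
    "draft": {"verified"},
    "verified": {"released", "draft"},
    "released": {"faulty", "revision"},
    "faulty": {"revision"},
    "revision": {"verified"},
}

class LifecycleObject:
    """Any platform object carries a lifecycle state; transitions are validated."""
    def __init__(self, state="draft"):
        self.state = state

    def transition(self, new_state):
        if new_state not in ALLOWED_TRANSITIONS.get(self.state, set()):
            raise ValueError(f"illegal transition: {self.state} -> {new_state}")
        self.state = new_state

# A result is verified, then released; an illegal jump would raise an error.
obj = LifecycleObject()
obj.transition("verified")
obj.transition("released")
```

The configuration-over-code design matters here: the project changed only the transition table, not the enforcement machinery.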
From Model to Meaningful
Much of the information in the simulations was spatial in nature (e.g., the locations of bridges and water mains), and handling it posed no problem, because the platform captures geometric properties in GML (Geography Markup Language).
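To make the GML encoding concrete, here is a sketch of reading a point geometry (say, a bridge location) with Python's standard library. The `gml:Point`/`gml:pos` elements follow the OGC GML encoding; the surrounding feature wrapper and coordinates are invented for illustration.

```python
import xml.etree.ElementTree as ET

# GML 3.1 namespace; later versions use a versioned URI such as .../gml/3.2.
GML_NS = "http://www.opengis.net/gml"

# Hypothetical feature fragment with a GML point geometry (lat/lon in EPSG:4326).
doc = f"""
<feature xmlns:gml="{GML_NS}">
  <gml:Point srsName="EPSG:4326">
    <gml:pos>49.25 -123.10</gml:pos>
  </gml:Point>
</feature>
"""

root = ET.fromstring(doc)
pos = root.find(f"{{{GML_NS}}}Point/{{{GML_NS}}}pos").text
lat, lon = (float(v) for v in pos.split())
```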
Out of the box, the platform provided the ability to perform ad hoc spatial, classification and property queries as well as use server-side stored queries, making event-consequences information simulated in the models easily accessible and searchable via the Web by emergency planners. The platform also made it easy to integrate existing mapping clients on which consequences could be displayed, including Google Earth, Google Maps and Esri’s ArcGIS.
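The essence of such an ad hoc query is a spatial test combined with a property predicate. The sketch below shows that idea in miniature, in memory rather than against the platform's query service; the asset records, bounding box and damage threshold are all invented for illustration.

```python
# Invented sample of indexed simulation results (locations and damage estimates).
results = [
    {"name": "Bridge A", "lat": 49.26, "lon": -123.11, "damage_pct": 80},
    {"name": "Bridge B", "lat": 49.10, "lon": -122.90, "damage_pct": 15},
    {"name": "Water main C", "lat": 49.28, "lon": -123.05, "damage_pct": 60},
]

def in_bbox(r, min_lat, min_lon, max_lat, max_lon):
    """Simple bounding-box spatial test on a result record."""
    return min_lat <= r["lat"] <= max_lat and min_lon <= r["lon"] <= max_lon

# A planner's question: which assets inside this area face severe damage?
severe = [
    r["name"]
    for r in results
    if in_bbox(r, 49.2, -123.2, 49.3, -123.0) and r["damage_pct"] >= 50
]
```

A real deployment would push both predicates to the server (spatial index plus property filter) rather than filtering client-side, but the shape of the question is the same.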
The result is a system in which hazard-simulation models can be run and automatically indexed, and the information content can be made immediately available via the Internet in a visual and searchable form meaningful to emergency planners and responders.
This article was originally published on GeoPlace.com.