Each year, the December issue of GeoWorld magazine features an annual Industry Outlook, in which leading experts in the field share their forward-looking thoughts and ideas. The GeoPlace website publishes each respondent's full responses.
This year, Ron Lake had the following to say for 2011:
The geotechnology industry has come a long way to add the third dimension, but it seems to still have difficulties with the fourth dimension: time. Do you agree with that? If so, why has time been difficult, and what are the prospects of integrating time into geotechnology in the near future?
“Although I would disagree that the integration of 3-D is much more advanced than that of time, the issues with temporal support are more about mainstream implementation than they are about understanding how to do it. Geography Markup Language (GML) introduced a fairly complete and extensible model for encoding various time constructs in Version 3.0 (based on ISO 19108), and this is being used in a number of application schemas, including the AIXM (Aeronautical Information Exchange Model) being used in NextGen (U.S.) and the European SESAR commercial aviation IT systems.
In addition to providing for various means to express time (time instants, time intervals, durations, various types of clocks), GML provides the notion of a dynamic feature, in which specific properties of the feature are time varying. A dynamic feature instance has a history consisting of time slices, each with an associated time interval or time instant, and which contain the values of the time-varying properties for that time slice.
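The dynamic-feature pattern described above can be sketched in code. The snippet below builds a feature whose history is a sequence of time slices, each with a valid-time interval and the value of a time-varying property. The element names (`timeSlice`, `operationalStatus`, the `Runway` feature) are illustrative, loosely echoing the AIXM style; a real document must conform to the GML 3 / AIXM application schemas rather than this sketch.

```python
# A minimal, NOT schema-valid sketch of a GML-style dynamic feature:
# a history of time slices, each carrying a validity interval and the
# values of the time-varying properties for that slice.
import xml.etree.ElementTree as ET

GML = "http://www.opengis.net/gml/3.2"
ET.register_namespace("gml", GML)

def time_slice(begin, end, status):
    """Build one time slice: a valid-time interval plus one varying property."""
    ts = ET.Element("timeSlice")
    valid = ET.SubElement(ts, f"{{{GML}}}validTime")
    period = ET.SubElement(valid, f"{{{GML}}}TimePeriod")
    ET.SubElement(period, f"{{{GML}}}beginPosition").text = begin
    ET.SubElement(period, f"{{{GML}}}endPosition").text = end
    # Hypothetical time-varying property of the feature:
    ET.SubElement(ts, "operationalStatus").text = status
    return ts

feature = ET.Element("Runway")          # hypothetical dynamic feature
history = ET.SubElement(feature, "history")
history.append(time_slice("2011-01-01T00:00:00Z", "2011-03-01T00:00:00Z", "CLOSED"))
history.append(time_slice("2011-03-01T00:00:00Z", "2011-06-01T00:00:00Z", "NORMAL"))

xml_text = ET.tostring(feature, encoding="unicode")
print(xml_text)
```

The key idea is that the feature itself is one identity, while its time-varying state lives entirely in the ordered time slices.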
WFS 2.0 and Filter Encoding (FE) 2.0 further support the GML temporal model by providing temporal operators that enable a wide range of temporal and spatio-temporal queries.
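As an illustration of the kind of temporal query these standards enable, the snippet below assembles a Filter Encoding 2.0 `During` filter selecting features whose valid time falls inside a given interval. The operator name and `fes` namespace follow the OGC Filter Encoding 2.0 specification, but the property reference (`gml:validTime`) and overall layout are a sketch, not a validated WFS request.

```python
# Sketch of an FE 2.0 temporal filter: select features whose valid time
# lies During the year 2011. Not a complete, validated GetFeature request.
import xml.etree.ElementTree as ET

FES = "http://www.opengis.net/fes/2.0"
GML = "http://www.opengis.net/gml/3.2"
ET.register_namespace("fes", FES)
ET.register_namespace("gml", GML)

flt = ET.Element(f"{{{FES}}}Filter")
during = ET.SubElement(flt, f"{{{FES}}}During")
ET.SubElement(during, f"{{{FES}}}ValueReference").text = "gml:validTime"
period = ET.SubElement(during, f"{{{GML}}}TimePeriod")
ET.SubElement(period, f"{{{GML}}}beginPosition").text = "2011-01-01T00:00:00Z"
ET.SubElement(period, f"{{{GML}}}endPosition").text = "2011-12-31T23:59:59Z"

request = ET.tostring(flt, encoding="unicode")
print(request)
```

FE 2.0 defines a family of such operators (Before, After, During, TEquals, and others), which is what makes spatio-temporal queries expressible uniformly across servers.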
I believe mainstream implementation support for temporality has been constrained by a lack of easy-to-use tools for building and maintaining temporal spatial data, and by a lack of good user interfaces for managing and manipulating temporal data. The “time slider” in Google Earth stands out only because there has been a paucity of innovation in this area.
As a final note, I think that time will become a larger issue as we move (finally) beyond maps as an objective, and integrate more analytical and simulation capabilities into our urban and other geo-models. Here the time dimension is essential. I believe we have the database foundations and the temporal models as noted above, and better integration of numerical models will drive an upsurge in their exploitation in the near future.”
As the world’s data volumes enter the zettabyte level (10^21, or 1,000,000,000,000,000,000,000 bytes) and approach the yottabyte (10^24, or 1,000,000,000,000,000,000,000,000 bytes), will the amount of geospatial data become too much to possibly use, or will storage and, more important, analysis technology be able to keep up with the data deluge?
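For readers keeping the decimal SI prefixes straight, each step (kilo, mega, … zetta, yotta) multiplies by 1,000, so a zettabyte is 10^21 bytes and a yottabyte 10^24 bytes:

```python
# Decimal SI byte prefixes as powers of ten: each step multiplies by 1000.
PREFIXES = ["kilo", "mega", "giga", "tera", "peta", "exa", "zetta", "yotta"]
SIZES = {name: 1000 ** (i + 1) for i, name in enumerate(PREFIXES)}

print(f"zettabyte = {SIZES['zetta']:,} bytes")  # 1 followed by 21 zeros
print(f"yottabyte = {SIZES['yotta']:,} bytes")  # 1 followed by 24 zeros
```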
“I don’t see the explosion of data volumes as a serious problem from a basic storage and access standpoint, and I believe the basic hardware technologies will indeed keep pace. I think the challenge lies much more in the software domain, especially with respect to the analysis and the “analysis for understanding” of this information. Here I expect a lot of change in several areas.
The first is information distribution; the second, database technology; the third, how analytical technologies are implemented.
We’re building increasingly large and complex models of the world. Distributing these models will remain a requirement. At the same time, although the models are very large, changes to particular parts of the model may be small in relation to the model’s total size. This means that we will require infrastructure for model sharing that’s fine-grained and incremental, moving information to us as we require it. Large-scale, ad hoc copying will not be viable.
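The fine-grained, incremental sharing argued for above can be sketched as a simple version-based changeset: a client that tracks a version number per feature asks the server only for the features that have changed, rather than re-copying the whole model. All the names and structures here are hypothetical, invented purely to illustrate the idea.

```python
# Toy sketch of incremental model sharing: transfer only the features whose
# server-side version is newer than the client's copy, not the whole model.
def changeset(server_model, client_versions):
    """Return only the features the client is missing or holds stale."""
    return {
        fid: (version, data)
        for fid, (version, data) in server_model.items()
        if client_versions.get(fid, -1) < version
    }

server = {
    "road/17":  (3, {"surface": "asphalt"}),
    "road/18":  (1, {"surface": "gravel"}),
    "bridge/2": (5, {"status": "open"}),
}
client = {"road/17": 3, "road/18": 1, "bridge/2": 4}  # stale on bridge/2 only

delta = changeset(server, client)
print(delta)  # only bridge/2 crosses the network
```

However large the model grows, the cost of staying synchronized scales with the size of the change, not the size of the model, which is exactly why large-scale ad hoc copying loses out.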
The limitations of relational database technology have been exposed in specific problem areas (e.g., large collections of Web pages, Facebook users, etc.), and older and newer database technologies have evolved to meet these challenges (e.g., Cassandra). I believe this will also be the case for large geospatial databases, and that other models (e.g., functional data model) that more readily integrate analytical capabilities will come to the fore.
Finally, I anticipate that the model of “get the data from somewhere and analyze it here” will give way to more agent-based models, where the processing functionality is sent to the data. I believe we will see a revitalization of these approaches in the near future.
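The contrast between “get the data and analyze it here” and the agent-based alternative can be shown in miniature: the client ships a small function (the “agent”) to the node that holds the data and receives only the result, never the dataset. The `DataNode`/`run_agent` API below is invented for illustration, not any real system's interface.

```python
# Toy illustration of "send the processing to the data": only the small
# result crosses the network, not the (large) dataset itself.
class DataNode:
    """Holds a large dataset locally and runs visiting agents against it."""
    def __init__(self, records):
        self.records = records

    def run_agent(self, agent):
        # The agent executes where the data lives; only its return value
        # would travel back over the network.
        return agent(self.records)

node = DataNode(records=list(range(1_000_000)))  # stand-in for a big geo-dataset

# Agent: compute a summary statistic at the data's location.
mean = node.run_agent(lambda recs: sum(recs) / len(recs))
print(mean)
```

With a million records behind the node, the agent moves a few bytes of code one way and a single number back, rather than a million records to the client.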
So larger data volumes will demand more fine-grained distribution capabilities, combined with new data storage models and agent-based services.”