What are the key issues for geographic information technology?

If one looks at the field of GIS today, what really are the critical issues confronting users?  Theoreticians?  Developers?  

Where are the dominant problems?  

What do we as the broader community require? Better user interfaces?  Better data structures?  Better tools for data analysis?  Easy data sharing and aggregation?  Better measurement and data acquisition?

As you might anticipate, I am going to argue that the dominant issue facing all of these groups today is easy data sharing and aggregation.  To see that this is so, let’s have a look at some of the alternatives.  I won’t claim that there is nothing left to do in these other areas; I will, however, claim that in all of them we have adequate solutions today.  Within all of these groups, with the exception of easy data sharing and aggregation, the playing field is substantially different today than it was 20 years ago.

Data Structures:

While there is a concerted move toward unifying object-oriented structure models (e.g. BIM, IFC) with more conventional geographic abstractions (e.g. GIS), this is really a solved problem from the data structure point of view.  The real problem here is data sharing and aggregation under a different guise.  Information created to produce building drawings is hard to share with the people concerned with building operation and maintenance.  The data structures to treat both as different views of the same design over time already exist, while the infrastructure to enable the easy transformation and distribution of this information does not.
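
By way of illustration, and only as a hypothetical sketch rather than BIM or IFC itself, a single design element can already carry both a design-time drawing view and an operations-and-maintenance view; the class and field names below are invented for the example.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Tuple

# Hypothetical, simplified model: one design element, several "views" over time.
@dataclass
class DrawingView:                       # what the architect or engineer draws
    geometry: List[Tuple[float, float]]  # 2D outline, e.g. a wall footprint
    drawn_on: date

@dataclass
class OperationsView:                    # what facilities management cares about
    asset_id: str
    last_inspected: date
    maintenance_notes: List[str] = field(default_factory=list)

@dataclass
class DesignElement:
    element_id: str
    drawing: DrawingView
    operations: OperationsView

# Both views refer to the same underlying element; the hard part is not this
# structure, but moving and transforming the data between the parties involved.
wall = DesignElement(
    element_id="wall-107",
    drawing=DrawingView(geometry=[(0, 0), (5, 0), (5, 0.3), (0, 0.3)],
                        drawn_on=date(2007, 3, 1)),
    operations=OperationsView(asset_id="FM-00107",
                              last_inspected=date(2008, 6, 15)),
)
```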

More conventional GIS data structures (i.e. conventional to GIS practitioners) have changed very little in the past 20 years, and are of course based on geometry constructs from 200 to 2000 or more years ago, depending on your point of departure.

It is curious that, in spite of the efforts of the OGC and ISO to define coverages and features abstractly in such a way as to blur the raster-vector distinction, real progress in people’s thinking is much less than one might expect.  OGC WCS supports only raster structures.  OGC WFS by and large supports only vector structures.  People still use these words even when they don’t mean what they say!
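
To make the split concrete, here is roughly what the two request types look like.  The endpoint URL and the layer and coverage names are made up, and the exact parameters vary by service version; this is a sketch, not a reference for either standard.

```python
from urllib.parse import urlencode

# Hypothetical server and layer/coverage names, for illustration only.
BASE = "https://example.org/geoserver/ows"

# WFS GetFeature: returns vector features (points, lines, polygons).
wfs_params = {
    "service": "WFS",
    "version": "1.1.0",
    "request": "GetFeature",
    "typeName": "demo:parcels",
}

# WCS GetCoverage: returns raster coverages (gridded data such as imagery or DEMs).
wcs_params = {
    "service": "WCS",
    "version": "1.0.0",
    "request": "GetCoverage",
    "coverage": "demo:elevation",
    "bbox": "-123.3,49.2,-123.0,49.4",
    "crs": "EPSG:4326",
    "format": "GeoTIFF",
    "width": "512",
    "height": "512",
}

print(BASE + "?" + urlencode(wfs_params))
print(BASE + "?" + urlencode(wcs_params))
```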

Even wavelet representations, which offer a potential unifying approach to the infamous divide between “rasters and vectors”, have been with us for some time and are now widely implemented in imaging and computer graphics.  Furthermore, the ultimate gain of such representations over current approaches, which rely somewhat on brute force, is still unclear.
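
For readers unfamiliar with the idea, a single level of a 2D Haar transform splits a raster into a coarse average plus detail bands, which is the multi-resolution behaviour that makes wavelets attractive.  The following is a generic numpy sketch, not tied to any GIS product, and it assumes even raster dimensions for simplicity.

```python
import numpy as np

def haar2d_level(raster: np.ndarray):
    """One level of a 2D Haar-style transform: coarse average plus three detail bands."""
    a = raster[0::2, 0::2]
    b = raster[0::2, 1::2]
    c = raster[1::2, 0::2]
    d = raster[1::2, 1::2]
    approx    = (a + b + c + d) / 4.0   # low-resolution version of the raster
    horiz_det = (a - b + c - d) / 4.0   # horizontal detail
    vert_det  = (a + b - c - d) / 4.0   # vertical detail
    diag_det  = (a - b - c + d) / 4.0   # diagonal detail
    return approx, (horiz_det, vert_det, diag_det)

# Toy 4x4 "raster"; applying the transform repeatedly yields a resolution pyramid.
grid = np.arange(16, dtype=float).reshape(4, 4)
coarse, details = haar2d_level(grid)
print(coarse)   # 2x2 coarse approximation of the original grid
```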

User Interfaces:

This is an area that always consumes enormous energy and never fails to generate interest and excitement.  “Google Earth is so easy to use!!”  While ease of use is clearly important, I would argue that the success of Google Earth has far less to do with its ease of use and far more to do with “I can see my house”.  The navigation paradigm is smooth and clever, to be sure, but not enormously more so than in previous generations of GIS or CAD tools.  The real trick is that Google has provided a window on a huge aggregated base of data, the key ingredient in being able to “see my house”.

The past several years have also seen important user interface developments, from hyperbolic tree structures (e.g. the Macintosh user interface) to hyperbolic lenses (e.g. Idelix) and context-sensitive viewing.  Many of these are indeed significant advances, although they really reflect the availability of increased compute power rather than truly new ideas.  They are of little use if they cannot be applied to current, accurate information.
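
As a rough idea of what such focus-plus-context lenses do, here is a generic fisheye distortion of the sort described in the visualization literature; it is emphatically not the Idelix algorithm, and the parameter names and values are illustrative only.

```python
import math
from typing import Tuple

def fisheye(point: Tuple[float, float],
            focus: Tuple[float, float],
            radius: float = 100.0,
            distortion: float = 3.0) -> Tuple[float, float]:
    """Generic focus+context ("fisheye") distortion: points near the focus are
    pushed outward so the focal area appears magnified, while everything beyond
    the lens radius is left untouched."""
    dx, dy = point[0] - focus[0], point[1] - focus[1]
    r = math.hypot(dx, dy)
    if r == 0.0 or r >= radius:
        return point                                  # outside the lens, or at its centre
    x = r / radius                                    # normalised distance, 0..1
    new_r = radius * (distortion + 1.0) * x / (distortion * x + 1.0)
    scale = new_r / r
    return (focus[0] + dx * scale, focus[1] + dy * scale)

# Near the focus a point is magnified roughly (distortion + 1) times;
# at the lens boundary the mapping rejoins the undistorted view.
print(fisheye((10.0, 0.0), focus=(0.0, 0.0)))    # moved outward (magnified)
print(fisheye((150.0, 0.0), focus=(0.0, 0.0)))   # unchanged
```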

Tools for Data Analysis:

One of the early and signal characteristics of GIS was the ability to perform geospatial analysis, such as overlays, buffer computation and routing to name just a few, and these capabilities have long been well developed.  In the end, this is nothing more than mathematics applied to the geometric characteristics of geographic objects.  While we can develop faster and more efficient algorithms, and make these easier to use, there is nothing really fundamental on the horizon that will dramatically impact geographic information utilization. I think we will see more analytical tools in the hands of end users (routing is the obvious example), but that reflects the restructuring of geographic information technology rather than any significant developments in data analysis itself.
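
To underline the point that these operations are just geometry, buffering and overlay take only a few lines with an off-the-shelf geometry library.  The coordinates here are invented, and Shapely is simply one convenient choice, not a recommendation.

```python
from shapely.geometry import LineString, Polygon

# Hypothetical geometries in some projected coordinate system (units of metres).
pipeline = LineString([(0, 0), (100, 0), (100, 80)])
parcel = Polygon([(60, -30), (140, -30), (140, 40), (60, 40)])

# Buffer: a 10 m corridor around the pipeline.
corridor = pipeline.buffer(10.0)

# Overlay: which part of the parcel falls inside the corridor?
affected = parcel.intersection(corridor)

print(round(affected.area, 1), "square metres of the parcel lie in the corridor")
```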

Easy Data Aggregation and Sharing:

It’s the data that matters!  Nothing could make this point more obvious than the rise of Google Earth and Google Maps.  The availability of relatively accurate, high-resolution imagery and maps on a global basis has supplanted what we used to think of as the task of national mapping agencies.  It has even left some such agencies wondering what their mission in life is.

That being said, Google Earth (and MS Virtual Earth) also reveal what they cannot do, and that is provide current and accurate data from the source.  As a result, we see the “meltdown in the maze”, big trucks being routed through small towns, and a host of other consequences of missing information.  This is not to say that this is the responsibility of Google or Microsoft; it is simply that there is no infrastructure for them to leverage to ensure the currency, richness and accuracy of their data.  They do a yeoman’s job under the circumstances.

The local level is no different.  Every day, somewhere in the world, a pipe or cable tray is broken and services are disrupted because someone did not have access to accurate and current information.  The costs are not easily estimated, but I would guess they run into many billions of dollars each year.  Additionally, there are construction delays, permitting delays, confrontations and a host of other consequences of misinformation, all associated with the development of just about everything from a new mine in a wilderness area to a new building in the center of a large city.  These costs are even more difficult to estimate, but would run into the tens of billions or more.

All of the fancy visualization and analysis tools, along with the slowly proliferating list of location-aware services, are of use only if they can access or be based on current and accurate information, and this can only be accomplished if we make data aggregation and sharing as easy as possible.

Some people had hoped that this could be solved by the so-called Web 2.0 approach, meaning data generated “out in the wild”.  While there is no question that this is an important part of the solution, such observations cannot change the fact that

1)  these are largely observations which must be “validated”; and

2)  lots of information derives from the business processes by which societies govern themselves (and this is more than just government).  

You cannot change the boundaries of your property by sketching them on Virtual Earth. There is a legal process by which this happens, and when it happens it is known in a particular institution first.  The same is true of new building plans, in-progress construction engineering and so on.  So more than Web 2.0 is required!
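
A sketch of what the “validation” in point 1) above might look like in practice: a crowd-sourced report is treated as provisional until it can be checked against an authoritative record.  Everything here, the function names, the registry and the matching rule, is hypothetical and only meant to illustrate the workflow.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    feature_id: str          # which feature the contributor believes they updated
    reported_value: str      # e.g. a street name, closure status, land use
    source: str              # "crowd" or "authoritative"

# Hypothetical authoritative registry, keyed by feature id.
AUTHORITATIVE = {
    "road-42": "closed for repairs",
}

def validate(obs: Observation) -> str:
    """Accept authoritative records outright; hold crowd reports for confirmation."""
    if obs.source == "authoritative":
        return "accepted"
    official = AUTHORITATIVE.get(obs.feature_id)
    if official is None:
        return "pending review"          # nothing to check against yet
    return "accepted" if obs.reported_value == official else "flagged for review"

print(validate(Observation("road-42", "closed for repairs", "crowd")))  # accepted
print(validate(Observation("road-42", "open", "crowd")))                # flagged for review
```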

The drive for easy data aggregation and sharing is an old one, so old that many people, myself included, blanch at the very mention of the term SDI.  We have all talked for decades about this need.  We all understand the importance of it.  The need is akin to having an accounting system for the whole company, rather than one for each department with no way of doing an overall consolidation.  The purpose of an accounting system is to make visible what is going on in a company so that those in the company can take appropriate action.  The purpose of GeoWeb/SDI is to help provide this visibility for our society.  Clearly this is the most important issue for geographic information technology!
