
Stability in Standards Counts… a Lot

In the dot-com era of the late 1990s, people talked about doing things in “Internet time” and spoke of the Internet, and its associated standards, as “rapidly changing”.  In the popular press, and unfortunately also among many software professionals, the idea that standards should change quickly was often seen as desirable.  I believe, however, that this is not only false, but that it rests on an incorrect impression of software development in general, and of the Internet in particular.

The reality of the Internet and the reality of software development should, I believe, drive our approach to standards development.

To begin with, I think one should think of the Internet as a permanent but evolving information infrastructure, with the evolution generally trending toward higher levels of abstraction and application support.  This means that more and more we can expect the Internet to incorporate layers of software that directly deal with various types of applications.  In this sense, the Internet is more like a building or a wall, with the various layers being added in succession to provide more and more functionality with each passing year.  Of course, just as in the case of physical infrastructure, the lower layers can be replaced or revised in order to support greater functionality higher up.  Also, just as in the case of physical infrastructure, successive changes become more difficult, and more time-consuming, even with software’s ability to provide information hiding.  Rather than rapidly changing, I think we should see the Internet as slowly progressing, being laid down brick by brick.

The other component of this discussion is the process of software development.  This is still very much a human-centered activity.  Although various application programming libraries have contributed significantly to productivity, it remains highly dependent on our ability to conceive and understand the structures with which we are working.  These are processes that take time, and that depend on an overall framework of stability.

Some examples illustrate these arguments.

Geography Markup Language (GML) was introduced in 2000 as a Recommendation Paper of the OGC (GML 1.0).  The fundamental concepts laid down in this initial version have not substantially changed between version 1.0 and the current version, 3.2.1.  With version 2, GML was ported to XML Schema and, from that point on, the concrete encoding model has also changed very little.  This is a good thing.  While many people decry the fact that GML was not “widely adopted”, the intervening time allowed developers and architects to embrace the GML model and to build application schemas, and this is now being done in a wide range of communities such as aviation (AIXM), aviation weather (WXXM), geotechnical engineering (DIGGS and GeoSciML), urban planning and development (CityGML), and imagery (GMLJP2), to name only a few.  (Source:  http://www.ogcnetwork.net/gmlprofiles)  This is very substantial progress.
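The idea of a GML application schema can be made concrete with a small sketch.  The following Python fragment, using only the standard library, assembles a GML-style feature instance.  The gml:Point and gml:pos elements follow GML 3.2 conventions, but the app:Borehole feature type and its namespace are hypothetical, invented for this example.

```python
# Illustrative sketch only: a hypothetical "app:Borehole" feature type
# that reuses GML 3.2 geometry elements, built with the standard library.
import xml.etree.ElementTree as ET

GML = "http://www.opengis.net/gml/3.2"
APP = "http://example.com/app"  # hypothetical application namespace
ET.register_namespace("gml", GML)
ET.register_namespace("app", APP)

# The feature itself comes from the (hypothetical) application schema...
feature = ET.Element(f"{{{APP}}}Borehole", {f"{{{GML}}}id": "bh-001"})
ET.SubElement(feature, f"{{{GML}}}name").text = "Test Borehole 1"

# ...while its geometry reuses the standard GML geometry model unchanged.
location = ET.SubElement(feature, f"{{{APP}}}location")
point = ET.SubElement(location, f"{{{GML}}}Point",
                      {"srsName": "urn:ogc:def:crs:EPSG::4326"})
ET.SubElement(point, f"{{{GML}}}pos").text = "51.5 -0.12"  # lat lon

xml_text = ET.tostring(feature, encoding="unicode")
```

The point of the pattern is precisely the stability argued for above: the gml: vocabulary stays fixed while each community defines its own feature types in its own namespace.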

There are many, however, who think that radical surgery on GML will obtain even broader adoption.  This is a shaky belief at best.  What is needed now, in the GML community, is not radical change and new namespaces, but more tools and the ability to manage GML “objects” in a logical fashion (e.g. profiles!).  These will speed GML adoption far more than further tinkering in the standards shop.  Let’s not throw the baby out with the bath water.

As another example, consider the development of Scalable Vector Graphics (SVG).  This standard was pushed forward early in the development of XML and is now a mature, and quite powerful, language for graphics drawing.  Unfortunately, SVG suffered somewhat in relation to Flash, and is still only weakly supported in terms of authoring tools and browsers.  I believe, however, that this is about to change.  SVG is supported in HTML 5.0 (again, note how long we have been developing HTML – slow and steady wins the race!), which means that first-class graphics are coming to the web top uniformly, regardless of which platform is chosen.  Furthermore, there is now hardware support available, through the OpenVG consortium.  This will accelerate SVG (and, yes, Flash too) on all sorts of devices, but most especially on the portable ones that people carry in their pockets.  What some people may not be aware of is the development of SVG Map (http://esw.w3.org/SVG_Map).  This standard, in conjunction with renewed support in HTML 5 and OpenVG, means that a new and powerful mapping platform is coming to the web top.  SVG 1.1 dates from 2003.  In 2011, it will begin to blossom.
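Because SVG is just XML, generating it programmatically is straightforward.  Below is a minimal sketch in Python (standard library only) that emits a single polygon of the kind an SVG-based map would be composed of; the coordinates and the region name are invented for illustration.

```python
# A minimal sketch: one polygon representing a hypothetical region
# boundary, the basic primitive of an SVG map layer.
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
ET.register_namespace("", SVG_NS)  # serialize with SVG as the default namespace

svg = ET.Element(f"{{{SVG_NS}}}svg",
                 {"width": "200", "height": "200", "viewBox": "0 0 200 200"})
region = ET.SubElement(svg, f"{{{SVG_NS}}}polygon", {
    "points": "20,180 100,20 180,180",  # hypothetical boundary vertices
    "fill": "lightblue",
    "stroke": "navy",
})
# An accessible label; browsers typically show it as a tooltip.
ET.SubElement(region, f"{{{SVG_NS}}}title").text = "Region A"

document = ET.tostring(svg, encoding="unicode")
```

With SVG embedded directly in HTML 5, a fragment like this can sit inline in a page, styled and scripted like any other element.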

One final example is that of CSW-ebRIM, an OGC specification for registries.  Registries are a technology that provides some basic mechanisms for managing metacontent using built-in structures such as taxonomies, associations, collections (packages), and slots.  Registries are transactional (unlike most notions of catalogue) and hence can be updated across the Internet.  The CSW-ebRIM specification had a tough time getting accepted in the early days.  Many people opted initially for the simpler but much less expressive UDDI specification; over time, however, the flexibility of ebRIM has proven its worth, and we now have a solid specification.  This is not the time to dive off the deep end into something else; it is the time to incrementally improve the existing specification.  We need to tighten the definition of Extension Packages, and then build (as standards) Extension Packages for various domains.  Some of these exist for CRS, for general metadata (CIM for ISO 19139), for Earth Observations, etc., but many more can be created.  This is where we should now focus our energies – on more Extension Packages and more tools.
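The built-in structures named above – slots, classifications against a taxonomy, typed associations, and transactional update – can be sketched very roughly as plain data types.  The Python below is illustrative only; the class and field names are my own shorthand, not the actual CSW-ebRIM information model.

```python
# Rough, illustrative sketch of ebRIM-style registry structures.
# Names are invented shorthand, not the CSW-ebRIM schema.
from dataclasses import dataclass, field

@dataclass
class RegistryObject:
    id: str
    name: str
    slots: dict = field(default_factory=dict)            # ad hoc name/value pairs
    classifications: list = field(default_factory=list)  # taxonomy node ids

@dataclass
class Association:
    association_type: str  # e.g. "Contains", "RelatedTo"
    source_id: str
    target_id: str

# Registries are transactional: objects can be submitted and later updated.
registry: dict = {}

def submit(obj: RegistryObject) -> None:
    registry[obj.id] = obj  # a re-submission replaces the prior version

dataset = RegistryObject(
    id="urn:x-demo:dataset:1",
    name="Elevation Grid",
    slots={"format": "GMLJP2"},
    classifications=["theme:elevation"],
)
submit(dataset)
```

An Extension Package, in these terms, is essentially an agreed vocabulary of object types, slot names, taxonomies, and association types for one domain – which is why tightening that definition pays off across every domain that adopts it.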

A few years back, I used the rallying cry “Infrastructure, NOT Interoperability”.  By this, I meant that we should be focusing on the concrete means to share and exchange information, and not on the abstract notion of interoperability.  The infrastructure may not be perfect; the standards on which it rests may not be perfect either.  But interoperability will be defined in terms of the infrastructure itself.  In my view, at least, this is the only way forward.