Last week Dale wrote about the problems surrounding large geospatial datasets. That got me thinking about large geospatial datasets from another angle, delivering them to the web browser.
Google was the first mainstream company to bring large geospatial datasets to the web when they launched Google Maps back in 2005. Google Maps delivers the data seamlessly to the user by serving up many small tiles; the client (web browser) then reassembles them into one big image. Most other mapping frameworks have since adopted a similar approach.
Serving static map tiles is very fast, but it just doesn’t cut it for the GIS professional, as we often want to see and manipulate our vector data (points, polylines and polygons). Displaying vector data as overlays on our mapping framework of choice is easy: several commercial and open source solutions let you stand up a mapping service and expose your data in a web-friendly format such as KML or GeoRSS. The problem arises once this data reaches the web browser.
So we have established that working with large datasets in the web browser is difficult. However, there are several things we can do on both the client and server side to mitigate the issue:
- Only return data that the user needs to see.
This is the single most important point. When requesting data, the client should send its current bounding box, and the service should be configured to return only the data within (or in the immediate proximity of) that box. You usually need to attach the client request to the map’s mouse-move or pan event in some way. Including the zoom level in the request is also usually required (or you can configure this on the server): if the bounding box covers the whole of North America, you don’t want to return every point within that area. Instead, you might show a single, differently styled point for each state that contains data.
Another way to limit the data returned is to allow the client to pass filter parameters, such as an ID or feature class, which further narrow down the selection.
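As a rough sketch of the server side of this, the function below filters features by bounding box, zoom level and an optional feature class. The names (`filterFeatures`, the feature shape, the zoom cutoff of 6) are my own illustrations, not any particular framework’s API:

```javascript
// Illustrative only: feature shape and function names are made up.
// A feature is { coords: [lon, lat], class: "..." }.
function inBounds(feature, bbox) {
  const [minLon, minLat, maxLon, maxLat] = bbox;
  const [lon, lat] = feature.coords;
  return lon >= minLon && lon <= maxLon && lat >= minLat && lat <= maxLat;
}

// Return only features inside the client's current bounding box,
// optionally narrowed by a feature-class filter. At low zoom levels
// (zoomed far out) we bail out and signal the caller to aggregate
// instead, e.g. one styled point per state.
function filterFeatures(features, bbox, zoom, featureClass) {
  if (zoom < 6) {
    return { aggregate: true, features: [] };
  }
  const selected = features.filter(f =>
    inBounds(f, bbox) &&
    (featureClass === undefined || f.class === featureClass));
  return { aggregate: false, features: selected };
}
```

On the client you would call the service with the map’s current extent and zoom each time the user finishes panning, rather than on every pixel of mouse movement.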
- Only return the exact attributes you need.
Whatever you return to the client, the browser has to hold in memory, so the less you send back the better. Make sure you strip out all unneeded attributes; do your users really need to see the OBJ_ID column?
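A minimal sketch of this idea: whitelist the attributes you actually want the client to see before serializing. The feature shape and attribute names here are hypothetical:

```javascript
// Illustrative only: keep just a whitelist of attributes before
// sending a feature to the browser, dropping internal columns
// such as OBJ_ID or SHAPE_LEN.
function pruneAttributes(feature, keep) {
  const attrs = {};
  for (const key of keep) {
    if (key in feature.attributes) {
      attrs[key] = feature.attributes[key];
    }
  }
  return { coords: feature.coords, attributes: attrs };
}
```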
- Simplify the geometry.
If you are working with polylines or polygons, you may want to think about simplifying your geometry. For example, if your database holds the road network for an entire state derived from survey points, you likely don’t need that level of detail just to overlay the roads on a map. Applying a generalization technique can reduce the number of points considerably.
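One common generalization algorithm is Ramer–Douglas–Peucker: points whose perpendicular distance from the simplified line falls below a tolerance are dropped. The article doesn’t name a specific algorithm, so this is just one reasonable choice, sketched here:

```javascript
// Perpendicular distance from point p to the infinite line through a and b.
function perpDist(p, a, b) {
  const dx = b[0] - a[0], dy = b[1] - a[1];
  const len = Math.hypot(dx, dy);
  if (len === 0) return Math.hypot(p[0] - a[0], p[1] - a[1]);
  return Math.abs(dy * p[0] - dx * p[1] + b[0] * a[1] - b[1] * a[0]) / len;
}

// Ramer-Douglas-Peucker: recursively keep the point farthest from the
// segment between the endpoints if it exceeds the tolerance, otherwise
// collapse the whole run to just the two endpoints.
function simplify(points, tolerance) {
  if (points.length < 3) return points.slice();
  let maxDist = 0, index = 0;
  const last = points.length - 1;
  for (let i = 1; i < last; i++) {
    const d = perpDist(points[i], points[0], points[last]);
    if (d > maxDist) { maxDist = d; index = i; }
  }
  if (maxDist > tolerance) {
    const left = simplify(points.slice(0, index + 1), tolerance);
    const right = simplify(points.slice(index), tolerance);
    return left.slice(0, -1).concat(right); // avoid duplicating the split point
  }
  return [points[0], points[last]];
}
```

For survey-grade road data, running something like this server-side (with the tolerance tied to the requested zoom level) can cut the vertex count dramatically before the geometry ever reaches the browser.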
In my opinion the web is still a long way from being able to handle extremely large datasets (though companies like WeoGeo are doing some innovative things). Using the techniques outlined above, however, you can let your users work with large datasets by ensuring they are only handling a small subset of the data at any one time. In my next installment we will look at emerging technologies in HTML5 that will help us bring large datasets to the web and mobile devices.
These are the main techniques I use when bringing large datasets to the web. Does anyone else have other methods?