It’s 0600, I’m in Germany, and I’m pondering what I could possibly write about that fellow GIS comrades would find enlightening. Well, why not start with why I’m in Germany? I am here to deploy a web-based mapping solution built on ESRI’s server technology (version 9.3.1). Specifically, it uses the .NET ADF (I know what you’re thinking: NOT THE ADF), but I have to admit I have been impressed. The biggest-ticket item that has caught my attention is the Mach 5 renderer and how it works with the new map service definition (MSD) based map services. While I am still working on quantifying the gains (see Dan Levine’s recent blog), I have to say they have been noticeable.
Unfortunately, we have encountered some limitations that will, by the looks of things, possibly force us to go with a different ESRI development framework. The ADF is actually a really solid solution, but a few little things are causing issues: the use of sessions, for example, and the inability to handle switching the viewer’s core map projection. Note that this is not to be confused with on-the-fly projection (I want to throw that out there before I start getting pinged by all the super geeks).
This solution isn’t earth-shattering, but what it does do is allow us to work with regionalized map services. Within these regionalized map services we have a few core geographic extents that we’re interested in, and each of those core extents has a series of features associated with it. Now, it’s not an elegant solution; there is some administrative overhead in the data model (not to fret, we plan to build an admin module for it), but it does allow us to do a couple of things:
1 – Dynamically load the available features into the application’s tools without overwhelming the tools with all of the map service’s features.
2 – Make the tools map-service agnostic: the tools don’t go to the map service to figure out what features are available; instead they go to the data model and, through some quick data filtering, identify the available features.
3 – Start to break the dependency on the map services, the focus being to reduce the load on them, which directly relates to their availability.
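To make the idea concrete, here is a minimal sketch of the data-model lookup the tools would perform instead of interrogating the map service. Everything here is illustrative, not our actual schema: the region names, the feature names, and the dictionary-as-data-model are all stand-ins (in practice this would live in a database behind an admin module).

```python
# Hypothetical regionalized data model: each core geographic extent
# maps to the set of features associated with it. All names are
# illustrative placeholders, not the real schema.
REGION_FEATURES = {
    "Los Angeles": {"freeways", "fire_stations", "evacuation_zones"},
    "San Diego":   {"freeways", "harbors"},
    "Menifee":     {"parcels", "hydrants"},
}

def available_features(region):
    """Return the features a tool should expose for a region,
    resolved from the data model rather than the map service."""
    return REGION_FEATURES.get(region, set())

print(sorted(available_features("San Diego")))
```

The point of the sketch is the indirection: the tools filter against this lightweight structure, so the map service is never hit just to discover what is available.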
Unfortunately, we did not achieve all of our goals. As most of you are aware, there is a limit to the number of features any map service can carry before you start to experience performance degradation. What we were hoping was that, via the data model, we could control not only which map features were made available to the tools but also which features in the map service were rendered. The theory being that if we could control the features that got rendered in any map service, we could potentially dramatically increase the map service’s usability. This is how it would work:
One map service would be built out with three times the allowable features; say the magic number was 60, so we’d be looking at a map service with 180 unique features. The geographic extent for the map service would be Southern California. 60 of those 180 features would be dedicated to one geographic area of interest, say Los Angeles. The other 120 would be distributed between San Diego and Menifee. Well, if this worked, we’d be able to publish a 180-feature map service, but the server would render it with only the 60 features pertinent to the geographic extent the user chose to visit. I mean, come on, how hard can that be, right? Well, it’s hard enough that we haven’t cracked the nut on it… yet.
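We haven’t gotten the server to do this, but the behavior we were hoping for can be sketched. Using the numbers from the example above (180 published features, 60 per area of interest), the server would take the user’s chosen extent and narrow the render list to just that extent’s 60 features. The feature names and the region assignment are, again, purely hypothetical:

```python
# Illustrative 180-feature service: the first 60 features belong to
# Los Angeles, the next 60 to San Diego, the last 60 to Menifee.
ALL_FEATURES = {
    f"feature_{i:03d}": ("Los Angeles" if i < 60
                         else "San Diego" if i < 120
                         else "Menifee")
    for i in range(180)
}

def features_to_render(extent):
    """The hoped-for behavior: render only the subset of the 180
    published features pertinent to the extent the user visits."""
    return [f for f, region in ALL_FEATURES.items() if region == extent]

print(len(features_to_render("Los Angeles")))  # 60 of the 180 features
```

If the renderer honored a filter like this, the 180-feature publication would behave like a 60-feature service at draw time, which is exactly the performance win we were after.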
In any event, let me caveat this by saying I am not a developer. I’m actually an analyst by trade, project manager by day (I think that’s actually up for debate right now), and dare I say it, a technical architect somewhere in between.
Well, stay posted; if (when) we get the breakthrough we believe is possible, I’ll provide an update to the story.