Have you ever had the frustration of clicking on a bookmark and finding it no longer goes to the page you expected? Of course you have. It happens all the time on the Web. URLs are tightly coupled labels that link to a specific resource, and unless the owner of that resource goes to the trouble of redirecting from the old location to the new one, the link breaks as soon as the resource is moved.
Imagine how much more frustrating and complex to manage this all becomes in the context of web services. Web service components live at URIs (short for uniform resource identifiers, which are like URLs but might be, for example, a call to an application function rather than a specific file location). The coupling is just as tight: if the owner of the resource decides to move it from one URI to another (for example, changing the domain or amending the directory path), then every application that links to that resource will break.
This is one of the key reasons for needing directory services and/or UDDI in a web services environment. Nobody has a really pressing need to be able to go out and discover hitherto unheard-of resources on the network (as the creators of UDDI in its first iteration seemed to believe). But we do need something that allows us to move things around from time to time without breaking everything we did before. It's all about effective change management.
There is another way of tackling this problem, of course. Rather than adding change management capability as an optional extra layer on top of the existing architecture, why not recognise that change is a constant, and build it into the architecture itself? This already happens in the domain name system (DNS), which separates the domain name (eg looselycoupled.com) from the physical IP address (eg 188.8.131.52) of each individual web server. This allows the owner to move the domain from one physical server to another (or to cluster it across several servers) without disrupting anyone's ability to link to the resources within that domain.
The same flexibility could be extended to resource identifiers if they had a similar system that separated the name of the resource from its specific location on the Web. Just as the DNS system separates the identification of a web server from the physical machine on which it runs, so this new system would maintain a unique identifier for each individual resource that would not be permanently tied to a specific web server location.
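The indirection being described can be sketched in a few lines of code. This is purely illustrative: the identifier syntax and function names below are invented for the example and are not actual XRI syntax. The point is that callers hold a stable identifier, and only the resolver's mapping changes when a resource moves.

```python
# A toy resolver mapping stable identifiers to current locations,
# in the same way DNS maps a domain name to its current IP address.
registry = {
    "xri://example/weather-service": "http://server1.example.com/ws/weather",
}

def resolve(identifier):
    """Look up the current location for a stable identifier."""
    return registry[identifier]

# An application links only to the stable identifier...
assert resolve("xri://example/weather-service").startswith("http://server1")

# ...so when the owner moves the resource, only the registry entry
# changes, and every existing link keeps working.
registry["xri://example/weather-service"] = "http://server2.example.com/weather"
assert resolve("xri://example/weather-service").startswith("http://server2")
```

A real resolution system would of course be a distributed, delegated service rather than an in-process dictionary, but the contract is the same: callers never store the physical location.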
This is the objective of the eXtensible Resource Identifier (XRI) initiative announced this week by OASIS. Its work will build on the XNS protocol, which grew out of work started in the mid 1990s by Seattle-based identity infrastructure company OneName, which has contributed the XNS specification on a royalty-free basis, in accordance with OASIS policy on intellectual property rights. (There's useful background in some Digital ID World articles from last year on the history of XNS, an interview with Drummond Reed, the creator of XNS and OneName's founder, who is co-chair of the OASIS committee, and OneName's company strategy).
The hurdle that XRI will have to overcome is that it aims to supplant the URI. This makes it the most ambitious, if also the most elegant, of all the various schemes currently being advanced to make URIs more discoverable and transformable. Alongside UDDI and the various directory services schemes of larger vendors, there are more homegrown proposals such as RSD (Really Simple Discoverability) and an approach based on WSIL (Web Services Inspection Language). But I would imagine it's possible for XRI, like these other solutions, to layer itself above the URI system initially (in much the same way, I suppose, that IP addressing existed before DNS was added on top), until it reaches sufficient critical mass to bypass URIs where appropriate.
All of this is important because, as I noted when looking back over the events of 2002, resource identifiers are going to continue to be a vital means of accessing and assembling web services. I am surprised that XRI so far seems to have aroused little interest from the REST school of thought. I have a feeling XRI will be something to keep an eye on this year.
posted by Phil Wainewright 4:05 AM (GMT) | comments | link
Microsoft decouples .Net from Windows
The technologists have won a rare victory over the marketeers at Microsoft this week: the latest renaming of the next release of Windows, now Windows Server 2003, has removed the ".NET" branding that was first attached to the product in June 2001, the original launch date of Microsoft's .Net strategy.
The move is significant because, instead of attempting to push Windows as the default platform for .Net, Microsoft will now promote its flagship server platform as "Microsoft .NET Connected", a badge that third-party vendors will also be able to earn. As the Register's John Lettice explains:
"This logo will indicate 'its ability to easily and consistently connect disparate information, systems, and devices to meet customers' people and business needs (regardless of underlying platform or programming language).' That last bit may have some significance: is it perhaps more important that Windows has fallen off .NET than that .NET has fallen off Windows?"
Some significance indeed. The move relegates Windows to the status of just another server platform within the .Net Framework, Microsoft's umbrella architecture for web services, confirming that .Net is now more strategically important to Microsoft than Windows.
How long before some version of Linux earns the "Microsoft .NET Connected" badge? That day may be closer than anyone expects: I predict it will be in the first half of 2004.
posted by Phil Wainewright 2:13 AM (GMT) | comments | link
Thursday, January 09, 2003
Invaders on the LAN
In these days of wireless connection and the ubiquitous web, the LAN is no longer a haven of safety. There are so many routes in and out of your network that a firewall won't give you any meaningful protection at all. Here are three good examples:
Secret Web services may pose new risks. In this SearchWebServices interview, Gartner analyst Raymond Wagner tells Eric Parizo how web services can transmit payloads and data through firewalls without anyone being any the wiser.
Supporting Multiple-Location Users. In this Alertbox essay last May, usability guru Jakob Nielsen highlighted the most common reason why network security gets breached: "Users need to access sensitive files from their laptops and home computers, so they transfer these files to their local hard disks. High-level CIA officials have done this; you can bet that average business professionals in your company violate security as well -- or they wouldn't get any work done."
WiFi working. In my ASPnews column this week, I described why WiFi will lead users to turn to web-based applications. But when I downloaded McAfee.com Personal Firewall to protect my PC when working on a WiFi LAN, I was dismayed to discover that the only way to restrict access from other PCs on the LAN was using IP addresses, while the standard option was, "make all computers on your LAN trusted." How can I be sure that my home-office WLAN will always be internally secure? And what about when I go out on the road and forget that I've given several LAN addresses carte-blanche file sharing rights?
All of these factors mean that we must wipe the notion of LAN-level security out of our minds. The only security levels that will work are at cluster level, device level, application level, user level and file level. I include 'cluster' as a concept because an enterprise is likely to have certain 'software fortress' domains where it still makes sense to maintain security at the perimeter. But these are exceptions to the rule, and keeping them secure will mean enforcing much more stringent measures than are either possible or desirable among the mainstream user population. In a web services world, maintaining good security will take a lot more than simply upgrading to XML firewalls. It's going to mean completely rethinking the ground rules.
posted by Phil Wainewright 7:16 AM (GMT) | comments | link
Wednesday, January 08, 2003
Top 5 analyst reports of 2002
Over the past year, Loosely Coupled has been tracking the research output of the various analyst groups that follow web services and business process topics. We're now able to share the fruits of that work, having published our online research directory. This currently provides two views: an unfiltered list of all reports as they are published, and a shorter list of highlighted reports.
The highlighted list currently contains about 16 or 17 reports, and represents the ones that have caught our eye for one reason or another during the past year. Having selected our favorites over the year, we decided to go one step further and name our top five of 2002, which we've published alongside the highlights list.
If you click on the 'more info' link in any of the tables, a popup window appears with further details of the report, linking to a fuller description on the relevant analyst's web site and, where appropriate, directly to a download URL. I'm sure the popup will irritate some people, but it's set up so that you can view the details of each report in turn as you scan through the directory, which seems like a useful feature. Please give us feedback if you disagree (or agree, for that matter).
Obviously, we recognise that, as the directory grows, it's going to be useful to filter for specific topics, authors, price brackets or analyst groups. Rest assured we'll be adding capabilities along those lines before long. We have similar plans for the events directory, and for other as yet unannounced directories.
posted by Phil Wainewright 1:25 PM (GMT) | comments | link
Monday, January 06, 2003
Is a three-hour Passport outage really a reason "not to depend on centralized identity services?" According to CNET's report, the outage is the first since Passport was down for several hours one Sunday last May. Averaged out to an hour a month over the eight-month period, that's not such a bad record: comfortably better than 99.5 percent uptime, which is about as good as you'll get from any reputable service provider.
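As a back-of-the-envelope check (the exact outage durations aren't in the record, so the downtime figure here is illustrative, taken from the rough "hour a month" averaging above):

```python
# Availability over an eight-month period, assuming roughly an hour
# of downtime per month.
months = 8
hours_per_month = 730  # ~365.25 days * 24 hours / 12 months
total_hours = months * hours_per_month  # 5840 hours

downtime_hours = 8  # about an hour a month
uptime = 1 - downtime_hours / total_hours
print(f"{uptime:.2%}")  # roughly 99.86%

# For comparison, a 99.5% uptime benchmark over the same period
# would allow about 29 hours of downtime.
allowed_at_99_5 = 0.005 * total_hours
print(f"{allowed_at_99_5:.1f} hours")
```

The point of the arithmetic is simply that a handful of hours of downtime over eight months sits well within what a 99.5 percent service level permits.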
The alternative (other than paying a hefty premium for higher service levels) is to do it yourself, in which case you'll almost certainly do a whole lot worse. Yet most technology users instinctively prefer to endure worse service from their own resources simply because it makes them feel more in control. The only way that third-party providers can compete with this highly irrational and yet thoroughly ingrained sentiment is by surrounding their service provision with as many self-service visibility and control mechanisms as they can reasonably afford.
Unfortunately, few providers, even if they recognize the phenomenon, understand why their customers have such unreasonable expectations. Mostly they decide that the only solution is to embark on an education program to tell their clients why 99.5% uptime is so much better than they could ever have achieved on their own resources. Meanwhile, the providers do nothing to prevent customers feeling helplessly isolated whenever there's an interruption to their excellent service record, with the result that the dispiriting experience of those 0.5% hours of downtime completely wipes out all the goodwill earned during the other 99.5%.
Rather than eking out an extra 0.1% to 0.2% in uptime that will earn them no gratitude, providers should invest more in systems and procedures that allow them to keep their customers informed when things do go wrong. Managing availability is as much about managing perceptions as it is about maintaining continuity of service. Electricity and telecoms suppliers have this down to a fine art, but online service providers are still novices at it.
posted by Phil Wainewright 8:05 AM (GMT) | comments | link