Does anyone still believe that web services will be published and consumed indiscriminately on the open Internet? I keep seeing references to that early vision as if it were still alive, but surely everyone realizes by now that it was just a geek pipe dream: the idea that your servers would go out on the Internet and 'discover' services listed by all comers in registries conforming to the pompously named "Universal Description, Discovery and Integration" protocol (i.e. UDDI).
UDDI in its 1.0 state was a classic case of 'wouldn't-it-be-cool-if' technology, the sort of standards effort that Tim Bray rightly lambasts in his blog posting this week, The Atom End-Game:
"... standards organizations shouldn't try to invent technology ... What on earth would give us the idea that we're smart enough to predict what features the world is going to want? Our job is to write down what we already know works, to do it as cleanly and clearly as possible in as few pages as possible, then get out of the way."
By the way, I do hope the Atom WG heeds his wise counsel. Come on, guys, I need a finished spec so I can press ahead with further development of Loosely Coupled's blog publishing engine, which uses the Atom schema.

If standards bodies were supposed to innovate, they'd be called innovation bodies. Standards, by definition, can't be innovative. This is the trouble with all the work currently being poured into the WS-* stack: talented people are wasting time and resources devising capabilities that will never, ever be used. Only those specs that reflect established, proven practices will become durable standards.
UDDI 1.0 was a classic case of a standard that tried to be innovative. No wonder no one used it. It omitted a vital ingredient, because the geeks who designed it wanted the machines to do everything without involving human beings at all: they left out the element of trust.
If web services are ever going to be of any use in the practical, real-life business world, then they have to operate within an architecture that allows their use to be mediated by trust. Service consumers are going to want to set policies that limit the services they'll use to certain trusted providers. This is more than simply a matter of negotiating contracts, because, just like in the real world, how do you know you can trust a provider to abide by its contract commitments? Conversely, sometimes you don't need the hassle of drawing up and reading through contracts. You might decide you're willing to trust Amazon, for example, because you trust its brand, and you'll trust certain partners because you have business relationships with them. You might drop the trust requirement for non-critical services (RSS feeds from bloggers, for example), but for anything transactional or with transactional implications, trust will be essential, backed up by actionable contracts if things go wrong.
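To make the policy idea concrete, here is a minimal sketch, in Python, of the kind of consumer-side trust check described above. All the names (providers, the `may_consume` function, the trust list) are hypothetical illustrations, not any real registry or policy API: transactional services are only consumed from providers on an explicit trust list, while non-critical services such as a blogger's RSS feed skip the trust hurdle entirely.

```python
# Hypothetical trust list: providers trusted by brand (Amazon, say)
# or through an existing business relationship with a partner.
TRUSTED_PROVIDERS = {"amazon.com", "partner.example.com"}

def may_consume(provider: str, transactional: bool) -> bool:
    """Return True if policy permits consuming a service from this provider."""
    if not transactional:
        # Non-critical services (e.g. an RSS feed) need no trust guarantee.
        return True
    # Anything transactional, or with transactional implications,
    # must come from a provider we have decided to trust.
    return provider in TRUSTED_PROVIDERS

print(may_consume("blogger.example.org", transactional=False))  # feed: allowed
print(may_consume("amazon.com", transactional=True))            # trusted brand: allowed
print(may_consume("unknown.example.net", transactional=True))   # untrusted: refused
```

The point of the sketch is simply that the gating decision involves a human judgment, the contents of the trust list, that no registry protocol can automate away.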
UDDI, now at version 3.0, can't do any of this trust stuff and has stopped attempting to be a blueprint for a universal services registry. It is now targeted at internal use by enterprises that have large numbers of services and want to track them. But its future is not yet assured. There are other ways of maintaining services registries, and they may turn out to work better than UDDI. They may also happen to work better with whatever trust mechanisms evolve, but that's still an area where most of the innovation has yet to happen, and therefore no one can know what's going to work. Maybe sometime far in the future, once everyone understands how to manage trust in relation to web services, it will finally become possible to automatically discover and consume services on the public Internet safely. But that's not what web services or SOA are trying to do today, and anybody who still believes otherwise is out of touch with the market as it actually exists.