A quick change of plans has cleared the way for me to attend at least the first half-day of Kevin Werbach's Supernova conference in Palo Alto on Monday. Through some bizarre coincidence, Bay Area citizens have the choice of three web services-related conferences next week. Supernova is by far the most radical of the three, with its marvelously eclectic mix of WiFi, weblogs, web services, and much else, all of which, as Kevin quite rightly points out, are intimately related but rarely examined in a single forum.
The event promises to be memorable, with a line-up of speakers that reads like a 'Who's Who' of leading thinkers about loosely coupled concepts. Frankly, I'm annoyed that this event didn't get picked up by this site's radar earlier on, but I'm glad to have the chance to be there, and with WiFi support promised on the premises, I hope to post from the event on Monday.
Then it's off to CNET's Building a Web Services Foundation conference at the Nikko in central San Francisco, where I am speaking about ROI on Wednesday. The Nikko is another WiFi-equipped venue, so if the technology works, I expect to make some more postings from there. I shall also be strolling a few blocks every so often to catch some of the sessions at DCI's The Real-Time Enterprise. All in all, it promises to be quite a productive trip, and I look forward to meeting quite a few of this site's readers while I'm in town.
We owe an apology to readers who are signed up to the weekly email bulletin. An intermittent glitch has meant that delivery to some of you has been sporadic in recent weeks. It's taken a while to identify this intermittent fault, which affected a random subset of our mailing list each week, but I'm glad to report that as of tonight it has been fixed, and normal service should resume with this weekend's mailing.
posted by Phil Wainewright 2:33 PM (GMT) | comments | link
The more attention gets focussed on web services, the more claptrap will get written about it, just as I warned back in April. Here are two sterling examples that have cropped up recently:
The absurd statistic
The latest quarterly survey of 400 US enterprises by Evans Data, reports WebServices.Org, has found that "80%, are already incorporating the leading web services standards - XML, WSDL, SOAP, and UDDI - into applications ... Incredibly, 98% expect to be working with web services standards within the next two years."
Well, if you include XML in the list, I would have thought the incredible statistic is the eight holdouts who still don't realize how pervasive XML is going to become. I mean, anyone who produces an RSS feed has adopted XML. Anyone who installs Office 11 will be using it. When you ask such a ridiculously catch-all question, of course you're going to get a big number. But only a fool would base strategic decisions on such poorly designed research.
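To see just how low the bar of "adopting XML" sits, here is a minimal sketch of producing an RSS-style feed item with nothing but Python's standard library. The feed title and link are invented for illustration; any weblog tool doing essentially this already counts its user as an "XML adopter" by the survey's measure.

```python
import xml.etree.ElementTree as ET

# Build a bare-bones RSS feed; the channel metadata here is invented.
rss = ET.Element("rss", version="0.92")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Example Weblog"
ET.SubElement(channel, "link").text = "http://example.com/"
item = ET.SubElement(channel, "item")
ET.SubElement(item, "title").text = "Early adopters take the plunge"

feed = ET.tostring(rss, encoding="unicode")
print(feed)
```

A handful of lines, no SOAP, no WSDL, no UDDI, yet the output is perfectly well-formed XML, which is exactly why a question that lumps XML in with the rest is guaranteed to produce a headline-grabbing percentage.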
The fireproof prediction
I'm not going to single out any particular analyst here because frankly the list of culprits is too long. I'm thinking of the hoary old assertion that whatever market is being discussed will be dominated in five or six years' time by the big vendors. Just as no-one ever got fired for buying IBM, so no analyst will ever lose their job for making this prediction. After all, whoever heard of a market being dominated by a small vendor?
This is almost as easy as predicting that 80 percent of startups in a given sector will fall by the wayside. Well, of course they will; that's what happens to startups. Most of them fail, get acquired, or switch to a different business model in a new market. In truth, a 20 percent survival rate is massively overoptimistic. The trick for investors, job applicants and customers is picking the two percent that will really fly.
Only a fool, I was about to say, but then I remembered The Fool's Proposition, advanced by Jeff Schneider in his blog last week: "A foolish idea may become reality based purely on the fact that foolish people communicate, listen, believe and act. If a fool is able to deliver his message to enough other fools to create a critical mass the foolish concept has the potential to become reality." Jeff's comment was in response to some of John Hagel's counter-intuitive suggestions, but who's to say that mainstream opinion isn't already the result of fools reaching critical mass?
posted by Phil Wainewright 10:52 AM (GMT) | comments | link
Thursday, December 05, 2002
Intelligence ex machina
Shortly after posting Spontaneous intelligence and the Semantic Web last month, I came across a quote that seemed to validate my position. "It's not artificial, and it's not intelligent," said Eric Miller, who leads the W3C's Semantic Web Activity, in response to suggestions that the project is no more than reheated AI, according to a CNET report.
I was not surprised to find such sentiments voiced from the very heart of the project. Those who are most involved are well aware of what they are doing, and of its limitations. It is less well-informed observers who are inclined to misrepresent its scope in the way that I was lambasting in my posting.
Evidently I should have made that distinction clearer, since a comment from Danny Ayers took me to task for implying that those working on the Semantic Web see it as anything other than a practical tool. As the author of the very useful Semantic Weblog (which I see has also linked this week to the Eric Miller story), Danny clearly knows what he is talking about.
He also brings up a crucial flaw in my original analogy. Pasteur's nineteenth-century experiment did not entirely disprove spontaneous generation; he merely showed that it does not take place as a matter of routine. The very existence of life in a universe that is supposed to have started out as a collection of barren rocks and gas surely proves that somehow, somewhere, life did originally arise spontaneously, unless it was introduced by some deus ex machina.
We can only speculate about the origins of life. However, we are in a much better-informed position when it comes to the question of the origins of machine intelligence, because we know very well that we ourselves are the deus ex machina in this scenario. The Semantic Web, like any other machine, will only ever be capable of mechanically reproducing the intelligence that human beings have put inside it, and if that intelligence is flawed, it will mindlessly reproduce the flaws. The people at the heart of the Semantic Web Activity understand this perfectly, but they do not always explain it very effectively.
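The point can be made concrete with a toy sketch, in plain Python with invented facts rather than real RDF tooling: a store of human-authored triples "knows" only what people have asserted, and when one assertion is wrong, every inference chained through it mechanically reproduces the error.

```python
# Toy triple store: the only "intelligence" here is human-authored assertions.
# One of them is deliberately wrong, to show how flaws propagate unexamined.
triples = {
    ("Palo Alto", "locatedIn", "California"),
    ("California", "locatedIn", "Canada"),  # flawed human assertion
}

def containment_chain(place):
    """Mechanically follow 'locatedIn' links; errors are reproduced, never questioned."""
    chain = []
    current = place
    while True:
        nxt = next((o for s, p, o in triples
                    if s == current and p == "locatedIn"), None)
        if nxt is None:
            return chain
        chain.append(nxt)
        current = nxt

print(containment_chain("Palo Alto"))  # the bogus 'Canada' link surfaces downstream
```

The machine never "notices" that California is not in Canada; it has no position from which to notice. Whatever checking happens must itself be put there by people.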
posted by Phil Wainewright 12:01 PM (GMT) | comments | link
Walking the line
The trade-off for faster design and development time using a loosely coupled architecture is a runtime performance hit, suggests Gordon Weakliem, in response to Doug Kaye's musing on what is different compared to tightly coupled systems. Doug replies that "it's more a question of granularity," but I think the explanation lies elsewhere.
The discussion immediately brought to mind a simple diagram I've used many times in presentations to illustrate the choice users are constantly having to make regarding web services adoption (reproduced left). The vertical scale represents the gains in adaptability that accrue from adopting a highly distributed, componentized web services architecture. By rigorously implementing best practices in standards-based service componentization, you maximize your ability to rapidly reconfigure your systems to take advantage of new capabilities or business opportunities.
The horizontal scale represents the performance gains you can achieve when you build a tightly integrated system that is honed to perform a known process with maximum speed and efficiency.
The diagonal line up the middle represents what happens to your costs when you attempt to achieve adaptability at the same time as retaining the performance benefits of tight integration: you'll spend a fortune.
On the face of it, I'm agreeing with Gordon: what you gain in adaptability, you lose in performance (and vice-versa, of course). But that's only true because web services standards are at such an early stage. As standards and best practices evolve, the integration will gradually be built into the architecture, and therefore the inherent performance of a web services-based system will improve over time, whereas proprietary integration will remain as inflexible as ever. The line that web services adopters must walk over the next few years is one that strikes a balance between their present-day needs for performance and their future requirement for adaptability.
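The trade-off can be sketched in miniature. The example below (invented names, Python standing in for any platform) contrasts a tightly coupled direct call, bound to one implementation at build time, with a loosely coupled invocation where the request travels as XML and is dispatched by operation name. The loose path pays a parsing cost on every call, but the service behind the name can be swapped or added to without touching the caller, which is exactly the adaptability the vertical axis measures.

```python
import xml.etree.ElementTree as ET

# Tightly coupled: caller is bound directly to one implementation.
def quote_usd(amount):
    return amount * 1.02

# Loosely coupled (sketch): requests arrive as XML and are dispatched by name,
# so implementations can be replaced without recompiling or redeploying callers.
services = {"quote": lambda amount: amount * 1.02}

def invoke(request_xml):
    req = ET.fromstring(request_xml)          # parsing overhead on every call...
    operation = req.get("operation")
    amount = float(req.findtext("amount"))
    return services[operation](amount)        # ...in exchange for late binding

tight = quote_usd(100.0)
loose = invoke('<request operation="quote"><amount>100.0</amount></request>')
assert tight == loose  # same answer; the loose path buys flexibility with runtime cost
```

As toolkits mature, the serialization and dispatch machinery gets pushed down into optimized infrastructure, which is the sense in which the performance penalty is a symptom of immaturity rather than something inherent.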
I don't see this as characteristic solely of web services. I think it's more a case of the difference between the two types of technology innovation identified by Clayton Christensen in his classic book, The Innovator's Dilemma (one of my highlighted books, by the way). The more common type of technology innovation helps us do existing things better, by smoothing or enhancing our existing processes, but every so often a disruptive technology comes along that overturns existing ways of doing things with new alternative processes. Established businesses dismiss disruptive technologies as being less efficient and reliable than the existing way of doing things, but they end up proving more adaptable than the old ways and eventually overtake them.
So I would argue that the trade-off Gordon identifies is only temporarily a characteristic of loosely coupled systems, attributable to their immaturity rather than being inherent in their nature. Runtime performance hits are a consequence of inexperience and lack of refinement, and they will go away once the technology matures.
posted by Phil Wainewright 7:08 AM (GMT) | comments | link
Wednesday, December 04, 2002
Early adopters take the plunge
Enterprises are heeding the advice of vendors and analysts to get started now with web services. Interviews with two early adopters published this week demonstrate they are prepared to put up with the risks and shortcomings of early adoption to ensure they don't fall behind later on. But equally, they're clear-eyed about the limitations and about which technologies and standards still need to be developed.
William Stangel, SVP and systems enterprise architect at Fidelity Investments, "has no second thoughts about placing a huge bet on web services," according to a CNET article published yesterday as part of a trio of interviews with tech visionaries about web services. His "hard-headed business reasons" for pushing the technology center around easier integration of applications, but he sees roadblocks that still need to be cleared in security and workflow.
Another early adopter is Gene Zimon, CIO and SVP at Boston-based energy company Nstar. Although the company's initial web services deployment is internal, the ability to link to external resources is a crucial next step, he says in a NetworkWorld interview: "The real potential benefits will come when we're able to take advantage of external providers of certain types of databases and/or services. There's no reason we need to create all these types of databases internally."
But there are important issues that vendors need to resolve before Nstar can attempt this with confidence, he goes on: "There are issues related to the availability of portals, data latency, security, validity of source and cost ... None of the enterprise portals currently on the market are architected to take advantage of the emerging web services standards, particularly the emerging interoperability and security standards."
Another potential headache he cites is administration: "For example, how will enterprises deal with web service components hosted outside the enterprise that are subject to change? How can you guarantee availability of key business functionality or continuity of service when such functionality is coming from outside the enterprise? How do you ensure that two web services components, both of which are hosted outside the enterprise by different entities, remain in synch?"
posted by Phil Wainewright 3:54 AM (GMT) | comments | link
Assembling on-demand services to automate business, commerce, and the sharing of knowledge