When IBM talks about on-demand computing enabling the on-demand enterprise, the concept of on-demand web services can't be far behind. An article published today at ADTmag.com provides a timely reminder of the work IBM is doing on its Allegro Framework, which is designed to help businesses offer commercial web services on a pay-as-you-go basis.
According to Stefan Van Overtveldt, director of WebSphere technical marketing, "Allegro will provide for what we at one point called 'dynamic e-business'." After Wednesday's announcements, presumably that would now be called "on-demand e-business". He goes on to explain how it will work:
"If I have a core competency, I'm an HR management firm, and I manage 401K plans, for example, I'm going to expose this service as a web service and all these companies will be able to make use of it ... while fully doing management of the accesses, user IDs, monitoring, metering, logging and billing."
Hang on a minute, though. Isn't that exactly the kind of dynamic application assembly that last week IDC was saying had been overhyped and might "never be achievable"? And surely therefore it's a prime example of the kind of impossibly optimistic vision that IBM itself has been lampooning this week with its "time machine" analogy? Funny that the question of time travel should come up at this point, because it seems that someone has been turning the clock back on IDC's website, from which all trace of the press release that gave rise to last week's story has now disappeared. Perhaps they've realized that dynamic assembly of on-demand web services components is not such an absurdly remote prospect after all.
Early versions of the Allegro code are currently in beta testing. The first products based on the technology will be designed to run on the upcoming version 5 of WebSphere Application Server, and are expected to emerge next year.
posted by Phil Wainewright 7:04 AM (GMT) | comments | link
Tightly bound to your web services platform
People who build web services using tools from the big vendors are tied to deploying on the same vendors' platforms. And I'm not just talking about Microsoft here. Despite touting the openness of the J2EE architecture and their commitment to open systems and standards, neither IBM nor BEA support any platforms apart from their own if you build web services using their tools.
This was something I didn't know until I read the transcript today of a recent video interview at TheServerSide.com with Anne Thomas Manes, who at the time of the interview was CTO of web services startup Systinet (she has just struck out on her own as a software industry analyst and consultant). Anne makes the point precisely because Systinet's own WASP development tool deploys to multiple platforms: "The value of our system is the fact that when I build something with WASP, I can deploy it in any different platform. IBM's and BEA's products only support Websphere and Weblogic respectively."
This doesn't really matter if you've already bought and installed your chosen deployment platform, because you won't want to deploy to any other platforms. Well, not unless you're a mixed shop. Or you want to have the option of moving to a new platform later on without having to learn a new set of tools and rebuilding all your services. Or you think you might find yourself having to integrate multiple platforms after a merger or acquisition.
The vendors have this control because, even though the resulting SOAP messages can be interpreted on any platform, the web services container that produces them is completely proprietary, as Anne explains: "People don't realize this but when you build a web service with a particular web services tool, that requires that you actually deploy it into the web services container that comes with that tool ... There's no standard specification for a web service container ... and therefore, each Web service implementation always has proprietary code in there."
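Anne's distinction is easy to see in code: the SOAP message that travels over the wire is plain, standards-defined XML that any platform's parser can read, while the container that produced it is not standardized at all. Here is a minimal sketch in Python using only the standard library (the getQuote operation and its namespace are invented for illustration):

```python
# A minimal SOAP 1.1 envelope, built and parsed with nothing but the
# standard library -- the wire format itself is vendor-neutral XML.
# The service operation (getQuote) and its namespace are hypothetical.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
SVC_NS = "http://example.com/stockquote"  # hypothetical service namespace

envelope = f"""<?xml version="1.0"?>
<soap:Envelope xmlns:soap="{SOAP_NS}">
  <soap:Body>
    <q:getQuote xmlns:q="{SVC_NS}">
      <q:symbol>IBM</q:symbol>
    </q:getQuote>
  </soap:Body>
</soap:Envelope>"""

# Any platform's XML parser can pull the operation and its argument
# back out of the message -- no vendor toolkit required:
root = ET.fromstring(envelope)
body = root.find(f"{{{SOAP_NS}}}Body")
call = body[0]          # the operation being invoked
symbol = call[0].text   # its argument
```

The interoperable part stops at the envelope, though: the deployment descriptors, class bindings and runtime hooks that map that message onto actual code live in the vendor's container, which is exactly the proprietary layer Anne describes.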
It is up to the vendor to decide whether to make that proprietary container portable across multiple platforms. Big vendors that aim to offer a complete, smoothly integrated, end-to-end environment naturally focus on tight vertical integration within their own stack of products, whereas smaller vendors and startups differentiate themselves by offering horizontal compatibility with the widest possible range of deployment environments. This is one of the clearest arguments for opting for the smaller vendors' tools where possible, since even if you believe you've standardized on a homogenous single-vendor environment, events have a habit of conspiring to undermine such neat and tidy arrangements. As Anne mentions elsewhere in the interview, specialists are also likely to do a better job of building in support for emerging requirements such as security, and of honing the performance metrics of the container itself.
PS: SearchWebServices this week has published a brief but useful profile by the451 of Systinet, written in light of its newly launched OEM strategy.
posted by Phil Wainewright 1:46 AM (GMT) | comments | link
Thursday, October 31, 2002
Back to the future with IBM
IBM can't give you a time machine, but it'll deliver the next best thing. That was the upbeat message given yesterday by CEO and soon-to-be-chairman Sam Palmisano, in an address to key customers that was simultaneously broadcast to IBM staff worldwide.
His talk launched the transition to "On-demand computing" as the next phase of e-business, and as the new unifying theme of IBM's proposition to customers. The concept brings together several of IBM's strongest suits, including autonomic, utility and grid computing, a move to open standards platforms, and the need for enterprises to move to more flexible business process management (the latter of course providing an opening for IBM's newly acquired army of former PwC business consultants to march in).
Lest IBM be accused of over-hyping immature technologies, the white paper (8 pages, 2.5MB, PDF) accompanying the new marketing theme starts out with a sideswipe at tech industry participants who offer what it calls "Vision without execution. Imagination without the invention to back it up." It begins with a fable about a company that invented a time machine, which would allow enterprises to "understand what their customers were going to need before they actually needed it" and "undo investments in proprietary technology stuff that slowed them down." Of course, the time machine didn't work, says the white paper, which then goes on to imply that the next best thing for predicting the future and undoing the past is IBM, with its business consultants, engineers and research staff. So who needs a time machine anyway?
Some observers have likened Palmisano's talk to the Comdex keynote speech that Lou Gerstner gave in 1995 to launch the theme of network computing. Looking back from the era of web services, hindsight tells us that Gerstner's was a prescient speech, but there were some howlers in it as well, most notably the notion of the Network Computer, which was at one time supposed to make the PC obsolete, and versions of which IBM, Sun and Oracle each invested much energy in developing and promoting.
So the time machine analogy is right: IBM is no better than anyone else at predicting the future. But as the old saying goes, no-one ever got sacked for buying from IBM. At least if you let the company's strategists second-guess the future for you, you can blame them when it all goes pear-shaped rather than having to carry the can yourself.
Far better, of course, to make up your own mind. But it will do you no harm to at least listen to what IBM's people have to say. One of the company's best visionaries, Irving Wladawsky-Berger, has been put in charge of the new initiative. I made a note a while back of an interview he gave on ZDNet, which gives an interesting insight into where he feels things are going.
UPDATE [added 2:10 PM GMT, also clarified wording of original posting]: One other resource I forgot to mention earlier was a Summit Strategies report by Tom Kucharvy, published in August. The New Blue: Can IBM Burn Down the IT Industry and Rebuild It in Its Own Image? outlines how IBM is evolving "from Big Blue the products company, to New Blue the services company." Tom is an accurate and close observer of IBM strategy to the extent that one sometimes wonders whether IBM is fulfilling his predictions or following his advice. It provides an intriguing context to the changes taking place at IBM as it makes its own transition into an on-demand services company.
posted by Phil Wainewright 1:21 AM (GMT) | comments | link
Wednesday, October 30, 2002
Loosely coupled with John Hagel
Reviews have started to appear of John Hagel's new book on the impact of web services on business strategy, Out of the Box. The book is a great antidote to the recent wave of web services cynicism.
John himself predicts a long journey before web services reach the promised land of "dynamic composition of applications from many micro-services". But, as he explained in an interview last week with InternetWeek's Richard Karpinsky, "that doesn't mean web services aren't valuable today. In fact ... they are more valuable today because they offer near-term practical benefits while setting the stage for longer-term more revolutionary gains."
The Karpinsky interview is an excellent introduction to the concepts outlined in Out of the Box. As befits a visionary thinker, John Hagel explains the long-term context that web services fit into, and the likely effect on how businesses organize themselves in the future. But he does so without losing sight of today's pragmatic business imperatives.
Of course, John hasn't always got it right. Richard notes that in his last book, Net Worth, which came out near the peak of the dot-com bubble, he got carried away with the potential for Internet-based "infomediaries". Doug Kaye, who himself is writing a book about web services strategies, sums up the problem in a review of Out of the Box published in his own email newsletter last month: "The reader's challenge is to separate the solid strategy from the hypothetical futurism. It's often hard to find the boundaries in John's books, and the latest is no exception."
Doug's review usefully highlights some of the book's more questionable assumptions, but without detracting from its strengths. If you want to get your head buzzing with ideas about how web services might change the way your company does business in the future (while also making a strong case for starting to adopt web services today), then this book should definitely be on your reading list.
On the other hand, if reading a whole book doesn't appeal, John's website is a mine of useful articles and papers; indeed, if you were to download and read all the white papers on his web site you would probably cover most of the ground set out in the book, and save yourself the $20.97 it will cost you from Amazon.
John posted an excellent article earlier this month called Loosely Coupled: A Term Worth Understanding. "This is a term that will reshape the business world in profound ways the next several decades," he writes, which of course is a sentiment I very much agree with. The concept is fundamental to the thinking in Out of the Box, and the article is a lucid exposition of what loosely coupled means, why it will have such an impact on business strategy, and why most business managers and management systems are ill-prepared for it.
"The old, hard-wired approaches to business practices just won’t cut it any more," John concludes. "The real winners in the race to create economic value will be those who understand the need to move to more loosely coupled operations, organizations and strategies. This is not a technology issue, it is a business management issue."
Harnessing the power of decentralization doesn't have to mean distributing computing all the way to the desktop. The web isn't a two-dimensional entity like client-server; it's infinitely tiered. Wi-fi, weblogs and web services are three thriving examples cited last week by Kevin Werbach in his CNet column on Tech's big challenge: Decentralization. Each of them is based, not on clients, but on a network of distributed servers.
Using distributed servers brings the benefits of decentralization (autonomous control, user choice, multiple redundancy) without having to sacrifice the advantages of centralization. Unlike clients, servers can be designed to remain permanently connected, and can therefore benefit from network resources such as remote management, backup and mirroring. They can gain economies of scale through aggregation into shared-server infrastructures; technologies such as Sphera's HostingDirector and Interland's blueHALO architecture, for example, share big-system resilience across thousands of autonomous server instances.
Rather than thinking of the network as an army of clients ranged around a few massively centralized server generals, think instead of rank after rank of distributed servers, each making its own individual contribution. There are workgroup servers on company LANs, application servers in enterprise data centers, personal servers in individuals' homes, and a multiplicity of large and small shared servers scattered all over the Internet at providers' data centers.
Decentralization works because it mirrors our own nature, wrote Werbach: "It's the human element that is really driving the pressure for decentralized solutions. This shouldn't be too surprising. Biological phenomena like the human body and the global biosphere have had billions of years to evolve, and they are the most complex decentralized systems we encounter."
The systems infrastructure for decentralization is maturing now, not least through the rapid advance of PHP on distributed Linux platforms and the evolution of web services architectures. The challenge now is to develop application infrastructure that harnesses this powerful new layer of distributed servers. To use a topical example, the hacking of Blogger's weblog publishing servers on Friday demonstrated the vulnerability of centralized systems (though the fact it only affected publishing of new content and not already-published material also illustrated the strengths of a distributed system). Yet moving all the way back to a client-based system like Userland's Radio is a step too far in the opposite direction.
The answer is to distribute the application infrastructure to a tier of shared-server and workgroup hosting that is specific to individuals, teams and businesses, retaining all the benefits of centralized management with none of the penalties of lost control for users.
Within this distributed application infrastructure, each individual will access several overlapping computing domains, consisting of a desktop at work, a desktop at home, a laptop, a mobile phone, a shared-server personal website, perhaps a section of an intranet, plus a Notes, Exchange or Groove workspace ... the list goes on. The missing technology piece is a service (or services) that can co-ordinate and tie together all those diverse application resources so they can be appropriately accessed from each device. While employers have a clear role in relation to some of these domains, others need to be under the direct control of the individual, which seems to open up some interesting service provider opportunities.
posted by Phil Wainewright 4:35 AM (GMT) | comments | link
Assembling on-demand services to automate business, commerce, and the sharing of knowledge