The company's new co-president Chuck Phillips presented this change of tack in typically robust Oracle style, maintaining that customers would still be better advised to heed the database vendor's prior insistence on a single data model (its own). But Oracle now recognises that some customers are just too hidebound to admit the heresy of their ways, and has decided to indulge what it sees as their perverse attachment to operating a "hodgepodge" of lesser vendors' products, albeit only insofar as doing so eases their subsequent migration path to the pure nirvana of an all-Oracle setup.
The Oracle plan, reports CRN in its story Oracle Hints At Things To Come At OracleApps World, is to introduce an integration package "with formally defined business processes, thousands of Web services, 800-plus business events, 150 transport messages, industry-specific protocols and a repository of interfaces." There will also be a "data hub" that opens up Oracle's data model (dubbed the "single source of truth" by Phillips) to third-party portals and applications.
However Oracle tries to dress up this announcement, it marks the beginning of the end for the vendor's hitherto fundamentalist line on all-or-nothing single-suite integration. The competitive pressure of web services standardization has irreversibly cracked open Oracle's citadel, just as it will crack open every proprietary redoubt. Even if its bid for PeopleSoft finally succeeds, there will never be a time when Oracle (or any other vendor) commands the enterprise application landscape as undisputed champion. In the emerging loosely coupled world of web services, the way to win customers is by demonstrating how well your products connect to those of your rivals, or, to forge a new mantra from an old adage: if you want to beat 'em, join (to) 'em.
It's time to disabuse people of the notion that utility computing is stuff that comes out of a socket in the wall, just like electricity. This very common misconception is founded on a false analogy, and it's one that leads to some dangerously misleading conclusions.
The false analogy I'm talking about goes something like this. It starts innocuously enough with the observation that power points and network points are often found side by side, in the wall or beneath a hatch in the floor. From this, it's very easy to leap to the erroneous premise that, just like electricity from the power points, you could have computing coming out of those network sockets. A few more steps of faulty logic then rapidly bring you to the conclusion that, since electricity comes from massive, centralized power-generation plants, so should computing.
It would suit certain computer vendors if that were how utility computing was going to work. They are the vendors who sell massive, centralized computer systems. Such a straightforward, centralized distribution model might appeal to their customers for its simplicity, too, but it would also hand those vendors a stranglehold over pricing and supply. In any case, it's not going to be as easy as that.
The truth is that computing isn't at all analogous to electricity. It doesn't radiate out along spokes from a central hub, distributing its output to the periphery (nor, actually, in this post-industrial age, does electricity distribution need to be so centralized, but that's a separate argument). Computing can happen anywhere on the network. It already is distributed. The role of the network is to establish the communications links that allow it to be shared. The sockets in the wall are for two-way traffic.
Electricity provides a more accurate analogy if you look at the applications of electric power. Raw power is converted into heat, light or motion only at the point of delivery. Nobody (unless they live right next door) expects the utility company to supply their hot water from the power station. People prefer their own individual appliances, chosen and regulated to meet their specific requirements. The notion of delivering word processing (for example) to your desktop from a central computing utility is almost as absurd as the idea of illuminating your desk by having the power company shine a light down a wire from the power station.
On the other hand, there are economies of scale to be had from manufacturing popular appliance designs in their millions, and from standardizing their interfaces: adopting a single design for the jacks that interconnect audio devices, say, or producing audio recordings to a standard format that can be read by all audio players.
All of these proven techniques combine to make the utility power grid successful: it has found the right balance between connectivity, choice and standardization. This allows the centralization of those elements that benefit from economies of scale, namely power generation and appliance manufacturing. Just as importantly, it distributes those elements that benefit from individual customization, namely the deployment, configuration and operation of individual appliances.
The same is going to be true of utility computing, except that there will be much more scope for achieving economies of scale at multiple levels. Instead of thinking in terms of monolithic computing services, think of choosing among an almost limitless universe of service options. Connecting to the wall socket will open up access to a global market, in which every resource can find its most efficient level. For some resources (desktop productivity software, for example), mass distribution of retail packages will remain the most cost-effective model. For others (web content search is a really obvious example), a single, centralized resource will provide unbeatable economies of scale. Much more significantly, there will be innumerable examples where small, specialist shared resources will find a market: online information providers who focus on emerging tech industry sectors, for example (or so we at Loosely Coupled hope).
The utility element of utility computing, then, is the provision of the infrastructure that enables this resource-sharing. It's going to be more complex and sophisticated than the electric power or telecoms utility infrastructures, and it will have to evolve well beyond what we have at the moment. I think Nick van der Zweep, HP's director of virtualization and utility computing, made an important point earlier this month in InfoWorld's article, Getting down to grid computing, when he told Ed Scannell that, "Right now grids are just APIs, and the management systems available can't reach in to understand what is going on inside of them." Web services management products will play a significant role in monitoring and policing the grid infrastructure that underpins utility computing.
But be clear on one crucial point. Utility computing will never be about the provision of applications out of a wall socket. The utility providers will operate the infrastructure. But the applications will sit on top. Rather than being a component of the infrastructure, they will be delivered across it by independent providers.
Last week, IBM published a set of specifications that aim to unify grid computing and web services. Most of the media chose to ignore this landmark event and instead reported 'yet another' standards battle between IBM and Microsoft.
Media attention was focused on the evident overlap between WS-Notification, one of the IBM-backed specs, and WS-Eventing, announced by Microsoft, BEA and Tibco the week before. But this was really just a sideshow to the main event. True, both specifications deal with publish-subscribe mechanisms for communicating information about changes in status of loosely coupled network participants. Eventually, they'll probably be combined into a single standard. But for the moment, there's no conflict between the two, because they're designed for use in quite distinct environments.
WS-Eventing has been conceived to add a publish-subscribe dimension to even the most loosely federated of service-oriented architectures. It neither expects nor demands any other relationships to exist between the resources concerned, beyond their adherence to basic web services standards like SOAP and HTTP.
WS-Notification, on the other hand, is explicitly part of a much broader set of specifications called the Web Services Resource Framework. It is concerned with exchanging updates and alerts within a much more closely knit, grid-style services architecture. Strictly speaking, of course, this differentiation is only a matter of degree, and will eventually dissolve. But in today's enterprise architectures, the two specifications are likely to operate at entirely different levels, and thus can happily coexist because they'll never cross each other's paths.
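Stripped of their SOAP wire formats, both specifications standardize the same underlying pattern: subscribers register interest in a topic, and publishers push status-change messages to whoever is listening, without either side needing to know the other in advance. Here's a minimal, spec-agnostic sketch of that pattern in Python; the class, topic and event names are illustrative only, and nothing here implements either spec's actual message formats:

```python
# Minimal publish-subscribe broker, illustrating the pattern that
# WS-Eventing and WS-Notification each standardize over SOAP.
# All names here are hypothetical; neither spec is implemented.

class Broker:
    def __init__(self):
        self._subscribers = {}  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        """Register a callback to be notified of events on a topic."""
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        """Deliver a status-change message to every subscriber of the topic."""
        for callback in self._subscribers.get(topic, []):
            callback(message)


# Example: a monitor subscribes to inventory-level events, then a
# publisher it has never heard of announces a status change.
received = []
broker = Broker()
broker.subscribe("inventory/level", received.append)
broker.publish("inventory/level", {"sku": "A42", "level": 3})
```

The loose coupling lies in the broker: publisher and subscriber share only a topic name and a message shape, which is exactly the relationship both specs formalize for web services.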
Much more interesting than this non-conflict is the proposal to marry web services and grid computing through the Web Services Resource Framework (WSRF) set of specifications. As its name suggests, WSRF makes grid resources available via web services, thus combining the powerful resource sharing of grid architectures with the loosely coupled ethos of service-oriented architectures.
WSRF comprises a number of separate sub-specifications, such as WS-Resource Properties and WS-Resource Lifetime. WS-Notification got separate billing only because it has additional supporters. As well as IBM, HP and the Globus Alliance, who teamed up on WSRF, WS-Notification has had contributions from Akamai Technologies, SAP, Sonic Software and Tibco. I find the inclusion of Akamai on this list particularly meaningful, since Akamai has been a pioneer of resource sharing over the Web since before the dawn of web services.
To wrap up, here's a quick rundown of news stories and resources relating to WSRF and WS-Notification:
Companies Seek to Marry Grid, Web Services As its title suggests, internetnews.com just about manages to keep its eye on the big story: "Grid infrastructures and applications can now be built using Web services specifications with the guidance of WS-Resource Framework and WS-Notification."
IBM proposes Web services spec The coverage at CNET News.com spells out in plain language what the WS-Notification spec aims to achieve: "provide a standards-based way to program business applications to automatically respond to events such as a drop in inventory level or hardware server failure."
IBM, Microsoft on opposite sides of standards fence Delving into the differences between the overlapping specs, searchWebServices obtains some useful comments from HP's Mark Potts on why WS-Eventing didn't fit the bill: "... from a management perspective, we really needed something that was a little bit richer than a peer-to-peer event mechanism."
I'm resuming posting to my weblog today after a brief hiatus, caused by several overlapping demands on my time coinciding with the transition to a new, in-house weblog publishing engine. Today is the first opportunity to resume publishing with enough leisure to make sure that everything is working as it should.
We're still in transitional mode in the sense that some new features are not yet live, and we are not yet ready to introduce our upcoming site design. But the core functionality is in place, which means that for the first time our weekly, monthly and yearly archive pages will all update each time I publish a new blog entry. Under the old Blogger-based system, the monthly and annual archive pages were a custom-built add-on that required manual updating, which relegated it to a weekly task. Introducing the new system has given us the opportunity to automate the process, at the same time expanding the monthly archive to give more details of each entry, while the annual archive page now includes the titles of every entry.
Still ahead is the task of importing all the content from previous years into the new system, so these archive improvements only apply for the time being to 2004 entries.
More work is going on behind the scenes to implement an XSLT-based publishing system for the entire site. On that note, I was intrigued this week to note the launch of xBuilderSMS, a commercial site management package that "combines industry-standard XML and XSL/XSLT to define a single site structure with virtually limitless output presentation options that can be extended to meet any needs." I'm glad to see a product like this appearing. But I'm wary of its use of on-the-fly processing to serve all pages dynamically. My personal preference is to create pages in advance and store them on the server ready to serve to clients, especially high-traffic HTML pages. Then if your page processor fails for some reason, your web server can still serve the previously created pages to clients while the page processor is being fixed.
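That pre-generation approach can be sketched in a few lines of Python. In this sketch a simple string-formatting function stands in for the XSLT transform, and the property I care about falls out naturally: when the transform fails, the previously generated page stays untouched on disk for the web server to keep serving. Function names, entry fields and file layout are all hypothetical:

```python
import os

def render(entry):
    """Stand-in for the XSLT step: turn an entry into a complete HTML page."""
    return "<html><body><h1>{title}</h1>{body}</body></html>".format(**entry)

def publish(entries, outdir, transform=render):
    """Pre-generate one HTML file per entry. If the transform fails for an
    entry, its previously published file (if any) is left in place, so the
    web server keeps serving stale-but-valid pages while you fix things."""
    os.makedirs(outdir, exist_ok=True)
    for entry in entries:
        path = os.path.join(outdir, entry["slug"] + ".html")
        try:
            html = transform(entry)
        except Exception:
            continue  # keep the old copy of this page on disk
        with open(path, "w") as f:
            f.write(html)
```

Because the web server only ever reads the files on disk, a broken transform degrades to serving yesterday's pages rather than returning errors to visitors, which is precisely the advantage of publishing in advance over on-the-fly processing.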