Vendor strategies to promote grid computing as the IT backbone for service oriented architectures are missing a vital element: standards.
Immature and incomplete standards for sharing grid computing resources could leave enterprises locked into vendors' proprietary technology stacks:
- IBM, CA, HP, Sun, Microsoft and Oracle each have grid strategies
- All aim to dynamically offer IT capacity to meet business needs
- But vendors' proprietary grid environments aren't interoperable
- Standards for resource sharing and management are emerging
- For now, implementing grid means accepting vendor lock-in
Grid computing is the vision of an IT environment where network, computing and storage resources can be dynamically managed to automatically support business processes, often likened to the way an electric power grid delivers energy on demand. It underpins IBM and CA's vision of on-demand computing, HP's Adaptive Enterprise, Sun's N1 and Microsoft's Dynamic Systems Initiative (DSI). Oracle is calling its strategy simply Grid Computing, and has rolled grid-like virtualization technologies into the latest 10g release of its application server and database.
All these initiatives are founded on "the same basic concepts," says Don LeClair, senior vice president in the office of the CTO at CA: they are centered around the ability to dynamically source IT capacity to meet business needs. In common with other vendors, CA sees a service oriented architecture forming the application infrastructure above, which then sources the grid's virtualized IT capacity to meet business process demands. LeClair says: "OASIS and the W3C have defined the standards for the management of web services and the evolution to a grid computing platform."
Unfortunately for users, the story is not as complete as LeClair's comment implies. It is true that the foundations for SOA have largely been agreed in the basic web services standards, which today enable a utility-computing model within the application layer. LeClair gives the example of a loan application process operating within a composite application infrastructure. The BPM system managing the infrastructure knows the application needs to be processed in a certain number of seconds in order to meet an agreed service level. If for some reason a service that makes up the business process is unavailable, say a credit scoring service from TRW, the system will dynamically switch over and source the service from another provider. In theory, that model can be extended to include IT capacity provided from the grid infrastructure: for example, the BPM system can call a web service that sources extra storage capacity.
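The switch-over behavior LeClair describes can be sketched in a few lines. The provider names, the canned score and the `score_with_failover` helper below are invented for illustration; they are not CA's actual implementation:

```python
import time

# Hypothetical registry of interchangeable credit-scoring providers.
# Both callables are stand-ins: the first simulates an outage.
def trw_score(applicant):
    raise TimeoutError("primary provider unavailable")

def backup_score(applicant):
    return 720  # canned score from an alternative bureau

PROVIDERS = [("TRW", trw_score), ("backup-bureau", backup_score)]

def score_with_failover(applicant, sla_seconds=5.0):
    """Try each provider in turn, failing over when one is unavailable,
    in the spirit of the BPM behavior described above."""
    deadline = time.monotonic() + sla_seconds
    for name, provider in PROVIDERS:
        if time.monotonic() > deadline:
            break
        try:
            return name, provider(applicant)
        except Exception:
            continue  # source the service from the next provider
    raise RuntimeError("SLA breached: no provider responded in time")

print(score_with_failover({"name": "A. Borrower"}))
# -> ('backup-bureau', 720)
```

The point of the sketch is that the SLA deadline, not the individual provider, drives the control flow: the orchestration layer keeps sourcing equivalent services until the service level is met or breached.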
However, CA is the first to admit there is a gaping hole in this argument. While the business process standards are being hammered out today, stovepipes still exist within the grid backbone itself. Each vendor's approach is proprietary within its own infrastructure stack, meaning resources cannot be shared from one environment to another, for example from a Microsoft DSI implementation to a Sun N1 architecture. A full set of mature standards to enable interoperability could be up to ten years away. And so the perennial issue for enterprises rears its ugly head: they operate in heterogeneous environments, struggling to manage platforms that vendors have designed assuming a single-source hegemony.
The need to share resources goes to the heart of what grid computing is all about and why it is a live issue for corporates. While academic and NASA research projects have harnessed grid technologies to create virtual supercomputers that can address massive computational challenges, commercial applications are motivated by the more prosaic aim of saving money. As data proliferates to ever greater levels, some method of maximizing usage of spare IT capacity is becoming increasingly important. "People have racks of servers stacked up to meet application demand but most of the time they're not doing anything, so there's a low-level utilization. It's difficult to make more of the resources from application A to B," says LeClair.
Corporate data centers bring this problem into sharp relief, since their primary purpose is to maximize economies of scale across different applications and management functions. It is a systems management problem that also takes in storage and application management. Many data centers have responded by writing their own management programs to exchange data, or even by manually rekeying data from one IT resource to another. In large installations such an approach is impractical and ineffective. Service oriented architectures, with their often unpredictable swings between peaks and troughs of application demand, only exacerbate the challenge.
The problem is a familiar theme for EDS hosting services, where Darrel Thomas is chief technologist: "If you think about what EDS does we are a manager of managers," he says. "The key problem in managing multiple solutions is customers have all built their own operational models. We have to consume that into our environment. From day one, they build their own proprietary models, so instead of having a cross-functional view, we have silos of utility computing."
Emerging standards aim to supersede those proprietary models with globally agreed specifications for sharing grid resources, but all are at an early stage of development. In January this year, the Globus Alliance and IBM announced WS-Notification and Web Services Resource Framework (WSRF), two key specifications for making grid resources available as web services within an SOA. They have been endorsed by the grid movement's Global Grid Forum as a component of its Open Grid Services Architecture, but neither has yet been handed over to an independent standards organization.
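The core idea behind WSRF is a stateful resource, identified by an endpoint reference, whose properties can be queried through a web service interface and whose lifetime can be explicitly ended. A minimal sketch of that pattern follows; the class, registry and property names are illustrative inventions, not the actual WSRF schema:

```python
# Sketch of the WS-Resource pattern: stateful resources behind a
# service front end, addressable by key, with queryable properties.
class StorageResource:
    def __init__(self, resource_id, capacity_gb):
        self.resource_id = resource_id  # plays the role of an endpoint reference key
        self.properties = {"CapacityGB": capacity_gb, "State": "Available"}

    def get_resource_property(self, name):
        """Analogous to WSRF's GetResourceProperty operation."""
        return self.properties[name]

    def destroy(self):
        """Analogous to WSRF's explicit lifetime management."""
        self.properties["State"] = "Destroyed"

# A dictionary standing in for the web service front end's registry.
registry = {}
res = StorageResource("urn:storage:0001", 500)
registry[res.resource_id] = res

print(registry["urn:storage:0001"].get_resource_property("CapacityGB"))
# -> 500
```

What the real specification adds, and what the sketch omits, is the SOAP and WS-Addressing plumbing that lets any compliant client query those properties across vendor boundaries, which is precisely the interoperability the article says is still missing.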
DCML use cases
The Data Center Markup Language interest group highlights seven sample use cases which it is hoping to solve:
- Monitoring newly provisioned servers: cross system communication typically relies on manual data entry; when communications break down, as manual systems inevitably do, customers are often the first to notice
- Asset inventory across multiple management systems: synchronization of data between different asset systems is again manual, sporadic and prone to errors
- Providing standard application server services to developers: with a wide range of app server standards, developers tend to build their applications on different platforms. Reusing servers from one application to another would result in significant savings
- Data center migration: migrating systems from one data center to another can be a laborious manual process
- Automated failover: when a device fails, information about configuration, customer, SLA, application, location and break-fix plan exists across multiple systems and in people’s heads
- Dynamic sourcing: adding new devices to cater to increased demands for capacity such as web servers can mean manually entering all the configuration information
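The asset-inventory use case above amounts to reconciling records for the same machines held in different management systems, exactly the check a standardized data exchange would automate. A minimal sketch, with invented system names and fields:

```python
# Two management systems' views of the same estate; names and fields
# are hypothetical. The second system holds a stale and an unsynced entry.
cmdb = {"srv01": {"os": "Linux", "ram_gb": 8},
        "srv02": {"os": "Windows", "ram_gb": 16}}
monitoring = {"srv01": {"os": "Linux", "ram_gb": 8},
              "srv02": {"os": "Windows", "ram_gb": 32},  # stale RAM figure
              "srv03": {"os": "Solaris", "ram_gb": 4}}   # never synchronized

def reconcile(a, b):
    """Report assets missing from either system and fields that disagree,
    the kind of comparison that is manual and sporadic today."""
    report = {"missing_in_a": sorted(set(b) - set(a)),
              "missing_in_b": sorted(set(a) - set(b)),
              "mismatched": {}}
    for host in set(a) & set(b):
        diffs = {k: (a[host][k], b[host][k])
                 for k in a[host] if a[host][k] != b[host].get(k)}
        if diffs:
            report["mismatched"][host] = diffs
    return report

print(reconcile(cmdb, monitoring))
```

Once every system can emit its view in a common format, this comparison becomes a mechanical job rather than a manual, error-prone one.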
This week, another standards initiative moved a step closer to acceptance. EDS, together with CA and other vendors, handed over Data Center Markup Language (DCML) to e-commerce standards body OASIS, which already has stewardship of many web services standards.
Whereas WSRF focuses on making grid resources available as services, DCML is concerned with managing resources across multiple grid environments. It provides a standard data format, based on XML, for sharing information between management systems and codifying management policies to enable the cross-platform automation that is essential for utility computing.
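What such a standard data format might look like can be sketched with a small XML fragment. The element and attribute names below are illustrative inventions, not taken from the actual DCML 1.0 schema; the point is that one management system can describe a server's configuration and policy in a form any other system can parse:

```python
import xml.etree.ElementTree as ET

# A DCML-style document (invented schema) describing one server and an
# attached management policy.
doc = """
<dataCenter xmlns="urn:example:dcml-sketch">
  <server id="web-07">
    <os>Linux 2.4</os>
    <role>web</role>
    <policy name="failover" target="web-08"/>
  </server>
</dataCenter>
"""

ns = {"d": "urn:example:dcml-sketch"}
root = ET.fromstring(doc)
server = root.find("d:server", ns)
print(server.get("id"), server.find("d:os", ns).text)
# -> web-07 Linux 2.4
```

Because the document is plain namespaced XML, the consuming system needs no knowledge of the producing vendor's internal data model, only of the agreed schema.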
DCML has a broader remit than earlier data center standards initiatives, such as those from storage networking body SNIA. It goes back to basics to propose an outline framework for exchanging data on management issues. Its backers hope that submitting it to OASIS will encourage various parties, especially the major systems vendors, to extend its basic 1.0 outline framework to apply it to different environments and problems (see box). For the moment, though, DCML's ability to solve its targeted sample use cases is limited and still requires proprietary coding.
Research group IDC has estimated that the market for grid computing will grow to $12 billion across both technical markets and commercial enterprises. Growing interest in using grid as a platform for service oriented architectures is expected to give extra impetus to commercial deployments. But until standards like WSRF and DCML gain substance and momentum, the reality today is that most commercial grid initiatives will be tied to individual vendors' proprietary grid environments, unable to share or manage resources across separate grid architectures. Enterprises that are counting on coupling SOA with grid to unleash the hidden resources of a multi-platform IT infrastructure may be dismayed to find their progress snarled by vendor grid lock-in.