The vision of easily assembled, on-demand software automation has come a small step closer today with the confirmation of the WSRP 1.0 specification as an OASIS standard. But there's still a lot further to go. It's always been deceptively easy to visualize a computing environment where all the information and processes we need to do our work are right there in front of us on the screen. It's a great deal harder to conceive of all the discovery, innovation, debate and education required to make it a reality.
Getting the technology infrastructure in place is the most obvious of those hurdles. WSRP defines how to embed software functionality that's running on a remote server into a web page hosted by your own server, which is a significant advance. But WSRP isn't enough on its own. For one thing, it's technically quite complex, so developers will need the help of tools to take full advantage of it. On top of that, it's just one element of the whole picture. The remote services need to exist before they get delivered into someone's portal using WSRP. If those services require behind-the-scenes integration, then other web services technologies will need to come into play, too.
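To make the idea concrete, here is a minimal sketch of the consumer side of that arrangement: fetch a markup fragment from a remote producer and splice it into a locally hosted page. The producer URLs, portlet handles and the stubbed fetch function are illustrative assumptions; a real WSRP 1.0 consumer would invoke the producer's getMarkup operation over SOAP and parse the response.

```python
# Sketch of a portal consumer aggregating remote portlet markup.
# The fetch is stubbed so the example is self-contained.

def get_remote_markup(producer_url: str, portlet_handle: str) -> str:
    """Stand-in for the WSRP getMarkup call; a real consumer would
    send a SOAP request to producer_url and parse the reply."""
    return f'<div class="portlet">content from {portlet_handle}</div>'

def render_portal_page(portlets: list) -> str:
    """Assemble fragments from several producers into one page."""
    fragments = [get_remote_markup(url, handle) for url, handle in portlets]
    body = "\n".join(fragments)
    return f"<html><body>\n{body}\n</body></html>"

page = render_portal_page([
    ("https://producer.example.com/wsrp", "weather-portlet"),
    ("https://other.example.com/wsrp", "news-portlet"),
])
```

The point of the pattern is that the portal server never hosts the portlet code itself; it only merges presentation fragments delivered by the remote producers.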
Then there's the rest of the story. Once the technology pieces are in place, a big part of the education task is a relearning exercise, which involves looking beyond the technology to the business context. This week's Loosely Coupled article, Building real-time sales channels, looks at the experience of several companies that have combined functionality from several different sources into a portal-based, composite application. Each has found itself thinking about business issues in a new light, even though the three have been integrating in quite different ways:
Amex has been delivering a single set of services out to multiple partners;
TV broadcaster BSkyB has been consolidating services from a variety of suppliers into a single provisioning system;
Telecoms provider MidWest Wireless has linked legacy systems into a customer service portal accessed by internal and partner sales staff.
The similarities arise because automating a previously disjointed proposition (sometimes one that was hitherto inconceivable) forces you to think in new ways about what you're offering to your customers and how it relates to your business objectives. What early adopters are finding is that overcoming the technology barriers usually raises more questions than it answers, because it forces organizations to confront business process issues that they had previously been able to set aside.
The lesson to draw from that experience is not to embark on integration simply because it's now technologically possible; think through the business implications first. Systems that delivered something useful to internal staff may not be suitable material to put in the hands of customers.
The real value of WSRP, then, is not so much that it overcomes technology barriers (although that is a useful advance in itself) but that it forces companies to think anew about what they're offering to customers and how they should be presenting it. In the past, the customer interface has been presented by individual salespeople, doing the best they can with the resources made available to them. When WSRP enables companies to interact directly with their customers, it often exposes shortcomings that had previously been masked by the personal initiative of those salespeople.
WSRP's role, then, is not merely a technical one. It also represents an important threshold that brings companies face-to-face with some of the realities of putting hitherto hidden processes directly in the hands of users, especially when those users are customers. Solving the integration challenges at a technology level is just the start of the journey, not the final destination.
posted by Phil Wainewright 1:02 PM (GMT) | comments | link
Tuesday, September 09, 2003
XKMS is key
Excuse the pun. XKMS looks set to be a fundamental cornerstone for simple, effective security in web services messaging. DataPower's Rich Salz explains why in a Network World article, XKMS does the heavy work of PKI.
Public-key infrastructure (PKI) has always been the best way of authenticating and encrypting communications between autonomous correspondents in a highly distributed network. It avoids the reliance of private-key systems on a potentially vulnerable centralized key issuer, and supports delegated trust systems that allow clients and servers to interact without having to set up arrangements in advance.
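The asymmetry that makes this possible can be shown with the classic textbook RSA numbers (p=61, q=53). These parameters are hopelessly insecure and chosen purely for exposition; real PKI uses large keys plus padding schemes and certificate chains on top of the raw arithmetic.

```python
# Toy illustration of the public-key idea: sign with a private
# exponent, verify with the public one. Not secure; exposition only.

p, q = 61, 53
n = p * q                  # 3233, the public modulus
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, published freely
d = pow(e, -1, phi)        # 2753, the private exponent, kept secret

def sign(message: int) -> int:
    """Only the holder of d can produce this value."""
    return pow(message, d, n)

def verify(message: int, signature: int) -> bool:
    """Anyone holding the public pair (e, n) can check it."""
    return pow(signature, e, n) == message

sig = sign(65)
```

Because verification needs only the public values, two parties who have never exchanged a shared secret can still authenticate each other, which is exactly the property that lets delegated trust work at internet scale.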
But it's been complex to implement, with the result that few have bothered (the only exception being the widely used digital certificates system that secures most online shopping transactions). Rich's article explains the difference made by XML Key Management Specification (XKMS):
"With XKMS, a client and application server share an XKMS service to validate each other and to process requests between them. XKMS replaces many PKI protocols and data formats, such as Certificate Revocation Lists, Online Certificate Status Protocol, Lightweight Directory Access Protocol, Certificate Management Protocol and Simple Certificate Enrollment Protocol, with one XML-based protocol."
"... trust decisions are given to a common server so they can be centralized and applied consistently across platforms. The only configuration information an XKMS client needs is the URL of the server, and the certificate the server will be using to sign its replies. Different trust models can be supported by using different URLs."
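A rough sketch of the kind of XML request such a client would send to the shared service, built with Python's standard library. The namespace URI and element names below are simplified assumptions in the spirit of the specification, not a conformant XKMS message.

```python
# Build an illustrative XKMS-style key validation request.
import xml.etree.ElementTree as ET

XKMS = "http://www.xkms.org/schema/xkms-2001-01-20"  # assumed namespace
DSIG = "http://www.w3.org/2000/09/xmldsig#"

def build_validate_request(key_name: str) -> bytes:
    """Ask the trust service whether a key binding is currently valid."""
    req = ET.Element(f"{{{XKMS}}}Validate")
    query = ET.SubElement(req, f"{{{XKMS}}}Query")
    key_info = ET.SubElement(query, f"{{{DSIG}}}KeyInfo")
    ET.SubElement(key_info, f"{{{DSIG}}}KeyName").text = key_name
    # Tell the service what to return about the binding.
    ET.SubElement(req, f"{{{XKMS}}}Respond").text = "KeyName"
    return ET.tostring(req)

request = build_validate_request("CN=partner.example.com")
```

Note how little the client has to know: the key it's asking about and where to send the question. All the revocation lists, directories and certificate chains stay behind the service boundary.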
XKMS embodies many of the virtues of the web services model, in my view. It packages up complex functionality into easily accessible services that are attuned to practical needs, and offers a wide range of simple configuration options that allow those services to be tailored on the fly to the needs of specific applications. All of this is implemented within a highly distributed, shared-server architecture. Best of all, the services are available to, but kept separate from, all applications that use the same services infrastructure (which is exactly how OSI Layer Six says it should be).
posted by Phil Wainewright 1:45 AM (GMT) | comments | link
Monday, September 08, 2003
The only way to make applications adaptable to changing business requirements is to build them using a horizontally-segmented model that permits plug-and-play interchange of functional elements. One of the most challenging aspects of achieving this is the separation of operational functionality from business functionality. You may want your business capabilities to turn on a dime, but not if doing so is at the expense of system security or transactional functionality.
The early idealists of the open systems movement understood this just as well as they understood the need for horizontal segmentation lower down the interconnect stack. Although their early idealism proved over-optimistic, their recommendations, outlined in the OSI Reference Model, have stood the test of time, and illuminated the development of interoperability in networking standards.
But as Sekhar Sarukkai and David Cohen explain in a new opinion article just published on Loosely Coupled, the seven layers that make up the OSI stack have not all reached the same level of maturity. Standardization on Internet technologies has helped towards a clean implementation of the lowest four layers, but the picture is much more confused further up the stack. In particular, Web services and the forgotten OSI Layer Six highlights the failure to respect the boundaries between operational policies and business logic:
" ... many layer-six operations that do not belong in the application layer are, in fact, implemented as an integral part of the application code. This intertwined relationship between application logic and application policies have made it very difficult to clarify if layer six is merely a theoretical concept, or if it can be achieved in practice. In this context, web services standards are perhaps the first instance of a technology with the promise to make layer six a reality."
It's been a long journey since those early, idealistic days, during which many have chosen to dismiss the OSI model's demarcation of the upper layers as an unnecessary embellishment that serves no practical purpose. Now, at last, the emergence of an architecture for truly distributed n-tier computing is making the separation of operational functions from business functions feasible; and recognizing that it's feasible, we are free to acknowledge how desirable it is.
posted by Phil Wainewright 10:17 AM (GMT) | comments | link
Assembling on-demand services to automate business, commerce, and the sharing of knowledge