Adam Bosworth has set out to explore complex issues in his blog, and keeping that up over a series of sequential postings can be hard work. Today he finally resumes the thread he left hanging earlier this summer about what sort of browser platform web services need. Two points leapt out at me, even though they're at a tangent to Adam's core message:
One way of presenting information in a user interface, he writes, is to "Attach a lot of metadata to elements. Then have ... a meta-data driven layout engine. This model works really well when you want to let people view and/or edit information without requiring developers to build layouts for each and every data element ... AboveAll seems to be building an interesting flavor of [this model]."
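To make the idea concrete, here is a minimal sketch of a metadata-driven layout engine. All the names (`FIELD_METADATA`, `render_form`, the widget strings) are invented for illustration and bear no relation to Above All's actual product; the point is simply that the UI is derived from metadata attached to each data element, so no developer has to hand-build a layout per element.

```python
# Hypothetical sketch of a metadata-driven layout engine (all names invented).
# Each data element carries metadata; the engine derives the widget and the
# view/edit mode from that metadata, so no per-element layout code is needed.

FIELD_METADATA = [
    {"name": "customer", "label": "Customer", "type": "text", "editable": True},
    {"name": "balance", "label": "Balance", "type": "currency", "editable": False},
    {"name": "since", "label": "Customer since", "type": "date", "editable": True},
]

def render_widget(meta, value):
    """Choose a widget purely from metadata rather than hand-written layout."""
    widget = {"text": "<input>",
              "currency": "<span class='money'>",
              "date": "<input type='date'>"}[meta["type"]]
    mode = "edit" if meta["editable"] else "view"
    return f"{meta['label']} [{mode}]: {widget} value={value!r}"

def render_form(metadata, record):
    """Lay out a whole record by iterating its metadata, not its layout code."""
    return "\n".join(render_widget(m, record.get(m["name"])) for m in metadata)

print(render_form(FIELD_METADATA, {"customer": "Acme", "balance": 1200.0}))
```

Adding a new data element here means adding one metadata entry, not writing new layout code, which is exactly the leverage Adam describes.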
I highlight this because a) it's all about empowering the user to control what information they see and how they see it; and b) Above All Software is a company to watch. Here's how co-founder and CEO Roger Sippl (yes, the Roger Sippl who founded Informix) explained to Silicon Valley Biz Ink what the company does earlier this summer:
"'Web services is an approach where each system can have its business logic exposed as little software robots,' he explains, then flips open his laptop to run a demo. 'We're giving them a new kind of browser that lets them talk to multiple little software robots and link them together.'"
Where, then, should the rendering engine be implemented? Adam wonders: "Today ... we render pages on the server and ship rendered user interface (HTML) up to the client." But that's no good, he says, because with users becoming increasingly mobile, the client needs to be able to operate when disconnected. "That leaves two models: Have a cache that talks to web services or have a cache that talks to another cache. I'm seriously torn here and I suspect that, in the long run, both will be required." Amen to that. Of course both will be required, because in a loosely coupled environment, neither end of the conversation can assume the other participant will be listening at any given time. That's why web services have to use asynchronous messaging, which is simply a highly robust form of caching.
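That equivalence between messaging and caching can be sketched in a few lines. This is an invented toy, not any real messaging product: outbound messages are cached locally in an outbox because the sender never assumes the receiver is listening, and the cache drains whenever a connection exists.

```python
# A minimal sketch (invented names) of why asynchronous messaging behaves
# like a robust cache: neither side assumes the other is listening, so
# outbound messages are queued locally and delivered when a link appears.

from collections import deque

class StoreAndForwardChannel:
    def __init__(self):
        self.outbox = deque()   # local cache of undelivered messages
        self.connected = False
        self.delivered = []     # stands in for the remote web service

    def send(self, message):
        self.outbox.append(message)   # never blocks on the network
        self.flush()

    def flush(self):
        # Drain the cache only while a connection is actually available.
        while self.connected and self.outbox:
            self.delivered.append(self.outbox.popleft())

ch = StoreAndForwardChannel()
ch.send("update #1")          # offline: message is cached, not lost
ch.send("update #2")
ch.connected = True
ch.flush()                    # reconnected: the cache drains to the service
print(ch.delivered)           # ['update #1', 'update #2']
```

Both of Adam's models reduce to this pattern; the only question is whether the cache at the far end belongs to the web service itself or to another intermediary cache.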
On the same day that Adam rejoined the fray, the Linux Universe website covered related ground in a column intriguingly titled How Netscape Beat Microsoft. There were two factors that made Microsoft's victory a pyrrhic one, according to this writer. I'm going to reverse their order so that I can conclude with the more interesting one:
"Microsoft became so obsessed with defeating Netscape at any cost that they actually did just that. No tactic or strategy was off limits. Even the illegal ones. So they got slapped with a huge antitrust suit ... MS could not fight back as vigorously as they needed to. IBM, sensing the opportunity, threw a billion dollars and an incalculable amount of credibility in GNU/Linux' direction. And suddenly GNU/Linux was too big to squash like a bug."
Meanwhile, "Slowly but surely, more people are doing more of the things they used to do on their PC through the web instead. There are at least three areas of computing that have moved or are moving very rapidly to being totally web based: a. Online banking and bill paying ... b. Scheduling, conferencing and whiteboarding ... c. Many CRM based functions ..."
The Linux Universe article shows that the future Adam describes for the web services client platform is already happening, as more and more business services become available over the Web. Having Adam write about these concepts is an inspiration, and will help cement the progress being made. But, despite appearances, it's not a theoretical discussion about what might happen. Adam is just catching up with stuff that's already proven, practical reality.
posted by Phil Wainewright 3:10 PM (GMT)
Thursday, September 25, 2003
Less solder in software
Every so often, we discover a simple, elegant solution to a previously complex problem, and it becomes possible to eliminate a lot of effort and hassle. We quickly advance, and soon we forget how much trial and error it had taken to arrive at the solution that now seems so obvious. We encounter new problems, and once more we begin adding new layers of complexity as we improvise solutions to these new challenges.
Sean McGrath last week called for a simple, elegant new solution to the complexity of software integration. In Putting the 'soft' into software, he recalls how, as a young software engineer, he used to solder connections on a type of cable known as an RS-232, which, in the days before USB, was what you would use to plug devices into the serial port on your computer.
His point about the RS-232 interface, which had 25 separate connections, each allocated a specific role in the EIA specification that defined the standard, was that in the end his software never used more than two of them. All the functionality that had previously been (quite literally, in this case) hard-wired into separate connections in the cable ended up being implemented in software, because it became so much easier to change the software than it was to resolder the connections to different pins in the plug.
Sean then takes this analogy and applies it to object-oriented software:
"A traditional object from object-oriented programming might have anywhere from 2 to 400 individual pins. Each with its unique function, each of which will need to be lovingly soldered (metaphorically speaking) to a pin on a receiving object.
"What would a simplified software interface look like? Well, by analogy with the hardware world, it would be one that just supported the basic receive and transmit functions. A real world example? For 'receive' substitute 'GET', for 'transmit' substitute 'POST'. In other words, HTTP.
"Am I stretching the comparison too much? Probably, but there is more than a grain of truth here in my opinion. Traditional objects, with their multi-pin interfaces have proven to be brittle, to lack in the all important area of flexibility. Along comes HTTP which is the software analog of a two pin interface and it takes the world by storm.
"Conclusion? Maybe HTTP is onto something here. Maybe before we buy too many soldering irons (WSDL's are a common brand), we should take a closer look at how we can avoid soldering altogether?"
Sean is saying, in effect, that complex software interfaces are causing more problems than they solve. They set up complex connections that are difficult to unravel if they need to change, the software equivalent of soldering those RS-232 connections.
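Sean's "two-pin" alternative can be illustrated with a toy resource. Everything here (`OrderResource`, the method names styled after HTTP verbs) is invented for the sketch: instead of an object exposing dozens of purpose-specific methods, one resource exposes just the receive/transmit pair, GET and POST, and all the variability travels inside the documents exchanged.

```python
# Hedged illustration of Sean's "two-pin" point: one resource, two verbs.
# Instead of an object with many purpose-specific methods (pins), expose
# GET (receive) and POST (transmit) and carry the variability in the data.

class OrderResource:
    """One 'socket' with two 'pins'; everything else travels as a document."""

    def __init__(self):
        self.state = {"status": "new", "lines": []}

    def GET(self):
        # 'receive': hand back a representation of the current state
        return dict(self.state)

    def POST(self, document):
        # 'transmit': accept a document and fold it into the state
        self.state.update(document)
        return self.GET()

orders = OrderResource()
orders.POST({"lines": [{"sku": "RS232", "qty": 25}]})
orders.POST({"status": "submitted"})
print(orders.GET()["status"])   # submitted
```

Changing what an order can contain here means changing the documents, not resoldering the interface, which is precisely the flexibility Sean says multi-pin object interfaces lack.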
That's all very well, but Jeff Schneider points out that a lot of hard work has to go into creating the simple, elegant connections that Sean is calling for: "Simplicity in interface doesn't just happen by creating a magic 'do(x)' you have to actually have some meat behind the operation. Web service protocol designers are attacking both of these fronts (NFR protocol negotiation and ubiquitous, shippable languages like XQuery)."
Replacing all those inflexible software connections with an easily reconfigured, loosely coupled set of connections requires a shared set of expectations and specifications at each end of the line that the two parties can refer to. Sean of course is well aware of this, since he clearly remembers how he and his colleagues ended up reducing the 25-way RS-232 link down to a 2-pin connection. But perhaps he has forgotten just how much trial and error went into working out how to put all of that functionality into software at each end of the link.
While that work was being perfected, there was for a time a surge in sales of RS-232 adapters that had various switches built into them, so you could alter certain connections simply by flicking a switch instead of having to open up and resolder. I was put in mind of this by John McDowall (CTO of Grand Central), who notes in his blog: "Jeff Schneider makes (perhaps unknowingly) a very good case for a service such as Grand Central. To do the dynamic negotiation that he illustrates you need a service to do the mediation, otherwise everyone must support all known technologies."
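The mediation John describes is essentially translation through a shared intermediate form, much like those switchable RS-232 adapters. The sketch below uses invented format names and translators and is not Grand Central's actual design: each endpoint only needs to speak to the broker, rather than supporting every other endpoint's dialect directly.

```python
# A toy sketch (invented formats and names) of a mediation service: rather
# than every endpoint supporting all known technologies, each side talks to
# a broker that translates via a common intermediate representation.

def xml_to_common(msg):
    """Translate a (fake) XML message into the broker's intermediate form."""
    return {"payload": msg.replace("<m>", "").replace("</m>", "")}

def common_to_json(doc):
    """Translate the intermediate form into a (fake) JSON dialect."""
    return '{"payload": "%s"}' % doc["payload"]

TRANSLATORS = {("xml", "common"): xml_to_common,
               ("common", "json"): common_to_json}

def mediate(message, source_fmt, target_fmt):
    """Route a message through the intermediate form the broker understands."""
    doc = TRANSLATORS[(source_fmt, "common")](message)
    return TRANSLATORS[("common", target_fmt)](doc)

print(mediate("<m>invoice-42</m>", "xml", "json"))
```

Supporting a new dialect means writing two translators to and from the common form, not one per pair of endpoints, which is the economic case for a mediation service.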
The point of web services standardization, of course, is that ultimately everyone will be able to support all known technologies, simply by supporting the established standards. But that ultimate objective is still very far off in the future; some of the standards we'll need haven't even been thought of yet. In the meantime, the case for many varieties of mediation service remains strong, and the dream of effortless reconfiguration will remain unfulfilled in many instances of software interconnection. Patching those broken connections will still need the software equivalent of a soldering iron; but use it sparingly.
posted by Phil Wainewright 11:30 AM (GMT)
Monday, September 22, 2003
Who funds infrastructure?
"Shouldn't infrastructure formats, 'standards', software and to a certain extent hardware be like roads?" writes Jean-Jacques Dubray in a comment to an earlier posting of mine.
He asks: "Why should a document format that everybody uses to exchange documents be something you pay for eternity? What about a piece of software that everybody uses like an operating system or a browser. So if say somebody has already collected 50 billion dollars in tolls. Should not this be enough?"
This comparison to roads raises some interesting questions about private vs public vs common ownership. Today, it's generally accepted that roads should be publicly funded. They're owned by the state, and funded by taxes. But it wasn't always that way. Look back to Britain in the seventeenth and eighteenth centuries, when the nation was first developing its modern road system, and most new roads were built by private enterprises, which charged their users tolls. That model was extended to canals, and then to the railway system. In the US, private enterprise built the railroads, the telegraph and telephone systems, and electricity generation and distribution. Except in a few isolated cases, all of these infrastructure systems were funded largely by private investors, who charged their users tolls of one kind or another.
Today, the global connected computing infrastructure has sprung up entirely from private initiative and investment (the only exceptions being the military and academic beginnings of the early Internet infrastructure). The Internet, the PC architecture, Ethernet, Windows, Linux, Java, XML and Office file formats (to name just a few of the vital ingredients) have all been developed because private corporations have invested in their development, either directly or indirectly, and because users have paid their tolls.
But already the model has started to crack. Open source software promotes the view that infrastructure should be free, and so users pay no direct tolls, even though talented individuals and their employers invest significant resources into its development. The W3C has a royalty-free policy for all technologies adopted as web standards. Why shouldn't these principles be extended to other widely used elements of that connected infrastructure, including Windows and the various Office file formats?
In Britain, many earlier forms of infrastructure have passed from private into public ownership. Supporters of the transfers spoke a lot about the public good, but the main trigger was often a perceived loss of quality brought on by falling profitability. It's easy to nationalize an industry that's on its last legs. The US road system is publicly owned for a similar reason: it would simply be impossible to operate it as a profitable venture.
Microsoft's near-monopoly of certain elements of the connected computing infrastructure makes it unlikely to face similar financial difficulties, unless it can be successfully challenged one day by an open-source or standards-based rival. Meanwhile, nationalization is out of the question: nobody in their right mind would argue that Windows would be better if responsibility for its development were to pass from Redmond, Washington to Washington, DC. And lawmakers recently failed in their efforts to impose any significant restraints on Microsoft's freedom to charge tolls to its users.
Nevertheless, the historical trend is that private investors usually fund the initial development of infrastructure, and then later on, once the infrastructure becomes important to the community at large, the burden of operation has a habit of transferring more into public ownership.
Peter Drucker once made an interesting observation in an interview in Wired magazine, when he pointed out that "the computer industry, as an industry, hasn't made a dime." If you add up all the profit and loss of the entire computer industry from its inception, including all the failed startups, you arrive at a negative figure. Individual people and companies have made occasional fleeting profits, but in the aggregate those profits are more than balanced by the losses others have incurred. The same applies, I'm sure, to most other infrastructure industries, many of which only survive by institutionalizing their lossmaking in the form of public subsidies.
What this all adds up to is that, one way or the other, we all pay for infrastructure in the end. Whether it's as investors losing our shirts, as consumers being ripped off, or as reluctant taxpayers, sooner or later we pay our dues. Most of the time we can live with that. It is when there seems to be an imbalance in the system that we start resenting it, especially if an alternative exists that will deliver what we need more efficiently and equitably. That is one of the reasons for the growing popularity of open source and open standards today: they offer an effective means of correcting a perceived imbalance between public funding and private profit.
posted by Phil Wainewright 5:09 AM (GMT)