Loosely Coupled weblog


Thursday, November 14, 2002

Spontaneous intelligence and the Semantic Web
Amazingly, it was not until the mid-nineteenth century that the French chemist Louis Pasteur finally dispelled the fallacy of spontaneous generation. His award-winning 1859 experiment for the French Academy of Sciences conclusively disproved the notion that life could arise spontaneously from non-living matter. Until then, it had been widely assumed that microbes, maggots, insects, and even mice and rats could all be bred spontaneously simply by assembling the necessary ingredients in a suitable open container.

Such beliefs rightly seem absurd today, and yet despite the clear superiority of contemporary popular scientific understanding, many highly educated and otherwise extremely clever people still hold to a notion that is every bit as risible as the one that Pasteur disproved. It is the notion of Spontaneous Intelligence — the absolute conviction that the act of linking several computers together on a network can give rise to the spontaneous generation of original thinking.

Here is a typical quote from an InfoWorld interview last week with Dave Hollander, CTO of Contivo and chairman of the W3C Web Services Architecture and XML Schema working groups: "'The Web relies on humans to add intelligence. Web services in its long-term, ultimate vision takes computers to do that,' he said."

I do hope that one day someone will devise an experiment that puts this astonishing statement to the test. We will all sit there, patiently waiting for a cluster of web services-enabled computers to spontaneously generate even the smallest amount of intelligence without human intervention. Perhaps then people will stop spouting this nonsense — heard most often in the vicinity of discussions of the Semantic Web, which is merely a repository for storing intelligence, not a crucible for creating it.

Fortunately, the article redeems itself in a concluding paragraph on the role of XML Schema in the Semantic Web: "XML Schemas express shared vocabularies and allow machines to carry out rules made by people, according to the W3C." This statement more accurately conveys the simple truth that the only way intelligence gets into a computer is as a result of humans putting it there. The Semantic Web will not replace human intervention, merely displace it to the design and configuration stages of an automated process.
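
To make the W3C's point concrete, here is a minimal sketch (in Python, using the third-party lxml library) of what "machines carrying out rules made by people" looks like in practice. The schema, the element names and the sample documents are all invented for illustration: the human contribution is the vocabulary and the rules; the machine's contribution is simply to apply them.

# A human writes the vocabulary and the rules; the machine only enforces them.
# Requires the third-party lxml library.
from lxml import etree

# Human-authored rules: an <order> must contain a <customer> name
# and a <quantity> that is a positive integer.
SCHEMA = etree.XMLSchema(etree.fromstring(b"""
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="order">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="customer" type="xs:string"/>
        <xs:element name="quantity" type="xs:positiveInteger"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
"""))

good = etree.fromstring(b"<order><customer>Acme</customer><quantity>3</quantity></order>")
bad  = etree.fromstring(b"<order><customer>Acme</customer><quantity>-1</quantity></order>")

print(SCHEMA.validate(good))  # True  -- the rule a person wrote is satisfied
print(SCHEMA.validate(bad))   # False -- the machine flags the breach, but decides nothing of its own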

In the main, this automation of intelligence will be a marvelous advance in efficiency and reliability. But it's nothing miraculous. Nor is it always going to deliver the best results, especially in cases where the logic of that earlier design and configuration is flawed. Understanding that the system is founded on intelligence put into it by fallible humans, rather than imagining that the intelligence has spontaneously arisen within the confines of some superior, computerized entity, will be essential to diagnosing and fixing those flaws when they appear.
posted by Phil Wainewright 6:46 AM (GMT) | comments | link

Wednesday, November 13, 2002

You never believed it could be this good
Real-time computing is so utterly unexpected that when we actually experience it, it feels like something must have gone wrong. Even though it's what everyone ideally wants, if you deliver it without proper preparation, your customers and frontline staff may not be able to handle it. Here's what happened to me today. You'll see what I mean.

Six weeks ago, I attended BT's launch of its web services strategy. As part of the presentation, Pierre Danon, CEO of BT Retail, described some of the ways the telecoms carrier is using web services to streamline its own service delivery to customers. One was a project to allow its call center staff to give instant answers to customer enquiries about work in progress. "It puts together a number of our legacy applications and our new front-end application ... through at least four web services touch points," he said.

Today, I called BT as a customer to follow up an order I had placed the day before, and I experienced the astounding results of BT's deployment of web services. I had ordered an additional phone line, which I'll be upgrading to ADSL, and I wanted to transfer my existing business number to the new line, which is due to be installed tomorrow. The operator started to action the transfer on her system ... and at that very moment, while she was still in mid-sentence, the line went dead.

My instant reaction was that there had been a fault in the call center. What I later discovered was that, thanks to BT's slick new web services infrastructure, the number transfer had happened there and then, in real-time. We had been cut off because the line I was using to request the transfer was transferred instantly, at the very moment the operator filed the request.

The operator hadn't been expecting this. She was used to a delay of several hours before number transfers took place, and she was concerned to action it so that it would be in place by the time the engineers arrived at the local exchange the next morning. No-one had warned her or trained her to expect it to happen on the spot, and no-one has yet thought to build a scheduling step into the process, because the possibility that you might need to control the timing to within a few seconds is so alien that frankly it hasn't yet occurred to anybody.

As a result, I was left hanging on the line, musing on the absurdity of being cut off by what I assumed at the time was a fault in BT's own call center. It was only later that I recalled Danon's presentation and began to suspect that the real explanation was that BT's web services upgrade had been successful beyond anybody's wildest expectations — a suspicion confirmed by subsequent enquiries.

The moral of the story is to be prepared for any and every outcome when implementing web services — especially for the eventuality that it really will live up to all its promises. Trouble is, we're so used to computer technology failing us that we simply don't expect it to work on demand, especially not the front-line operators and consumers who have so often had to live with the consequences of ill-advised past misadventures. Convincing them that it really is going to work now is a whole new challenge for technologists to confront.
posted by Phil Wainewright 1:01 PM (GMT) | comments | link
XML for the masses
The aim of Office 11 is to put the power of XML into the hands of regular users, says Microsoft's Jean Paoli, one of the co-creators of XML, interviewed by Jon Udell for InfoWorld: "The result is not for developers. We are for the masses ... we are not here to enable XML developers to be happy creating XML. We are putting XML generation into the hands of people who do not understand XML at all."

The biggest win here is gaining Excel as a front-end tool for analysing XML data: "We have this great toolbox which enables you to analyze data," Paoli is quoted as saying. "We can do pie charts, pivot tables, I don't know how many years of development of functionality for analyzing data. So we said, now we are going to feed Excel all the XML files that you can find in nature."

Jon counterpoints the interview with his own commentary*. Jon notes that he would like to be able to create and edit XML in other tools besides Word, XDocs and Excel: "If the goal is to enrich as much user data as possible, the browser's TEXTAREA widget and the e-mail client's message composer are arguably the most strategic targets for XML authoring support."

Then comes the catch. To manipulate this XML data effectively, you need to mark up your spreadsheet or Word document with schema information, explains Paoli: "Give names to the data. The data is about the user's name and e-mail address, for example. I don't want to call it cell 1, cell 2, or F1 or F11. The whole thing about XML is to give names to things which are in general not named."
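
To see what that naming buys you, here is a toy sketch in Python (standard library only); the <contact> vocabulary is invented purely for illustration. Once the data carries names rather than cell co-ordinates, a program can find what it needs without knowing the layout of the spreadsheet it came from.

# "Giving names to things": the same record as an anonymous row and as named XML.
import xml.etree.ElementTree as ET

# Spreadsheet-style: the meaning lives in the reader's head, not in the data.
row = ["A. Customer", "customer@example.com"]   # which cell is which?
email_by_position = row[1]                      # breaks if a column is inserted

# XML-style: the markup itself says what each value is.
record = ET.fromstring(
    "<contact>"
    "<name>A. Customer</name>"
    "<email>customer@example.com</email>"
    "</contact>"
)
email_by_name = record.findtext("email")        # still works if the fields are reordered

print(email_by_position, email_by_name)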

On the one hand, this is to be welcomed, for as Paoli goes on to say, "Who knows how to create a data model better than the financial or health care company who uses the data?" It is about time we had users defining XML schema rather than having to rely on an expert going away and modeling it (inevitably an imperfect procedure). But the experience of user-defined document templates in Word, where propagating and then maintaining a corporate standard has proved difficult even at that elementary, presentational level of document structure, doesn't augur well for the rapid creation of consistent, sharable XML DTDs and schemas across the enterprise.

Putting XML directly into the hands of the users is an important and necessary step, but it is only the beginning of the journey. There is a long and winding road ahead before the raw capability is complemented by the usability and understanding that can properly unleash its potential.

UPDATE: [added 11/15/2002] *In my original posting, I'd added this comment: (this is cutting-edge journalism, by the way — neither a finished article nor a weblog entry but something in-between that would never have happened without the influence of weblogging or the convenience of online publishing — an analytical journalist publishing his interview notes accompanied by his reflections on them). Jon subsequently noted that the article was his weekly column, so I shouldn't really have implied that it was less than a finished piece. But I almost wish that I had been right, because the idea of supplementing traditional published formats with new ones appeals to me, and Jon is already pushing the boundaries in interesting ways with his concurrent weblog and published articles.
posted by Phil Wainewright 1:24 AM (GMT) | comments | link

Monday, November 11, 2002

AmberPoint raises $13.6m second round
VentureWire reports today that web services management startup AmberPoint has raised $13.6m in its second round of equity financing. With top management drawn largely from alumni of former tools developer Forte Software, the company's latest round confirms its status among the most-watched elite of web services hopefuls in the San Francisco Bay area.

Originally named Edgility Software, AmberPoint launched its new identity in June this year. The company had already raised $9.1m in its first round in August 2001 from Norwest Venture Partners and Sutter Hill Ventures. The new round, led by new backer Crosslink Capital with support from the two existing investors, brings total funding to date to an impressive $22.7m.

The company says that its flagship product, AmberPoint Management Foundation, adds a "non-invasive" management layer to web services implementations. It allows enterprises to monitor and track activity at both a business and a systems level, while managing access control, versioning and change management. Its distributed architecture can be implemented across multiple web services environments, including both J2EE and .NET.

AmberPoint launched with reference customer implementations at insurance company MetLife and energy company TransCanada, but also emphasizes its partnerships with systems integrators such as IonIdea, Kuvera and ThoughtWorks, and with web services software vendors including Cape Clear and The Mind Electric.

The company also announced a new version of its software "optimized for Microsoft .NET," including "components that execute in native C#." The new version will be available next month. Future revisions of the Management Foundation will be released simultaneously for J2EE and .NET, the company said.
posted by Phil Wainewright 7:39 AM (GMT) | comments | link

Copyright © 2002-2005, Procullux Media Ltd. All Rights Reserved.