As the new universe of Internet computing begins to cool after its explosive beginning, some of its most prominent features are beginning to take shape. Google's Supercomputer is an important early example, writes Jon Udell. Network power is coalescing around this hub because it is one of a small minority of "privileged vantage points" from which "much power can and will be wielded."
Jon makes the interesting observation that, within the loosely coupled fabric of the Internet, such power nodes will often employ more tightly-coupled architectures internally. But that doesn't make them any less a part of the wider cohesive Internet computing fabric:
"So should we say that the computer is the network, or that the network is the computer? Both statements are true. A supercomputer, operating at global or merely enterprise scale, creates its own internal network of services. But supercomputers also federate with their peers and converse with their myriad clients to enact computation on a grander scale. There's no single right architecture or topology."
In other words, the federated Internet fabric is a heterogeneous landscape, a patchwork of everything from ultra-light loose coupling to super-dense integration. The same, of course, is true of each individual enterprise infrastructure within it, each behaving as a microcosm of the whole, in fractal symmetry. This applies just as much to the business processes that are being automated as it does to the underlying infrastructure that performs the automation. Coincidentally, enterprise architect Melissa Cook describes this other side of the coin in an opinion piece this week for Computerworld, The Enterprise Architecture Challenge: Integration:
"Our enterprise "parts" are not arbitrary. If we do a good job, we will make it crystal clear what our enterprise "parts" are, where the integration points are, which ones should be common or standardized across our enterprise and which ones can be unique for each business unit. For example, you may want common financial and HR business processes, but different manufacturing processes for each business unit. There is some brain-busting work that has to be done in deciding what is in and out of each business process and how common you want that part of your enterprise to be. And it is critical to get the breakpoints between the parts right."
Finding those breakpoints is crucial, and while enterprise architects will prefer to design them in from the start, the uncomfortable truth is that, although an informed guess will often come close to the mark, many of them have to be found by more laborious trial and error. That, of course, is what dynamic networks are really good at, and so much of the important work is simply going to emerge out of the primordial soup of the Internet fabric.
Jon's column took several blog entries as its starting point. Rich Skrenta's essay, The Secret Source of Google's Power introduced the important notion of Google's server farm as closely-coupled supercomputer: "... a massive, general-purpose computing platform for web-scale programming." Jon then quotes Tim O'Reilly's response that, "Once Internet apps truly get to scale, they'll make the network itself disappear into the universal virtual computer." Or, as Jon ripostes, vice-versa.
Once this happens, it will matter less and less where the computing physically takes place, since computation will simply migrate to where it's most effective (measured in terms of either cost or value). Tim O'Reilly says that what will matter then is "Who will own the data?" But I think much of the data will be floating about for free, and the really important question will not be the data itself but how you view it. In short: Who will own the context? Jon is right to close by highlighting vantage points, because owning data won't get you very far unless you can put it in a context that adds value. Doing exactly that, of course, is what has already made Google so much money.
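To make the "migrate to where it's most effective" idea concrete, here is a toy sketch (my own illustration, not anything from Jon's column): given a set of candidate venues, each with an estimated cost and an estimated value for running a computation there, the computation is placed wherever the value-minus-cost score is highest. The venue names and numbers are entirely invented.

```python
# Toy placement chooser: computation "migrates" to the venue where it is
# most effective, scored here as estimated value minus estimated cost.
# All venues and figures below are hypothetical.

def best_venue(venues):
    """Return the name of the venue with the highest (value - cost) score."""
    return max(venues, key=lambda v: v["value"] - v["cost"])["name"]

venues = [
    {"name": "local-cluster", "cost": 5.0, "value": 8.0},  # score 3.0
    {"name": "edge-cache",    "cost": 1.0, "value": 2.5},  # score 1.5
    {"name": "central-farm",  "cost": 4.0, "value": 9.0},  # score 5.0
]

print(best_venue(venues))  # central-farm
```

The interesting questions, of course, are the ones this sketch waves away: who supplies the cost and value estimates, and from what vantage point — which is exactly the "context" point above.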