Waking at 5:30 in the morning on Thursday, August 13th, hoping to beat the rush of commuters, I got dressed and on the road from San Jose to San Francisco to attend the 2009 OpenSource World, Next Generation Data Center and CloudWorld conference at Moscone Center. It is a shadow of the O’Reilly OSCON event held in San Jose from July 20th to 24th, I overheard a fellow open-source attendee say as we waited for the first keynote of the morning from Lew Tucker, VP and CTO of Sun’s Cloud Computing operation. (Without his aviator-frame glasses, Tucker bears a resemblance to the actor Steve Buscemi—the talkative kidnapper in the movie “Fargo.”)
After a welcome and introduction from Jeff Kaplan of THINKstrategies, the CloudWorld conference chair, Tucker took the stage to begin his keynote, “If Cloud Computing is the Answer, What is the Question?” Tucker comes with the right credentials for discussing the topic. He started out as director of advanced development at Thinking Machines Corp.—the massively parallel processor (MPP) company founded in Waltham, Massachusetts in 1982, whose assets were acquired by Sun after its 1994 bankruptcy. MPP was one of parallel processing’s various schools. The field split into the symmetric multiprocessor branch—today found in its simplest form in the dual-core processors of PCs and Macs—and the massively parallel branch—now found in the blade servers of the large-scale compute farms populating the Internet.
The blade server MPP architecture is the workhorse of Internet commerce. Every time a user logs onto Amazon.com and places an order, he’s talking to one of the e-tailer’s several geographically dispersed compute farms. The large-scale deployment of these computing resources is driving the cost of computing down to on the order of 10 cents per CPU hour, according to Tucker. This economic reality is making cloud computing attractive to Fortune 1000 companies looking to reduce their IT costs by adopting a “pay as you go” model rather than making a large upfront equipment investment amortized over time. Ironically, in the early days of computing, the high cost of computers made it mandatory to centralize the computing resource and make it available by remote terminals. Today, the opposite is true: the low cost of computing is making it practical to deliver cloud computing to remote terminals everywhere. (See afterword below.)
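To put that 10-cent figure in rough perspective (my back-of-the-envelope arithmetic, not Tucker’s): renting 100 CPUs around the clock for a 30-day month works out to 100 × 24 × 30 × $0.10, or about $7,200, an operating expense a department can absorb month to month rather than a capital purchase of 100 servers up front.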
Tucker stated that the ubiquitous availability of broadband is the other factor contributing to the desirability of cloud computing. The widespread adoption of self-service e-commerce and the large accumulation of data on the web have also combined to validate the cloud-computing model. Tucker sees this only increasing with the expansion of machine-to-machine communications: OnStar calling in when it detects an air bag deployment, vending machines reporting low inventory, a building computer system monitoring and reporting on equipment operation, energy use and maintenance requirements, and the list goes on. This availability, Tucker said, is tempting large corporations to consider renting cloud computing resources rather than building the capacity in house. He points to Amazon’s success with SmugMug as evidence for cloud computing.
Started three years ago, SmugMug is a photo-sharing site that hosts the photos of professional photographers (they looked professional to me). The company of 50 employees uses Amazon’s S3 (Simple Storage Service) cloud storage to hold its 686,256,409 photos, adding new images at a rate of 10 terabytes a month. According to Amazon, the company has saved roughly $500,000 in storage expenditures and cut its disk storage array costs in half—all with no increase in staff or datacenter space. A higher-profile example is salesforce.com, which offers a cloud platform on which customers develop their own applications. Adtran Inc., for example, created an app for mobile devices that lets its sales force access customer information. The crown jewels of a corporation—its sales information—residing in the cloud.
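Going back to the SmugMug case for a moment, here is a minimal sketch (my illustration, not SmugMug’s actual code) of what pushing a single photo into S3 looks like from Python using the boto library of that era; the bucket and file names are made up, and AWS credentials are assumed to be configured in the environment.

```python
# A minimal sketch, not production code: upload one local image file to S3.
# Assumes AWS credentials are available via environment or boto config.
import boto
from boto.s3.key import Key

conn = boto.connect_s3()                          # connect using configured credentials
bucket = conn.create_bucket('example-photo-archive')  # hypothetical bucket name

key = Key(bucket)
key.key = 'photos/2009/08/IMG_0001.jpg'           # object name inside the bucket
key.set_contents_from_filename('IMG_0001.jpg')    # upload the local file
```

No disk arrays to buy, no datacenter floor space to lease: the storage grows as fast as the uploads do, and the bill follows usage.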
To listen to Tucker, you begin to see information technology as a set of Lego blocks that anyone with the software expertise to provide the connections can put together to achieve a desired solution. The server farms provide the physical plant; a data center OS deals with this physical plant, and an applications OS deals with the software plant. For example, Google’s cloud computing platform, Google App Engine, is essentially “HTML 5, web browser applications, with a back-end server that uses TCP/IP and RPC (Remote Procedure Call),” according to Google CEO Eric Schmidt. Developers can create applications on Google’s infrastructure for free, up to a point.
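For a sense of what developing on that infrastructure looks like, here is a minimal sketch of the kind of request handler the Python App Engine SDK of that era supported; the handler name and response text are placeholders of my own.

```python
# A minimal sketch of an App Engine request handler, assuming the Python
# runtime and the webapp framework bundled with the circa-2009 SDK.
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

class MainPage(webapp.RequestHandler):
    def get(self):
        # Google's infrastructure routes the HTTP GET here and handles scaling.
        self.response.headers['Content-Type'] = 'text/plain'
        self.response.out.write('Hello from the cloud')

application = webapp.WSGIApplication([('/', MainPage)], debug=True)

def main():
    run_wsgi_app(application)

if __name__ == '__main__':
    main()
```

The developer writes the handler and uploads it; Google’s servers do the rest, with no machines for the developer to provision.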
Information technology development is a continuous work in progress, and cloud computing is the latest incarnation. Its greatest adherents are companies like salesforce.com, emerging to serve new needs (increasing the productivity of sales teams) that didn’t exist before. When mainstream enterprises will decide to follow suit en masse is anyone’s guess, but if history is any indication, I suspect it’s not a matter of if but of when.
Afterword:
Mainframe time-sharing found its first commercial success at Dartmouth College in 1964 in the form of DTSS (the Dartmouth Time Sharing System). Students submitting programs to be run on the college mainframe, a GE-235, could enter them on a Teletype (TTY) machine (an electro-mechanical printer and keyboard with a communications facility for talking to other TTYs). The DTSS system used another machine, a GE DN-30 (Datanet-30), to handle communications to and from the TTYs. It was a one-to-many architecture with the TTYs at the ends and the mainframes, emulating a TTY machine, at the center. DTSS was the creation of Tom Kurtz and John Kemeny. The web site http://www.dtss.org/ has been set up to recreate DTSS for those interested in seeing what this precursor to cloud computing was like in the early 1970s. The site offers web-based emulators for both Mac and Windows.
Plus ça change, plus c'est la même chose. (The more things change, the more they stay the same.)