COMMENTARY – The virtual machine image, a powerful driver of cloud computing, may be described as a tiger few can easily ride. VMs are proliferating. Earlier this month, no less a personage than IBM’s Daniel Sabbah forecast that virtual image sprawl would outgrow IT’s capacity to keep pace.
“Virtual images are tripling every two years, outpacing the doubling in compute power and essentially flat IT budgets,” IBM Tivoli Software General Manager Sabbah said in a statement coinciding with IBM’s Pulse Conference.
“With current operating practices, every two years you’d need 1.5 times the physical infrastructure to support cloud and twice the labor. That’s an unsustainable cost and management problem which is the exact opposite of the promise of cloud,” he continued, as he outlined benefits of IBM’s new SmartCloud Foundation offerings. While public cloud providers can be expected to ramp up to manage ultra-large configurations, it is harder to see how labor issues will play out in the much-discussed user-side cloud type known as private cloud.
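A back-of-the-envelope calculation shows where the 1.5 figure comes from. The sketch below uses only the growth rates Sabbah cites; the doubling-of-labor figure is his own estimate and is not derived here.

```python
# Back-of-the-envelope math using the growth rates Sabbah cites; both trends
# are assumed to be measured over the same two-year window.
image_growth = 3.0     # virtual images triple every two years
compute_growth = 2.0   # compute power roughly doubles in the same period

# Even if each unit of hardware can absorb twice as many images as before,
# the physical footprint still has to grow by the ratio of the two trends.
infrastructure_factor = image_growth / compute_growth

print(f"Infrastructure needed every two years: {infrastructure_factor:.1f}x")  # 1.5x
```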
Will work for cycles
The labor issue is a stubborn one, and it must be factored into the cloud computing ’what if?’ analyses that enterprise architects are now undertaking. Cost savings are crucial to the dream of cloud, but greater experience with the architecture leads many to downplay them.
Various companies have been working to address the labor issues of cloud, a massively scaled architecture that calls for sophisticated, on-demand provisioning of increasingly complex configurations and large numbers of virtual images.
The effort is not new; the problem goes back a long way. It has certainly been a concern as distributed computing and rack-based blade servers have multiplied. The movement toward grid and autonomic computing looked to address the challenge, and now cloud and even DevOps can be seen contending to solve the problem, but remedies have yet to take hold.
The poster children for the first rush of cloud – Google and Amazon – can be said to have “thrown people at the problem,” as both employed high head counts of developers to service vast farms of servers. And those developers are very advanced developers at that. The classic Google ranch hand is a math and algorithmic wizard who is also adept at systems programming. In Google’s early days, at least, this person combined development and operations skills to a startling degree.
Is cloud computing hugely labor intensive?
We wondered if other companies could repeat this model. So, when we caught up with Skytap’s Brian White at this week’s EclipseCon 2012 in Reston, Va., we asked for his take. As vice president of products at cloud provider Skytap, White is responsible for product strategy and product management. Before this, he was director of developer resources for Amazon Web Services and launched the AWS Elastic Beanstalk platform-as-a-service offering. We asked whether private cloud is really any less labor intensive.
“It’s hugely labor intensive,” White answered, and for good reason. “There are things that make [public cloud] a challenge. One is keeping it up and running all the time.” Another, he said, is the fact that the number of servers you can deploy may be relatively modest. “You don’t have unlimited capacity for scaling,” he said.
Where cloud approaches have the most value, White and others have concluded, is where resource needs are unpredictable or irregular. That is why Skytap and many other cloud providers focus on the development and test markets.
Development and test tasks make for a dynamic workload, he said, adding “from a cost perspective you don’t need to have these projects running 24/7.”
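A hypothetical cost comparison illustrates the point. The hourly rate and usage pattern below are invented for illustration and do not reflect Skytap’s or any other provider’s pricing.

```python
# Hypothetical weekly cost of a dev/test environment, with invented numbers:
# a $0.50 per instance-hour rate and a 10-hours-a-day, 5-days-a-week usage pattern.
hourly_rate = 0.50                  # assumed cost per instance-hour
always_on_hours = 24 * 7            # environment left running 24/7
on_demand_hours = 10 * 5            # spun up only during working hours

always_on_cost = hourly_rate * always_on_hours
on_demand_cost = hourly_rate * on_demand_hours

print(f"Always-on:  ${always_on_cost:.2f}/week")                 # $84.00
print(f"On-demand:  ${on_demand_cost:.2f}/week")                 # $25.00
print(f"Savings:    {1 - on_demand_cost / always_on_cost:.0%}")  # 70%
```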
For cloud, “there is a huge amount of hype around cost,” said White. “The real benefit people are getting out of it is agility – much more than just pure cost reduction.”
Continuous deployment
While it is largely a beneficial trend, the move to Agile development further exacerbates the cloud planning dilemma architects face. This was brought home in conversation with Dave West, an analyst at Forrester Research, who spoke on Lean development at EclipseCon 2012. He showed that deployment is no longer an event at the end of the Waterfall development lifecycle; it is now a constant companion. That is because part of the Agile goal is to deliver bits of functionality as they become available.
These new deployment requirements are certainly an issue with which cloud computing administrators – as well as developers and architects – are going to have to deal. Here, cloud may drive change. It is shedding light on dark problems.
“Cloud is an interesting phenomenon,” said Forrester’s West. “I am excited about what it is doing to drive internal IT to think about its systems in a different way.” – Ryan Punzalan and Jack Vaughan
In the face of fairly rampant fear of placing data on a public cloud, much attention has turned to private cloud – but labor and cost issues may unsettle such undertakings. What do you think?