In response to a few nominal price drops from public IaaS providers of late, as well as a few pundits noticing that those drops have become less frequent than they once were, I’ve seen more than one piece questioning whether the providers can keep dropping their prices. They say things like “margins are already thin” and “the cost of the labor to support public cloud isn’t going down.” There appears to be a narrative under construction here that the cost to provide you with, say, an hour of access to a compute instance or a GB of disk space is fixed for the provider unless they can find some innovative way to reduce it.
That narrative is false. Here’s why.
Remember the sample configuration we used to compare IaaS pricing in my earlier post? The internal IT costs that would be replaced by public cloud service for that configuration break down as follows:
Notice that almost half of the cost is the infrastructure hardware itself – servers and storage. That model is based on large internal IT shops, and since the leading public cloud providers tend to write their own infrastructure software, use cheap hydroelectric power, and get more efficient with labor as they scale, it’s reasonable to assume that hardware is actually more than half of what you are paying for with public cloud.
Now remember that unit costs for IT hardware tend to go down significantly over time. By “unit” costs I mean the cost for each unit of capacity, such as a GB of disk or an EC2 Compute Unit of processor capacity. Moore’s Law has decelerated a bit since its inception, but we’re still seeing a doubling of processor power around every two and a half years at an equivalent price point. Disk storage gets cheaper for equivalent capacity even faster than that. The end result is that the 100 AWS compute instances and 1,000 TB of disk you signed up for a couple of years back now cost Amazon significantly less to provide than they did when you started using them. And that means the providers have a choice: they can either let their service get more and more profitable, or they can drop their prices.
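To put rough numbers on that, here is a minimal sketch of the arithmetic. The 2.5-year doubling period for compute at an equivalent price point comes from the paragraph above; the 2-year period for disk is my own illustrative assumption standing in for “even faster than that,” not a sourced figure.

```python
# Sketch: how a provider's unit costs decay if price/performance doubles
# on a fixed schedule. The 2.5-year compute doubling period is from the
# post; the 2.0-year disk period is an assumption for illustration only.

def unit_cost_factor(years: float, doubling_period_years: float) -> float:
    """Fraction of the original unit cost remaining after `years`."""
    return 0.5 ** (years / doubling_period_years)

# The 100 instances / 1,000 TB you signed up for two years ago:
compute_factor = unit_cost_factor(2.0, 2.5)  # ~0.57
disk_factor = unit_cost_factor(2.0, 2.0)     # 0.50 exactly

print(f"compute now costs the provider ~{compute_factor:.0%} of what it did")
print(f"disk now costs the provider ~{disk_factor:.0%} of what it did")
```

Under those assumptions, a provider charging you the same rate as two years ago is pocketing a cost reduction of roughly 40–50% on the underlying hardware, which is the margin-versus-price-cut choice described above.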
The providers generally don’t like to talk about things like Moore’s Law, for the same reason that IBM Global Services didn’t like to talk about it back in the early days of traditional IT outsourcing. Too many customers looked only at cost savings on day one of the deal and didn’t understand that they’d be overpaying a few years later if they were still being charged the same amount. It wasn’t until the advent of benchmarking clauses in outsourcing contracts that this began to change. Nobody benchmarks cloud deals, since the barriers to switching providers are so low, but those low barriers are pretty meaningless if the dominant providers price their services in lock step with one another.
The one exception among the cloud leaders right now is Google, sort of: they do mention Moore’s Law on their site as part of their pricing philosophy, but it seems to get little attention, and I’m not convinced they are still walking that talk. If they are, they’ll drop prices very soon. In any case, all of the providers are very, very aware of this situation, and they track it closely. Many of their customers, not so much. Customers are historically awful at preparing business cases, and the pundits who make a living writing about what the providers do tend to focus on easy things like the next great provider innovation rather than hard things like cloud service financials. It’s now been over a year and a half since I priced the configurations we’ve been discussing in this blog, and the pricing from Amazon, Google and Microsoft still hasn’t changed in all that time. But business is good, and the money is just rolling in, so why would they change a thing?