Throwing bandwidth at applications is never the answer
Ben Worthen of the Wall Street Journal has raised some excellent points about the challenges facing Google’s bold broadband experiment due to the lack of big-bandwidth applications. While there have been many brainstorming sessions on how to solve these problems, the reality is that the problem is fundamental and insurmountable.
I remember speaking to Geoff Daily of App-Rising about some of these brainstorming sessions, called “CampFibers,” at Lafayette, where a group of developers was put in a room to brainstorm. The developers were asked, “What kind of apps would you create if bandwidth was not a limitation?” The room fell silent until one brave soul said “Why?”, which actually summed up the answer perfectly.
Why is this the case? The reality is that software developers and system architects have to abide by a crucial design criterion for all Information Technology (IT) systems, or else the whole system is a nonstarter. That criterion is “scalability”: the ability of a system to scale up and support thousands to millions of end users.
We could certainly build several expensive arrays of servers with a total of 400 gigabits per second (Gbps) of capacity, which would cost us $400,000 per month (at the lowest bulk rate) in bandwidth. I can even offer an example of a fat-bandwidth application today that requires 3 Gbps of capacity: uncompressed 1080-60p video (1920×1080 resolution at 60 frames per second), which is beneficial for lower-latency video conferencing. But would anyone ever deploy such a system?
Our $400,000 monthly bandwidth bill would allow us to serve a whopping 133 customers, and that’s just about how many customers out there could actually afford to buy 3-gigabit broadband. But even if a hundred people could afford 3 Gbps service, would they actually waste their money on a novelty with minimal noticeable advantages over a 40 Mbps Blu-ray system? Probably not. At a more practical level, we could develop an alternative low-latency compression algorithm for video conferencing that doesn’t use inter-frame compression and get the video down to 16 Mbps.
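The arithmetic behind those figures is easy to check. A quick sketch, using the numbers stated above (400 Gbps of capacity at roughly $1,000 per Gbps per month at bulk rates, and a 3 Gbps uncompressed stream):

```python
# Back-of-the-envelope check of the 3 Gbps video-conferencing scenario.
# Assumption: the $400,000/month figure implies bulk transit pricing of
# about $1,000 per Gbps per month.

capacity_gbps = 400
cost_per_gbps_month = 1_000
stream_gbps = 3  # uncompressed 1080-60p video

monthly_cost = capacity_gbps * cost_per_gbps_month
concurrent_customers = capacity_gbps // stream_gbps

print(monthly_cost)          # 400000
print(concurrent_customers)  # 133
```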
Scalability is an absolute requirement because any $400K/month system must support at least 1 million concurrent end users. So if we wanted to support 1 million concurrent users with 400 Gbps of capacity, our application could average no more than 400 kilobits per second (Kbps), which is pretty stingy for video. It turns out that this is what it costs to run a low-end YouTube service offering only low-resolution 400 Kbps video. If we wanted to serve the kind of 3.75 Mbps “HD”* content that YouTube serves up, that would raise our costs nearly 10-fold or cut our concurrent viewers by 90%.
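The same arithmetic, scaled to a million concurrent users, bears out both the 400 Kbps budget and the roughly 10-fold penalty for 3.75 Mbps “HD” (all rates are the ones quoted above):

```python
# Per-user bandwidth budget when the same 400 Gbps pipe must serve
# 1 million concurrent users, and the cost of stepping up to 3.75 Mbps.

capacity_kbps = 400 * 1_000_000  # 400 Gbps expressed in Kbps
users = 1_000_000
hd_rate_kbps = 3_750             # YouTube's ~3.75 Mbps "HD" rate

per_user_kbps = capacity_kbps / users       # budget per concurrent user
hd_cost_multiplier = hd_rate_kbps / per_user_kbps
hd_concurrent_viewers = capacity_kbps // hd_rate_kbps

print(per_user_kbps)                  # 400.0
print(round(hd_cost_multiplier, 1))   # 9.4 -- "nearly 10-fold"
print(hd_concurrent_viewers)          # 106666 -- ~90% fewer than 1M
```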
The suggestion that we could unleash a torrent of new bandwidth-intensive applications if only we had faster broadband is laughable. Once we understand the economics of video, we quickly begin to see why applications will always lag broadband, which is the exact opposite of “conventional wisdom.” More fundamentally, the notion that application innovation, or innovation in general, must somehow involve higher bandwidth is simpleminded. True innovation lies in lower-bandwidth applications that solve real-world problems, and Google’s biggest (and probably only) money makers are proof positive of this.
* Pseudo-HD is at times worse quality than DVD on high-complexity video, due to its relatively low bitrate.