
Throwing bandwidth at applications is never the answer

By George Ou | 9 March 2010

Ben Worthen of the Wall Street Journal has raised some excellent points about the challenges facing Google’s bold broadband experiment due to the lack of big-bandwidth applications.  While there have been many “brainstorming” sessions on how to solve these problems, the reality is that the problem is fundamental and insurmountable.

I remember speaking to Geoff Daily of App-Rising about some of these “brainstorming” sessions, called “CampFibers,” in Lafayette, where a bunch of developers were placed in a room to brainstorm.  The developers were asked, “What kind of apps would you create if bandwidth was not a limitation?”  The room fell silent until one brave soul said “why?”, which actually summed up the answer perfectly.

Why is this the case?  The reality is that software developers and system architects have to abide by a crucial design criterion for all Information Technology (IT) systems, or else the whole system is a nonstarter.  That criterion is “scalability”: the ability of a system to scale up and support thousands to millions of end users.

We can certainly build several expensive arrays of servers with a total of 400 gigabits per second (Gbps) of capacity, which would cost us $400,000 per month (at the lowest bulk rate) in bandwidth.  I can even offer an example of a fat-bandwidth application today that requires 3 Gbps of capacity: uncompressed 1080-60P video (1920×1080 resolution at 60 frames per second), which is beneficial for lower-latency video conferencing.  But would anyone ever deploy such a system?
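
As a sanity check on that 3 Gbps figure, here is a minimal back-of-the-envelope sketch of the raw bitrate arithmetic, assuming 24 bits per pixel (a detail not stated above):

```python
# Rough bitrate arithmetic for uncompressed 1080p60 video,
# assuming 24 bits per pixel (8 bits per RGB channel, no chroma subsampling).
width, height = 1920, 1080
frames_per_second = 60
bits_per_pixel = 24  # assumption

bits_per_second = width * height * frames_per_second * bits_per_pixel
print(f"Uncompressed 1080p60: {bits_per_second / 1e9:.2f} Gbps")
# prints roughly 2.99 Gbps, i.e. about 3 Gbps before any compression
```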

Our $400,000 monthly bandwidth bill would allow us to serve a whopping 133 customers, and that’s just about how many customers out there could actually afford to buy 3-gigabit broadband.  But even if a hundred people could afford 3 Gbps service, would they actually waste their money on a novelty that has minimal noticeable advantage over a 40 Mbps Blu-ray system?  Probably not.  At a more practical level, we can develop an alternative low-latency compression algorithm for video conferencing that doesn’t use inter-frame compression and get the video down to 16 Mbps.
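
Spelling out the arithmetic behind that 133-customer figure (the per-customer cost shown at the end is a derived number, not one stated above):

```python
# How many concurrent 3 Gbps streams fit in a 400 Gbps pipe,
# and what each one implies in raw bandwidth cost at $400,000/month.
total_capacity_gbps = 400
stream_gbps = 3
monthly_bandwidth_cost = 400_000  # dollars, at the lowest bulk rate

concurrent_customers = total_capacity_gbps // stream_gbps
cost_per_customer = monthly_bandwidth_cost / concurrent_customers

print(concurrent_customers)          # 133
print(f"${cost_per_customer:,.0f}")  # roughly $3,008 per customer, per month
```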

Scalability is an absolute requirement because any $400K/month system must support at least 1 million concurrent end users.  So if we wanted to support 1 million concurrent users with 400 Gbps of capacity, our application must average no more than 400 kilobits per second (Kbps) of bandwidth per user, which is pretty stingy for video.  It turns out that this is what it costs to run a low-end YouTube service with only low-resolution 400 Kbps video.  If we wanted to serve up the kind of 3.75 Mbps “HD”* content that YouTube serves up, that would raise our costs nearly 10-fold or cut our concurrent viewers by roughly 90%.
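
A minimal sketch of that scalability math: split the same 400 Gbps across a million concurrent users, then see what the 3.75 Mbps “HD” rate does to the user count:

```python
# Per-user budget when 400 Gbps must serve 1 million concurrent users,
# and the user count if each stream is 3.75 Mbps instead of 400 Kbps.
total_capacity_kbps = 400 * 1_000_000  # 400 Gbps expressed in Kbps
concurrent_users = 1_000_000

per_user_kbps = total_capacity_kbps / concurrent_users
print(per_user_kbps)  # 400.0 Kbps per user

hd_stream_kbps = 3_750  # 3.75 Mbps "HD"
hd_users = total_capacity_kbps // hd_stream_kbps
print(hd_users)  # 106666 users -- roughly a 90% cut from 1 million
```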

The suggestion that we could unleash a torrent of new bandwidth-intensive applications if only we had faster broadband is laughable.  Once we understand the economics of video, we quickly begin to understand why applications will always lag behind broadband, which is the exact opposite of “conventional wisdom”.  More fundamentally, the notion that application innovation, or innovation in general, must somehow involve higher bandwidth is simpleminded.  True innovation lies in lower-bandwidth applications that solve real-world problems, and Google’s biggest (and probably only) money makers are proof positive of this.


* Pseudo-HD is at times worse quality than DVD with high-complexity video, due to the relatively low bandwidth.

10 Comments »

  • Wes Felter said:

    If we take this thinking to its logical conclusion, only P2P can use huge broadband, because it avoids the cost of servers and datacenter bandwidth.

  • George Ou (author) said:

    Wes, this is precisely what is happening in Japan. Their server bandwidth costs are nearly 10x higher, which limits their “Tube” sites to 1 Mbps. So P2P is literally the only thing that bypasses the economic constraints of bandwidth, and it’s by far the most dominant thing on the network. However, even P2P has to be constrained by the ISPs because the backhaul costs are killing them. They don’t like capping the downstream, so they tell everyone the downstream is unlimited, but they cap the upstream to a 3% duty cycle.

  • Jeffrey W. Baker said:

    The fact that some guys in a room couldn’t think of what to do with more bandwidth only serves to illustrate that they had too few people in the room. Give millions of people ludicrous bandwidth and then you’ll see some real brainstorming. Tons of interesting applications have come out of universities, where the hackers and researchers generally enjoy bandwidth tens or hundreds of times more plentiful than what is available to the public at large.

  • George Ou (author) said:

    Jeffrey, I think you are still missing the whole point. Bandwidth costs money, and you don’t go out of your way to develop high-bandwidth applications.

    P2P manages to bypass the true cost of bandwidth by overusing and saturating shared broadband resources, but it’s limited either by slower upload speeds or by upstream usage caps. 100 Mbps broadband services in Japan will limit the upstream duty cycle to 3%.

    Speaking of universities, they’re no different: bandwidth there is primarily used for piracy over P2P. They typically end up blocking or severely throttling P2P because it makes the campus network unusable.

  • Michael Baumli said:

    Bandwidth is no doubt a constraint, but we should turn this into a positive thing instead of a negative thing.

    Think about this from a computing perspective. For years, developers were constrained by resources and fought to keep their programs small yet functional. As time progressed, constraints eased and developers began to devour computing resources as they were made available to them. Now, years after that, we are learning that we have to conserve power, reduce our application footprint, and stop thinking that we can declare everything as a global variable because, quite frankly, not everything needs to be a global variable.

    Do we need unlimited bandwidth? No
    Would we like unlimited bandwidth? Yes
    Is this a realistic view? Not Really.

    From my time in Japan, having 100 Mbps bandwidth was a great experience for sharing media online.

    I managed to upload hundreds of pictures to Facebook, upload 3 HD videos to Facebook, download OpenSUSE 11.2, and surf the web pretty religiously. While I had a great time, I don’t see this as something that I would do on a nightly basis. At home, I watch a few shows on Hulu and talk to my girlfriend on Skype. Past that, there are very few things I do that would even use bandwidth to the point at which I am constrained.