Putting American bandwidth caps into context
Recently, Chiehyu Li and James Losey of the New America Foundation put out this comparison of Internet usage caps between the U.S. and Japan that paints a dire picture of how screwed up broadband in America is. While there’s little doubt that Japan is way ahead of just about any other nation on the broadband technology curve, Li and Losey’s analysis is problematic and fails to put the usage caps in a global context. But before I fill in the shortcomings of their analysis, I’ll first define some key terminology.
Usage cap: This is the number of gigabytes a broadband customer can use per month before the Internet Service Provider (ISP) applies additional fees. Li and Losey mistakenly refer to the “usage cap” as a “bandwidth cap,” but bandwidth is a measure of data throughput, and the throughput is not what is being capped; it is the usage of that bandwidth that is being capped.
Duty cycle: This is the ratio or percentage of the usage cap to the maximum transmissible data that a connection speed would allow if the connection were operated non-stop at full throttle. So if a Japanese ISP has a usage cap of 900 GB, and its 100 Mbps fiber broadband connection has a maximum transmissible data volume of 32,400 GB per month, then the duty cycle is 2.78%. That means the broadband customer either uses the connection at full throttle for no more than 40 minutes per day, or throttles the 100 Mbps connection down to an average of no more than 2.78 Mbps, or uses some combination of the two to ensure they consume no more than 30 GB per day and avoid getting their Internet access suspended.
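The duty cycle arithmetic above can be sketched in a few lines of Python, using the Japanese fiber example from the text (900 GB cap on a 100 Mbps line, assuming a 30-day month and decimal gigabytes):

```python
# Duty cycle: the usage cap as a fraction of the maximum data a
# connection could move if run non-stop at full throttle for a month.

def max_monthly_gb(speed_mbps: float, days: int = 30) -> float:
    """Maximum transmissible data in GB per month at full throttle."""
    seconds = days * 24 * 3600
    return speed_mbps / 8 / 1000 * seconds  # Mbps -> MB/s -> GB/s, then total

def duty_cycle(cap_gb: float, speed_mbps: float, days: int = 30) -> float:
    """Fraction of the theoretical maximum that the cap permits."""
    return cap_gb / max_monthly_gb(speed_mbps, days)

def full_throttle_minutes_per_day(cap_gb: float, speed_mbps: float,
                                  days: int = 30) -> float:
    """Minutes per day of full-speed usage before exhausting the cap."""
    daily_gb = cap_gb / days
    gb_per_minute = speed_mbps / 8 / 1000 * 60
    return daily_gb / gb_per_minute

# Japanese fiber example from the text: 900 GB cap on a 100 Mbps line
print(round(max_monthly_gb(100)))                      # 32400 GB/month
print(round(duty_cycle(900, 100) * 100, 2))            # 2.78 (%)
print(round(full_throttle_minutes_per_day(900, 100)))  # 40 (minutes/day)
```

This reproduces the 32,400 GB, 2.78%, and 40-minutes-per-day figures given above.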
Upstream: Data you are sending to others over the Internet.
Downstream: Data you are receiving from others over the Internet.
Li and Losey point out that while Japanese ISPs cap the upstream, they are generous with unlimited downstream, whereas American ISPs are beginning to cap both the upstream and downstream. But this is a flawed analysis, because capping the upstream effectively cuts total downstream peer-to-peer (P2P) traffic to the same levels. And because P2P is one of the most heavily used applications on the Internet, accounting for the vast majority of Japanese Internet traffic, cutting upstream usage greatly reduces all P2P traffic and overall Internet usage, which was necessary because Japanese Internet backbones were severely congested. I’ve argued that it is far more efficient to manage the network directly, but until that happens the caps are needed.
Another problem with Li and Losey’s analysis is that it only looks at the usage cap without analyzing the duty cycle and its ramifications. When we compare the usable duty cycles of ISPs in Japan and ISPs in the U.S., derived from Li and Losey’s data, we see a completely different picture. By splitting the U.S. ISP usage caps (some of which are only in the proposal phase) into upstream and downstream caps proportional to the upstream/downstream connection speeds, I was able to generate Figure 1 below. What it actually shows is that U.S. broadband providers have usage caps that allow users to use their Internet connections far more frequently than users in Japan can. While a user in Japan is capped to 40 minutes a day of full-throttle upstream usage (which indirectly caps download speed because it severely trims the number and generosity of P2P seeders), AT&T’s proposed DSL usage caps (similar to those of other DSL providers) allow for 1,111 minutes of upstream usage and 97 minutes of downstream usage per day. So broadband consumers who are dissatisfied with their tiny Time Warner usage caps can simply switch to their DSL provider.
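The proportional split described above can be sketched as follows. The cap and line speeds in the example are hypothetical stand-ins for illustration, not the actual AT&T figures behind Figure 1:

```python
def split_cap(total_cap_gb: float, down_mbps: float, up_mbps: float):
    """Split a combined usage cap into downstream/upstream caps
    in proportion to the advertised connection speeds."""
    total_speed = down_mbps + up_mbps
    down_cap = total_cap_gb * down_mbps / total_speed
    up_cap = total_cap_gb * up_mbps / total_speed
    return down_cap, up_cap

# Hypothetical example: a 150 GB combined cap on a 6 Mbps / 1 Mbps DSL line
down_cap, up_cap = split_cap(150, 6, 1)
print(round(down_cap, 1), round(up_cap, 1))  # 128.6 21.4
```

Each directional cap can then be fed into the duty cycle formula defined earlier to get the minutes-per-day figures for that direction.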
Now, in no way am I suggesting that American broadband is better than Japan’s, because the speeds and absolute usage cap sizes Japanese ISPs offer are nice, but we need to understand that usage caps in proportion to the advertised bandwidth are more usable in the United States. So the reality is that usage caps aren’t what Americans should be focusing on; the priority should be to encourage more next-generation broadband deployment. American broadband providers are in the process of rolling out next-generation broadband technologies like DOCSIS 3.0, Fiber to the Node (FTTN), and Fiber to the Home (FTTH), and it is safe to say that the majority of homes will have one or more of these technologies available to them within the next three years.
Li and Losey also paint a dire picture that Japan has 10 or more times the connectivity speed of the U.S., but the most accurate real-world measurement of Internet throughput in Japan, the Q1-2009 results from Akamai’s State of the Internet report, indicates that Japanese broadband customers average only about 8 Mbps. That’s still a great average, nearly twice the U.S. broadband throughput average of 4.2 Mbps, but it’s nowhere near the 10 to 13 times faster that this and other commonly cited reports suggest. The inflated charts that show Japan averaging around 60 Mbps simply can’t be backed up with reliable statistical data of any meaningful sample size. For context, Akamai is the leading Content Delivery Network (CDN) company in the world, handling as much as 20% of the world’s total Internet traffic, and its data is based on measurements of files that people actually download and use rather than artificial samples against a speed test server.
Furthermore, Li and Losey’s paper doesn’t give any context for where American usage caps stand compared with the rest of the world. I grabbed the latest data from the OECD Broadband Portal and generated Figure 2.
Source: OECD Broadband Portal data, Comcast, AT&T, TimeWarner
So when we look at this from a global perspective, it is clear that the U.S. has some of the most generous usage caps in the world. While there’s no disputing that U.S. broadband needs much improvement, some of the more dire warnings are simply not accurate.