
Effects of BitTorrent on a Starbucks-AT&T hotspot

By George Ou, 25 October 2010

Image credit: Starbucks

I recently discussed the phenomenon of a single application accidentally taking down a T-Mobile cell and how BitTorrent would have an even worse effect.  I wanted to test this with BitTorrent, but doing so on a cell tower isn’t feasible at the moment, and it’s not something that any wireless carrier would want me to test on their production network.  So I found the next best thing: off-hours testing at a Starbucks hotspot operated by or in conjunction with AT&T and Wayport.  I conducted the tests from my car in front of the store, well after closing when no one was there, and the BitTorrent test ran for only 141 seconds so that it wouldn’t disrupt any background activity the store might be running.

There are a lot of similarities between a Wi-Fi “hotspot” (effectively a cell) and a 3G wireless cell, but the Wi-Fi cell has a lot more wireless capacity because it has 20 MHz of spectrum compared to the typical 5 MHz or 10 MHz channel on a 3G cell, and because Wi-Fi operates at closer range with hundreds of times higher radio frequency (RF) field density.  Public hotspots like the ones operated by Starbucks usually employ a T1 (symmetrical 1.544 Mbps) backhaul, as confirmed by Figure 1 below, which is similar to many cell tower backhauls.  The backhauls on 3G and LTE towers are likely faster than a T1, but the 3G tower has less wireless capacity than a hotspot running 802.11g.  That means what harms a Wi-Fi hotspot will also harm a 3G cell in a similar manner.

Note that the measured speed is below 1.544 Mbps because the test measures data throughput excluding overhead, and because the speed test uses the binary definition of a megabit (1,048,576 bits) instead of the decimal definition (1,000,000 bits).

Figure 1: Starbucks performance indicates T1 backhaul
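Here is a rough sketch of the arithmetic behind that note. The framing and protocol overhead figures are typical assumptions, not values taken from the measurement:

```python
# Why a speed test on a T1 reads below "1.544": framing overhead, protocol
# overhead, and the binary definition of a megabit all shave off a little.
# The overhead figures below are assumptions, not measured values.

LINE_RATE_BPS = 1_544_000      # nominal T1 line rate (decimal bits/sec)
T1_FRAMING_BPS = 8_000         # framing bits unavailable for payload
PROTOCOL_OVERHEAD = 0.05       # assumed share lost to packet headers

payload_bps = (LINE_RATE_BPS - T1_FRAMING_BPS) * (1 - PROTOCOL_OVERHEAD)

decimal_mbps = payload_bps / 1_000_000   # decimal megabits
binary_mbps = payload_bps / 1_048_576    # binary megabits (what the test reports)

print(f"payload:  {decimal_mbps:.2f} Mbps (decimal)")
print(f"reported: {binary_mbps:.2f} Mbps (binary megabits)")
```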

First I had to get a baseline measurement when nothing was running.  Figure 2 shows that the Starbucks hotspot had a baseline latency of 148.8 milliseconds (ms) to the first pingable hop (IP address = 209.85.243.160) on the network.

Figure 2: Starbucks Wi-Fi – baseline latency

Next, I fired up BitTorrent and it was uploading and downloading at around 1.4 Mbps including overhead bandwidth, which is about as fast as I could push it with no user-defined bandwidth constraints in the BitTorrent application.  Figure 3 shows that the average latency with BitTorrent running went up to 772.3 ms, an increase in jitter of 623.5 ms over the baseline.  From further testing, I verified that the jitter was induced on the T1 backhaul and that BitTorrent didn’t stress the 54 Mbps capacity of the Wi-Fi access link, but it could easily stress the wireless link of a 2G or 3G cell.  It’s also noteworthy that even faster 3, 5, or 15 Mbps broadband downstream links can be harmed by BitTorrent.  This type of application would certainly bring down a typical cell tower backhaul, which would harm dozens of wireless cells served by a single tower and affect hundreds or thousands of active users.

Figure 3: Starbucks Wi-Fi – Latency with BitTorrent active
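For anyone who wants to reproduce this kind of measurement, here is a minimal sketch of the method: ping the first pingable hop repeatedly and average the round-trip times, once with the link idle and once with BitTorrent active. It assumes a Linux or macOS ping on the PATH and is not the exact tool used for the figures above:

```python
import re
import subprocess

HOP = "209.85.243.160"   # first pingable hop from the test above
COUNT = 60               # sample count (an arbitrary choice here)

# Run once with the hotspot idle and once with BitTorrent active; the
# jitter figure quoted above is the difference between the two averages.
out = subprocess.run(["ping", "-c", str(COUNT), HOP],
                     capture_output=True, text=True).stdout

# Lines look like: "64 bytes from ...: icmp_seq=1 ttl=56 time=148.8 ms"
samples = [float(m) for m in re.findall(r"time=([\d.]+)", out)]
print(f"average latency over {len(samples)} pings: "
      f"{sum(samples) / len(samples):.1f} ms")
```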

So how would this affect other users during operational hours?  They probably wouldn’t know what hit them, but they would be angry about the massive increase in jitter and the bandwidth starvation, because BitTorrent can hog most of the bandwidth at the expense of other users and applications.  Even when BitTorrent is capped at a lower bandwidth, the sheer volume of signaling and payload packets per second can overwhelm a wireless network, so even low-bandwidth BitTorrent can cause a lot of jitter.  This is why I ran the tests off hours.

On a production cellular network, there is likely little to prevent a user from running bandwidth-hogging and jitter-inducing applications like BitTorrent other than the “Acceptable Use Policy” (AUP) that forbids these network-disruptive applications and possibly some technical mechanisms that throttle overactive users.  But there are Net Neutrality advocates who want regulators to forbid these network management practices and application restriction policies because they want to pretend that wireless networks are the same as wired networks.  From an engineering perspective, however, such policies would be harmful to the vast majority of paying customers on the network and would prevent carriers from maintaining an operational network.

Now some would argue that carriers shouldn’t have blanket prohibitions against an entire set of applications because those applications might be able to operate in a non-disruptive manner.  But that doesn’t address the reality of how the applications perform in real-world tests like the ones conducted above.  A better approach than blunt regulatory force would be for application providers like BitTorrent to work with the wireless carriers on application certification, where the application provider demonstrates that its application avoids generating jitter and avoids taking a disproportionate share of bandwidth.  The alternative is a more actively managed data network where the wireless base station actively distributes bandwidth fairly between subscribers and prevents a single application from inducing jitter.  That level of network management doesn’t exist today, so until it does, carriers do the best they can to keep their networks running, which includes prohibiting certain applications.
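To make the idea of a base station that “actively distributes bandwidth fairly” concrete, here is a toy sketch of per-subscriber deficit round robin scheduling. The names and numbers are illustrative only, not a description of any vendor’s implementation:

```python
from collections import deque

# Toy per-subscriber fair scheduler (deficit round robin). Each subscriber
# gets one queue no matter how many TCP connections they open, so a
# BitTorrent user with 40 connections still receives one fair share.
QUANTUM = 1500  # bytes of credit added per round (one full-size frame)

class Subscriber:
    def __init__(self, name):
        self.name = name
        self.queue = deque()   # pending packets, represented by byte sizes
        self.deficit = 0       # unused credit carried between rounds

def drr_schedule(subscribers, rounds):
    sent = []
    for _ in range(rounds):
        for sub in subscribers:
            if not sub.queue:
                sub.deficit = 0
                continue
            sub.deficit += QUANTUM
            while sub.queue and sub.queue[0] <= sub.deficit:
                size = sub.queue.popleft()
                sub.deficit -= size
                sent.append((sub.name, size))
    return sent

# A BitTorrent user with a deep queue vs. a light web user:
torrent_user = Subscriber("torrent")
torrent_user.queue.extend([1500] * 100)
web_user = Subscriber("web")
web_user.queue.extend([500] * 5)

for name, size in drr_schedule([torrent_user, web_user], rounds=5):
    print(name, size)
```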

16 Comments

  • Paul Wilson said:

    How would this look different if using a wired LAN rather than wireless?

    I mean, you are trying to push 1.4 Mbps through a 1.5 Mbps backhaul, so that looks to me like a bandwidth issue.

    Interested to see a comparison.

  • George Ou (author) said:

    Paul, most of my previous tests were conducted on wired broadband connections at 3, 6, and even 15 Mbps. In all those cases, BitTorrent took all the bandwidth and generated a lot of jitter. Even when BitTorrent is set to use only a small fraction of the bandwidth, it still generates jitter.

    Here’s a wired test.
    http://www.digitalsociety.org/2009/11/analysis-of-bittorrent-utp-congestion-avoidance/

  • Veli-Pekka Ketonen said:

    Latency graph looks weird. Is each bar really a separate ping test result? Why is latency increasing so much in equal increments over the period of several seconds and then coming back down?

    3G radio networks initially lacked advanced QoS functionality to prevent one user from eating up a large chunk of the throughput capacity. This happens especially if a user near the base station uses a lot of bandwidth. By equalizing radio airtime between a user near the base station and a user far away, the latter user suffers significantly more in available throughput. To my knowledge, vendors are working on offering this capability and some might have it already available.

    Some Wi-Fi vendors already offer per-user capping of maximum downlink throughput. This functionality helps prevent this kind of problem. Critical Wi-Fi networks also need continuous performance management to take good care of the service.

    Wi-Fi in a cafe is not that critical. Should this happen in a hospital, it would be quite a different story.
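The airtime trade-off Veli-Pekka describes works out roughly as follows. The link rates are illustrative 802.11g values, not measurements from this test, and the same logic applies to a 3G scheduler:

```python
# Near user syncing at a high rate vs. a far (cell-edge) user at a low rate.
NEAR_RATE_MBPS = 54.0   # user close to the access point / base station
FAR_RATE_MBPS = 6.0     # user at the edge of coverage

# Equal airtime: each user transmits half the time at their own link rate.
near_equal_airtime = NEAR_RATE_MBPS / 2
far_equal_airtime = FAR_RATE_MBPS / 2

# Equal throughput: both get the same rate, so the slow user consumes most
# of the airtime and total cell capacity drops.
equal_throughput = 1 / (1 / NEAR_RATE_MBPS + 1 / FAR_RATE_MBPS)

print(f"equal airtime:    near {near_equal_airtime:.1f} Mbps, far {far_equal_airtime:.1f} Mbps")
print(f"equal throughput: both {equal_throughput:.1f} Mbps "
      f"(cell total {2 * equal_throughput:.1f} vs {near_equal_airtime + far_equal_airtime:.1f} Mbps)")
```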

  • Brett Glass said:

    George, one technical fact that is important to point out is that Wi-Fi (and most wireless) is half duplex – that is, it cannot transport data upstream and downstream at the same time. This has several important implications. Upstream and downstream traffic compete for airtime. And the competition and arbitration cause considerable overhead, especially on point-to-multipoint (rather than point-to-point) networks. Thus, the amount of jitter and congestion created by P2P is unexpectedly high.

  • George Ou (author) said:

    @Brett

    Wi-Fi 802.11g at a 54 Mbps raw sync speed shared between both directions is still faster than 3G, even though 3G can operate in both directions at once. This is especially true when you compare optimal-range Wi-Fi at -50 dBm to typical 3G coverage at -90 dBm, because 3G doesn’t get to operate at optimal encoding rates at that lower signal level.

    @Veli-Pekka Ketonen

    The baseline latency measurement is like that probably because of some application the shop is running after hours.

    The problem with BitTorrent isn’t just high bandwidth. Even in low bandwidth operation where the user manually sets a low ceiling for bandwidth, there is still a very high packet count from BitTorrent and it still induces relatively high jitter.

  • Dean Bubley said:

    I’m not an RF engineer, but it strikes me that there are a lot of differences between WiFi and the various forms of 3G and 4G like HSPA, EVDO, WiMAX, LTE, etc.

    1) WiFi is TDD, not FDD, which may have a variety of effects

    2) Packet-scheduling algorithms (eg favouring users in better signal conditions) will be different

    3) Cells will usually have >1 T1 or E1. I’m not sure how they’re balanced though.

    4) May make a difference that cells are sectorised & may well have multiple parallel channels. Not obvious to me that a single device could clog 3 separate 5MHz HSPA channels.

    5) May all be limited by RNC capacity for signalling, not theoretical bandwidth

    Dean Bubley

  • George Ou (author) said:

    @Dean Bubley

    There are differences for sure, but the differences aren’t enough to overcome something like BitTorrent. As noted in the story I referenced at the start, an application took down a T-Mobile cell tower even though the same application worked fine on a Wi-Fi network; the point is that what can hurt a Wi-Fi hotspot can definitely hurt a cell.

    Even though Wi-Fi is Time Division Duplexing (TDD), and I believe some variations of LTE will be TDD as well since TDD offers more flexibility in handling changing upstream and downstream demand, Wi-Fi 802.11g still runs faster than 3G. Realistically speaking, when we consider real-world signal levels, many LTE coverage areas will run slower than 802.11g as well.

    The biggest limitation on all wireless networks is a high packet rate from many devices, and applications like BitTorrent can generate a tremendous packet rate due to their high number of concurrent connections. Advanced scheduling can greatly alleviate this problem, but there are still limits. On an unmanaged 802.11b network, for example, we’re limited to the roughly 200 packets per second generated by 4 VoIP calls, and a 5th call will overload the network and cause problems for all the VoIP calls. A BitTorrent client can easily open up 40 concurrent connections with lots of chatty overhead packets in addition to payload packets.
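The packet-rate arithmetic behind that VoIP example looks roughly like this. The 20 ms packetization and the per-connection BitTorrent packet rate are assumptions for illustration, and the 200 packets-per-second ceiling is the figure quoted above:

```python
# Back-of-the-envelope packet-rate arithmetic for an unmanaged 802.11b cell.
PACKETIZATION_MS = 20                        # assumed voice frame interval
PPS_PER_CALL = 1000 // PACKETIZATION_MS      # 50 packets/sec per call
CELL_PPS_BUDGET = 200                        # practical ceiling quoted above

for calls in range(1, 6):
    pps = calls * PPS_PER_CALL
    status = "OK" if pps <= CELL_PPS_BUDGET else "overloaded"
    print(f"{calls} call(s): {pps:3d} pps -> {status}")

# A BitTorrent client with 40 connections, each exchanging even a handful of
# small packets per second, lands in the same range without moving much data.
BT_CONNECTIONS = 40
PPS_PER_CONNECTION = 5    # illustrative guess, not a measured value
print(f"BitTorrent: ~{BT_CONNECTIONS * PPS_PER_CONNECTION} pps of mostly small packets")
```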

  • Dean Bubley said:

    @ George

    No, if you re-read that article, the T-Mobile incident did not “take down a cell”. It took down the RNC.

    It is nothing to do with speed or “capacity” of WiFi vs 3G. It’s to do with where the bottleneck is.

    It is to do with the regularity of the connections for *radio power control* and especially the “state” of the HSPA connection (active vs. idle mode, network timeouts and so on). The way HSPA works is to set up a data channel to a device when requested, and then back off to a lower power state after (say) 20 seconds.

    Absolutely, totally different scenario using BitTorrent, because you have the HSPA or LTE connection active the whole time, as the traffic is continual. The IM app being discussed fired up the network connection regularly, and the load came from the *radio link* being flipped on and off. If you’ve got 100,000 phones in a city, a timeout of 20 secs, and an app that checks the server every 21 secs (say), you get a ton of excess signalling *against the RNC*, which is the bottleneck.

    Now it’s possible that BitTorrent doing 40 IP connections might overload the DNS server or something else, but if the radio connection is simply nailed open the whole time, the RNC doesn’t get hammered in the same way.
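Dean’s signalling arithmetic works out roughly as follows; the numbers are the ones he quotes, and the calculation is only a back-of-the-envelope sketch:

```python
# Polling apps vs. an always-on BitTorrent session, from the RNC's viewpoint.
PHONES = 100_000
POLL_INTERVAL_S = 21      # the app checks its server every 21 seconds
NETWORK_TIMEOUT_S = 20    # the radio channel drops to idle after 20 seconds

polls_per_second = PHONES / POLL_INTERVAL_S
if POLL_INTERVAL_S > NETWORK_TIMEOUT_S:
    # every poll arrives after the channel has been torn down, so each one
    # triggers a fresh channel setup at the RNC
    setups_per_second = polls_per_second
else:
    setups_per_second = 0  # the channel never times out between polls

print(f"~{setups_per_second:,.0f} channel setups per second hitting the RNC")
# A single BitTorrent session instead holds one channel open continuously and
# generates essentially no setup/teardown signalling at the RNC.
```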

  • George Ou (author) said:

    @Dean Bubley

    Are you claiming that BitTorrent doesn’t cause a problem on a 3G wireless link? I’ve seen BitTorrent cause some serious problems for an 802.11g network on a 15 Mbps FiOS broadband connection. The wireless link was seriously hammered, and the same thing can happen on a 3G cell.

  • Dean Bubley said:

    No, I’m not claiming that BT couldn’t cause problems – I’m just pointing out that the comparison with WiFi may well be spurious for a variety of reasons.

  • George Ou (author) said:

    Dean, nothing short of a test on a production network would be the real thing, but disrupting the network would obviously make some people very angry. The difference here is that my tests would be short in duration, while users actually run these applications for prolonged periods of time.

  • Dean Bubley said:

    George – no, tests can also be run on 3G networks outside a live environment, for example by the guys at NSN’s Smart Labs, who do a lot of this sort of stuff.

    The point here is that different radio networks react to devices, connection patterns, power management and applications in extremely different ways.

    What congests (or causes jitter) on an ethernet network – wired or wireless – can be very different to what causes it on 3GPP cellular. The radio resource management is completely different.

    That said, somewhere on the web there’s a video of a conference of mobile users in the Netherlands, where they crashed a cell, with about 20 people accessing the same video on their smartphones from the same location. I can’t find it now though.

  • George Ou (author) said:

    Dean,

    I’m sure 3G tests could be run, but I don’t have access to NSN’s Smart Labs.

    I have generated jitter on an 802.11g link (not the backhaul) with 10 Mbps of BitTorrent traffic. The “backhaul” was a 15 Mbps FiOS connection.

  • SiliconANGLE — Blog — Genachowski Pushing Ahead with Net Neutrality During Lame Duck said:

    [...] alarming to think that regulators might force carriers to carry network-destructive applications like BitTorrent or other streaming video services that force wireless networks to support video on [...]