This post continues my analysis of Rogers’ caps and rate card, and adds a more general look at the myths served up by the industry as to why subs should pay on a usage basis.
We begin with an interesting (anonymous) comment I got about burn rates…
Not so fast!
Last time we covered two of the four Rogers access tiers (Ultra-Lite and Express) and the effects on burn rate of data caps and line speed. Then, using tracking data from Ipsos Reid, we estimated that Canadians now spend on average about 85 hours a month online, or 2.9 hours a day. We took average time online as a proxy for “customer needs.” (In its 2010 CMR, the CRTC notes that as of 2009, Anglophone Canadians were spending 14.5 hours online weekly, or about 2.1 hours daily: CMR chart, p.100.)
I suggested that the burn rates derived from the nominal (maximum) bandwidth offered on a tier translate directly into time available to a subscriber before she hits her cap – time that looks especially paltry when compared to the time Canadians are spending online these days. As an anonymous commenter has pointed out, however, I’m using a theoretical value that looks different in the real world:
“Caps should always be increasing, not decreasing just like you said. But this is totally off: ‘How much time did you spend enjoying a gigabyte’s worth of data on your particular connection? The answer to that question is a function of your line speed as well as your data allowance, which together determine the burn rate of your cap.’ Welcome statistical multiplexing and average utilization!”
I stand corrected on two points: first, average utilization, a further variable affecting available time (in addition to speed and cap); and second, statistical time-division multiplexing (STDM) and the other methods engineers use to accommodate use of a network by a large number of people.
“The burn rate and therefore burn time you calculated is based on a user using 100% of their connection during the entire online period. Much like other networks (hydro, road, water perhaps), there are peaks of 100% load but the average load is lower. It could take me 60 minutes to read a boring 100 page article from SSRN, but it uses less than 1-2 seconds of a 15Mbit connection at 100% load. Likewise, to eat up 15Mbit/s, I would need to simultaneously digest a lot of audio or video.”
This is what engineers do a lot of: load levelling. STDM and the like are effective because different subscribers use up network resources at widely differing rates (this is reflected in the contention ratio, i.e. the ratio of the maximum potential demand for bandwidth, with every user online at top speed, to the bandwidth actually provisioned on the shared segment). I’m not sure I agree that getting to a 100% load on a broadband connection would require “a lot of audio or video,” but certainly reading email isn’t going to max out a 15 Mbps line.
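For illustration, here is how a contention ratio might be computed for a shared segment. The 100-subscriber node and 300 Mbps of provisioned backhaul are invented numbers, not anything from Rogers’ network:

```python
def contention_ratio(subscribers: int, line_speed_mbps: float,
                     provisioned_mbps: float) -> float:
    """Maximum potential demand divided by the bandwidth actually provisioned."""
    return subscribers * line_speed_mbps / provisioned_mbps

# 100 subs on 15 Mbps lines sharing 300 Mbps of backhaul: 5:1 contention
print(f"{contention_ratio(100, 15, 300):.0f}:1")
```

Statistical multiplexing works precisely because a 5:1 (or far higher) ratio rarely hurts anyone: the odds of all 100 subscribers pulling 15 Mbps at the same instant are vanishingly small.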
“You also neglect upstream usage, be it requests to a server, ACK or uploading images to facebook.
“It doesn’t make it any less sad, but it is a bit misleading to suggest that you can only surf, listen or watch for the period of time you calculated. Our online and offline lifestyles will shape our usage patterns, but I doubt if even 1 in 1000 users reaches 100% load averages.”
Agreed and agreed. Now, how do we accommodate these complexities in a way that does justice to the basic observation that caps create unacceptable constraints on paying subscribers? The only way to know for sure is to go out and do the research, measuring what Canadians do on their connections and how their “lifestyle” choices affect available time under a cap. While we wait for that happy day to arrive, let’s return to the other two Rogers tiers in the accompanying table – Lite and Extreme.
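As a rough illustration of the commenter’s point, here is a sketch that scales nominal burn time by an assumed average utilization. The 60 GB cap, 15 Mbps line and 5% average load are all invented for illustration; real utilization figures are exactly the research that’s missing:

```python
def hours_to_cap(cap_gb: float, speed_mbps: float, avg_utilization: float) -> float:
    """Hours online before hitting the cap, at a given average load.

    avg_utilization is the fraction of the line's nominal speed actually
    consumed, averaged over the session (1.0 = flat-out the whole time).
    """
    full_speed_hours = cap_gb * 8 * 1000 / speed_mbps / 3600  # 1 GB ~ 8,000 Mbit
    return full_speed_hours / avg_utilization

# An illustrative 15 Mbps line with a 60 GB cap:
print(f"{hours_to_cap(60, 15, 1.00):.1f} h at 100% load")
print(f"{hours_to_cap(60, 15, 0.05):.0f} h at an assumed 5% average load")
```

The point survives the adjustment: utilization stretches the clock, but the cap still sets a hard ceiling on what a paying subscriber can do, and a cut to the cap shrinks that ceiling no matter what the average load is.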
The red highlighted numbers in the table denote two instances of really over-the-top behavior (pun intended, as in over-the-top Web content). Unlike the other two tiers, the Lite and Extreme caps underwent a dramatic and widely covered change this past July. They were decreased in both cases – from 25 to 15 GB for Lite, a drop of 40%; and 95 to 80 GB for Extreme, a drop of 16%.
On July 22, Rogers announced the lower caps and, in the case of Extreme, a higher speed (existing subs were grandfathered). The table shows the drop the cap cuts inflicted on the Lite and Extreme burn times. In theoretical terms, the 300% speed increase and 40% cap cut on Lite pushed its burn time from about 57 hours to less than 8, a combined drop of about 86%. On the Extreme tier, the burn time dropped by an estimated 43%.
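To make the arithmetic explicit, here is a minimal sketch of the burn-time calculation. The line speeds (roughly 1 Mbps for the old Lite tier, 4 Mbps after the 300% increase) are my assumptions for illustration, not figures stated above:

```python
def burn_time_hours(cap_gb: float, speed_mbps: float) -> float:
    """Hours of continuous full-speed use before the cap is reached."""
    cap_megabits = cap_gb * 8 * 1000  # 1 GB ~ 8,000 megabits (decimal units)
    return cap_megabits / speed_mbps / 3600

# Lite tier before and after the July changes
# (speeds are assumed: ~1 Mbps old, ~4 Mbps new)
old = burn_time_hours(cap_gb=25, speed_mbps=1)
new = burn_time_hours(cap_gb=15, speed_mbps=4)
print(f"old: {old:.1f} h, new: {new:.1f} h, drop: {1 - new/old:.0%}")
```

With the exact rate-card speeds the numbers land close to the table’s 57 hours and just under 8, i.e. a drop in the mid-80% range.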
As just discussed, these burn rates have to be looked at in relative, not absolute, terms: relative to the maximum nominal bandwidth, and relative to how much subscribers could get for their access dollar in the past. Rogers’ subs aren’t doing well on either count.
Rogers’ rate card punishes light users, not heavy ones
We’ve talked previously about Rogers’ rates. But in the chart below, I’ve graphed a bird’s-eye view of all six of Rogers’ access tiers, from Ultimate on the left to Ultra-Lite on the right, against two variables: the cost (Cdn$) of each tier per Mbps per month (purple line); and the maximum monthly fee payable on each tier once usage hits the penalty ceiling, i.e. the $50-a-month cap on overage charges that applies to all tiers (blue line).
We know from industry analysts like Dave Burstein that, once equipment is installed at the CMTS, the incremental cost of raising a service tier from a few megabits per second to 30 or 40 is very small. Rather than take a cost-based approach, North American ISPs seem to be charging a premium for higher speeds. To make the Rogers rates comparable with one another, I converted them to a unit cost: one Mbps per month.
This conversion reveals something unusual. Far from even a modest surcharge for the privilege of fast connectivity, the four highest (i.e. fastest) tiers cost far less per megabit than the two entry-level tiers. In fact, UL subs are paying 28 times more per unit of bandwidth than Ultimate subs ($56 vs $2); 20 times more than Extreme Plus; 14 times more than Extreme; and 12 times more than Express. The question is: why? UL is a 500 Kbps service with a cap of 2 GB. Yet it’s priced at $27.99 – and comes with cruel penalties. At $5, UL has the highest penalty per GB of over-usage. The penalties per GB actually drop as speed goes up.
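The unit-cost conversion is simple division. A sketch, using the Ultra-Lite figures above; the $99.99-for-50-Mbps Ultimate pricing is my assumption, since only the derived $2/Mbps figure appears above:

```python
def cost_per_mbps(monthly_price: float, speed_mbps: float) -> float:
    """Monthly price normalized to one Mbps of nominal line speed."""
    return monthly_price / speed_mbps

ul = cost_per_mbps(27.99, 0.5)       # Ultra-Lite: $27.99 for 500 Kbps
ultimate = cost_per_mbps(99.99, 50)  # assumed Ultimate pricing
print(f"UL: ${ul:.0f}/Mbps, Ultimate: ${ultimate:.0f}/Mbps, "
      f"ratio: {ul / ultimate:.0f}x")
```

Any plausible Ultimate price in that neighborhood yields the same conclusion: the slowest tier pays on the order of $56 per Mbps while the fastest pays about $2, the 28-fold gap cited above.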
Paying the price for hogging bandwidth
Rogers’ rate card seriously undermines any rationale that might exist for the economic ITMP approach to fighting off congestion. The first flaw is that the system punishes the wrong end-users. The second is the cozy assumption that whoever is hogging bandwidth ought to pay for it, because bandwidth costs money and light users shouldn’t have to subsidize the hogs.
North America’s incumbents have done a great job of making this logic seem like a law of physics. As Nate Anderson wrote in Ars Technica last July, this “logic” covers up real broadband economics, which leads to a very different set of conclusions – starting with how little incremental cost is incurred by spurts in subscriber draws on network bandwidth (Should broadband data hogs pay more? ISP economics say ‘no’). Anderson’s focus here is on the cable side of the business, and the open secret is that most costs are fixed costs:
Big ISPs usually rely on peered connections to other major ISPs, connections which incur no per-bit cost. As for the cables in the ground, they’ve been there for years. The equipment back at the headend must be installed once, after which it runs for years. Cable node splits and DOCSIS hardware upgrades are relatively cheap. Requesting one additional bit does not necessarily incur any additional charge to the ISP.
I also like the point made in the article about all that money the ISPs have to invest in their networks – billions and billions of dollars, woe is me it’s so capital-intensive. Not quite. Citing big scary numbers about investment is meaningless outside the context of company financials. Set the investment next to the rest of the ledger and the picture changes: at least one provider, Time Warner Cable (TWC), has seen its bandwidth costs drop and has invested less in infrastructure, even as its sub base has grown. Anderson looks at the books:
“TWC’s revenues from Internet access have soared in the last few years, surging from $2.7 billion in 2006 to $4.5 billion in 2009. […]
“But this growth doesn’t translate into higher bandwidth costs for the company; in fact, bandwidth costs have dropped. […]
“What about investing in its infrastructure? That’s down too as a percentage of revenue. TWC does spend billions each year building and improving its network ($3.2 billion in 2009), but the raw number alone is meaningless; what matters is relative investment, and it has declined even as subscribers increased and revenues surged.[…]
“In fact, CapEx has declined for the industry as a whole. As the National Broadband Plan noted, the big ISPs invested $48 billion in their networks in 2008 and $40 billion in 2009” [my emphasis].
In his comment for the article, S. Derek Turner, director of research for Free Press, had harsh words for the ISP habit of moaning about costs. Here’s his warning:
TWC’s data capping trial in 2009 featured “literally ridiculous overage amounts that had no relation to underlying costs,” Turner said. And the danger isn’t just to consumer pocketbooks, it’s to the entire Internet ecosystem. Who will start using the next high-bandwidth YouTube or Netflix when doing so results in big fees? If not done right, consumption pricing “will cripple innovation” [my emphasis].
Will caps keep you from trying new services like Netflix?
In speaking of the US market, Turner is referring to a problem that is still largely hypothetical. In Canada, however, the bad news bears have already arrived. Data caps aren’t merely gouging subs for their money. They’re also going to be used as a weapon in the battle to keep over-the-top Web video services from reaching Canadians. The CRTC has, in other words, created a policy framework that is both anti-consumer and anti-competitive.
For further details on the Netflix issue, please see the item I’ve posted next, Netflix and the coming war over Web video (yes, it’s still the data caps). A longer version will be published shortly by Telemanagement under the title “Data Caps: Traffic Management Tools or Easy Money?” I’ve excerpted sections from the TM piece that cover the anti-competitive implications of data caps. We’re not calling it Part 6.