Data caps: industry myths, policy failures (5)

Part the nth. Here’s the late lamented Part 5 of my data caps series. I’m not sure how helpful this numbering system is in capturing the big picture, but I think five is enough for this round.

This post continues my analysis of Rogers’ caps and rate card, and adds a more general look at the myths served up by the industry as to why subs should pay on a usage basis.

We begin with an interesting (anonymous) comment I got about burn rates…

Not so fast!

Last time we covered two of the four Rogers access tiers (Ultra-Lite and Express) and the effects on burn rate of data caps and line speed. Then, using tracking data from Ipsos Reid, we estimated that Canadians now spend on average about 85 hours a month online, or 2.9 hours a day. We took average time online as a proxy for “customer needs.” (In its 2010 CMR, the CRTC notes that as of 2009, Anglophone Canadians were spending 14.5 hours online weekly, or about 2.1 hours daily: CMR chart, p.100.)
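The burn-rate arithmetic from last time can be sketched in a few lines of Python. The tier figures below are placeholders for illustration, not Rogers' actual numbers:

```python
def burn_time_hours(cap_gb: float, speed_mbps: float) -> float:
    """Hours of continuous full-speed use before the monthly cap is hit."""
    cap_megabits = cap_gb * 8 * 1000  # 1 GB ~ 8,000 megabits (decimal units)
    return cap_megabits / speed_mbps / 3600  # seconds to hours

# Illustration only: a 60 GB cap on a 10 Mbps line
print(round(burn_time_hours(60, 10), 1))  # ~13.3 hours of full-speed use
```

Feed in each tier's actual cap and nominal speed and you get the burn times discussed below – the nominal time a subscriber has before the meter runs out.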

I suggested that the burn rates derived from the nominal (maximum) bandwidth offered on a tier translate directly into time available to a subscriber before she hits her cap – time that looks especially paltry when compared to the time Canadians are spending online these days. As an anonymous commenter has pointed out, however, I’m using a theoretical value that looks different in the real world:

“Caps should always be increasing, not decreasing just like you said. But this is totally off: ‘How much time did you spend enjoying a gigabyte’s worth of data on your particular connection? The answer to that question is a function of your line speed as well as your data allowance, which together determine the burn rate of your cap.’ Welcome statistical multiplexing and average utilization!”

I stand corrected on two points. First, average utilization is a further variable affecting available time (in addition to speed and cap); second, statistical time-division multiplexing (STDM) and other methods engineers use to accommodate a large number of people sharing a network.

“The burn rate and therefore burn time you calculated is based on a user using 100% of their connection during the entire online period. Much like other networks (hydro, road, water perhaps), there are peaks of 100% load but the average load is lower. It could take me 60 minutes to read a boring 100 page article from SSRN, but it uses less than 1-2 seconds of a 15Mbit connection at 100% load. Likewise, to eat up 15Mbit/s, I would need to simultaneously digest a lot of audio or video.”

This is what engineers do a lot of: load levelling. STDM and the like are effective because different subscribers use up network resources at widely differing rates (this is reflected in the contention ratio, i.e. the ratio between the maximum potential demand for bandwidth when all users are online at top speed and the actual bandwidth provisioned on the shared link). I’m not sure I agree that getting to a 100% load on a broadband connection would require “a lot of audio or video,” but certainly reading email isn’t going to max out a 15 Mbps line.
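The contention ratio just mentioned is a straight division; the node size and link capacity below are hypothetical, chosen only to show the shape of the calculation:

```python
def contention_ratio(num_subs: int, line_speed_mbps: float,
                     shared_capacity_mbps: float) -> float:
    """Peak potential demand (all subs at top speed) over actual shared capacity."""
    return (num_subs * line_speed_mbps) / shared_capacity_mbps

# Hypothetical cable node: 500 subs on 15 Mbps lines sharing a 300 Mbps link
print(contention_ratio(500, 15, 300))  # 25.0, i.e. a 25:1 contention ratio
```

Statistical multiplexing is what makes a 25:1 ratio tolerable: because average utilization sits far below peak, the shared link rarely sees anything close to the theoretical maximum demand.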

“You also neglect upstream usage, be it requests to a server, ACK or uploading images to facebook.

“It doesn’t make it any less sad, but it is a bit misleading to suggest that you can only surf, listen or watch for the period of time you calculated. Our online and offline lifestyles will shape our usage patterns, but I doubt if even 1 in 1000 users reaches 100% load averages.”

Agreed and agreed. Now, how do we accommodate these complexities in a way that does justice to the basic observation that caps create unacceptable constraints on paying subscribers? The only way to know for sure is to go out and do the research, measuring what Canadians do on their connections and how their “lifestyle” choices affect available time under a cap. While we wait for that happy day to arrive, let’s return to the other two Rogers tiers in the accompanying table – Lite and Extreme.

[Table: Rogers access tiers – caps, speeds and burn rates]

The red highlighted numbers in the table denote two instances of really over-the-top behavior (pun intended, as in over-the-top Web content). Unlike the other two tiers, the Lite and Extreme caps underwent a dramatic and widely covered change this past July. They were decreased in both cases – from 25 to 15 GB for Lite, a drop of 40%; and 95 to 80 GB for Extreme, a drop of 16%.

On July 22, Rogers announced the lower caps and, in the case of Extreme, a higher speed (existing subs were grandfathered). The table shows the drop inflicted by the cap cuts on the Lite and Extreme burn rates. In theoretical terms, the 300% speed increase and 40% cap cut on Lite have pushed its burn rate from about 57 hours to less than 8, which would make the combined impact on the burn rate about 86%. On the Extreme tier, the burn rate has dropped by an estimated 43%.
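The combined effect of a cap cut and a speed hike falls straight out of the ratio cap/speed, which is what burn time is proportional to. A sketch, with Lite’s before-and-after speeds assumed at 1 and 4 Mbps purely for illustration:

```python
def burn_time_drop(old_cap_gb: float, new_cap_gb: float,
                   old_speed_mbps: float, new_speed_mbps: float) -> float:
    """Fractional drop in burn time; burn time is proportional to cap/speed."""
    return 1 - (new_cap_gb / old_cap_gb) * (old_speed_mbps / new_speed_mbps)

# Lite: cap cut from 25 to 15 GB; speeds assumed 1 -> 4 Mbps for illustration
print(f"{burn_time_drop(25, 15, 1, 4):.0%}")  # 85% less time under the cap
```

Note that the two changes compound: a 40% smaller cap burned four times faster leaves only 15% of the original time.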

As just discussed, these burn rates have to be looked at in relative, not absolute, terms: relative to the maximum nominal bandwidth and relative to how much subscribers could get for their access dollar in the past. Rogers’ subs aren’t doing so well on either count.

Rogers’ rate card is punishing light, not heavy, users

We’ve talked previously about Rogers’ rates. But in the chart below, I’ve graphed a bird’s-eye view of all six of Rogers’ access tiers, from Ultimate on the left to Ultra-Lite on the right, against two variables: the cost (Cdn $) of each tier per Mbps per month (purple line); and the maximum monthly fee payable on each tier when usage hits the highest penalty point, which is $50 a month for all tiers (blue line).

[Chart: Rogers rates and caps – cost per Mbps and maximum monthly fee by tier]

We know from industry analysts like Dave Burstein that, once equipment is installed at the CMTS, the incremental cost of raising a service tier from a few megabits per second to 30 or 40 is very small. Rather than take a cost-based approach, North American ISPs seem to be charging a premium for higher speeds. To make the Rogers rates comparable with one another, I converted them to a unit cost – one Mbps per month.

This conversion reveals something unusual. Far from carrying even a modest surcharge for the privilege of fast connectivity, the four highest (i.e. fastest) tiers cost far less per megabit than the two entry-level tiers. In fact, UL subs are paying 28 times more per unit of bandwidth than Ultimate subs ($56 vs $2); 20 times more than Extreme Plus; 14 times more than Extreme; and 12 times more than Express. The question is: why? UL is a 500 Kbps service with a cap of 2 GB. Yet it’s priced at $27.99 – and comes with cruel penalties. At $5 per GB, UL has the highest penalty for over-usage. The penalties per GB actually drop as speed goes up.
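The unit-cost conversion is simple division. Ultra-Lite’s price and speed come from the rate card discussed above; the Ultimate figures ($99.99 for 50 Mbps) are assumed here for illustration:

```python
def cost_per_mbps(monthly_fee: float, speed_mbps: float) -> float:
    """Monthly cost per Mbps of nominal line speed."""
    return monthly_fee / speed_mbps

ultra_lite = cost_per_mbps(27.99, 0.5)  # 500 Kbps tier at $27.99/month
ultimate = cost_per_mbps(99.99, 50.0)   # assumed: 50 Mbps at $99.99/month
print(round(ultra_lite), round(ultimate), round(ultra_lite / ultimate))
# UL subs pay roughly 28 times more per unit of bandwidth
```

Any price/speed pair in the same ballpark yields the same lopsided result: the slower the tier, the more each megabit costs.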

Paying the price for hogging bandwidth

Rogers’ rate card seriously undermines any rationale that might exist for the economic ITMP approach to fighting off congestion. The first flaw is that the system punishes the wrong end-users. The second is the cozy assumption that whoever hogs the bandwidth ought to pay, because bandwidth costs money and light users shouldn’t have to subsidize the hogs.

North America’s incumbents have done a great job of making this logic seem like a law of physics. As Nate Anderson wrote in Ars Technica last July, this “logic” covers up real broadband economics, which lead to a very different set of conclusions – starting with how little incremental cost is incurred by spurts in subscriber draws on network bandwidth (Should broadband data hogs pay more? ISP economics say ‘no’). Anderson’s focus here is on the cable side of the business, and the open secret is that most costs are fixed costs:

Big ISPs usually rely on peered connections to other major ISPs, connections which incur no per-bit cost. As for the cables in the ground, they’ve been there for years. The equipment back at the headend must be installed once, after which it runs for years. Cable node splits and DOCSIS hardware upgrades are relatively cheap. Requesting one additional bit does not necessarily incur any additional charge to the ISP.

I also like the point made in the article about all that money the ISPs have to invest in their networks – billions and billions of dollars, woe is me it’s so capital-intensive. Not quite. Citing big scary numbers about investment is meaningless outside of the context of company financials. Relative to the rest of the ledger, at least one provider, Time Warner Cable (TWC), has seen its bandwidth costs drop, and has invested less in infrastructure, even as the sub base has grown. Anderson looks at the books:

“TWC’s revenues from Internet access have soared in the last few years, surging from $2.7 billion in 2006 to $4.5 billion in 2009. […]

“But this growth doesn’t translate into higher bandwidth costs for the company; in fact, bandwidth costs have dropped. […]

“What about investing in its infrastructure? That’s down too as a percentage of revenue. TWC does spend billions each year building and improving its network ($3.2 billion in 2009), but the raw number alone is meaningless; what matters is relative investment, and it has declined even as subscribers increased and revenues surged.[…]

“In fact, CapEx has declined for the industry as a whole. As the National Broadband Plan noted, the big ISPs invested $48 billion in their networks in 2008 and $40 billion in 2009” [my emphasis].

In his comment for the article, S. Derek Turner, director of research for Free Press, had harsh words for the ISP habit of moaning about costs. Here’s his warning:

TWC’s data capping trial in 2009 featured “literally ridiculous overage amounts that had no relation to underlying costs,” Turner said. And the danger isn’t just to consumer pocketbooks, it’s to the entire Internet ecosystem. Who will start using the next high-bandwidth YouTube or Netflix when doing so results in big fees? If not done right, consumption pricing “will cripple innovation” [my emphasis].

Will caps keep you from trying new services like Netflix?

In speaking of the US market, Turner is referring to a problem that is still largely hypothetical. In Canada, however, the bad news bears have already arrived. Data caps aren’t merely gouging subs for their money. They’re also going to be used as a weapon in the battle to keep over-the-top Web video services from reaching Canadians. The CRTC has, in other words, created a policy framework that is both anti-consumer and anti-competitive.

For further details on the Netflix issue, please see the item I’ve posted next, Netflix and the coming war over Web video (yes, it’s still the data caps). A longer version will be published shortly by Telemanagement under the title “Data Caps: Traffic Management Tools or Easy Money?” I’ve excerpted sections from the TM piece that cover the anti-competitive implications of data caps. We’re not calling it Part 6.


2 thoughts on “Data caps: industry myths, policy failures (5)”

  1. Stop the Cap! was instrumental in organizing a consumer revolt against Time Warner Cable’s Internet Overcharging experiment in 2009. Our group was created specifically to fight these schemes, which include usage limits, so-called “tiered pricing” and speed throttles.

    We were so successful, our local congressman at the time introduced a bill in the House of Representatives to ban this kind of pricing without real evidence it was financially necessary, and I stood next to Sen. Charles Schumer (D-NY) on the lawn of the cable company in Rochester, N.Y., to bid the experiment goodbye two weeks after word broke it was forthcoming.

    We’ve lived and breathed this issue for more than two years now, starting with Frontier Communications introducing a 5GB usage limit on DSL in the summer of 2008 (we fought and got that rescinded as well). Since that time, we’ve battled these schemes in both Canada and the United States because they are simply not financially justified for wired broadband.

    As you’ve already noted, there is no real “pay per use” model in effect in either country. Users are crammed into tiers with usage allowances they dare not exceed for fear of punitive overlimit penalties. Bandwidth and traffic costs are dropping – often to less than a dime per gigabyte in the States – and costs to deliver the service are falling as well, yet these companies claim congestion forces them to implement such schemes, even as their investment in network expansion has plummeted.

    Canada’s schemes have become ubiquitous because Bell can force them on the wholesale market and cable companies are never ones to leave money on the table. If Bell caps and throttles on the phone side, Rogers, Shaw, and Videotron won’t miss the opportunity to do likewise.

    Thankfully, at least Canadians have upper limits on overage penalties. For now. In the United States, proposed overages were either unlimited or topped out at a whopping $100. If that came to pass here, you can bet a shiny loonie they’ll do likewise in Canada.

    Time Warner’s original proposed pricing would have literally tripled pricing for unlimited service and especially punished so-called “lite users,” those placing the least demand on their network, with ridiculously low limits and huge overlimit fees. These paltry allowances, delivered with low speeds, called out provider arguments that caps are about controlling congestion.

    In reality, they’re about monetizing broadband service to a new degree — speed AND usage.

    As we’ve reported this week, Wall Street analysts are predicting providers will slap caps on to prevent customers abandoning video packages. One went as far as to suggest package re-pricing — dramatically increasing broadband pricing while lowering video package pricing to get people to keep their video service.

    We’ve also covered the fact Netflix has seen a lower take rate than expected for its streaming service north of the border. Once customers see what high quality streamed video does to their usage allowances, Netflix can become a dangerous proposition for households on a budget.

    We’ve argued with providers from phone companies to cable companies about this issue and asked them to prove their case. We are handed provider-financed studies predicting “exafloods” and innovation and job erosion unless providers can “get creative” with pricing. But the innovation was already lost in today’s North American duopoly market, where phone and cable companies deliver the least amount of service for the highest possible price.

    If providers continue to insist they cannot survive with flat rate pricing even with the incredible margins they earn on broadband service, it’s one of the best arguments around to nationalize broadband service to deliver world class speeds at reasonable prices. Just as public highways made an enormous difference in our manufacturing economies, broadband for the public good will mean a lot to our digital economies of the future.

    But this future is threatened if phone and cable companies get to erect tollbooths and impediments up and down the line.

    Phillip Dampier
    Editor, Stop the Cap!

  2. Thank you for the shout out! Love your blog by the way, very detailed analysis of things going on in the broadband world. I think you knew your burn rate is all relative, but it is important to state that it isn’t a real world measure because actual use varies so much.

    On your new post: I’m in agreement that the caps being tied to speed is an effort to get users to move up to a higher tier with a punitive overage rate. If I’m buying a lite service whose speed is OK but paying $30-50/month for overages, I will be told to move up to a higher tier for only ~$15-$20 more per month. Wow, what a savings! But the higher the tier, the higher the absolute cost if you max out your caps, so the really heavy user is better off getting a low tier and just racking up the overage charges until he hits the limit.

    It will be interesting to see how things play out in the long run. Will incumbents increase prices so much that there is enough room for a competitor to move in with a slower but uncapped service beyond the reach of the retail and wholesale bit-cap schemes? If they do, perhaps they will regret their current strategy, although the risk of a swift price cut in response to a facilities-based competitor is huge, as seen in Novus vs. Shaw.
