The CRTC’s annual report is out: the good, the bad, the weird (2)


London, East End council flat, August 2013


Last time, I described some of the ways in which the CMR has fallen short. I added that the current incarnation of this document marks a sharp departure by reflecting a greater concern for end-user behaviors and consumer welfare. I’ve been looking at the CMR less as a source of information about particular trends, and more as a window through which to gauge how the Commission is allocating priorities (the CMR page is here).

I grouped the details into four areas, and covered 1 and 2 in the previous post:

  • 1 – The emphasis on industry vs consumer welfare. That emphasis has changed quite dramatically in the 2013 edition, which stems from the pro-consumer tilt the Commission has taken under the current Chair.
  • 2 – The reluctance to report bad news. This entrenched timidity is still holding back critical discussion. That’s one great advantage the FCC’s structure has: open and sometimes quite vociferous partisanship, since appointees must include a balance of Democrats and Republicans.

So how about the other two?

  • 3 – The role of consumer research
  • 4 – The use of international rankings.

3 – Survey research, MIA. I’ve written about this issue a lot (for some thoughts on how not to do research, try this from last fall: CRTC’s wireless code: educate us, poll us, don’t consult us). My overall assessment: the Commission has failed in its research mission because of a stubborn reluctance to conduct tracking surveys on consumer attitudes and behaviors.

As Pete Nowak shows in his recent post on the CMR, the Commission takes good advantage of its long-standing relationship with Statistics Canada, as shown e.g. by the data on communications spending. But that initiative still leaves two serious gaps in the data-gathering. One is that the Commission has become hooked on online consultations – a version of which we can look forward to in the proceeding on TV in the digital era (Vice-Chair Peter Menzies has called it a “conversation”). These casual and unrepresentative online exercises are no substitute for representative, random-probability surveys, especially the kind that looks at failures in the system.


The other problem, not of the CRTC’s making, is that Stats Can itself had to get out of the Internet survey research business a couple of years ago because its Canadian Internet Users Survey was defunded by Industry Canada. I often relied on the CIUS and got valuable advice from the Stats Can staff who looked after it. According to an email I just received from the IC Toronto office, there is no current funding in place to continue the CIUS. If you think Harper and Moore suddenly found consumer welfare to their liking this summer, don’t expect any serious policymaking to flow from the political posturing.

4 – Cooking the international books. There’s another kind of problem with the CMR that has reappeared this year and involves much more than just a sin of omission. That would be the jerry-rigging of international comparisons between Canadian telecom services and those in “select” other countries. The purpose of this intellectual chicanery, which started years ago, is to show our telecom services in a favorable light by producing a set of comparative data based on as few as four other countries.

This year, it’s up to five: the United States, United Kingdom, France, Australia and Japan. They comprise the sum total of what the CRTC includes in its International section, tucked away at the very end of the main text (pp 199-209). Here’s what I wrote on this subject in August 2011, in reference to the just-then published edition of the CMR:

In June of last year [2010], I wrote an opinion piece for Telemanagement entitled “What the FCC and OECD can tell us about Canada’s broadband prospects.” I wrote some comments (referring to the 2009 CMR) about a familiar-sounding conclusion:

Depending on your expectations, you might have been surprised to read that “Canada compares favourably for low-use broadband Internet service, and reflects a median price point for medium- and high-use baskets.”

Two years ago [2009], it turns out, the Commission’s broadband Kool-Aid recipe was based on comparing Canada not with eight but with four other countries! The authors chose to bury the gory details way back in Appendix 5, where even policy wonks might fear to tread…

These CMR comparisons are not merely arbitrary and tendentious. More amazing still, the Commission has paid an external consultant to compile these figures, yet offers no justification for why we’re asked to take the use of these five countries at face value (see Appendix 4, International pricing assumptions).


And yet the Commission could have had its cake and eaten it – i.e. done something much more honest and comprehensive without spending money on a contract. There are many public data sources on telecom services that are free, have been around for a long time and use a much larger pool to tease out comparisons.

In broadband, for example, the OECD surveys all 34 of its member countries. (As of today, its Broadband Portal reports that its data were updated less than a week ago.) A couple of posts back, I looked at another source for broadband data, Ookla’s Net Index, which covers up to 186 countries, depending on the variable at issue. For its part, the FCC takes its regular reports on international broadband far more seriously than our regulator does. Its 2012 edition of the “International Comparison Requirements” runs 155 pages. The FCC also takes its methodologies very seriously, using both OECD data and extensive Ookla data from 15 or more foreign cities. Here’s what it said last year about its efforts to improve how it sets international benchmarks (p.6):

“In an effort to standardize the methods countries use to collect broadband data, the Commission, working together with the State Department and the Department of Commerce, and through the OECD, started an initiative to collect more reliable and granular international data on broadband deployment and adoption internationally. The first concrete result of these efforts was a workshop hosted by the Commission at its Washington, D.C. headquarters in October 2011.”

The FCC is not only trying to make the best possible use of empirical evidence. As you can see, it also makes a habit of consulting constituencies that have a stake in such exercises (the 2012 report is uploaded in pdf here).


Why doesn’t the CRTC do this? What is it afraid of?

The latter is, of course, a largely rhetorical question. If the CMR were to take an honest look at wireless using international comparisons, the results would look even more invidious than those for broadband. Take any of the gaggle of posts Pete Nowak wrote this summer to prove that point, such as The future may be friendly, but the numbers aren’t, from last July. Pete shows that our wireless incumbents have the highest ARPU in the world… plus the 3rd highest profit margins in the developed world.

While these numbers aren’t directly about retail prices or other consumer-facing variables, they point to a very unpleasant fact about the Commission’s concoction of international benchmarks: they’re not just inaccurate, they’re cooked to make Canada look better than we really are.

The UK’s Ofcom is another national regulator that promotes a high-profile consumer research agenda. That includes reporting on the communications landscape in ways that are accessible and relevant to a wide audience. Take the annual report Ofcom issued two years ago (which shares the same CMR initialism, for Communications Market Report). Here’s a snippet from the 2011 Ofcom CMR cover…


Which would you rather read about: how much money Rogers is making from your data caps or how many of your fellow citizens are having sex while using their smartphone?

(conclusion in part 3…)