What is the NBN for?

The NBN is primarily a machine for winning elections.

Conroy created it (I know the people who put the idea in his head but he did the political leg work) and used it to get Rudd elected.

Conroy then had to build it and he used it to get Julia elected.

It didn’t work a third time for Rudd because the slow progress of the NBN just confirmed the perception that the government were hopeless.

Abbott didn’t understand the NBN and told Turnbull to demolish it.

But Turnbull kept the NBN because he knew what it was really for and he eventually used it to help win his election.

Now Fifield has been given control of the machine with strict instructions to not touch anything.

I think Turnbull’s plan is to get millions of people migrated to the NBN then reduce the CVC charge in 2018 to ensure users are finally happy with the performance.

Late in 2018 he will announce a sale process and TLS shares will jump in value which will make a lot of Liberal voters happy.

Then he will have yet another go at riding the NBN machine to victory in 2019.

Or ScoMo or Dutton will.

Should NBNCo be scared of gigabit LTE?

I saw a media article today about Telstra demonstrating one gigabit LTE and claiming this was somehow superior to the NBN. I can’t let a claim like that go unchallenged.

First, some history. In the early days of analogue mobile phones (AMPS), operators broke their radio spectrum into 21 blocks of voice channels and designed their networks to serve “cells” from three “base stations” connected to antennas with overlapping coverage on three towers or buildings.

Every cell could see one to three towers and this allowed for a “handoff” between towers. There was no data service. Every call required a channel, and that channel would suffer little interference from another call on the same frequency in a distant cell.

I found a picture that illustrates this:


(Source: https://en.wikipedia.org/wiki/File:CellTowersAtCorners.gif )

The frequency reuse factor is k = 21 for this scheme, so you can use no more than 1/21 of the spectrum you bought in any one cell. By the end of AMPS, carriers had dropped k to 7 at the expense of more interference, but it improved the commercial yield on their spectrum investment by a factor of three. The CFO loved that.
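A couple of lines of Python make the reuse trade-off concrete. (The 416-channel total is an assumed figure for one US AMPS carrier, purely for illustration.)

```python
# Channels available in any one cell under a reuse scheme. The 416-channel
# total is an assumed figure for one US AMPS carrier, for illustration only.

def channels_per_cell(total_channels: int, k: int) -> int:
    """With reuse factor k, each cell may use only 1/k of the spectrum."""
    return total_channels // k

TOTAL = 416  # assumed AMPS voice channels for one carrier

for k in (21, 7):
    print(f"k = {k:>2}: {channels_per_cell(TOTAL, k)} channels per cell")
# Dropping k from 21 to 7 roughly triples the channels each cell can use.
```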

GSM, being digital, improved things such that nine or twelve sets of channels of digital data could form cells in a similar manner. Frequencies can be reused much more readily and the overlap doesn’t need to be so large. 114kbps data services can be delivered via this architecture from 200kHz data channels in 25MHz blocks. (You need 2 x 25MHz blocks, one for data UP and one for data DOWN.) You still need _relatively_ low interference, so ironically this can work BETTER in a built-up area where buildings block the interference. Combining two channels doubles the data rate and that is the service known as EDGE.


(Source: http://www.rfwireless-world.com/Tutorials/gsm-radio-frequency-planning.html)

The reuse factor is k = 12 for this scheme, so you can use 1/12 of the spectrum you bought to service customers in any cell. In practice this meant you could service 124 channels / 12 = ~10 x 114kbps, or a bit over a megabit, from each sector of your base station. This is why mobile tower backhaul was only 2 megabits! 2 megabits for an entire tower with three cells, but given the theoretical maximum was 124 / 4 = 31 x 114kbps = ~3.5Mbps, it wasn’t a bad contention ratio.
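To sanity-check the spectrum arithmetic, using the 124-carrier and 114kbps figures from above:

```python
# Back-of-envelope GSM sector capacity using the figures above:
# 124 carriers of 200 kHz in a 25 MHz block, 114 kbps of data per carrier.

CHANNELS = 124
RATE_KBPS = 114

def cell_capacity_mbps(k: int) -> float:
    """Capacity of one cell when only 1/k of the carriers can be used."""
    return (CHANNELS / k) * RATE_KBPS / 1000

print(f"k = 12: {cell_capacity_mbps(12):.1f} Mbps per cell")  # ~1.2 Mbps
print(f"k = 4:  {cell_capacity_mbps(4):.1f} Mbps per cell")   # ~3.5 Mbps
```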

3G introduced “spreading codes”, or code division multiple access aka CDMA, which encodes data on a radio carrier by multiplying it by a pseudorandom digital number at a much higher bit rate than the data. To decode a specific data stream you multiply the received signal by the same pseudorandom digital number and get the original data back. It’s like black math magic but it works!
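Here is a toy sketch of the spread/despread trick, with a made-up 16-chip sequence. (Real systems use carefully chosen codes with spreading factors from 4 to 512; this is just to show the multiply-twice-and-recover magic.)

```python
import random

# Toy direct-sequence CDMA: spread each data bit (+1/-1) by a pseudorandom
# chip sequence running 16x faster, then recover the bit by multiplying the
# received chips by the same sequence and averaging. Purely illustrative.

random.seed(42)
SPREAD = 16  # chips per data bit; real 3G spreading factors run 4 to 512

chips = [random.choice((-1, 1)) for _ in range(SPREAD)]
data = [1, -1, -1, 1]

# Transmit: each data bit becomes SPREAD chips
tx = [bit * chip for bit in data for chip in chips]

# Receive: multiply by the same chip sequence and average per bit period
recovered = []
for i in range(len(data)):
    period = tx[i * SPREAD:(i + 1) * SPREAD]
    correlation = sum(p * c for p, c in zip(period, chips)) / SPREAD
    recovered.append(1 if correlation > 0 else -1)

print("sent:     ", data)
print("recovered:", recovered)  # matches the original data
```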


(Source: https://goengineer.wordpress.com/2009/10/08/frequency-planning-gsm)

With 3G you can drive the frequency reuse down to k = 4. You take the 2 x 25MHz of spectrum you bought for GSM and turn it into 2 x 4 x 5MHz. (2 x so you have UP and DOWN links.)

Data is transmitted on the entire 5MHz channel in frames that are 10ms long with 15 slots that contain data and control information directed towards users.

The 3G UMTS standard delivers 15 x 384Kbps data slots to the users in each cell. That’s a total of 5.8Mbps which was quite a thing from 5MHz of spectrum at the time.

HSDPA increases that to 7.2Mbps per user, largely by squeezing more out of the 5MHz of spectrum and joining a bunch of data slots together. Over the life of UMTS the modulation rate has been increased to get THREE times more bits out of the spectrum and to use spectrum from TWO towers, so the pinnacle of 3G is DC-HSPA (Dual Cell / Dual Carrier HSPA), which is 42Mbps (six times the basic 7.2Mbps service). That 7, 14, 21 or 42Mbps is shared by every user within a cell.
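The 3G numbers reduce to plain arithmetic. (The 3x modulation gain and 2x dual-carrier multiplier are the rough factors described above, not exact standards figures.)

```python
# The 3G data rates above, as plain arithmetic.

SLOTS = 15          # data slots per 10 ms frame
SLOT_KBPS = 384     # basic UMTS rate per slot

umts_mbps = SLOTS * SLOT_KBPS / 1000
print(f"UMTS cell capacity: {umts_mbps:.1f} Mbps")   # 5.8 Mbps from 5 MHz

# DC-HSPA: roughly 3x from better modulation, 2x from the second carrier
hsdpa_mbps = 7.2
dc_hspa_mbps = hsdpa_mbps * 3 * 2
print(f"DC-HSPA: {dc_hspa_mbps:.0f} Mbps")           # marketed as 42 Mbps
```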

LTE is the logical evolution and convergence of these technologies.

Cell towers traditionally use three antennas per sector. Originally the strongest received signal from each user was selected. Then some beam steering was added to concentrate the signal to and from each user. This is called Multiple Input Multiple Output (MIMO). In LTE this beam steering is done for every user, UP and DOWN, continuously.

The available spectrum is broken into numerous 180kHz channels (called resource blocks), with the maximum allocation being 100 blocks over 20MHz. These resource blocks are in turn modulated on subcarriers using orthogonal frequency division multiplexing, which is rather like vectoring on VDSL2. It’s high-tech: it can carry 1 megabit over 180kHz, which was theoretically impossible when I went to school, and so yields 100Mbps over 20MHz.

Together with MIMO, LTE’s modulation technique lets you set k = 1, i.e. you can use the same block of frequencies everywhere.

That’s “LTE”, a hundred megabits more or less shared by all the users receiving their service from a given sector (or cell) on a given tower.

What if you had lots of money and could buy lots of spectrum? Well, you would find 5 lots of 20MHz blocks of spectrum and send the data in parallel over 500 resource blocks at once. That’s known as “LTE Advanced” (Cat16, the “gigabit LTE” category) and it delivers a gigabit shared by all the users receiving their service from a given sector (or cell) on a given tower.
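Sticking with the rough figure of 1 megabit per resource block, the carrier aggregation sums up like this. Note that carrier count alone only gets you to 500Mbps; 4x4 MIMO and 256QAM modulation supply the rest of the marketed gigabit.

```python
# Carrier aggregation sums for the LTE figures above.

RB_PER_CARRIER = 100   # LTE caps a 20 MHz carrier at 100 resource blocks
MBPS_PER_RB = 1        # ~1 Mbps per 180 kHz resource block, as above

single = RB_PER_CARRIER * MBPS_PER_RB
print(f"LTE, 1 x 20 MHz:   {single} Mbps")      # 100 Mbps per sector

aggregated = 5 * single                         # 500 resource blocks at once
print(f"LTE-A, 5 x 20 MHz: {aggregated} Mbps")  # 500 Mbps from aggregation;
# 4x4 MIMO and 256QAM supply the rest of the marketed gigabit.
```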

Is this enough to scare NBNCo?

No! Even using fibre to the node, NBNCo can deliver a separate 100 megabits to each and every house served by a sector on a tower, potentially hundreds of houses, which would be tens of gigabits.

LTE off mobile towers and roof tops isn’t going to put the NBNCo out of business just yet.

What if you could make your cells quite small? What if they were mounted on power poles or on top of NBN Nodes, if you happened to own them?

Well that’s something that should scare NBNCo and I think it’s one of the NBN defeating or eating endgame options Telstra have in their kit bag.

BUT, all those basestations serving all those cells need backhaul and that needs gobs of fibre. Physically small LTE basestations will emerge as a fibre to the node or distribution point technology but at the same time fibre will always be able to drop infinite bandwidth to individual end users. This suggests that fibre to the premises will inevitably return to favour as bandwidth demands keep spiralling up. 

In the meantime mobility is the killer app and the mobile operators can charge a premium for it. LTE is going to satisfy low volume users but the price will be prohibitive for hundreds of gigabytes per month. Yet those heavy users are the very users NBNCo is counting on to cross-subsidise the roll out.

We live in interesting times.

20 amp 3 phase adaptor box for EVSE

So, you have a Tesla with dual chargers and an EVSE with a 32 amp three phase plug that can deliver the 23 or 32 amps your car can draw but the only three phase outlet you can find is 20 amp. This is a problem because the 32 amp plug does not fit in a 20 amp socket.

You could make an adaptor with a 32 amp socket and a 20 amp plug, but that would be dangerous: you could make a mistake and trip the 20 amp breaker, and that is probably behind a locked door. This leads to the 20 amp 3 phase adaptor box. It has the plug and the socket but adds a circuit breaker.

A word of warning: if you are not a licensed electrician or you do not have experience with mains electrical wiring, don’t build this. If you get it wrong you may create a deathtrap or a fireball. Examples of “wrong” include any metallic path from inside the box to outside, like a mounting or fixing screw, that a bare or burnt wire inside might contact and so expose someone outside to mains voltage. You have been warned!

I bought the parts needed from ElectricianSupplies:

  • 25mm CABLE GLAND (screws into mounting box and retains 16mm outside diameter cable)

The only difficult part is that the 3 module mounting base doesn’t have anywhere for the DIN rail to be attached. I cannibalised the 1 module base the socket came with and glued two pieces of plastic to the base of the 3 module base and screwed the DIN rail to that.


I drilled small pilot holes for the screws taking extreme care to not drill through to the outside. I didn’t take a photo but here is how it lines up in the original box.


Strip the orange insulation back the length of the mounting box and connect the wires to the socket. Green/yellow is earth and brown is neutral. Remember to twist the copper strands and fold them over before inserting them in the holes. Be careful not to over-tighten the screws.


Then line everything up and cut the red, white and blue wires to length to fit into the circuit breaker. Cut around the centre line of the circuit breaker. Do NOT cut the earth and neutral wires! They go around the circuit breaker.

Screw the cable gland into place. Use some PVC solvent to fix it.

Route the orange cable out through the gland. Originally I used a cable tie as extra protection for the cable. I have retrofitted a clamp now.

Attach all the wires remembering to twist the copper and fold it over.


Now clip the circuit breaker into place and put the black plastic bar in place in the mounting box. You can screw the socket into place now. Note how it is oriented. You might like to rotate it 90 degrees to avoid fouling the circuit-breaker cover. Don’t forget the black plastic bar! It is vital to keeping the box waterproof.


Position the breaker in the middle of the DIN rail and set the filler pieces on the cover with a half on each side.


Seal the unused ports on the mounting box with solvent. I happened to have some nice clear solvent for swimming pool plumbing.


Finally, screw everything into place. You will need to find slightly longer screws for the circuit-breaker cover because it doesn’t really match the base. I used some screws from my big box of recycled screws but I think GPO mounting bracket screws would do the trick.

Wiring the plug is left as an exercise for the reader. If you don’t already know how to do it DON’T DO IT. My iPhone or iCloud ate the photos I took. The important things: the big nut where the cable goes into the plug unscrews and releases the rubber cable grommet. The plastic of the cable clamp is pretty rubbish on these cheap plugs. Remember to twist and fold the wire.


What you need to remember when you use this box is to either dial your EVSE down to 20 amps or dial your car down to 20 amps. If you don’t, the circuit breaker will trip, but don’t depend on that, because the breaker feeding the 20 amp outlet you are using might trip too!
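As a sanity check on why the current limit matters, here is the three phase power arithmetic, assuming a nominal Australian 400 V line-to-line supply:

```python
import math

# Three phase charging power at a nominal Australian 400 V line-to-line
# supply: P = sqrt(3) x V x I. Shows what dialling down to 20 A costs you.

V_LINE_TO_LINE = 400  # volts, nominal

def three_phase_kw(amps: float) -> float:
    return math.sqrt(3) * V_LINE_TO_LINE * amps / 1000

for amps in (20, 32):
    print(f"{amps} A three phase = {three_phase_kw(amps):.1f} kW")
# 20 A gives ~13.9 kW versus ~22.2 kW at 32 A: still a respectable charge
# rate, and far better than tripping a breaker behind a locked door.
```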

Things I would do differently second time around:

  • Use an angled plug – they are easier to get in and out of the socket
  • Use a proper cable clamp (have already retrofitted original)

Good luck and happy EV motoring!

If you own or are buying a Tesla come visit the Tesla Owners Club page on Facebook.

Broadband Speed Claims

On July 27th Ry Crozier published an article about the ACCC consultation paper on Broadband speed claims. It contains a number of quotes from this short essay I wrote in response to some questions he put to me. I have edited that essay a bit and publish it here. It may form the basis for a response to the consultation paper. 

Over the more than twenty years I have been designing, building, operating and managing ISPs the constant demand from users has been “make it faster!”

The pinnacle of this fetishisation of speed was the National Broadband Network, which would deliver fibre, the thing everyone saw as synonymous with speed, to everyone’s home.

Sadly, selection for one characteristic often comes at a cost to another. While everyone obsessed over how many megabits or even gigabits of bandwidth would be delivered to their houses, very few people listened to those of us who questioned what other parts of the network would cost to use and how they would be monitored and managed.

The NBN’s 121 points of interconnect added great cost and complexity to retail service provider networks, as did the duplication of costs during the migration from legacy networks to the NBN.

I believe the key issues for RSPs since NBNCo started designing and building it are:

  • The high cost of Connectivity Virtual Circuit (CVC) bandwidth per megabit creates a huge incentive to provision it sparingly.
  • The numerous points of interconnect scattered all over the country impose large standing costs while serving very few customers, certainly at first, again creating a huge incentive to provision them sparingly.
  • NBNCo refused RSPs visibility into the utilisation of its Passive Optical Network and its “transit network”, insisting that RSPs didn’t need to see that information because NBNCo would manage those networks to suitable SLAs.
  • The worst case scenario would be that a change in NBN technology or project goals would see an RSP with customers spread across their own DSLAMs, wholesale DSLAMs, NBN FTTP and any other access technology in the same region, each with its own overheads.

We are seeing the effects of scarce NBN RSP POI backhaul and CVC bandwidth in many parts of Australia. For every example of an end user with a high speed port seeing slow downloads there is an example somewhere else of a user with brilliant performance. We are also seeing the effects of the worst case scenario: where few customers sit downstream of a POI, the initial “free” 150 megabits of CVC hasn’t been exhausted. Those customers are the lucky ones! They have abundant bandwidth in their access network, so their Internet experience is governed largely by the quality of their ISP’s network.

In my time as a regulatory manager for an ISP I had numerous arguments with the enforcement branch of the ACCC about “speed” claims. They hated “up to” and could not understand that ISPs had no visibility into the Telstra copper that would be used to provide the Telstra wholesale DSLAM service.

Readers may remember iiNet and Internode publishing line sync heat maps of Sydney, and both organisations published network traffic graphs for much of their lives showing how heavily the network was being used. Ironically those heat maps became one of the reasons the NBN came into existence, and with the NBN there is no transparency into access network performance at all.

Readers may also want to ponder the limited availability of prawns and oysters at the all-you-can-eat salad bar.

While I have great sympathy for the ACCC’s position that ISPs should be able to tell customers what “speed” their Internet service will run at, even with an abundance of bandwidth in a private access network the entire global Internet is not under the ISP’s control, and the performance of individual services on the Internet will vary massively. While this is not a reason for ISPs not to disclose their traffic management practices and utilisation, it is certainly something that makes this a “wicked problem”.

Traffic shaping is all about picking losers. All that giving a packet “priority” means is that it experiences less delay within the network. The least worst case for a packet is to be pushed back in time until a hole in the data stream can be found, like a spatula smoothing icing over a cake. The worst case for a packet is to be shaved off, like a plane flattening a piece of wood by removing all the bits that aren’t flat. The effect of this on your Internet is to slow your page loads and file downloads, cause your video player to “buffer”, your videoconference stream to break up and your voice over IP to stutter. It might even cause some applications to stop entirely or be unusable.

In our with-us or against-us world it is very hard to have a rational discussion about these issues. I’m not laying “blame” or suggesting any player is operating maliciously in the ecosystem.

Retail ISPs can buy global Internet access in capital cities for a few dollars per megabit. They have no incentive to create artificial scarcity in this layer. This isn’t the place to look for a problem with NBN performance.

Operators of legacy DSLAMs have some mixed incentives. Their entire network has been declared obsolete by the very existence of the NBN. There is little incentive to make capital expenditure on improving the backhaul capacity of those networks for the (hopefully) months of operation left before FTTN migration starts and relieves the demand.

NBNCo will doubtless claim that all the performance problems end users have with RSPs would go away if only RSPs would purchase enough CVC at $15 to $17.50 per megabit. Or should I say “up to $17.50”?
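To see why RSPs skimp, here is a hypothetical worked example of CVC cost per customer. The provisioned-bandwidth figures are invented for illustration; the dollars-per-megabit rate is the top of the range quoted above.

```python
# Hypothetical CVC cost per customer per month. The provisioned-bandwidth
# figures are invented for illustration; the $/Mbps rate is from the text.

CVC_DOLLARS_PER_MBPS = 17.50  # top of the quoted $15 to $17.50 range

def monthly_cvc_cost(mbps_provisioned_per_user: float) -> float:
    return mbps_provisioned_per_user * CVC_DOLLARS_PER_MBPS

for mbps in (0.5, 1.0, 2.5):
    print(f"{mbps} Mbps per user: ${monthly_cvc_cost(mbps):.2f}/month")
# Even 1 Mbps of provisioned CVC per customer adds $17.50 a month to the
# RSP's cost base, hence the huge incentive to provision sparingly.
```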

In politics you never have an inquiry unless you know the likely outcome. This review is going to make it plain that RSPs need to purchase more CVC. This is only affordable for them if retail ISP prices rise or if the NBN operating company reduces its charges.

Given Telstra warned earlier this year that NBN costs would reduce its net profit by around $2b pa once the NBN is rolled out, this enquiry is likely the first phase of a plan that will see a significant rise in the price of NBN-delivered ISP services in Australia.

It’s time for the NBN company to reduce the CVC charge significantly and to be more transparent about the link utilisation within its network. That will make it clear to consumers which RSPs have sufficient capacity.

The real threat to Hollywood

I’m at the CommsDay Congress in Melbourne and in the goodie bag (provided by my client TransGrid) is an 8GB USB “thumb drive” that would be better described as a “fingernail drive”. Over the last 10 years conference giveaways have shifted from CD-ROMs to USB sticks, starting at 16 megabytes and doubling every year. This 8GB drive must have cost only a dollar or two for it to be given away with a PDF brochure on it. The same 8GB can store 8 movies at compressed high definition and perhaps a couple at really good resolution. That’s 4 of those next year and 512 in eight years: one really-high-definition movie for every day of the year with a bunch left over. That’s the entire output of Hollywood, including the stuff that never got shown on a big screen (http://www.the-numbers.com/movies/year/2014). You don’t need a broadband network for this, you need sneakers.
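The doubling arithmetic, made explicit. The per-movie sizes are my rough assumptions: about 1GB for compressed HD and about 4GB for “really good resolution”.

```python
# The doubling arithmetic made explicit. Movie sizes are rough assumptions:
# ~1 GB for compressed HD, ~4 GB for "really good resolution".

GIFT_GB = 8
GB_PER_TOP_QUALITY_MOVIE = 4

for years in (0, 1, 8):
    capacity_gb = GIFT_GB * 2 ** years
    movies = capacity_gb // GB_PER_TOP_QUALITY_MOVIE
    print(f"+{years} years: {capacity_gb} GB = {movies} top quality movies")
# 2 today, 4 next year, 512 in eight years: one really-high-definition
# movie for every day of the year, delivered by sneakernet.
```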


Some idle thoughts on Dallas Buyers Club LLC v iiNet

In his August 14th 2015 judgement Justice Perram said “If the explanatory memorandum is to be believed, ss 115(5)-(8) are aimed squarely at peer-to-peer file sharing. There are aspects of the provisions which are puzzling from a drafting perspective. The use of the expression ‘on a commercial scale’ is not defined although subs (7) tells one what one is to take into account in determining whether a particular infringement was on such a scale. This appears to be focussed on the possibility that whilst one might be able to show that a particular user shared three copies of a work by uploading it, one might not be able to show much about what those three individual downloaders went on to do by way of sharing themselves.”

Peer to peer file sharing has been around a long time. Bittorrent dates back to 2001 and Napster to 1999. But you might be forgiven for thinking that the Copyright Act 1968 didn’t anticipate peer to peer file sharing and indeed it didn’t.

According to ComLaw, the history of amending Section 115 of the Copyright Act recorded in the End Notes is “am. No. 110, 2000; No. 34, 2003; No. 158, 2006”.

These all occurred during the Howard government: Daryl Williams was Attorney-General in 2000 and Philip Ruddock in 2003 and 2006. The 2006 amendments were very important because they implemented the US-Australia Free Trade Agreement. I have been told that representatives of the Hollywood studios sat in the Attorney-General’s offices drafting the amendments, so it must be particularly galling for them that the judge in this case could not see how to apply the Copyright Act to this peer to peer sharing.

There must have been some amazing tantrums after the judgment was handed down.

Perhaps next time the Copyright Act is amended some technologists might be consulted during the process, but perhaps the world is better if they just mess it up so that it’s unenforceable.

Windows 10 – Shut up and take my money!

I got an email from Holly Raiche (ISOC-au ex ED / ICANN luminary / ACCAN director), forwarded to the ISOC-au / Internet Australia mailing list, from a friend of hers who lives in the Cook Islands and was complaining that Windows 10 had eaten her 15GB satellite internet quota and run up a NZD280 excess bill in less than half a month.

Of course the causes are Windows 10’s cloud services and its automatic self-updating.

I mentioned this Internet consumption problem online and Phil Dobbie called me and he recorded a Balls Radio podcast segment (at 18:52) with me on the topic.

Having done this I thought I should have a play with Windows 10 myself so I downloaded it this morning, all 4GB of it.

Then I attempted to buy a license from Microsoft.


Ouch! This was painful and didn’t work. I got as far as putting my PayPal details in and confirming my shipping address (for a download only version of Windows) and then the browser wedged looping around between Microsoft and some analytics company.

I tried three different browsers on my Mac.

Eventually I created a new Microsoft Live account and tried again with Chrome which worked! Perhaps it didn’t like my @mac.com address? More likely it has decided my decade old Live account is defective in some way because I never respond to their spam.

Full marks to Microsoft for charging GST and issuing a proper Tax Invoice.

Naturally the expensive new license key didn’t work!

So I downloaded Windows 10 again, thinking “well perhaps there is some timestamp thing” since the downloads seem to be generated specifically for each customer and have a time limit for use.

Another 4GB of download later and it still didn’t work.


I thought a bit harder: perhaps selecting the single language version was a mistake? So I selected the plain Windows 10 and put International English into the default language box.

Another 4GB (that’s 12GB so far!) download and the key worked! I selected Custom install because I’ve been paying attention to social media… Things ground along at a fairly rapid pace and eventually I was given this evil choice:


Think about that! I have the choice of automatically connecting to wifi networks my contacts share with me. It didn’t offer the choice to prevent my computer from sharing that information. Perhaps that comes later? I still haven’t seen an option to disable this insidious hole in my WiFi network security.

I wonder what else Windows 10 shares? Let’s look at the Update and Security screens. “Choose how updates are delivered” looks interesting.


Bless Microsoft’s cold heart, they’re using my PC as a P2P node and I’m joining the sharing economy where people on the Internet can get parts of the latest update from Microsoft from my computer. Good job I’m a nice guy and not messing with those files…

Windows 10 also offered to give all my private stuff (keyboard input and document “inking”, whatever that is) to Cortana which would ensure Cortana could help me more. I clicked “Skip”.

It offered to send pretty much everything I type into a web browser to online services to check spelling and improve page loading. Say what?!? I clicked “Skip”.

Now it looks like Windows 10 has already sucked down an update and is planning on rebooting at 3am tomorrow to apply it. Again, bless its little digital heart. Good job it’s not running my alarm clock or swimming pool or watering system.

I selected Reboot Now and everything worked. Quick booting, smooth, impressive even.

Moving along, VMware Fusion offers a VM-side tools package. Installing it seems to have confused the display driver: I had to turn high performance graphics off and untick “Use full resolution for Retina display” before things worked nicely on my external monitor. Not a big issue, but another thing that burnt some time.

I ran a quick browser benchmark of Internet Explorer vs Safari and the scores, 22,000 vs 25,000, were pretty much line-ball. It doesn’t seem that running in a VMware VM does Windows 10 too much harm. I don’t actually own any modern Windows software so I can’t test Word against Word, for instance.

I’ve stopped being a WinXP hipster and have Windows 10 running in a VM on my Mac. It’s like carrying around a seductive little portal into hell. I can run all those vendor config tools for DC to AC power inverters and network appliances without fear that my Windows will certainly be p0wned within minutes, but now I’m back to the simple uncertainty about when. I can run Internet Explorer so all those government websites will be slightly more cooperative and I can look at a desktop that belongs on a tablet. Cool.