Learn to love IPv6 and embrace the cloud

Whisper it, but the proponents of IPv6 have a point when they talk about its importance. It will certainly play a part in the cloud

Trying to talk about protocols always puts me in mind of the movie “Dr Strangelove” and that one-liner about “there will be no fighting in the war room”.

Unconscious recursion lurks around every corner with protocols, because a protocol is all about communication, whether that’s between people or computers.

For those unfortunate souls who don't know the film, Dr Strangelove is about a communication breakdown: a tragic comedy of errors in which distant counterparties communicate via several low-bandwidth protocols, some of them punctuated by nuclear explosions. Much of the comic impact arises because those counterparties – the heads of state of the USA and USSR, for all you colour-movie-only people – are using yesterday’s bombs-and-bullets protocols to hand today’s messages back and forth over tomorrow’s technologies.

Cloud networking, for implementers, feels a bit like that. When anyone considers protocols they tend to start by talking about TCP/IP v4, which has pretty much come to define the world’s data interchange through the noughties.

And yet there’s a whole lot wrong with both the protocol itself and that statement. First of all, as soon as you traverse the links that comprise the net, your IPv4 packet gets trussed up inside something else – connectivity providers being fans of things like ATM, Frame Relay and MPLS.
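To make that “trussed up” image concrete: a carrier slapping MPLS in front of your traffic is really just prepending a four-byte shim – 20 bits of label, three bits of traffic class, a bottom-of-stack flag and a TTL – before your IPv4 packet ever sees a long-haul link. The little Python sketch below shows the idea; the label value and the truncated packet bytes are ours, invented purely for illustration.

import struct

def mpls_encapsulate(ipv4_packet: bytes, label: int, tc: int = 0, ttl: int = 64) -> bytes:
    # Build one MPLS label stack entry: label (20 bits), traffic class (3 bits),
    # bottom-of-stack flag (1 bit, set because we push a single label), TTL (8 bits).
    bottom_of_stack = 1
    shim = (label << 12) | (tc << 9) | (bottom_of_stack << 8) | ttl
    return struct.pack("!I", shim) + ipv4_packet

# Stand-in payload: in real life this would be a complete IPv4 packet off the wire.
fake_ipv4_packet = bytes.fromhex("45000054abcd00004001")  # truncated, illustration only
print(mpls_encapsulate(fake_ipv4_packet, label=16).hex())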

We all know that the pool of available IPv4 addresses has run dry, and pretty much everyone who has peeked over the fence at IPv6 has come back looking a bit uncomfortable. A bit like Dr Strangelove, IPv6 is yesterday’s solution being applied to tomorrow’s problems, with all the subtlety and finality of a nuclear detonation.
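For a sense of scale on the exhaustion point, the arithmetic is brutally simple. A minimal Python comparison, using nothing beyond the standard library and the reserved documentation address ranges:

import ipaddress

ipv4_total = 2 ** 32    # roughly 4.3 billion addresses
ipv6_total = 2 ** 128   # roughly 3.4 x 10^38 addresses

print(f"IPv4 addresses: {ipv4_total:,}")
print(f"IPv6 addresses: {ipv6_total:.3e}")

# The notation changes too: dotted quads give way to colon-separated hex,
# and a single "small" IPv6 /64 dwarfs the entire IPv4 internet.
print(ipaddress.ip_address("192.0.2.1").version)            # 4
print(ipaddress.ip_address("2001:db8::1").version)          # 6
print(ipaddress.ip_network("2001:db8::/64").num_addresses)  # 18,446,744,073,709,551,616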

Which actually has nothing at all to do with the motivation of the whole slew of manufacturers and developers now starting to talk about gently levering everyone’s death-grip off their IPv4 implementations and internal knowledge-sets. It just seems like an opportune moment to politely start a dialogue about that protocol thing.

Pressure to move away from IPv4 within cloud deployments, you see, has different motivations from those that the IETF and friends had in mind when putting together IPv6. And for people trying to grab hold of cloud concepts, or use them as a way of selecting a winner from a portfolio of suppliers, there’s a rich field of talking points to consider.

Deep techies will tell you that it never was a case of “one protocol to rule them all” in the first place, even if you could be forgiven for thinking that way – after all, lots of quite senior IT managers have progressed through a chunk of their careers in the 15 or so years since the IPv6 design broke surface. That’s a lot of ingrained thinking to overturn, and a lot of comparisons to draw which are quite likely to require a steady stream of double espressos and a sizeable whiteboard.

At Virtual Clouds, we have been seeing a steady stream of releases and product launches which make use of non-traditional protocols with Private Cloud labels attached; the release which got us talking and led to this article comes from Big Switch Networks, who wanted us to know that they were incorporating the OpenFlow standard into their range of kit that provides a Software Defined Network. Our takeaway for you isn’t about the relative merits of that over other protocols; it is a little flag-raising exercise over the swelling number of vendors who want to talk about this type of traffic apartheid in the private cloud.
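For readers who haven’t met the idea, the heart of an OpenFlow-style Software Defined Network is a flow table: a controller pushes match/action rules down to the switch, and the switch forwards purely by pattern, asking the controller what to do on a miss. What follows is emphatically not Big Switch’s kit or the real OpenFlow wire protocol – just a toy Python model of the match/action concept, with every class and field name invented for illustration.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class FlowRule:
    # A toy match/action rule; any field left as None acts as a wildcard.
    priority: int
    eth_type: Optional[int] = None   # e.g. 0x0800 for IPv4, 0x86DD for IPv6
    dst_ip: Optional[str] = None
    action: str = "drop"

class ToyFlowTable:
    # A drastically simplified stand-in for a switch's flow table.
    def __init__(self) -> None:
        self.rules: List[FlowRule] = []

    def install(self, rule: FlowRule) -> None:
        # The controller pushes rules down; the highest-priority match wins.
        self.rules.append(rule)
        self.rules.sort(key=lambda r: r.priority, reverse=True)

    def lookup(self, eth_type: int, dst_ip: str) -> str:
        for rule in self.rules:
            if rule.eth_type not in (None, eth_type):
                continue
            if rule.dst_ip not in (None, dst_ip):
                continue
            return rule.action
        return "send-to-controller"   # table miss: punt to the control plane

table = ToyFlowTable()
table.install(FlowRule(priority=10, eth_type=0x0800, dst_ip="192.0.2.7", action="forward:port3"))
table.install(FlowRule(priority=1, eth_type=0x0800, action="forward:port1"))
print(table.lookup(0x0800, "192.0.2.7"))    # forward:port3
print(table.lookup(0x86DD, "2001:db8::1"))  # send-to-controller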

Callout: When I say “private cloud” here I am paraphrasing. One company’s private cloud can, at the same time, look and feel very public to a swathe of its clients. Foggy, indistinct category barriers between kit for IT industry timesharers, and kit for intensely private businesses with their data centre in a bunker, are the order of the day here. Probably the most useful way to define “private cloud” is to say that it’s a thing you’re going to have to understand how to operate yourself, instead of trusting an outsourced brain. Callout Ends.

Brocade, Emulex, Cisco and almost all the show floor at SNW Europe last year want us to tell you about protocol changes, and why IPv4 was only ever a bodge when it comes to building up stacks of drives and throwing bits on them in the most efficient way possible.

Naturally, to any grey-haired budget owner in IT, the spectre of proprietary lock-in raises its head, though quite a few of those traditional objections are swept away by the raw panic of a large enterprise storage manager facing a factor-of-five increase in demand. It does seem rational, on first pass, to consider alternatives for your private internal networking that don’t suffer the expanded expectations of a world-spanning public transport encoding intended mainly to slow down the fans in the world’s backbone router suites. So who do you talk to, to get a handle on what this burgeoning world of alternatives really means, and which horse to back?

Along came NetOptics, who insert analysis tools into your troublesome (or simply opaque) misbehaving Fibre Channel storage network and tell you what’s up. To meet that simple brief the company has to wade through deep and fast-running streams of mutating technical fashions in storage, not taking the decision-makers’ approach of sitting as high above the water as possible, but the opposite: letting it trickle through their fingers, tasting it, charting its path. It’s a task that simply falls to bits if some vendor suddenly springs up with kit that chatters away using packets whose contents are in an undeclared format.

Amazingly, they pronounced themselves unfazed by the burgeoning spread of standards, proprietary offerings, custom network layers and smart kit which we think we’ve been seeing. I suspect that in part this is because the years when SANs were truly distinct from LANs are well and truly behind us – “convergence” is the key shift in thinking.

This has already meant that the job NetOptics does is all about correctly identifying traffic pollution on networks built up from standard parts. Starting from that pre-existing situation, the challenges thrown up by new standards or proprietary vendor-invented replacement protocols seem like just another exercise in zero-interference bit pattern matching.
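“Zero-interference bit pattern matching” sounds grand, but the first pass is often as mundane as reading two bytes off a tapped frame. A minimal sketch, assuming plain Ethernet II framing – the EtherType values are the standard IEEE-assigned ones, while the function and the made-up frame bytes are ours:

import struct

# Well-known EtherType values, carried in bytes 12-13 of an Ethernet II frame.
ETHERTYPES = {
    0x0800: "IPv4",
    0x0806: "ARP",
    0x86DD: "IPv6",
    0x8847: "MPLS (unicast)",
    0x8100: "802.1Q VLAN tag",
}

def classify_frame(frame: bytes) -> str:
    # Passively classify a frame by its EtherType without touching the traffic itself.
    if len(frame) < 14:
        return "runt frame"
    (ethertype,) = struct.unpack_from("!H", frame, 12)  # big-endian, offset 12
    return ETHERTYPES.get(ethertype, f"unknown (0x{ethertype:04x})")

# A made-up frame: 6-byte destination MAC, 6-byte source MAC, then EtherType 0x86DD (IPv6).
fake_frame = bytes.fromhex("ffffffffffff" + "001122334455" + "86dd") + b"\x60" + b"\x00" * 39
print(classify_frame(fake_frame))   # IPv6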

Which (to be brutally honest) I expect you to be reassured by, even if you don’t necessarily perfectly comprehend it.

What it means for those of us who need background indicators to use in picking out grown-up, long-term partner vendors is that the protocol explosion in private clouds isn’t a bad thing, and that those who like to imply that it is – and that we should all be happy with good old IPv4 – are more the villains of the piece than the protectors of our sanity.

If the traffic-analyser gurus like NetOptics are, valley-girl style, so over that whole topic already, then the rest of us can relax and start to escape the headache-inducing discussions of subnets, bridges and routers which IPv4 uses to enthrall the unwary.
