Tagged: systems

on SDN, network virtualization and the future of networks

To say that SDN has a lot of hype to live up to is a huge understatement. Given the hype, some are saying that SDN can’t deliver, while others—notably Nicira—are saying that network virtualization is what will actually deliver on the promises of SDN. Instead, it appears that network virtualization is the first, and presumably not the best, take on a new way of managing networks: one where we can finally manage networks holistically, with policy and goals separated from the actual devices, virtual or physical, that implement them.

Out with SDN; In with Network Virtualization?

In the last few months there has been a huge amount of back and forth about SDN and network virtualization. Really, this has been going on since Nicira was acquired about a year ago and probably before that, but the message seems to have solidified recently. The core message is something like this:

SDN is old and tired; network virtualization is the new hotness.

Network virtualization vs. SDN

That message—in different, but not substantially less cheeky, terms—was more or less exactly the message that Bruce Davie (formerly Cisco, formerly Nicira, now VMware) gave during his talk on network virtualization at the Open Networking Summit in April. (The talk slides are available there along with a link to the video, which requires a free registration.)

The talk rubbed me the wrong way. It sounded like, “I don’t know what this internal combustion engine can do for you, but these car things, they give you what you really want.” That’s true, and there’s a point worth noting there, but the point is not that internal combustion engines (or SDN) are uninteresting.

A 5-year retrospective on SDN

Fortunately, about a month ago, Scott Shenker of UC Berkeley gave an hour-long retrospective on SDN (and OpenFlow) focusing on what they got right and wrong with the benefit of five years of hindsight. The talk managed to nail more or less the same set of points that Bruce’s did, but with more nuance. The whole talk is available on YouTube and it should be required watching if you’re at all interested in SDN.

An SDN architecture with network virtualization folded in.

The highest-order bits from Scott’s talk are:

  1. Prior to SDN, we were missing any reasonable kind of abstraction or modularity in the control planes of our networks. Further, identifying this problem and trying to fix it is the biggest contribution of SDN.
  2. Network virtualization is the killer app for SDN and, in fact, it is likely to be more important than SDN and may outlive SDN.
  3. The places they got the original vision of SDN wrong were where they either misunderstood or failed to fully carry out the abstraction and modularization of the control plane.
  4. Once you account for the places where Scott thinks they got it wrong, you wind up coming to the conclusion that networks should consist of an “edge” implemented entirely in software where the interesting stuff happens and a “core” which is dead simple and merely routes on labels computed at the edge.

This last point is pretty controversial—and I’m not 100% sure that he argues it to my satisfaction in the talk—but I largely agree with it. In fact, I agree with it so much that I wrote half of my PhD thesis (you can find the paper and video of the talk there) on the topic. I’ll freely admit that I didn’t have the full understanding and background that Scott does as he argues why this is the case, but I sketched out the details of how you’d build this without calling it SDN and even built a (research-quality) prototype.
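The edge/core split can be made concrete with a toy sketch. Everything below—the policy table, the label numbers, the port names—is invented for illustration; the point is only that the software edge holds all the policy, while the core forwards on an opaque label it never has to understand:

```python
# Toy sketch of an edge/core network: policy lives entirely at the
# software edge, which maps each flow to an opaque label; the core
# forwards purely on that label. All names here are illustrative.

# Edge: full policy lookup, runs in software (e.g. a vswitch).
POLICY = {
    # (src, dst) -> (label, egress edge switch)
    ("10.0.0.1", "10.0.1.7"): (42, "edge-B"),
    ("10.0.0.2", "10.0.1.9"): (43, "edge-B"),
}

def edge_ingress(src, dst, payload):
    """Classify the packet and wrap it with a core label."""
    decision = POLICY.get((src, dst))
    if decision is None:
        return None  # no policy entry: drop at the edge
    label, egress = decision
    return {"label": label, "egress": egress, "payload": payload}

# Core: a dumb label-indexed forwarding table, no policy at all.
CORE_TABLE = {42: "port-3", 43: "port-3"}

def core_forward(labeled_packet):
    """Forward solely on the label computed at the edge."""
    return CORE_TABLE[labeled_packet["label"]]

pkt = edge_ingress("10.0.0.1", "10.0.1.7", b"hello")
print(core_forward(pkt))  # -> port-3
```

Notice that any policy change touches only the edge tables; the core never needs reprogramming, which is exactly why it can stay dead simple.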

What is network virtualization, really?

Network virtualization isn’t so much about providing a virtual network as it is about providing a backward-compatible policy language for network behavior.

Anyway, that’s getting a bit afield of where we started. The thing that Scott doesn’t quite come out and say is that the way he thinks of network virtualization isn’t so much about providing a virtual network as it is about providing a backward-compatible policy language for network behavior.

He says that Nicira started off trying to pitch other ideas of how to specify policy, but that they had trouble. Essentially, the clients they talked to said they knew how to manage a legacy network and get the policy right there and any solution that didn’t let them leverage that knowledge was going to face a steep uphill battle.

The end result was that Nicira chose to implement an abstraction of the simplest legacy network possible: a single switch with lots of ports. This makes a lot of sense. If policy is defined in the context of a single switch, changes in the underlying topology don’t affect the policy (it’s the controller’s responsibility to keep the mappings correct) and there’s only one place to look to see the whole policy: the one switch.
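As a rough illustration of that idea, here is a hedged sketch—all names and data structures are hypothetical, not Nicira’s actual implementation—of how a controller might compile single-logical-switch policy into per-device rules:

```python
# Sketch of the "one big switch" abstraction: the tenant writes
# policy against a single logical switch, and a controller compiles
# each logical rule into rules on the physical switches hosting the
# relevant ports. Everything here is invented for illustration.

# Tenant-facing policy: one switch, lots of ports.
LOGICAL_POLICY = [
    {"match": {"in_port": 1, "dst_port": 2}, "action": "allow"},
    {"match": {"in_port": 3, "dst_port": 2}, "action": "drop"},
]

# Controller-maintained mapping: logical port -> (physical switch, port).
PORT_MAP = {1: ("sw-A", 7), 2: ("sw-B", 2), 3: ("sw-A", 9)}

def compile_policy(logical_policy, port_map):
    """Expand single-switch rules into per-physical-switch rules."""
    physical = {}
    for rule in logical_policy:
        sw, phys_in = port_map[rule["match"]["in_port"]]
        dst_sw, phys_out = port_map[rule["match"]["dst_port"]]
        physical.setdefault(sw, []).append({
            "match": {"in_port": phys_in},
            "action": rule["action"],
            "tunnel_to": (dst_sw, phys_out),  # overlay hop, if allowed
        })
    return physical

rules = compile_policy(LOGICAL_POLICY, PORT_MAP)
# If a VM migrates, only PORT_MAP changes; the tenant policy is untouched.
```

The last comment is the payoff: topology churn becomes the controller’s problem of keeping `PORT_MAP` current, not the tenant’s problem of rewriting policy.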

The next big problems: High-level policy and composition of SDN apps

Despite this, there are at least two big things which this model doesn’t address:

  1. In the long run, we probably want a higher-level policy description than a switch configuration, even if a single switch configuration is a whole lot better than n different ones. Scott does mention this fact during the Q&A.
  2. While the concept of network virtualization and a network hypervisor (or a network policy language and a network policy compiler) helps with implementing a single network control program, it doesn’t help with composing different network control programs. This composition is required if we’re really going to be able to pick and choose best-of-breed hardware and software components to build our networks.

A 10,000-foot view of Pyretic’s goals of an SDN control program built of composable parts.

Both of these topics are actively being worked on in both the open source community (mainly via OpenDaylight) and in academic research, with the Frenetic project probably being the best known and most mature of them. In particular, their recent Pyretic paper and talk took an impressive stab at how you might do this. Like Frenetic before it, they take a domain-specific language approach and assume that all applications (which are really just policy, since the language is declarative) are written in that language.
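To see why a declarative policy language composes so nicely, here is a sketch in plain Python rather than real Pyretic syntax; the combinator names are mine, but they mirror the spirit of Pyretic’s parallel and sequential composition:

```python
# Sketch of composable policies: each policy is a function from a
# packet to a list of (possibly rewritten) packets, so "run both"
# (parallel) and "pipe one into the other" (sequential) fall out as
# generic combinators. Names are illustrative, not Pyretic's API.

def monitor(pkt):
    """A counting policy: passes the packet through unchanged."""
    monitor.count += 1
    return [pkt]
monitor.count = 0

def firewall(pkt):
    """Drop traffic to port 22, pass everything else."""
    return [] if pkt["dst_port"] == 22 else [pkt]

def parallel(*policies):
    """Apply every policy to the packet and union the results."""
    return lambda pkt: [out for p in policies for out in p(pkt)]

def sequential(first, second):
    """Feed each packet the first policy emits into the second."""
    return lambda pkt: [out for mid in first(pkt) for out in second(mid)]

# Compose: count everything, then firewall the result.
app = sequential(monitor, firewall)
print(app({"dst_port": 22}))  # -> [] (dropped, but still counted)
print(app({"dst_port": 80}))  # -> one packet passes
```

Neither `monitor` nor `firewall` knows the other exists, which is the whole point: independently written control programs become building blocks instead of conflicting owners of the flow table.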

Personally, I’m very interested in how many of the guarantees that the Frenetic/Pyretic approach provides can be provided by using a restricted set of API calls rather than a restricted language in which all applications have to be written. Put another way, could the careful selection of the northbound APIs provided to applications in OpenDaylight enable us to get many—or even all—of the features that these language-based approaches provide? I’m not sure, but it’s certainly going to be exciting to find out.

has the academic computer systems community lost its way?

I just got into a debate with a few friends about whether the current academic computer systems community is still relevant or whether we’ve lost our way and are no longer productively matching our efforts with how to have impact.

My friends—who I should say are very smart people—argue that somewhere between when academics produced nearly all computer systems (including UNIX, the Internet, databases and basically everything else) and now, we have lost our way. If you look at recent kinds of systems—for example: peer-to-peer file-sharing, modern file systems, modern operating systems, modern databases and modern networks—many of the ones in common use are not those developed by academics.

Instead we’ve seen Napster, Gnutella, BitTorrent, and DirectConnect in peer-to-peer; ext3 and ReiserFS in file systems; Windows and Mac OS in operating systems; MySQL and SQL Server in databases, and the list goes on.

One conclusion is to observe that the systems we build—be they wireless networking technologies, new internet routing algorithms, new congestion control algorithms, new operating systems, new databases, or new peer-to-peer systems—simply aren’t being used, and that this is out of alignment with how we view ourselves. The natural next step is that we should change our approach: either acknowledge that we aren’t building systems people are going to use, or figure out why what we’re building isn’t being used.

For a variety of reasons, I think that this conclusion is wrong. First, you don’t have to look far to find recent academic systems which are in widespread use. VMware and Xen both came directly from academic work on virtualization. Google (or, more precisely, PageRank) started as an academic system. The list goes on, and this doesn’t count the fact that many systems not directly built by academics are heavily influenced by academic research. The ext3 file system is just ext2 with additions based on the log-structured file system paper by Mendel Rosenblum (who is now a co-founder of VMware). Linux isn’t an academic venture, but it’s essentially a complete reimplementation of UNIX, which was.

In the areas where academics appear to be “outperformed” there are some very reasonable explanations. In areas like databases and wireless networking you are looking at the impact of a few grad students and a few million dollars on multi-billion dollar industries employing thousands of people. The fact that we have any impact at all is impressive.

In areas like peer-to-peer file-sharing, most innovation has been driven not by technical needs but by legal needs and by making systems hard to shut down. While this is now someplace that academics find interesting, to have expected academics to do research into how to circumvent legal machinations seems a bit out of whack.

In the end, I feel like I am more able to contribute to the real world from academia than I would be elsewhere. There is a certain level of respect for ideas and tools produced by academics which is hard to garner elsewhere, there are fewer constraints caused by market pressures, and teams of people are smaller and more agile.