Tagged: future

on SDN, network virtualization and the future of networks

To say that SDN has a lot of hype to live up to is a huge understatement. Given the hype, some are saying that SDN can’t deliver, while others—notably Nicira—are saying that network virtualization is what will actually deliver on the promises of SDN. Instead, it appears that network virtualization is the first, and presumably not the best, take on a new way of managing networks: one where policy and goals are finally managed holistically, separated from the actual devices, be they virtual or physical, that implement them.

Out with SDN; In with Network Virtualization?

In the last few months there has been a huge amount of back and forth about SDN and network virtualization. Really, this has been going on since Nicira was acquired about a year ago and probably before that, but the message seems to have solidified recently. The core message is something like this:

SDN is old and tired; network virtualization is the new hotness.

Network virtualization vs. SDN

That message—in different, but not substantially less cheeky, terms—was more or less exactly the message that Bruce Davie (formerly Cisco, formerly Nicira, now VMware) gave during his talk on network virtualization at the Open Networking Summit in April. (The talk slides are available there along with a link to the video, which requires a free registration.)

The talk rubbed me all the wrong ways. It sounded like, “I don’t know what this internal combustion engine can do for you, but these car things, they give you what you really want.” It’s true and there’s a point worth noting there, but the point is not that internal combustion engines (or SDN) are uninteresting.

A 5-year retrospective on SDN

Fortunately, about a month ago, Scott Shenker of UC Berkeley gave an hour-long retrospective on SDN (and OpenFlow) focusing on what they got right and wrong with the benefit of 5 years of hindsight. The talk managed to nail more or less the same set of points that Bruce’s did, but with more nuance. The whole talk is available on YouTube and it should be required watching if you’re at all interested in SDN.

An SDN architecture with network virtualization folded in.

The highest-order bits from Scott’s talk are:

  1. Prior to SDN, we were missing any reasonable kind of abstraction or modularity in the control planes of our networks. Further, identifying this problem and trying to fix it is the biggest contribution of SDN.
  2. Network virtualization is the killer app for SDN and, in fact, it is likely to be more important than SDN and may outlive SDN.
  3. The places they got the original vision of SDN wrong were where they either misunderstood or failed to fully carry out the abstraction and modularization of the control plane.
  4. Once you account for the places where Scott thinks they got it wrong, you wind up coming to the conclusion that networks should consist of an “edge” implemented entirely in software where the interesting stuff happens and a “core” which is dead simple and merely routes on labels computed at the edge.

This last point is pretty controversial—and I’m not 100% sure that he argues it to my satisfaction in the talk—but I largely agree with it. In fact, I agree with it so much that I wrote half of my PhD thesis (you can find the paper and video of the talk there) on the topic. I’ll freely admit that I didn’t have the full understanding and background that Scott does as he argues why this is the case, but I sketched out the details of how you’d build this without calling it SDN and even built a (research-quality) prototype.
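To make that edge/core split a bit more concrete, here is a tiny Python sketch of the idea. Everything in it (the tables, the labels, the names) is made up purely for illustration; it is not anyone’s actual implementation. The edge does the full policy lookup in software and pushes a label, and the core forwards on nothing but that label.

    # A toy "smart edge / dumb core" split. All names and tables here are
    # hypothetical, for illustration only.
    import ipaddress

    # Edge: the full policy, evaluated in software at the network's edge.
    EDGE_POLICY = {
        # (src_prefix, dst_prefix) -> label chosen by the controller
        ("10.0.1.0/24", "10.0.2.0/24"): 1001,  # e.g. "web tier -> db tier"
        ("10.0.3.0/24", "0.0.0.0/0"): 1002,    # e.g. "guests -> internet"
    }

    # Core: no policy at all, just a label -> next-hop table.
    CORE_FORWARDING = {
        1001: "core-port-3",
        1002: "core-port-7",
    }

    def edge_classify(src_ip, dst_ip):
        """Match a packet against the edge policy and return the label to push."""
        for (src_pfx, dst_pfx), label in EDGE_POLICY.items():
            if (ipaddress.ip_address(src_ip) in ipaddress.ip_network(src_pfx)
                    and ipaddress.ip_address(dst_ip) in ipaddress.ip_network(dst_pfx)):
                return label
        return None  # no match: drop, or punt to the controller

    def core_forward(label):
        """The core never looks past the label."""
        return CORE_FORWARDING.get(label)

    label = edge_classify("10.0.1.5", "10.0.2.9")
    print(label, core_forward(label))  # -> 1001 core-port-3

The nice property is that all of the policy churn lives at the edge; the core’s table only has to change when the topology does.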

What is network virtualization, really?

Network virtualization isn’t so much about providing a virtual network as it is about providing a backward-compatible policy language for network behavior.

Anyway, that’s getting a bit far afield of where we started. The thing that Scott doesn’t quite come out and say is that the way he thinks of network virtualization isn’t so much about providing a virtual network as it is about providing a backward-compatible policy language for network behavior.

He says that Nicira started off trying to pitch other ideas of how to specify policy, but that they had trouble. Essentially, the clients they talked to said they knew how to manage a legacy network and get the policy right there, and any solution that didn’t let them leverage that knowledge was going to face a steep uphill battle.

The end result was that Nicira chose to implement an abstraction of the simplest legacy network possible: a single switch with lots of ports. This makes a lot of sense. If policy is defined in the context of a single switch, changes in the underlying topology don’t affect the policy (it’s the controller’s responsibility to keep the mappings correct) and there’s only one place to look to see the whole policy: the one switch.
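Here is a rough sketch of what that abstraction buys you. The names and data structures are made up for illustration (this is not Nicira’s code or API): a tenant writes policy against one logical switch, and a toy controller compiles it onto whichever hypervisors the VMs happen to live on.

    # Illustrative only: a tenant's policy on "one big switch" plus a toy
    # controller that maps it onto the physical topology. Names are made up.

    # Tenant view: one logical switch with ports 1..3 and a simple ACL policy.
    LOGICAL_POLICY = [
        # (in_port, out_port, allow?)
        (1, 2, True),   # web VM may talk to the app VM
        (1, 3, False),  # web VM may NOT talk to the database directly
    ]

    # Controller view: where each logical port actually lives right now.
    # If a VM migrates, only this mapping changes; the policy above does not.
    PORT_MAP = {
        1: ("hypervisor-a", "vnic-7"),
        2: ("hypervisor-b", "vnic-2"),
        3: ("hypervisor-c", "vnic-9"),
    }

    def compile_policy():
        """Turn the single-switch policy into per-hypervisor rules."""
        rules = {}
        for in_port, out_port, allow in LOGICAL_POLICY:
            host, vnic = PORT_MAP[in_port]
            dst_host, dst_vnic = PORT_MAP[out_port]
            rules.setdefault(host, []).append({
                "match_vnic": vnic,
                "dst": (dst_host, dst_vnic),
                "action": "tunnel" if allow else "drop",
            })
        return rules

    for host, host_rules in compile_policy().items():
        print(host, host_rules)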

The next big problems: High-level policy and composition of SDN apps

Despite this, there are at least two big things which this model doesn’t address:

  1. In the long run, we probably want a higher-level policy description than a switch configuration even if a single switch configuration is a whole lot better than n different ones. Scott does mention this fact during the Q&A.
  2. While the concept of network virtualization and a network hypervisor (or a network policy language and a network policy compiler) helps with implementing a single network control program, it doesn’t help with composing different network control programs. This composition is required if we’re really going to be able to pick and choose best-of-breed hardware and software components to build our networks.
A 10,000-foot view of Pyretic’s goals of an SDN control program built of composable parts.

Both of these topics are actively being worked on in both the open source community (mainly via OpenDaylight) and in academic research, with the Frenetic project probably being the best known and most mature of them. In particular, their recent Pyretic paper and talk took an impressive stab at how you might do this. Like Frenetic before it, they take a domain-specific language approach and assume that all applications (which are really just policy, since the language is declarative) are written in that language.
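To give a flavor of what composition means here, below is a small, self-contained Python sketch of parallel and sequential composition of packet-handling policies. It is loosely in the spirit of Frenetic/Pyretic’s composition operators, but it is not their actual API; every name in it is made up.

    # A toy model of composable network policies, loosely inspired by
    # Pyretic's parallel (+) and sequential (>>) composition. Not Pyretic's API.

    class Policy:
        def eval(self, pkt):
            """Return the list of (possibly rewritten) packets to emit."""
            raise NotImplementedError

        def __add__(self, other):     # parallel composition: union of results
            return Parallel(self, other)

        def __rshift__(self, other):  # sequential composition: pipe output onward
            return Sequential(self, other)

    class Match(Policy):
        def __init__(self, **fields):
            self.fields = fields
        def eval(self, pkt):
            ok = all(pkt.get(k) == v for k, v in self.fields.items())
            return [pkt] if ok else []

    class Forward(Policy):
        def __init__(self, port):
            self.port = port
        def eval(self, pkt):
            return [dict(pkt, out_port=self.port)]

    class Parallel(Policy):
        def __init__(self, a, b):
            self.a, self.b = a, b
        def eval(self, pkt):
            return self.a.eval(pkt) + self.b.eval(pkt)

    class Sequential(Policy):
        def __init__(self, a, b):
            self.a, self.b = a, b
        def eval(self, pkt):
            return [out for mid in self.a.eval(pkt) for out in self.b.eval(mid)]

    # A monitoring app and a routing app, written independently, then composed.
    monitor = Match(dst_ip="10.0.0.1") >> Forward("collector")
    route   = Match(dst_ip="10.0.0.1") >> Forward(2)
    policy  = monitor + route

    print(policy.eval({"dst_ip": "10.0.0.1", "src_ip": "10.0.0.7"}))

The point of the toy is just that two applications written independently can be combined with well-defined semantics, which is exactly what is hard to do when each application programs the switches directly.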

Personally, I’m very interested in how many of the guarantees that the Frenetic/Pyretic approach provides can be had by using a restricted set of API calls rather than a restricted language which all applications have to be written in. Put another way, could the careful selection of the northbound APIs provided to applications in OpenDaylight give us many—or even all—of the features that these language-based approaches provide? I’m not sure, but it’s certainly going to be exciting to find out.

on Apple in the post-device world

Apple has had their Icarus moment and it’s not losing Steve Jobs—though that may also prove problematic. Their Icarus moment is the inability to actually deliver on the promises of iCloud. They are now, and have always been, a device company and they are about to enter the post-device world. Try as they might, they can’t seem to execute on a strategy that puts the device second to anything else.

Let’s step back for a minute and think about where technology is heading in the next 5-10 years. It hasn’t even been 5 years since the iPhone came out and effectively launched the smartphone and in the process started us down the path to the post-PC world. We’re pretty much there at this point, but it doesn’t end there.

The next logical step is the post-device world where the actual device you use to access your data and apps is mostly irrelevant. It’s just a screen and input mechanism. Sure, things will have to be customized to fit screens of different sizes and input mechanisms will vary, but basically all devices will be thin clients. They’ll reach out to touch (and maybe cache) your data in the cloud and any heavy computational lifting will be done somewhere else (as is already done with voice-to-text today).

The device you use more or less will not matter. As long as it has a halfway-decent display, a not shit keyboard, some cheap flash storage for a cache of some data, the barest minimum of a CPU and a wireless NIC, you’re good.

This world is not Apple’s forte. Not only does nearly all of their profit come from exactly the devices that will not matter, but they’re not very good at the seamless syncing between devices either. It took them until iOS 5 to provide native syncing of contacts, calendars and the like directly to and from the cloud, and that was after Android, Palm’s webOS and even comically-late-to-the-party Windows Phone had already implemented it.

Moreover, this is not the first time Apple has tried to provide some kind of cloud service. They started with iTools in 2000, then .Mac in 2002, MobileMe in 2008, iWork.com in 2009 and now they’re on iCloud. None of the previous incarnations have been what anyone would call a resounding success. In at least one case, it was bad enough that Steve Jobs asked “So why the fuck doesn’t it do that?”

So, who will succeed in this post-device world? The obvious answer might be Google since they’re already more or less there by having all of their apps be browser-based, but I’m not totally convinced. They seem to be struggling to provide uniform interfaces to their apps across devices, and that seems key here. For instance, the iconography of my Gmail is different in my browser than it is on my Android tablet, and that’s for a platform they own.

Actually, in a perverse way, I think Microsoft might really have what it takes to succeed in this world if they can execute. They have a long history of managing to maintain similar interfaces and design languages across different platforms and devices. Though their failure to provide a clean Metro-based interface in Windows 8 puts a bit of a damper on their chances.

on us cyber-defense

I was doing my usual reading-through-the-news thing when I stumbled across an opinion piece by former Director of National Intelligence Mike McConnell about how we should be preparing the nation’s cyber-defense strategy.

The piece is mostly a fluff-filled call to arms saying that we are woefully behind, that there’s no real reason for it, and that what we really need is the resolve to sit down and draw up some concrete plans and strategy for what it is that we’re going to do. I agree with most of that, but then I stumbled across this gem:

More specifically, we need to reengineer the Internet to make attribution, geolocation, intelligence analysis and impact assessment — who did it, from where, why and what was the result — more manageable.

This really perplexes me, because two paragraphs earlier he was talking about how Hillary Clinton was extolling the virtues of the Internet as a tool for free speech and democracy. Suddenly, when the U.S. needs to defend itself, we need exactly the tools that would make a repressive country best able to shut off the benefits of the Internet as a platform for expression.

It has just further convinced me that by keeping the current group of military and intelligence officials in charge of this, we will constantly be behind in the Internet age.

Update: (3/2/2010) Wired wrote a story commenting on the same article pulling out the exact same sentence from McConnell’s op-ed. Good to know that I’m not the only one catching these things. They point out that McConnell has been fear mongering about this stuff in order to get bigger U.S. intelligence access to the Internet for years.

self-healing polyurethane

Following on the self-healing rubber from a few months ago, new work (by different people) has produced self-healing polyurethane coatings.

The secret of the material lies in using molecules made from chitosan, which is derived from the shells of crabs and other crustaceans.

In the event of a scratch, ultraviolet light drives a chemical reaction that patches the damage.

The work by University of Southern Mississippi researchers is reported in the journal Science.

They designed molecules joining ring-shaped molecules called oxetane with chitosan.

The custom-made molecules were added to a standard mix of polyurethane, a popular varnishing material that is also used in products ranging from soft furnishings to swimsuits.

Scratches or damage to the polyurethane coat split the oxetane rings, revealing loose ends that are highly likely to chemically react.

In the ultraviolet light provided by the sun, the chitosan molecules split in two, joining to the oxetane’s reactive ends.

Cool stuff, though I wonder how many bathing suits you need to replace before you make up the cost difference between your self-healing one and a normal one.

on journalism and newspapers

My mom is a career journalist and I’ve always been interested in journalism, even though the closest I’ve been has been as a graphic artist and IT person for a couple of newspapers. So, when a friend of mine posted a link to this article on the current fate of journalism in this country and the world, with a focus on the newspaper and its seemingly imminent demise, I listened up. The article makes a series of very good points and, while presenting a lot of cold, hard facts about what’s going on, also has an unabashed point of view about the current state of the world.

From my personal knowledge of what’s going on with the newspapers my mom is associated with, I can say that everything this story talks about is pretty much dead on, and it applies to newspapers everywhere, from the local weeklies that my mom runs to the biggest papers in the country, as the article points out.

The 3 main points are really these:

  1. Today, newspapers provide the overwhelming majority of original reporting and are the single most important tool for informing the public about anything.
  2. Newspapers are being hit by a perfect storm of rising newsprint costs, falling advertising revenue, decreasing interest in reading anything, and the current economy leading to sharp cuts to exactly the things newspapers do well.
  3. There probably needs to be a replacement for newspaper journalism, but it’s really not clear what that is and how to make it happen.

I’ll just touch on those 3 things, because really you should go read the article.

First, for all the noise being made about “new media,” the vast majority of what goes on there is usually commentary on journalism done in newspapers. Even TV and radio journalism is more often than not picking up on stories first covered in newspapers. This is because traditionally, newspapers have had the most feet on the ground, the most expertise, the most trust and respect and, finally, the most balls to do what needs to be done in the name of truth and journalism. The “profitable” sources of news (TV, Internet and radio) may find that it’s much harder to be profitable if they have to field all of their own reporters.

Second, newspapers are seeking to become like the things they’re competing with in the hopes that it will bring readers and profits back. This means abandoning all the things which set them apart. Unfortunately, this includes nearly all substantive investigative reporting. This essentially commoditizes what newspapers have to offer, making it even harder for them to compete.

The way forward is a lot less clear. I’ve spent a bunch of time thinking about it. What exactly made a newspaper? The article seems to believe it was a combination of respect, trust and unified resources with a common, concrete goal to shed light and report the truth. That sounds about right.

The article only hints at another problem which I find personally frustrating: the increasing suspicion of inserted bias in all forms of journalism. The fact that a huge amount of our news industry is essentially entertainment has tainted all of our news sources and made the average person assume that every article has a position which it’s arguing for.

on artificial eyes

http://news.bbc.co.uk/2/hi/health/7919645.stm

The BBC posted this article describing a 73-year-old man who has been blind for nearly 30 years and is getting his sight back—or something that vaguely resembles sight and is a whole hell of a lot better than not seeing.

He says he can now follow white lines on the road, and even sort socks, using the bionic eye, known as Argus II.

That’s actually pretty impressive given that most of the previous stuff I’ve read here really only involves people seeing colors and lights. There are two cool videos: one is an interview with the actual guy using the eye and the other explains the basics of how it works, though at a level that you could have probably figured out on your own.

It’s not quite to the point where I can have a chip in my optic nerve which gives me the augmented reality that I’ve wanted since I was about 10, but it’s still damn cool and a really impressive first step. Maybe I’ll get my wish in my lifetime.

classes of computers

A while back I was thinking about this whole cell phone, PDA, laptop, desktop thing with computers where we have stuff in different form factors which we use in different ways. At the time I was thinking about what size laptop to buy, but it got me thinking about how many devices we need and where they should fall on the size/power spectrum.

This led me to my current thinking, which is that there are loosely 3 classes of computers (at least for consumers) and that a device is basically forced to compete with everything else in its class. The classes are pocketable, backpackable, and stationary. That is to say that there’s little to be gained by shrinking the size of a laptop once it fits comfortably in my backpack unless you’re going to make it small enough to fit in my pocket.

So that says that my phone, my Palm, and my iPod (or Walkman/Discman if you go back far enough) were all basically competing with each other. As a consequence, it would have been relatively easy to predict that these devices would converge into things like the Treo and iPhone, which combine these features.

At the next level, we have backpackable objects. These include laptops, most e-readers, and the new entry of netbooks. Interestingly, my logic says that these devices are competing with each other and will really have to converge in some way. That is, netbooks, despite being hot, really don’t offer a truly new form of computer, but simply a new (and possibly useful) trade-off in the power/battery life/price space. It may be that they drag the price and performance of the average backpackable computer down and push its battery life up, but these aren’t revolutionary devices.

This jibes with my experience that most people I know buying netbooks are doing so because they either had a laptop they didn’t like or had no laptop at all. I see a large number of people who had old, big, heavy Windows laptops getting netbooks, which is basically people trading an object which isn’t really backpackable for a backpackable one. A separate, but relevant, fact is that Windows works terribly as an operating system for laptops since it handles sleep in an atrocious fashion, meaning that for many people their netbook is the first non-Windows laptop they’ve had that actually works like a laptop should, with fast sleep and fast resume.

The last category is stationary computers, which for consumers basically means desktops. There are other things, like media servers and the like, but for the most part we mean desktops. Here things are kind of interesting for a couple of reasons. First, desktops seem to actually be losing to laptops, which makes sense: while backpackable objects don’t necessarily have to compete with stationary ones, your stationary computer does have to justify its existence by providing some features which your backpackable computer doesn’t. In that sense, laptops have really been able to successfully compete with desktops for the last few years. Maybe netbooks are simply a realigning of the backpackable class and will, in fact, increase the difference between backpackable and stationary computers again.

has the academic computer systems community lost its way?

I just got into a debate with a few friends about whether or not the current academic computer systems community is still relevant or if we’ve lost our way and are no longer productively matching our efforts with how to have impact.

My friends—who I should say are very smart people—argue that somewhere between when academics produced nearly all computer systems (including UNIX, the Internet, databases and basically everything else) and now, we have lost our way. If you look at recent kinds of systems—for example: peer-to-peer file-sharing, modern file systems, modern operating systems, modern databases and modern networks—many of the ones in common use are not those developed by academics.

Instead we’ve seen Napster, Gnutella, BitTorrent, and DirectConnect in peer-to-peer; ext3 and ReiserFS in file systems; Windows and Mac OS in operating systems; MySQL and SQL Server in databases, and the list goes on.

One conclusion from this is to make the observation that the systems we build—be they wireless networking technologies, new internet routing algorithms, new congestion control algorithms, new operating systems, new databases, or new peer-to-peer systems—simply aren’t being used and that this is out of alignment with how we view ourselves. The natural next step is that we should change our approach to acknowledge that we aren’t building systems which people are going to use or to figure out why what we’re building isn’t used.

For a variety of reasons, I think that this conclusion is wrong. First, you don’t have to look far to find recent academic systems which are in widespread use. VMware and Xen both came directly from academic work on virtualization. Google (or, more precisely, PageRank) started as an academic system. The list goes on, and this doesn’t count the fact that many systems not directly built by academics are heavily influenced by academic research. The ext3 file system is just ext2 with additions based on the log-structured file system paper by Mendel Rosenblum (who is now a co-founder of VMware). Linux isn’t an academic venture, but it’s essentially a complete reimplementation of UNIX, which was.

In the areas where academics appear to be “outperformed” there are some very reasonable explanations. In areas like databases and wireless networking you are looking at the impact of a few grad students and a few million dollars on multi-billion dollar industries employing thousands of people. The fact that we have any impact at all is impressive.

In areas like peer-to-peer file-sharing, most innovation has been driven not by technical needs, but by legal needs and making systems hard to shut down. While this is now someplace that academics find interesting, to have expected academics to do research into how to circumvent legal machinations seems a bit out of whack.

In the end, I feel like I am more able to contribute to the real world from academia than I would be elsewhere. There is a certain level of respect for ideas and tools produced by academics which is hard to garner elsewhere, there are fewer constraints caused by market pressures, and teams of people are smaller and more agile.

on speech recognition

http://www.sciam.com/podcast/episode.cfm?id=thinking-of-human-as-machine-09-02-24

This is the first idea about speech recognition that has sounded right to me in a long time. The idea is to try to understand how it is that the human brain picks up speech and decodes it, and to use that as a guide for how we might make computers do the same thing.

While this short snippet is light on details, it mentions the idea that different neurons respond to different frequencies. I have no idea how state-of-the-art speech recognition is done these days, but I bet there’s a lot we can learn from seeing how the brain does it. The premise that the researcher in the above link is working from is that it’s a more mechanical process in the brain than we think and that maybe we can leverage that.
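For what it’s worth, the “different neurons respond to different frequencies” idea maps pretty directly onto the filterbank front-ends that speech systems already use. Here is a minimal NumPy sketch of splitting a signal into frequency bands just to show the analogy; it is an illustration, not a model of the research in the link.

    # A minimal frequency-band front-end, analogous to neurons tuned to
    # different frequencies. Illustration only, not the cited research.
    import numpy as np

    def band_energies(signal, n_bands=8, frame_len=400, hop=160):
        """Window the signal, take a spectrum per frame, and sum energy per band."""
        energies = []
        for start in range(0, len(signal) - frame_len + 1, hop):
            frame = signal[start:start + frame_len] * np.hanning(frame_len)
            spectrum = np.abs(np.fft.rfft(frame)) ** 2
            bands = np.array_split(spectrum, n_bands)  # crude, linearly spaced bands
            energies.append([float(b.sum()) for b in bands])
        return np.array(energies)  # shape: (n_frames, n_bands)

    # Fake "speech": a 300 Hz tone sampled at 16 kHz.
    sr = 16000
    t = np.arange(sr) / sr
    tone = np.sin(2 * np.pi * 300 * t)
    print(band_energies(tone)[0])  # nearly all the energy lands in the lowest band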

Kind of cool. Makes me wonder if we might eventually get this stuff to work after all.

on web-based personal finance redux

Finally convinced myself that using Mint wasn’t going to kill me or my banking security and signed up, only to find out that while it supports monitoring my bank’s checking and savings accounts, it does not support their credit card.

Since about 90% of my transactions happen on that credit card, it makes the service basically useless to me at the moment. I’m not really sure what you can do about it. I’m thinking about calling my bank to complain, but chances are they won’t be able to fix it and I already filed a request for Mint to support it.

Oh well, the service seems really cool and I think it would be immensely helpful, though I wish I could do it with a program running on my computer rather than a web application where I worry about my data. Also, they could stand to support HTTPS for their basic site just so I knew my session couldn’t be hijacked, and I wish they had a legitimate mobile solution rather than relying on either SMS or an iPhone app.

Hopefully I can work something out so I can use it, because I think it would really help me look at things and figure out, beyond a gut level, how I’m spending my money, but as it is, it is missing several critical features to make it useful for my life.