Tagged: computers

on openflow and minimal software on routers

Some recent coverage of OpenFlow and software-defined networks (SDN) in general has gotten me thinking about the relationship between it and server/desktop virtualization like Xen, VMware and the like. I’m far from the first person to think about this. Martin Casado, as close to the father of OpenFlow as anyone, has written about how networking doesn’t and does need a VMware, but that’s a bit of a different story than address or topology virtualization.

One of the core benefits that’s being heralded about OpenFlow/SDN is that it can reduce the amount of code running on switches/routers which is good for security, cost, performance and everything else. It came up in Urs Hoelzle’s keynote about OpenFlow at Google at the Open Networking Summit and it’s featured heavily in other coverage including this recent webinar which discusses how OpenFlow/SDN changes forwarding.

The same arguments came up when desktop/server virtualization first started. Hypervisors were going to be small things which you could actually reason about, and they wouldn’t have nearly as many bugs or vulnerabilities as regular OSes. Finally, we were going to get the microkernels we always wanted.

That’s not how it ended up though. Today, Xen and VMware ESX are both in the realm of 400,000 lines of code and Microsoft’s Hyper-V isn’t far behind. (The numbers are borrowed from the NOVA paper in EuroSys 2010 and I’ve stuck the relevant figure here.)

While a few hundred thousand lines of code is a far cry from the ~15 million that are in the Linux kernel today and the over 50 million lines of code that are theoretically in Windows, it’s probably not where people arguing that hypervisors were the final coming of microkernels wanted them to be. With new features being released all of the time, the evidence is that hypervisors will follow the same course as OSes and have nearly unbounded growth in code size, perhaps with a delayed start.

Similarly, the first take at implementing pure OpenFlow/SDN switches may well reduce the amount of code running on switches as functionality moves to controllers. However, this isn’t quite what it looks like for two reasons. First, unlike a hypervisor, which provides a clean layer to protect hardware resources from the OSes that run on top of it, OpenFlow has no such advantage and instead exposes its resources completely to a controller, with no protection against a misbehaving controller.

Second, it seems likely to me that functionality is going to drift back to switches, assuming it ever really leaves them. My money says people will realize that some functionality makes sense to push to the controller, perhaps even most functionality, but there will always be things which are cheaper, faster and better to implement in the switches themselves. Latency-sensitive tasks that can’t wait to hear from a controller, like quickly routing around failures, come to mind.
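To make that split concrete, here’s a toy sketch (plain Python with invented names, not the actual OpenFlow protocol or any real controller API) of a switch that punts unknown flows to a controller but keeps a locally installed backup rule, so that failing over to a backup port never waits on a controller round trip:

```python
# Hypothetical sketch of the switch/controller split discussed above.
# All class and field names are illustrative, not real OpenFlow structures.

class Controller:
    def __init__(self, routes):
        self.routes = routes            # dst -> (primary port, backup port)

    def route(self, dst):
        return self.routes[dst]


class Switch:
    def __init__(self, controller):
        self.controller = controller
        self.flow_table = {}            # dst -> primary output port
        self.backup_port = {}           # dst -> locally installed backup port
        self.port_up = {}               # port -> link status

    def install(self, dst, primary, backup):
        """Controller installs a flow along with a local failover rule."""
        self.flow_table[dst] = primary
        self.backup_port[dst] = backup
        self.port_up.setdefault(primary, True)
        self.port_up.setdefault(backup, True)

    def forward(self, dst):
        if dst not in self.flow_table:
            # Slow path: ask the controller (costs a full round trip).
            primary, backup = self.controller.route(dst)
            self.install(dst, primary, backup)
        port = self.flow_table[dst]
        if not self.port_up.get(port, False):
            # Fast path: fail over locally, no controller involved.
            port = self.backup_port[dst]
        return port


ctrl = Controller({"10.0.0.2": (1, 2)})
sw = Switch(ctrl)
print(sw.forward("10.0.0.2"))   # 1: learned from the controller
sw.port_up[1] = False           # primary link dies
print(sw.forward("10.0.0.2"))   # 2: local failover, zero controller latency
```

The point of the sketch is the last two lines: routing policy can live at the controller, but the reaction to a dead link has to be pre-installed in the switch if it’s going to be fast.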

on Apple in the post-device world

Apple has had their Icarus moment and it’s not losing Steve Jobs—though that may also prove problematic. Their Icarus moment is the inability to actually deliver on the promises of iCloud. They are now, and have always been, a device company and they are about to enter the post-device world. Try as they might, they can’t seem to execute on a strategy that puts the device second to anything else.

Let’s step back for a minute and think about where technology is heading in the next 5-10 years. It hasn’t even been 5 years since the iPhone came out and effectively launched the smartphone and in the process started us down the path to the post-PC world. We’re pretty much there at this point, but it doesn’t end there.

The next logical step is the post-device world, where the actual device you use to access your data and apps is mostly irrelevant. It’s just a screen and input mechanism. Sure, things will have to be customized to fit screens of different sizes and input mechanisms will vary, but basically all devices will be thin clients. They’ll reach out to touch (and maybe cache) your data in the cloud, and any heavy computational lifting will be done somewhere else (as is already done with voice-to-text today).

The device you use will more or less not matter. As long as it has a halfway-decent display, a not-shit keyboard, some cheap flash storage to cache some data, the barest minimum of a CPU and a wireless NIC, you’re good.

This world is not Apple’s forté. Not only does nearly all of their profit come from exactly the devices that will not matter, but they’re not very good at seamless syncing between devices either. It took them until iOS 5 to provide native syncing of contacts, calendars and the like directly to and from the cloud, after Android, Palm’s webOS and even comically-late-to-the-party Windows Phone had already implemented it.

Moreover, this is not the first time Apple has tried to provide some kind of cloud service. They started with iTools in 2000, then .Mac in 2002, MobileMe in 2008, iWork.com in 2009 and now they’re on iCloud. None of the previous incarnations have been what anyone would call a resounding success. In at least one case, it was bad enough that Steve Jobs asked “So why the fuck doesn’t it do that?”

So, who will succeed in this post-device world? The obvious answer might be Google, since they’re already more or less there by having all of their apps be browser-based, but I’m not totally convinced. They seem to be struggling to provide uniform interfaces to their apps across devices, and that seems key here. For instance, the iconography of Gmail is different in my browser than it is on my Android tablet, and that’s on a platform they own.

Actually, in a perverse way, I think Microsoft might really have what it takes to succeed in this world if they can execute. They have a long history of managing to maintain similar interfaces and design languages across different platforms and devices. Though, their failure to provide a clean Metro-based interface in Windows 8 is a bit of a damper for their chances.

cory doctorow on the war on general computation

I’m not really a huge Cory Doctorow fanboy, but he does tend to say things well. Here, he talks a lot less about copyright/copyleft (though still some) and a lot more about how society is going to deal with, or not deal with, general-purpose computation as it becomes part of everything (even more than now).

Really, he just makes two points:

  1. General-purpose computation and general-purpose networking are going to be everywhere, and there will be huge (seemingly reasonable) demand to restrict them.
  2. All ways to restrict them converge: spyware and rootkits to restrict general-purpose computation, and surveillance and censorship to restrict general-purpose networking.

That being said, what he doesn’t do, and what I wish he would do, is articulate an argument for why somebody who would today want to do SOPA-like things (and there are lots of such people) should think twice about doing it in a SOPA-style way.

Still, it’s worth watching because it’s an entertaining and clean explanation of those two points.

on us cyber-defense

I was doing my usual reading through the news when I stumbled across an opinion piece by former Director of National Intelligence Mike McConnell about how we should be preparing the nation’s cyber-defense strategy.

The piece is mostly a fluff-filled call to arms: it says we are woefully behind, without giving any real evidence, and that what we really need is just the resolve to sit down and draw up some concrete plans and strategy for what it is that we’re going to do. I agree with most of that, but then I stumbled across this gem:

More specifically, we need to reengineer the Internet to make attribution, geolocation, intelligence analysis and impact assessment — who did it, from where, why and what was the result — more manageable.

This really perplexes me, because two paragraphs earlier he was talking about how Hillary Clinton was extolling the virtues of the Internet as a tool for free speech and democracy. Suddenly, when the U.S. needs to defend itself, we need exactly the tools that would make a repressive country best able to shut off the benefits of the Internet as a platform for expression.

It has just further convinced me that by keeping the current group of military and intelligence officials in charge of this, we will constantly be behind in the Internet age.

Update: (3/2/2010) Wired wrote a story commenting on the same article pulling out the exact same sentence from McConnell’s op-ed. Good to know that I’m not the only one catching these things. They point out that McConnell has been fear mongering about this stuff in order to get bigger U.S. intelligence access to the Internet for years.

on new communication paradigms

In the last 5 years, communication has gone from being massively dominated by e-mail, phones, IM and snail mail to having nearly double the number of ways to communicate.

You can quibble with me about when this stuff actually started changing, but certainly the last 5 years has seen a massive liftoff in at least four new areas of communication.

  • Social Networks (Facebook, LinkedIn, etc.)
  • Microblogging (Twitter, Facebook status updates, etc.)
  • Virtual Worlds (World of Warcraft, Second Life, etc.)
  • Video Chat (Skype, Google Talk, etc.)

Each of these seems—at least to me—to be exploring a new set of trade-offs in how we communicate with each other. It’s interesting to stare at them, squint and turn your head sideways to try and see how things are shaping up and what’s going to drop out of it all.

This is especially interesting given the recent demo of Google Wave, which claims to be the future of all communication. While it seems to combine, blur and facilitate a lot of the previously mentioned pieces, it’s not clear to me that this is the correct re-consolidation of communication, though it’s certainly a good first step.

back-of-device interfaces

I was poking around the other day and I found this coverage of CHI 2009. A bunch of my friends are HCI researchers and were there, so I’ll need to hit them up for more details, but the back-of-device interfaces are looking way cooler than they did the last time I saw them.

With the preponderance of touchscreen things happening these days, I can say that it’s really hard to hit any button that’s smaller than about a dime, which is a pretty big space given how small the devices are.

It really does seem to offer a better way to get precise control on touchscreen devices. I can’t tell you how often I can’t quite hit the right spot on my Palm Centro and it takes two or three silly stabs with my finger to press the link or button that I want.

I’m curious what the constraints on the device will be, though, when both the front and back surfaces matter for input and output. It essentially removes a degree of freedom from the design of the device by requiring that the back be touch sensitive and presumably flat in the area behind the actual screen.

contest for better OpenWrt web interface

http://www.ubnt.com/challenge/

They’re offering $200,000 in prizes for new Web UIs for OpenWrt to run on their new router. That’s really cool, but from my point of view the X-Wrt stuff is actually pretty damn good. What’s holding open router platforms back from being one of the cooler things around has nothing to do with the Web UI.

The real problem is two-fold. First, the best-supported hardware is ancient, barely capable of running anything useful, and can’t make use of any of the new, cool rapid-development languages, because they’re interpreted and their runtimes are bigger than will usefully fit in the memory on these devices.

Second (and related), development for these devices is a royal pain in the ass. Because of the devices’ limitations, you’re forced to use a compiled language, and you have to cross-compile it on your own machine. That isn’t so bad in itself, but it makes debugging and profiling really hard: you have to build the code, deploy it to the device, and then use whatever limited tools you can get working there to figure out whether things are working right. If they aren’t, you repeat, and it’s brutal.
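For a flavor of what one turn of that loop looks like, here’s a hypothetical little helper that spells out the commands involved. The toolchain prefix, device address and paths are all made up for illustration (real OpenWrt toolchains vary by target), and it only builds the command strings rather than running them:

```python
# Hypothetical sketch of the cross-compile/deploy/run loop described above.
# Toolchain prefix, device address, and paths are invented for illustration.

def build_commands(src_dir, target, device, binary):
    """Return the shell commands for one iteration of the edit/deploy loop."""
    cross_cc = f"{target}-gcc"   # assumed cross-compiler naming convention
    return [
        f"{cross_cc} -Os -o {binary} {src_dir}/main.c",  # compile on the host
        f"scp {binary} root@{device}:/tmp/{binary}",     # push to the router
        f"ssh root@{device} /tmp/{binary}",              # run it over there
    ]

for cmd in build_commands("src", "mips-openwrt-linux", "192.168.1.1", "app"):
    print(cmd)
```

Every iteration of debugging means re-running all three steps, which is exactly why the loop feels so brutal compared to editing and re-running an interpreted script locally.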

My little time spent working with that stuff gave me a huge newfound respect for embedded device developers. What we really need is a better toolchain to build programs for devices like this. Also, if people really think that devices like this are going to play a bigger role going forward, maybe we need to spend a few extra bucks and get them some real storage and make them a bit beefier.

classes of computers

A while back I was thinking about this whole cell phone, PDA, laptop, desktop thing with computers where we have stuff in different form factors which we use in different ways. At the time I was thinking about what size laptop to buy, but it got me thinking about how many devices we need and where they should fall on the size/power spectrum.

This led me to my current thinking, which is that there are loosely three classes of computers (at least for consumers), and that you are basically forced to compete with everything in your class. They are pocketable, backpackable, and stationary. That is to say, there’s little to be gained by shrinking a laptop once it fits comfortably in my backpack, unless you’re going to make it small enough to fit in my pocket.
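The argument boils down to a step function, which is easy to write out as a toy rule (the centimeter thresholds here are numbers I’ve invented purely for illustration):

```python
# Toy version of the three-class taxonomy argued above.
# The size thresholds are invented for illustration only.

def device_class(longest_dimension_cm):
    """Map a device's longest dimension to one of three consumer classes."""
    if longest_dimension_cm <= 15:      # fits in a pocket
        return "pocketable"
    elif longest_dimension_cm <= 40:    # fits in a backpack
        return "backpackable"
    else:
        return "stationary"

# Shrinking within a class buys you nothing: a netbook and a full-size
# laptop land in the same bucket and so compete head-to-head.
for name, size in [("phone", 12), ("netbook", 25), ("laptop", 38), ("desktop", 50)]:
    print(name, device_class(size))
```

The interesting predictions all come from the flat parts of the function: devices that land in the same bucket compete directly, no matter how different they look.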

So that says that my phone, my Palm, and my iPod (or Walkman/Discman if you go back) were all basically competing with each other. As a consequence, it would have been relatively easy to predict that these devices would converge into things like the Treo and iPhone, which combine their features.

At the next level, we have backpackable objects. These include laptops, most e-readers, and the new entry of netbooks. Interestingly, my logic says that these devices are competing with each other and will really have to converge in some way. That is, netbooks, despite being hot, don’t really offer a truly new form of computer, but simply a new (and possibly useful) trade-off in the power/battery-life/price space. It may be that they drag the price and performance of the average backpackable computer down and push its battery life up, but these aren’t revolutionary devices.

This jibes with my experience: most people I know buying netbooks are doing so either because they didn’t have a laptop before or because they had a laptop they didn’t like. I see a large number of people who had old, big, heavy Windows laptops getting netbooks, which is basically people trading an object that isn’t really backpackable for one that is. An unrelated but relevant fact is that Windows works terribly as an operating system for laptops because it handles sleep atrociously, meaning that for many people their netbook is their first non-Windows laptop, and thus the first that actually works like a laptop should, with fast sleep and fast resume.

The last category is stationary computers, which for consumers basically means desktops. There are other things, like media servers and the like, but for the most part we mean desktops. Here things are kind of interesting for a couple of reasons. First, desktops seem to actually be losing to laptops, which makes sense: while backpackable objects don’t necessarily have to compete with stationary ones, your stationary computer does have to justify its existence by providing some features which your backpackable computer doesn’t. In that sense, laptops have really been able to successfully compete with desktops for the last few years. Maybe netbooks are simply a realignment of the backpackable class and will, in fact, increase the difference between backpackable and stationary computers again.