more on the post-device world

A while ago I wrote that I thought Apple’s major failing is not realizing that we’re heading for a post-device world where the devices we use in the future will become a lot like the apps we use today. That is, with a few exceptions, more or less impulse buys rather than painstakingly selected profitable objects.

It seems as though some of this new world is happening faster than I would have thought. In a recent article at VentureBeat, a financial analyst who’s spent a bunch of time in China over the last 10+ years describes his shock at finding fully-capable 7″ Android tablets running Ice Cream Sandwich for sale for $45. They’re apparently called A-Pads.

The truth is that if your company sells hardware today, your business model is essentially over. No one can make money selling hardware when faced with the cold hard truth of a $45 computer that is already shipping in volume.

My contacts in the supply chain tell me they expect these devices to ship 20 million to 40 million units this year. Most of these designs are powered by a processor from a company that is not known outside China — All Winner. As a result, we have heard the tablets referred to as “A-Pads.”

When I show this tablet to people in the industry, they have universally shared my shock. And then they always ask, “Who made it?” My stock answer is “Who cares?” But the truth of it is that I do not know. There was no brand on the box or on the device. I have combed some of the internal documentation and cannot find an answer. This is how far the Shenzhen electronics complex has evolved. The hardware maker literally does not matter. Contract manufacturers can download a reference design from the chip maker and build to suit customer orders.

He goes on to draw the scary, but straightforward conclusion that:

No one can make money selling hardware anymore. The only way to make money with hardware is to sell something else and get consumers to pay for the whole device and experience.

So, companies like Apple can stick around if they can add enough extra value to command a higher price for their hardware. Apple in particular has an advantage because it has enough money to actually fund the creation of new fabs in exchange for getting the best hardware before everyone else, but it seems that will likely fade some too. He mentions that product cycles are getting shorter, so competitive advantages like Apple funding fabs are likely to last for shorter and shorter times.

As product cycles tighten (and we had quotes for 40-day turnaround times), the supplier with the right technology, available right now will benefit.

It seems to me like the right option is to admit that your hardware business is likely to be undercut in most areas and to instead focus on software and integration and move up the stack to where there’s still real value. That being said, this is exactly the kind of thing that the Innovator’s Dilemma says is nearly impossible for companies to do.

on what matters to the current generation

http://www.theatlantic.com/magazine/archive/2012/09/the-cheapest-generation/309060/

This whole recent article on “The Cheapest Generation” does a lot of talking about how young people’s buying habits have been changing. Specifically, they’re buying fewer cars and houses and even when they buy them, they’re going with smaller and cheaper ones. Some of that can be chalked up to the recent economic collapse, but the article argues that even the collapse doesn’t quite explain it all.

In any event, the article has one line which explains so much of the current generation in a single, short sentence that it’s stuck with me ever since I read it.

Young people prize “access over ownership,” said Sheryl Connelly, head of global consumer trends at Ford.

It explains the transition from car ownership to Zipcar. It explains the shift from buying music to subscription services like Pandora and Spotify. It also explains why people are mostly happy to jump to the cloud—where they don’t own their data, but do have access to it. It even explains a bunch of the mentality of Facebook—focusing on providing the cleanest, simplest way to give people access to their lives even as they give up control of things.

Apparently, Ford employs some smart people.

is the app store destroying software?

I’m sure a bunch of you remember the story a while back about how the app store is more of a casino than an actual business model. Now the recent acquisition of the Sparrow mail app by Google has people talking about how the app store works again.

It’s not good.

On the one hand, people are feeling betrayed because after they shelled out money for the app, the developers are walking away from the project because of the acquisition. At least one person is correctly pointing out that the real reason people are reacting so strongly is that the acquisition violated one of their assumptions about how apps work. They thought that by paying for the app they were somehow more in control of what they were getting.

As if this weren’t bad enough, other people noted that even though Sparrow has angered its users, it was also likely not making enough money to support its development efforts.

From our experience, a $2.99 app in the App Store needs to hover around #250 in the top paid list to sustain two people working full-time on the app.

This doesn’t work. The ecosystem needs to be able to sustain more than ~500 developers making apps full time. A lot more.
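That figure is just arithmetic on the quote above: only ~250 paid apps can sit at or above #250, and each such slot sustains about two people. A quick back-of-the-envelope check, where the constants come from the quote rather than any measurement of mine:

```python
# Rough math behind the "~500 developers" figure.
# From the quote: a $2.99 app around #250 on the top paid list
# sustains about two people working full-time.
SUSTAINABLE_RANKS = 250  # apps at or above ~#250 clear the bar
DEVS_PER_APP = 2         # full-time developers each such app supports

sustainable_devs = SUSTAINABLE_RANKS * DEVS_PER_APP
print(sustainable_devs)  # 500
```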

The worst part of all of this isn’t actually the fact that the app store is unsustainable—that’s fine, the app store can fail or people can raise prices. The real problem is that in the process of failing, the app store is redefining what people think software is worth. If we’ve permanently driven people’s valuation of software down from $30–50 to $1–5, that’s going to really hurt software development for some time to come.

are apps enough innovation?

For the last while I’ve had this unsettling feeling that while there are a lot of startups going around, most of them aren’t really innovating that much. Most notably, the idea that innovation can come from companies basically just trying to make apps that serve the smallest little quirk of what we want rings hollow to me.

If your core innovation is just an iPhone app, then it’s not clear that you’ve really had the impact you wanted to have. There are obvious exceptions like VizWiz, which a friend of mine (now a professor at the University of Rochester) wrote, and which allows blind people to take a picture, speak a question about the picture and get a response back within seconds, letting the vision impaired “see” via a proxy. But, in general, your new hyper-local, crowdsourced, social, location-based iPhone app startup is probably not going to change the world.

The always great Planet Money Podcast reminded me of this with their recent episode The Cool Kids Don’t Want To Go Public. They explored the fact that a lot of the new companies that people are starting aren’t really sustainable business models but rather an elaborate courtesan dance to seduce the princely giants of the tech world.

As Bill DeRouchey (@billder) put it:

Hammerbacher looked around Silicon Valley at companies like his own, Google, and Twitter, and saw his peers wasting their talents. “The best minds of my generation are thinking about how to make people click ads,” he says. “That sucks.”

The Planet Money Podcast also makes the more nuanced point that there are people who choose not to take their companies public because they don’t need the extra money to grow and they don’t want the extra oversight that’s required. Still, it seems like we could use more people trying to build companies that can last decades rather than companies looking for the early out.

on photo storage solutions

I am by no means a prolific photographer, but I do own a halfway decent entry-level SLR—an Olympus E-520 for those who care. This has resulted in my having about 100 GB of photos that I need to somehow keep stored in an accessible and reliable way. More worryingly, about 45–50 GB of the photos are from last year.

So, in reality, storing that amount of data is trivial. A 3 TB hard drive can be had for $150 on Amazon right now and would easily store all of my photos for the foreseeable future. However, the SSD in my MacBook Air is only a 256 GB drive, and between music, the videos I want to watch over the next bit, virtual machines and general stuff, I don’t have 100 GB free for all of my photos.

the easy (almost-)solution

Fine, so I need to get something to store my photos on that isn’t my laptop. The easiest thing to do is to get a big external hard drive and store all the photos there. I could even get two and store a second copy. At least in theory.

In practice, there are at least 2 problems. First, when I put all my photos on an external hard drive, they’re suddenly inaccessible to me except when I’m at home with them plugged into my laptop.

Second, I need some way to make sure the data is replicated on both drives. I can’t simply set them up in a RAID mirror since then anything I accidentally delete will be deleted on both drives. Since the drives themselves are dumb, they can’t run a cron job to back themselves up either, meaning that I have to manually make sure they’re both in sync.
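With even one smart machine in the loop, the sync itself is easy; here's a sketch using rsync, where the mount points are hypothetical and I deliberately skip `--delete` so that an accidental deletion on the primary drive doesn't propagate to the backup the way a RAID mirror would:

```shell
# One-way sync from a primary photo drive to a backup drive.
# No --delete flag: files removed from the primary survive on the backup.
sync_photos() {
    rsync --archive --itemize-changes "$1"/ "$2"/
}

# Usage (hypothetical mount points):
# sync_photos /Volumes/photos-a /Volumes/photos-b
```

Pointing a cron entry at that function is all the "intelligence" the two-drive setup is missing.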

This is actually more or less what I do right now, but it sucks. In actuality, one of the two hard drives sits in a Windows box that sits under my TV, so I can kind of get at my photos by remote desktoping back into it, but it’s not what anyone would call elegant.

working out a real solution

Fine, both of the previous problems share a common cause: dumb drives aren’t enough. You need some intelligence associated with your data, both to give you access to it even though it’s not all stored locally on your computer and to maintain replicas for you.

Ideally, it should also meet the following 3 goals:

  1. Keep older versions of files and folders so that accidental deletions don’t actually destroy any data.
  2. Be easily accessible over the network—fast while you’re on the same LAN and at least functional over the Internet.
  3. Allow for some data to be stored off-site either on a remote machine or in the cloud somewhere.

With goal 3 above, you might be thinking that a cloud solution might solve all of these problems. Just store it all in Dropbox, SkyDrive or Google Drive and be done with it.

While this seems like a great solution at first, it doesn’t actually work out so well in practice. The problem isn’t actually the cost; in fact, SkyDrive will sell you 100 GB for $50/year, which is more than reasonable.

The biggest problem is that you basically have to be willing to store all or nothing of the collection on any given computer. In other words, I can’t easily access my photos from my laptop, where I don’t have enough space to store a complete copy. The web interfaces that might save you aren’t quite good enough to actually use to get at your files except in a pinch.

This does kind of point out what I think the real solution is though. A virtual folder—or file system, however you want to look at it—that gives you the logical view that it contains all the 100s of gigabytes of your files, but in practice only stores a small cache of ones you’ve recently used. Then, if you try to load a file that isn’t cached it goes out and finds a copy to fetch for you whether it’s from your home PC or the cloud.

In theory, this shouldn’t be too hard to implement using a FUSE filesystem—though that rules out Windows for now—where when the user accesses a cached file it works just the same as opening any other file and when they access an uncached file, it blocks until the file can be retrieved. This could even be made better for photos in particular since OSes now store metadata like thumbnails along with photos and you could conceivably cache all of that allowing for easy browsing.

Any file dropped into the virtual folder could be sent to any number of replicas in the cloud or elsewhere and any changes to a file could be made to a new version while the old version was carefully squirreled away.

Basically, I get a magic folder on my laptop which takes up only a small fraction of the space of all my photos—my bet is that 10–15 GB would be more than enough. But the magic folder does exactly what I want. When I put stuff in it, it backs it up for me. When I look at it I see all of my files—albeit maybe a bit slower than if they were actually local.

I’m honestly a little surprised that Dropbox hasn’t already done this.

on Google (not) being evil

http://www.wired.com/wiredenterprise/2012/06/google-coordinate/

On Thursday, Google uncloaked a new service dubbed Google Maps Coordinate that lets businesses track the activities of remote workers — such as traveling sales staff and field technicians — by tapping into GPS devices on their cell phones. For instance, says Google, a cable TV company could follow the progress of their field techs as they move from home to home repairing cable connections.

Really? Is this the space that Google needs to be innovating in most? Also how does this fit into their “don’t be evil” mantra?

VMware’s networking CTO on OpenFlow

These people interviewed Allwyn Sequeira, CTO and vice president of cloud, networking and security for VMware, about OpenFlow. Honestly, the technical content is pretty weak, but my favorite part is all the snark.

If you have 200 Stanford Ph.D.s and you own your own fiber and can build your own boxes in your backyard and you own your own traffic and have proprietary applications, then [Openflow] is for you [now].

Ouch!

Then, when asked if he thought of Nicira and Big Switch as possible competitors, he had this to say:

The real question is: Are Nicira and Big Switch a feature in VMware?

Beautiful. This guy is great.

on openflow and minimal software on routers

Some recent coverage of OpenFlow and software-defined networks (SDN) in general has gotten me thinking about the relationship between it and server/desktop virtualization like Xen, VMware and the like. I’m by far not the first person to think about this. Martin Casado, as close to the father of OpenFlow as anyone, has written about how networking doesn’t and does need a VMware, but this is a bit of a different story than address or topology virtualization.

One of the core benefits that’s being heralded about OpenFlow/SDN is that it can reduce the amount of code running on switches/routers which is good for security, cost, performance and everything else. It came up in Urs Hoelzle’s keynote about OpenFlow at Google at the Open Networking Summit and it’s featured heavily in other coverage including this recent webinar which discusses how OpenFlow/SDN changes forwarding.

The same arguments came up when desktop/server virtualization first started. Hypervisors were going to be small things that you could actually reason about, and they wouldn’t have nearly as many bugs or vulnerabilities as regular OSes. Finally, we were going to get the microkernels we always wanted.

That’s not how it ended up though. Today, Xen and VMware ESX are both in the realm of 400,000 lines of code and Microsoft’s Hyper-V isn’t far behind. (The numbers are borrowed from the NOVA paper in EuroSys 2010 and I’ve stuck the relevant figure here.)

While a few hundred thousand lines of code is a far cry from the ~15 million that are in the Linux kernel today and the over 50 million lines of code that are theoretically in Windows, it’s probably not where people arguing that hypervisors were the final coming of microkernels wanted them to be. With new features being released all of the time, the evidence is that hypervisors will follow the same course as OSes and see nearly unbounded growth in code size, perhaps with a delayed start.

Similarly, I think the first take at implementing pure OpenFlow/SDN switches may result in a reduced amount of code running on switches as functionality moves to controllers. However, this isn’t quite what it looks like, for two reasons. First, unlike a hypervisor, which provides a clean layer to protect hardware resources from the OSes that run on top of it, OpenFlow has no such advantage and instead exposes its resources completely to a controller, with no security against a misbehaving controller.

Second, it seems likely to me that functionality is going to drift back to switches, assuming it ever really leaves them. My money is on people realizing that some functionality makes sense to push to the controller—perhaps even most functionality—but there will always be things that are cheaper, faster and better to implement in the switches themselves. Latency-sensitive tasks that can’t wait to hear from a controller, like quickly routing around failures, come to mind.

on ‘eco-anarchist’ terrorists

http://www.nature.com/news/anarchists-attack-science-1.10729

This seems like it belongs in a Neal Stephenson book and not on Nature’s website, but here we are. Apparently there’s this thing called eco-anarchism or green anarchism, and groups of people identifying with the movement have been claiming responsibility for attacks against scientists.

A group calling itself the Olga Cell of the Informal Anarchist Federation International Revolutionary Front has claimed responsibility for the non-fatal shooting of a nuclear-engineering executive on 7 May in Genoa, Italy. The same group sent a letter bomb to a Swiss pro-nuclear lobby group in 2011; attempted to bomb IBM’s nanotechnology laboratory in Switzerland in 2010; and has ties with a group responsible for at least four bomb attacks on nanotechnology facilities in Mexico. Security authorities say that such eco-anarchist groups are forging stronger links.

Last year, after an attack by a similar group in Mexico, the brother of one of the physicists who was attacked wrote a column in Nature describing his reaction.

My elder brother, Armando Herrera Corral, was this month sent a tube of dynamite by terrorists who oppose his scientific research. The home-made bomb, which was in a shoe-box-sized package labelled as an award for his personal attention, exploded when he pulled at the adhesive tape wrapped around it. My brother, director of the technology park at the Monterrey Institute of Technology in Mexico, was standing at the time, and suffered burns to his legs and a perforated eardrum. More severely injured by the blast was his friend and colleague Alejandro Aceves López, whom my brother had gone to see in his office to share a cup of coffee and open the award. Aceves López was sitting down when my brother opened the package; he took the brunt of the explosion in his chest, and shrapnel pierced one of his lungs.

Both scientists are now recovering from their injuries, but they were extremely fortunate to survive. The bomb failed to go off properly, and only a fraction of the 20-centimetre-long cylinder of dynamite ignited. The police estimate that the package contained enough explosive to take down part of the building, had it worked as intended.

The rhetoric of these organizations is almost stranger than fiction, but I guess I can see how people, especially impressionable young people, could buy into it. It’s certainly no stranger than that of other organizations that attract people on college campuses.

An extremist anarchist group known as Individuals Tending to Savagery (ITS) has claimed responsibility for the attack on my brother. This is confirmed by a partially burned note found by the authorities at the bomb site, signed by the ITS and with a message along the lines of: “If this does not get to the newspapers we will produce more explosions. Wounding or killing teachers and students does not matter to us.”

In statements posted on the Internet, the ITS expresses particular hostility towards nano­technology and computer scientists. It claims that nanotechnology will lead to the downfall of mankind, and predicts that the world will become dominated by self-aware artificial-intelligence technology. Scientists who work to advance such technology, it says, are seeking to advance control over people by ‘the system’. The group praises Theodore Kaczynski, the Unabomber, whose anti-technology crusade in the United States in 1978–95 killed three people and injured many others.

more on cyberwar and cyberespionage

It seems like state-sponsored hacking is hitting Iran again with the discovery of the Flame virus which seems to be screen scraping and key-logging infected machines dominantly in Iran. The logical culprit is Israel, but nothing’s been confirmed and any allegations about Israel coming from Iran should be taken with more than a grain of salt.

Interestingly, it seems like Iran is being pretty open about the compromise and its potential effects. They eventually did similar things with Stuxnet too. I’m a bit curious as to what they have to gain from this, especially as a country that is not renowned for its honesty on the international stage.

Iran’s Computer Emergency Response Team Coordination Center warned that the virus was dangerous. An expert at the organization said in a telephone interview that it was potentially more harmful than the 2010 Stuxnet virus, which destroyed several centrifuges used for Iran’s nuclear enrichment program. In contrast to Stuxnet, the newly identified virus is designed not to do damage but to collect information secretly from a wide variety of sources.

In and of itself, this isn’t very interesting and I suspect that this kind of thing is happening nearly constantly between most major nations in the world, but it’s interesting to me because of a recent CACM article on “Why Computer Scientists Should Care About Cyber Conflict and U.S. National Security Policy.”

It actually does a pretty good job of laying out rational issues in cyberwar and whether or not it is something that we should be paying attention to. It points out that so far most things which might be misconstrued as cyberwar are actually just cyber-espionage which in and of itself is not generally considered to be an act of war—cyber or otherwise.

Most of what is discussed in the popular media as “cyber attacks” is really espionage of various kinds. What is “lost” is information: technical documents, political memos, credit card numbers, Social Security numbers, money in bank accounts, business plans, and so on. As most computer scientists know, these are breaches of confidentiality—the legitimate owner still has the information, but someone else has it as well, someone who should not have it and who might be able to use it against the legitimate owner.

These acts are undeniably unfriendly—but do they amount to “acts of war”? Espionage is not traditionally regarded as a violation of international law—primarily because all nations do it. They do violate domestic law, which is why such acts are (properly) regarded as criminal acts—appropriate for investigation and prosecution by law-enforcement authorities.

However, with the proof of concept in Stuxnet, we can envision a potential cyber attack which could cost lives and perhaps be considered an act of war.

A number of examples of actual cyber attacks—actions taken to destroy, disrupt, or degrade computers—are known publicly. It is alleged that in 1984, the U.S. modified software that was subsequently obtained by the Soviet Union in its efforts to obtain U.S. technology. Ostensibly designed to operate oil and gas pipelines, the Soviets used this software to operate a natural gas pipeline in Siberia. After a period in which all appeared normal, the software allegedly caused the machinery it controlled to operate outside its safety margins, at which point a large explosion occurred.6 And, in 2010, the Stuxnet worm disrupted industrial control systems in the Iranian infrastructure for enriching uranium, apparently destroying centrifuges by ordering them to operate at unsafe speeds.3

In cases like these, such actions could in fact be acts of war. What does that mean, though, and how should a nation—and the U.S. in particular—respond? That turns out to be complex, and the article brings up a bunch of issues. The main one logically seems to be attribution: cyber attacks are notoriously difficult to attribute to a given entity, especially in the time frame during which retaliation might make sense.

They conclude with a set of things which computer scientists in the U.S. should think about and possibly offer advice about when it comes to issues of cyber attacks and our policy about responding to them.

  • Attack assessment. Knowing that a nation or even a particular facility is under serious cyber attack is highly problematic given the background noise of ongoing cyber attacks occurring all the time. What information would have to be collected, from what sources should that information be collected, and how should it be integrated to make such a determination?
  • Geolocation of computers. Given that computers are physical objects, every computer is in some physical location. Knowledge of that location may be important in assessing the political impact of any given cyber attack.
  • Techniques for limiting the scope of a cyber attack. Associated with any bomb is a lethal radius outside of which a given type of target is likely to be unharmed—knowledge of a bomb’s lethal radius helps military planners minimize collateral damage. What, if any, is the cyber analog of “lethal radius” for cyber weapons?
  • How could a penetration of an adversary’s computer system be conducted so that the adversary knows the penetration is an exploitation rather than an attack?
  • Given a continuing and noisy background of criminal and hacker cyber attacks, how would two nations that agreed to a “cyber cease-fire” know the other side was abiding by the terms of the agreement?
  • How might catalytic cyber conflict between two nations be avoided? (Catalytic conflict refers to conflict between two parties initiated by a third party, perhaps by impersonating one of the two parties.)
  • How can small conflicts in cyberspace between political/military adversaries be kept from growing into larger ones?

These are all good questions, ranging from very ambitious goals that would take a lot of deep thought to break into workable pieces down to simpler things that researchers could work on today.