Monday, November 21, 2011

Pushing the OpenFlow discussion forward.

Just listened to - and loved! - the recent Packet Pushers discussion on OpenFlow. Ethan and Greg with guests Ivan and Derrick put together a great episode with very interesting insights that made time fly for me as I listened.

I think some considerations made on the show deserve some comments:

  • Architecturally, Ivan made the point that we should have a stable IP core and a more 'flexible' access/edge layer, which is where the OpenFlow goodness comes in. YES! That makes perfect sense: the challenges we're looking at addressing - multi-tenancy, virtualized topologies, compliance compartmentalization, ... - are much more focused on the access layer. Of course there is a need to define how the core addresses these topics, but solving them at the access layer is paramount.
  • Greg espouses a long-term view of OpenFlow that I think is also correct. Building out a whole new network now just to accommodate OpenFlow is not practical, but the effect of OF on network design is likely to be felt within a few years.
  • There was some discussion on whether some features (LLDP/CDP, LACP, ...) will have to be implemented locally and how that might run contrary to the unstated goal of commoditizing the access switches. Frankly, I don't think this is an issue. Just like 'commodity switches' have slowly incorporated functionality over the years (think VLANs, QoS, PoE, ... which can now all be had on consumer-level gear), I think these 'essential' functions will also find their way into the more commoditized product lines over time.
  • The scaling limitations are ever present. That being said, I wonder whether the current limitation (as described on the show) of 600-1000 new flows/sec shouldn't be looked at not as a cap on new flow setups, but as a cap on EXCEPTIONS to pre-existing flow setups. Think about this scenario:
      a) The OpenFlow controller is 'aware' and instantiates expected flows on each access switch port as part of a maintenance window: "IP traffic from source A coming in on port X going to destination B is allowed and goes out port Y" and so on.
      b) Then, only when *exceptions* to that occur ('bad hacker takes over source A and attacks server C instead') does the controller need to get involved - and it can then do all the kung-fu magic of redirecting traffic to a network forensics device or something...
So yes, there are scaling limitations, but the power of pre-programming flows might be tapped to alleviate a lot of that. If this is correct, then the issues to consider are really how many flow rules a switch can realistically host in its policy rule table, how many concurrent flows a switch can host overall (not unlike a firewall), and what happens in the failure scenarios where the policy table or the concurrent flow count is overloaded. Again, just like a firewall...
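To make the pre-programming idea concrete, here is a minimal sketch of the scenario above - pre-installed flows handled locally, exceptions punted to the controller. Everything here (the dict-based flow table, the packet fields, the `CONTROLLER` action) is invented for illustration; it is not a real OpenFlow controller API.

```python
# Hypothetical sketch: a pre-programmed flow table where only
# exceptions ever reach the controller.

# Rules pre-installed e.g. during a maintenance window:
# (in_port, src, dst) -> out_port
flow_table = {
    ("X", "A", "B"): "Y",   # "traffic from A on port X to B goes out Y"
}

controller_punts = []  # packets the switch had to hand to the controller

def forward(packet):
    """Switch fast path: match pre-programmed flows, punt exceptions."""
    key = (packet["in_port"], packet["src"], packet["dst"])
    if key in flow_table:
        # Handled locally - generates zero controller load.
        return flow_table[key]
    # Exception: the controller decides, e.g. redirect to a forensics device.
    controller_punts.append(packet)
    return "CONTROLLER"

# Expected traffic never touches the controller...
forward({"in_port": "X", "src": "A", "dst": "B"})
# ...but the 'hacker attacks server C instead' exception does.
forward({"in_port": "X", "src": "A", "dst": "C"})
```

In this model the 600-1000 flows/sec ceiling only constrains the exception rate, while the hard limit that matters day-to-day is the size of `flow_table` - exactly the firewall-like trade-off described above.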

Finally, in my opinion, the current thinking on the applicability of OpenFlow to enterprise networks - specifically as it relates to the functions of the OpenFlow controller - can be summarized as:

Guess where the OpenFlow controller sits...

(As a side note, I remember a cartoon with the same message from many years ago but with a flowchart instead, but couldn't find it online. If anyone can point me to it, greatly appreciated!)

Great, great show guys! Thanks!

1 comment:

  1. On that flow exception idea, that's pretty much what Rob Sherwood commented in a post I made about OpenFlow a while ago. Most flow table entries are going to be wildcards at some level or another, sort of like how we favor route summarization and keeping our routing tables small. In Rob's vision, highly granular flow matching (Rob called them "microflows") isn't likely to be a default OF topology, except for specific applications that call for it. So I think your point on that is spot-on. A well-constructed flow table is going to substantially mitigate the scalability issue that comes up due to punting.
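The wildcard point in the comment can be sketched the same way - one wildcard entry standing in for thousands of exact-match "microflows", much like a summary route. The rule format and addresses below are invented for illustration, and glob matching via `fnmatch` stands in for real OpenFlow wildcard match fields.

```python
# Hypothetical sketch: one wildcard rule covers an entire /16 of
# source addresses headed to a server, so the table holds 1 entry
# instead of 65536 microflow entries.
from fnmatch import fnmatch

# (src pattern, dst pattern, action)
wildcard_rules = [("10.1.*", "10.9.0.5", "port2")]

def match(src, dst):
    """Return the action of the first matching wildcard rule, or None."""
    for rule_src, rule_dst, action in wildcard_rules:
        if fnmatch(src, rule_src) and fnmatch(dst, rule_dst):
            return action
    return None  # table miss -> would be punted to the controller

# 65536 distinct microflows, all covered by the single table entry:
hits = sum(match(f"10.1.{i // 256}.{i % 256}", "10.9.0.5") == "port2"
           for i in range(65536))
```

Only traffic outside the summarized range (say, a source in 10.2.0.0/16) misses the table and generates controller load - which is the punting-mitigation argument in a nutshell.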