Computer networks have historically evolved box by box, with individual network elements occupying specific ecological niches as routers, switches, load balancers, network address translators (NATs), or firewalls. Software-defined networking proposes to overturn that ecology, turning the network as a whole into a platform and the individual network elements into programmable entities. The apps running on the network platform can optimize traffic flows to take the shortest path, just as the current distributed protocols do, but they can also optimize the network to maximize link utilization, create different reachability domains for different users, or make device mobility seamless.
OpenFlow, an open standard that enables software-defined networking in IP networks, is a new technology that opens the door to many new applications and ways of managing networks.
Here are three real, though somewhat fictionalized, applications:
Example 1: Bandwidth Management. A typical wide area network runs at 30% to 60% utilization; it must "reserve" bandwidth for "burst" times. Using OpenFlow, however, a system was developed in which internal application systems (consumers) that need bulk data transfer can use the spare bandwidth. Typical uses include daily replication of datasets, database backups, and bulk transmission of logs. Consumers register the source, destination, and quantity of data to be transferred with a central service. The service does various calculations and sends the results to the routers so they know how to forward this bulk data when links are otherwise idle. Communication between the applications and the central service is bidirectional: applications inform the service of their needs, the service replies when bandwidth is available, and each application informs the service when it is done. Meanwhile, the routers feed real-time usage information back to the central service.
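The request-grant cycle described above can be sketched in a few lines. This is a hypothetical illustration, not Google's system: the class and method names (`BandwidthBroker`, `register_transfer`, `grant_next`) are invented, and real deployments would track capacity per link, not one aggregate number.

```python
# Hypothetical sketch of the central bandwidth-broker service described
# above. Names are illustrative; a real broker tracks per-link capacity.

class BandwidthBroker:
    """Grants bulk-transfer requests only when spare capacity exists."""

    def __init__(self, link_capacity_mbps):
        self.capacity = link_capacity_mbps
        self.in_use = 0.0     # updated from real-time router feedback
        self.pending = []     # registered but not-yet-granted transfers

    def update_usage(self, mbps):
        """Routers feed real-time usage information to the service."""
        self.in_use = mbps

    def register_transfer(self, src, dst, mbps_needed):
        """A consumer registers source, destination, and bandwidth needed."""
        self.pending.append((src, dst, mbps_needed))

    def grant_next(self):
        """Reply to a consumer when spare bandwidth is available."""
        spare = self.capacity - self.in_use
        for req in list(self.pending):
            if req[2] <= spare:
                self.pending.remove(req)
                return req    # in practice: also push rules to the routers
        return None           # no pending request fits right now
```

A grant would be accompanied by the service programming the routers with forwarding rules for that flow; when the consumer reports completion, the rules are withdrawn and the bandwidth returns to the pool.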
As a result, the network runs at 90% to 95% utilization. This is unheard of in the industry. The CIO is excited, but the CFO is the one who is really impressed. The competition is paying almost 50% more in capital expenditure (CAPEX) and operating expenditure (OPEX) for the same network capacity.
This example is based on what Google is doing today, as described by Urs Hoelzle, a Google Fellow and senior vice president of technical infrastructure, in his keynote address at the 2012 Open Networking Summit.1
Example 2: Tenantized Networking. A large virtual-machine hosting company has a network management problem. Its service is "tenantized," meaning each customer is isolated from the others like tenants in a building. As a virtual machine migrates from one physical host to another, the virtual LAN (VLAN) segments must be reconfigured. Inside the physical machine is a software-emulated network that connects the virtual machines. There must be careful coordination between the physical VLANs and the software-emulated world. The problem is that connections can be misconfigured by human error, and the boundaries between tenants are... tenuous.
OpenFlow allows new features to be added to the VM management system so that it communicates with the network infrastructure as virtual and physical machines are added and changed. This application ensures each tenant is isolated from the others no matter how the VMs migrate or where the physical machines are located. This separation is programmed end-to-end and verifiable.
This example is based on presentations such as the one delivered at the 2012 Open Networking Summit by Geng Lin, CTO, Networking Business, Dell Inc.3
Example 3: Game Server. Some graduate students set up a Quake server as part of a closed competition. Latency to this server matters not only to gameplay, but also to fairness. Because these are poor graduate students, the server runs on a spare laptop. During the competition someone picks up the laptop and unplugs it from the wired Ethernet. It joins the Wi-Fi as expected. The person then walks with the laptop all the way to the cafeteria and returns an hour later. Along the way the laptop changes IP address four times, and bandwidth varies from 100Mbps on wired Ethernet, to 11Mbps on an 802.11b connection in the computer science building, to 54Mbps on the 802.11g connection in the cafeteria. Nevertheless, the game competition continues without interruption.
Using OpenFlow, the graduate students wrote an application so that no matter what subnetwork the laptop is moved to, the IP address will not change. Also, in the event of network congestion, traffic to and from the game server will receive higher priority: the routers will try to drop other traffic before the game-related traffic, thus ensuring smooth gameplay for everyone. The software on the laptop is unmodified; all the "smarts" are in the network. The competition is a big success.
This example is fictional but loosely based on the recipients of the Best Demo award at SIGCOMM (Special Interest Group on Data Communications) 2008.4 More-serious examples of wireless uses of OpenFlow were detailed by Sachin Katti, assistant professor of electrical engineering and computer science at Stanford University, in a presentation at the 2012 Open Networking Summit.2
OpenFlow is either the most innovative new idea in networking, or it is a radical step backward: a devolution. It is a new idea because it changes the fundamental way we think about network-routing architecture and paves the way for new opportunities and applications. It is a devolution because it is a radical simplification, the kind that results in previously unheard-of possibilities.
A comparison can be made to the history of smartphones. Before the iPhone, there was no app store where nearly anyone could publish an app. If you had a phone that could run applications at all, then each app was individually negotiated with each carrier. Vetting was intense. What if an app were to send the wrong bit out the antenna and take down the entire phone system? (Why do we have a phone system where one wrong bit could take it down?) The process was expensive, slow, and highly political.
Some fictional (but not that fictional) examples:
Network equipment vendors do not currently permit apps. An app that fulfills a niche market is a distraction from the vendor's roadmap. It could ruin the product's image. Routers are too critical. Router protocols are difficult to design, implementing them requires highly specialized programming skills, and verification is expensive. These excuses sound all too familiar.
To understand why writing routing protocols is so difficult requires understanding how routing works in today's networks. Networks are made of endpoints (your PC and the server it is talking to) and the intermediate devices that connect them. Between your PC and www.acm.org there may be 20 to 100 routers (technically, routers and switches but for the sake of simplicity all packet-passing devices are called routers in this article). Each router has many network-interface jacks, or ports. A packet arrives at one port, the router examines it to determine where it is trying to go, and sends it out the port that will bring it one hop closer to its destination.
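The per-hop decision each router makes, examine the destination and pick the outgoing port, is essentially a longest-prefix-match lookup. The following sketch illustrates the idea with an invented forwarding table; the port labels and prefixes are not from any real router.

```python
# Minimal sketch of a router's forwarding decision: find every table
# entry covering the destination, then prefer the most specific prefix.
# Table contents are invented for illustration.
import ipaddress

FORWARDING_TABLE = [
    (ipaddress.ip_network("10.0.0.0/8"), "port 3"),
    (ipaddress.ip_network("10.1.0.0/16"), "port 7"),  # more specific
    (ipaddress.ip_network("0.0.0.0/0"), "port 1"),    # default route
]

def next_hop_port(dst):
    """Return the port that brings a packet one hop closer to dst."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, port) for net, port in FORWARDING_TABLE
               if addr in net]
    # Longest (most specific) matching prefix wins.
    return max(matches, key=lambda m: m[0].prefixlen)[1]
```

For example, a packet for 10.1.2.3 matches all three entries, but the /16 is the most specific, so it leaves on port 7. The hard part of routing is not this lookup; it is building the table in the first place.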
When your PC sends a packet, it does not know how to get to the destination. It simply marks the packet with the desired destination, gives it to the next hop, and trusts that this router, and all the following routers, will eventually get the packet to the destination. The fact that this happens trillions of times a second around the world and nearly every packet reaches its destination is awe-inspiring.
No "Internet map" exists for planning the route a packet must take to reach its destination. Your PC does not know ahead of time the entire path to the destination. Neither does the first router, nor the next. Every hop trusts that the next hop will be one step closer and eventually the packet will reach the destination.
How does each router know what the next hop should be? Every router works independently to figure it out for itself. The routers cooperate in an algorithm that works like this: each one periodically tells its neighbors which networks it connects to, and each neighbor accumulates this information and uses it to deduce the design of the entire network. If a router could think out loud, this is what you might hear: "My neighbor on port 37 told me she is 10 hops away from network 192.0.2.0, but the neighbor on port 20 told me he is four hops away. Aha! I'll make a note that if I receive a packet for network 192.0.2.0, I should send it out port 20. Four hops is better than 10!"
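The router's monologue above is the heart of a distance-vector computation: for each destination network, keep the neighbor advertising the lowest hop count. A minimal sketch, with invented port numbers and the documentation prefix 192.0.2.0/24 standing in for a real network:

```python
# Sketch of the distance-vector reasoning described above: among all
# neighbor advertisements, keep the cheapest route per destination.

def best_routes(advertisements):
    """advertisements: (port, network, hops) tuples heard from neighbors.
    Returns {network: (port, total_hops)} with the lowest hop count."""
    table = {}
    for port, network, hops in advertisements:
        cost = hops + 1   # one hop to the neighbor, plus its distance
        if network not in table or cost < table[network][1]:
            table[network] = (port, cost)
    return table
```

Run on the monologue's inputs, `best_routes([(37, "192.0.2.0/24", 10), (20, "192.0.2.0/24", 4)])` keeps port 20: four hops beats ten. Real protocols repeat this continuously as advertisements arrive and links fail.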
Although they share topological information, routers do the route calculations independently. Even if the network topology means two nearby routers will calculate similar results, they do not share the results of the overlapping calculations. Since every CPU cycle uses a certain amount of power, this duplication of effort is not energy efficient.
Fortunately, routers need to be concerned with only their particular organization's network, not the entire Internet. Generally, a router stores the complete route table for only its part of the Internet: the company, university, or ISP it belongs to (its autonomous domain). Packets destined for outside that domain are sent to gateway routers, which interconnect with other organizations. A second algorithm tells the interior routers where the gateway routers are. The combination of these two algorithms requires far less RAM and CPU horsepower than if every router had to store an inventory of the entire Internet; without this optimization, the cost would be staggering.
The routing algorithms are complicated by the fact that routers are given only clues and must infer what they need to know. For example, the conclusion that "port 20 is the best place to send packets destined for 192.0.2.0" is inferred from clues given by other routers. A router has no way of sending a question to another router. Imagine if automobiles had no windows but you could talk to any driver within 25 feet. Rather than seeing what is ahead, you would have to infer a worldview from what everyone else is saying. If everyone used the same vocabulary and the same system of cooperative logical reasoning, every car could get where it is going without a collision. That's the Internet: no maps; close your eyes and drive.
Every router is trying to infer an entire worldview based on what it hears. Driving these complicated algorithms requires a lot of CPU horsepower. Each router is an expensive mainframe doing the same calculation as everyone else just to get the slightly different result it needs. Larger networks require more computation. As an enterprise's network grows, every router needs to be upgraded to handle the additional computation. The number and types of ports on the router haven't changed, but the engine no longer has enough capacity to run its algorithms. Sometimes this means adding RAM, but often it means swapping in a new and more expensive CPU. It is a good business model for network vendors: as you buy more routers, you need to buy upgrades for the routers you have already purchased. These are not standard PCs that benefit from the constant Dell-vs.-HP-vs.-everyone-else price war; these are specialized, expensive CPUs that customers cannot get anywhere else.
The basic idea of OpenFlow is that things would be better if routers were dumb and downloaded their routing information from a big "route compiler in the sky," or RCITS. The CPU horsepower needed by a router would be a function of the speed and quantity of its ports. As the network grew, the RCITS would need more horsepower but not the routers. The RCITS would not have to be vendor specific, and vendors would compete to make the best RCITS software. You could change software vendors if a better one came along. The computer running the RCITS software could be off-the-shelf commodity hardware that takes advantage of Moore's Law, price wars, and all that good stuff.
Obviously, it is not that simple. You have to consider other issues such as redundancy, which requires multiple RCITS and a failover mechanism. An individual router needs to be smart enough to know how to route data between it and the RCITS, which may be several hops away. Also, the communication channel between entities needs to be secure. Lastly, we need a much better name than RCITS.
The OpenFlow standard addresses these issues. It uses the term controller or controller platform (yawn) rather than RCITS. The controller takes in configuration, network, and other information and outputs a different blob of data for each router, which interprets the blob as a decision table: packets are selected by port, MAC address, IP address, and other means. Once the packets are selected, the table indicates what to do with them: drop, forward, or mutate then forward. (This is a gross oversimplification; the full details are in the specification, technical white papers, and presentations at http://www.OpenFlow.org/wp/learnmore.)
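The decision table can be pictured as an ordered list of match-action rules. The sketch below is a gross simplification in the same spirit as the paragraph above: the field names, predicates, and action encoding are invented for illustration and bear no resemblance to the actual OpenFlow wire format.

```python
# Illustrative match-action table of the kind a controller might push
# to a router. Field names and actions are simplified, not real OpenFlow.

FLOW_TABLE = [
    # (match predicate, (action, arguments)) -- first match wins
    (lambda pkt: pkt["dst_port"] == 23, ("drop", None)),     # firewall
    (lambda pkt: pkt["dst_ip"] == "203.0.113.5",
     ("mutate_forward", {"dst_ip": "10.0.0.5", "out": 4})),  # NAT-style
    (lambda pkt: True, ("forward", {"out": 1})),             # default
]

def handle(pkt):
    """Apply the first matching rule to a packet (a dict of headers)."""
    for match, (action, args) in FLOW_TABLE:
        if match(pkt):
            if action == "drop":
                return None                             # packet discarded
            if action == "mutate_forward":
                pkt = dict(pkt, dst_ip=args["dst_ip"])  # rewrite headers
            return (args["out"], pkt)                   # (port, packet)
```

Note that one table expresses firewalling, address rewriting, and plain forwarding at once; the controller decides the policy, and the router merely executes the table at line rate.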
Traditional networks can choose a route based on something other than the shortest path, perhaps because it is cheaper, has better latency, or has spare capacity. Even with systems designed for such traffic engineering, such as Multiprotocol Label Switching Traffic Engineering (MPLS TE), it is difficult to manage effectively with any real granularity. OpenFlow, in contrast, lets you program the network for fundamentally different optimizations on a per-flow basis: latency-sensitive traffic can take the fastest path, while bulk flows take the cheapest. Rather than being tied to particular endpoints, OpenFlow is granular down to the type of traffic coming from each endpoint.
OpenFlow does not stop at traffic engineering, however, because it is capable of doing more than populating a forwarding table. Because an OpenFlow-capable device can also rewrite packets, it can act as a NAT or AFTR (address family transition router); and because it can drop packets, it can act as a firewall. It can also implement ECMP (equal-cost multipath) or other load-balancing algorithms. Devices do not have to play the same role for every flow passing through them; they can load-balance some flows, firewall others, and rewrite a few. Individual network elements are thus significantly more flexible in an OpenFlow environment, even as they shed some responsibilities to the centralized controller.
OpenFlow is designed to be used entirely within a single organization. All the routers in an OpenFlow domain act as one entity. The controller has godlike power over all the routers it controls. You would not let someone outside your sphere of trust have such access. An ISP may use OpenFlow to control its routers, but it would not extend that control to the networks of customers. An enterprise might use OpenFlow to manage the network inside a large data center and have a different OpenFlow domain to manage its WAN. ISPs may use OpenFlow to run their patch of the Internet, but it is not intended to take over the Internet. It is not a replacement for inter-ISP systems such as Border Gateway Protocol (BGP).
The switch to OpenFlow is not expected to happen suddenly and, as with all new technologies, may not be adopted at all.
Centralizing route planning has many benefits: routers no longer duplicate one another's calculations, their CPU requirements scale with their port count rather than with the size of the network, and the controller can run on commodity hardware supplied by competing software vendors.
In the past smartphone vendors carefully controlled which applications ran on their phones and had perfectly valid reasons for doing so: quality of apps, preventing instability in the phone network, and protection of their revenue streams. We can all see now that the paradigm shift of permitting the mass creation of apps did not result in a "wild west" of chaos and lost revenue. Instead, it inspired entirely new categories of applications and new revenue streams that exceed the value of the ones they replaced. Who would have imagined Angry Birds or apps that monitor our sleep patterns to calculate the optimal time to wake us up? Apps are crazy, weird, and disruptiveand I cannot imagine a world without them.
OpenFlow has the potential to be similarly disruptive. The democratization of network-based apps is a crazy idea and will result in weird applications, but someday we will talk about the old days and it will be difficult to imagine what it was once like.
1. Hoelzle, U. Keynote speech at the 2012 Open Networking Summit; http://www.youtube.com/watch?v=VLHJUfgxEO4.
2. Katti, S. OpenRadio: Virtualizing cellular wireless infrastructure. Presented at the 2012 Open Networking Summit; http://opennetsummit.org/talks/ONS2012/katti-wed-openradio.pdf.
3. Lin, G. Industry perspectives of SDN: Technical challenges and business use cases. Presented at the 2012 Open Networking Summit; http://opennetsummit.org/talks/ONS2012/lin-tue-usecases.pdf.
4. Video of the SIGCOMM 2008 demo; http://www.openflow.org/wp/2008/10/video-of-sigcomm-demo/.
©2012 ACM 0001-0782/12/0800 $10.00