= Building Network Topologies =
This page aims to describe some things to consider when setting up topologies for experimentation.

== Prerequisites ==
First and foremost, the shared switch should be split into several VLANs according to your topology. Two interconnected nodes should be on the same VLAN, i.e. the switch ports connected to them should be associated with the same VLAN ID. A node connected to more than one element should sit on a trunked port open to all of the VLANs with which it should associate.

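How the VLANs are defined depends on the switch; as a rough sketch on a Cisco-like CLI (the port names and VLAN IDs below are placeholders):
{{{
! access port for an end node on VLAN 111
interface GigabitEthernet0/1
 switchport mode access
 switchport access vlan 111
! trunk port for a node that must see both VLANs 111 and 222
interface GigabitEthernet0/3
 switchport mode trunk
 switchport trunk allowed vlan 111,222
}}}
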
= I. Simulating point-to-point Links =

This section aims to provide a rough overview of the steps one needs to take in order to simulate point-to-point links between nodes sharing a single switch (i.e. within the same broadcast domain), using standard and !OpenFlow-controlled nodes. In general, we want to partition the shared switch so that the nodes are isolated from each other, and then introduce relays that can move traffic between these partitions in a controlled manner. The way the traffic is relayed produces the topology. The general topology we use to describe our methods is the following:
{{{
 A-[r]-B
}}}
Where A and B are nodes 'trapped' in their partitions, and [r] is a relay node that straddles the partitions on the shared switch. We call A and B ''end nodes'' and [r] a ''network node''; from this logic it follows that the partition is the ''link'' (despite actually being a logical setup on the shared switch rather than a wire). Most of the configuration occurs on the network node. The steps described here are incomplete; things will be updated as methods are refined/improved.

== Contents ==
We first describe some base "sanity-test" setups that do not involve any !OpenFlow elements. These are:
 1.1 [#pre1 Some Considerations]
 1.2 [#basic Basic Methods]
  1.2.1 [#KernIP Kernel IP routing] (Layer 3) [[BR]]
  1.2.2 [#brctl Linux Bridge] (Layer 2) [[BR]]
  1.2.3 [#filt Packet filters] (Layers 2-4)
Then we describe the (ongoing) process of topology setup using !OpenFlow-related elements, such as:
 1.3 [#of OpenFlow Methods]
  1.3.1 [#OVS OpenvSwitch] [[BR]]
  1.3.2 [#nfpga NetFPGA OpenFlow switch] [[BR]]
  1.3.3 [#pt Prototyping] - With Mininet
!OpenFlow is rather layer-agnostic, defining traffic rules based on any combination of the 12 packet header fields available for matching under the !OpenFlow standard. These fields correspond to layers 1-4.

== 1.1 Some Considerations == #pre1
The techniques used to partition the broadcast domain will heavily depend on two things:
 1. the type of experiment
 2. the available interfaces on the nodes

In terms of 1., for example, we don't want to use TCP/IP-based schemes such as IP routing if we don't plan on using TCP/IP, or if we plan to modify layer 3. 2. is important in that, depending on the technique, the number of links you can have (the node degree, in graph terms) is restricted to however many interfaces you have. When you only have one interface, you will want to use virtual interfaces to increase the number of links to/from your node. In turn, you may also need to modify the partitioning scheme of the shared switch.

A standard way to deploy virtual interfaces is in combination with VLANs and trunking. This is not a bad approach, since VLANs may be combined with other configuration schemes, are relatively simple to configure, and a good portion of networked devices understand them. Many of the examples here that require virtual interfaces will make use of this standard technique. So, to make things easier, we will quickly describe how to add virtual interfaces and VLAN awareness to a node before moving on to Section 1.2.

 1. Install and load the VLAN module, then create the virtual interfaces:
{{{
modprobe 8021q
vconfig add eth0 111
vconfig add eth0 222
}}}
 This creates two virtual LAN interfaces, eth0.111 and eth0.222, on eth0. The module can be made to load at boot time by appending '8021q' to the list in /etc/modules.

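The resulting interface-to-VLAN mappings can be checked through the 8021q module's proc interface:
{{{
cat /proc/net/vlan/config
}}}
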
Note, virtual interfaces are a workaround for being restricted to a single physical interface. Setups where the nodes have multiple interfaces (e.g. using NetFPGAs) will not require the above configuration, unless you want more interfaces than you physically have. For nodes with multiple physical interfaces, the steps describing 'eth0.xxx' can be replaced with the names of each unique interface. Keep in mind, however, that if an interface is connected to a switchport configured as a trunk, it must still be made VLAN aware even if it does not hold multiple virtual interfaces.

== 1.2 Basic Methods == #basic
These methods should work on any *nix machine, so they can serve as "sanity checks" for the system you are using as the network node.

=== 1.2.1 Kernel IP routing === #KernIP
Kernel IP routing has the fewest requirements, in that no extra packages are needed if you have multiple Ethernet ports on your node. As its name indicates, it works strictly at layer 3. Partitioning occurs across IP blocks; you need one block per link. It can be combined with VLANs and/or virtual interfaces if you are limited in the number of physical interfaces on your relay.

==== Network node setup ====
 1. This setup assumes a 1-to-1 mapping of VLANs to subnets. Choose IP blocks, one for each VLAN/interface. For example, if you have two clients connected across your node, you need two IP blocks, one for each VLAN:
  * VLAN 111: 192.168.1.0/24, gateway 192.168.1.13
  * VLAN 222: 192.168.2.0/24, gateway 192.168.2.23
     
Next, assign each VLAN interface the gateway address of its subnet, and enable IP forwarding in the kernel by appending:
{{{
net.ipv4.ip_forward = 1
}}}
 to /etc/sysctl.conf.
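The same setting can also be applied immediately, without a reboot:
{{{
sysctl -w net.ipv4.ip_forward=1
}}}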

==== End node setup ====
Unless you have set up DHCP, you must manually assign an IP address and default gateway to each node. The former should be consistent with the subnet associated with the VLAN to which the end host belongs. For example, the following host is connected to a switch port associated with VLAN 222, so it is assigned an address from the 192.168.2.0/24 block:
{{{
# example address chosen from the VLAN 222 block
ifconfig eth0 192.168.2.5 netmask 255.255.255.0
}}}
 The node must also be given a route to each remote subnet, via the gateway on its own subnet:
{{{
route add -net 192.168.1.0 netmask 255.255.255.0 gw 192.168.2.23
}}}
Do this for each remote subnet that the node should be able to communicate with. Once all of the nodes are configured, you should be able to ping end-to-end.

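For example, from the VLAN 111 end node (using the example address assigned above):
{{{
ping 192.168.2.5
}}}
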
=== 1.2.2 Linux Bridge === #brctl
In terms of implementation, this is probably the simplest method.

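A minimal sketch using `brctl` (assuming the bridge-utils package and the eth0.111/eth0.222 interfaces from the Prerequisites; the bridge name br0 is a placeholder):
{{{
brctl addbr br0
brctl addif br0 eth0.111
brctl addif br0 eth0.222
ifconfig br0 up
}}}
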
     
The Linux Foundation keeps a page that may be useful for various troubleshooting: http://www.linuxfoundation.org/collaborate/workgroups/networking/bridge

== 1.3 !OpenFlow Methods == #of
This section assumes that you have all of the !OpenFlow components (e.g. OVS, NetFPGA drivers) set up and working, and that you have several choices of controller. The controller used primarily in this section is the Big Switch Networks (BSN) controller.
=== 1.3.1 !OpenvSwitch === #OVS
!OpenvSwitch (OVS) is a software switch with !OpenFlow support, complete with its own implementation of a controller. Its datapath can be, and throughout this page is assumed to be, built as a kernel module.

==== Initialization ====
OVS has three main components that must be initialized:
 * openvswitch_mod.ko, the OVS kernel module
 * ovsdb-server, the daemon that manages the OVS configuration database
 * ovs-vswitchd, the main switch daemon
When ovs-vswitchd is launched, a 'unix:...db.sock' argument specifies that the process attach to the socket opened by `ovsdb`.
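As an illustrative launch sequence (a sketch only: the module path and run-time socket path below are assumptions for a default source build):
{{{
# load the OVS kernel module (path is an assumption for a source tree)
insmod ./datapath/linux/openvswitch_mod.ko
# start the configuration database server
ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock --pidfile --detach
# start the switch daemon, attached to the database socket
ovs-vswitchd unix:/usr/local/var/run/openvswitch/db.sock --pidfile --detach
}}}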

==== Configuring OVS ====
The following only needs to be done once, during initial configuration.
 1. Add ports, e.g. by creating a bridge and attaching the VLAN interfaces to it:
{{{
ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth0.111
ovs-vsctl add-port br0 eth0.222
}}}
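If an external controller is used (e.g. the BSN controller mentioned above), the bridge can be pointed at it; the address below is a placeholder:
{{{
ovs-vsctl set-controller br0 tcp:192.168.1.100:6633
}}}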

=== 1.3.2 NetFPGA !OpenFlow switch === #nfpga
This method is probably the most involved and difficult to get right, although in theory it would be the best, since you get the programmatic flexibility of the OVS switch and the speed of a hardware-implemented device.

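As a rough sketch of the kind of flows involved (the datapath name, MAC addresses, and port numbers below are placeholders, written in `dpctl` add-flow syntax):
{{{
# frames from end node A (placeholder MAC) get tagged for VLAN 222 and sent out port 2
dpctl add-flow nl:0 dl_src=00:00:00:00:00:0a,actions=mod_vlan_vid:222,output:2
# frames from end node B (placeholder MAC) get tagged for VLAN 111 and sent out port 1
dpctl add-flow nl:0 dl_src=00:00:00:00:00:0b,actions=mod_vlan_vid:111,output:1
}}}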
     
This set of flows essentially implements VLAN stitching based on source MAC address. Unlike with the Linux bridge, one cannot see the VLAN-stripped packets on the virtual interface (tap0 on the NetFPGA, br0 on the bridge); they will already have the proper tag, since the processing presumably occurs in the FPGA and not in the kernel.

== 1.4 Morals of the story ==
For quick setup of a network topology using nodes sharing a medium, point-to-point links should be defined at as low a layer as possible. The next best thing to actually connecting the topology with cables (even better, in fact, because of its flexibility) is to carve up the shared switch into VLANs. This lets you restrict the broadcast domain however you want, without hard-wiring everything.