Changes between Version 16 and Version 17 of Internal/OpenFlow/ofTopology


Timestamp: Jul 17, 2012, 7:58:38 PM
Author: akoshibe

  • Internal/OpenFlow/ofTopology

    v16 v17
    1414 * [http://www.orbit-lab.org/wiki/Internal/OpenFlow/HostSetup NetFPGA] - FPGA-based network device with !OpenFlow support
    1515 * [http://www.orbit-lab.org/wiki/Internal/OpenFlow/QuantaSetup Quanta LB9A] - The shared medium switch. On this page, this switch will be used in !XorPlus (normal) mode. 
    16  * As for the !OpenFlow controller, there is a [http://www.orbit-lab.org/wiki/Internal/OpenFlow/Controllers collection] to choose from.
    17 
    18 The nodes used for the topologies on this page are NetFPGA cubes running Ubuntu 10.10 (kernel: 2.6.35-30-generic).
     16 * [http://www.orbit-lab.org/wiki/Internal/OpenFlow/Controllers collection] - A short list of common !OpenFlow controllers.
     17
     18We also assume basic knowledge of L2 constructs and how they apply to switched networks.
     19
     20The nodes used for the topologies on this page are [http://www.accenttechnologyinc.com/netfpga.php?products_id=1 NetFPGA cubes] running Ubuntu 10.10 (kernel: 2.6.35-30-generic). The shared switch is a [http://www.quantaqct.com/en/01_product/02_detail.php?mid=30&sid=114&id=115&qs=61 Quanta LB9A] running [http://sourceforge.net/p/xorplus/home/Pica8%20Xorplus/ Pica8's !XorPlus]. 
    1921
    2022= I. Simulating point-to-point Links = #p2p
    21 This section introduces topology setup using the simplest case of a single link between two nodes.
    22 
    23 == 1.1 Overview ==
    24 In general, a topology describes the restrictions on traffic flow between multiple nodes. We build a topology by first partitioning the shared switch so that the nodes are isolated from each other, and then introducing relays that can move traffic between these partitions in a controlled manner. The way the traffic is relayed produces the topology. Our base topology, in which all nodes can reach each other, is a fully connected graph:
    25 {{{
    26 (A)   A - B         
    27        \ /         
    28         C           
    29 }}}
    30 
    31 We build the following topology from the one shown above to demonstrate our methods:
    32 {{{
    33 (B)
    34      A-[r]-B
    35 }}}
    36 Where A and B are nodes 'trapped' in their partitions, and [r] (C in fig. A) is a relay node that straddles the partitioning on the shared switch. We call A and B ''end nodes'' and [r] a ''network node'' joining together two links; from this logic it follows that the partition is the ''link'' (despite it actually being a logical setup on the shared switch, rather than a wire). Given that the partitions have identifiers, such as IP block addresses, nodes on the same link must share the identifier, and relays must know the identifiers of all partitions that they connect.
    37 
    38 We cover a handful of ways to realize the setup in Fig. B. 
     23This section introduces topology setup using the simplest case of building a single link between two nodes.
     24
    3925== Contents ==
    4026 1.2 [#pre1 Some Considerations] [[BR]]
     
    4935 1.5 [#morals1 Summary]
    5036
     37== 1.1 Overview ==
     38In general, a topology describes the restrictions on traffic flow between one or more nodes (with the simplest case being a single node by itself). We build a topology by first isolating the nodes from each other, and then introducing relays that can move traffic between these nodes in a controlled manner. The way the traffic is relayed produces the topology. Our base topology, in which all nodes can reach each other through a switch, is logically a fully connected graph:
     39{{{
     40(A)   A - B         
     41       \ /         
     42        C           
     43}}}
     44
     45A usual method to isolate the nodes is to configure the switch to place each node on a separate VLAN. Picking node C as a traffic relay (a router-on-a-stick, in the case of VLANs) produces the following topology:
     46{{{
     47(B)
     48     A-C-B
     49}}}
     50We call A and B ''end nodes'' and C a ''network node'' straddling A and B's VLANs. From this logic it follows that the VLAN is the ''link'' (despite it actually being a logical setup on the shared switch, rather than a wire). Given that the partitions have identifiers, such as IP block addresses, nodes on the same link must share the identifier, and relays must know the identifiers of all partitions that they connect; for the case above, C has knowledge of, and interfaces with, both A and B's VLANs, so we have the following links:
     51{{{
     52 A-C  (VLAN A)
     53 C-B  (VLAN B)
     54}}}
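To make the link identifiers concrete: if IP blocks serve as the identifiers, each VLAN-link gets its own block, and C holds an address on both. A hypothetical assignment (the blocks and addresses below are made up for illustration, not prescribed by this page):
{{{
 A-C  (VLAN A): 10.10.1.0/24   A: 10.10.1.2   C: 10.10.1.1
 C-B  (VLAN B): 10.10.2.0/24   B: 10.10.2.2   C: 10.10.2.1
}}}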
     55The next section explores several methods to produce similar effects.
     56
    5157== 1.2 Some Considerations == #pre1
    5258The techniques used to partition the broadcast domain will heavily depend on two things:
     
    5460 2. the available interfaces on the nodes
    5561
    56 In terms of 1., for example, we don't want to use TCP/IP-based schemes such as IP routing if we don't plan on using TCP/IP, or are planning to modify layer 3. 2. is important in that, depending on the technique, the number of links you can have (the node degree, in graph terms) will be restricted to however many interfaces you have. When you only have one interface, you will want to use virtual interfaces to increase the number of links to/from your node. In turn, you may also need to modify the partitioning scheme of the shared switch.
    57 
    58 A standard way to deploy virtual interfaces is in combination with VLANs and trunking. This is not a bad approach, since VLANs may be combined with other configuration schemes, are relatively simple to configure, and a good portion of networked devices understand them. Many of the examples here that require virtual interfaces will make use of this standard technique. So, to make things easier, we will quickly describe how to add virtual interfaces and VLAN awareness to a node before moving on to Section 1.3. 
     62In terms of 1., we aim to keep topology setup and the experiment as two individual problems as opposed to one monolithic one. For example, we may not want to use IP routing if we don't plan on using TCP/IP, or are planning to modify layer 3 ^1^. 2. is important in that, depending on how it's done, the number of links you can have on the node will be restricted to the number of interfaces you have. When you only have one interface, you may want to use virtual interfaces to increase the number of links to/from your node. In turn, you may also need to modify the partitioning scheme of the shared switch.
     63
     64A standard way to deploy virtual interfaces is with VLANs and trunking. VLANs may be combined with other configuration schemes, are relatively simple to configure, and a good portion of networked devices understand them. Many of the examples here that require virtual interfaces will make use of this standard technique. So, to make things easier, we will quickly describe how to add virtual interfaces and VLAN awareness to a node before moving on to Section 1.3. 
    5965
    6066 1. Install and load VLAN module:
     
    6369 modprobe 8021q
    6470}}}
     71    The module can be made to load at boot time by appending '8021q' to the list in /etc/modules.
    6572 2. Add VLAN interfaces using `vconfig`:
    6673{{{
     
    6875 vconfig add eth0 222
    6976}}}
    70  This creates two virtual LAN interfaces, eth0.111 and eth0.222, on eth0. The module can be made to load at boot time by appending '8021q' to the list in /etc/modules.
     77 This creates two virtual LAN interfaces, eth0.111 and eth0.222 on eth0. An interface configured in this manner is said to be ''VLAN aware''.
    7178
    7279Note, virtual interfaces are a workaround for being restricted to one physical interface. Any setup with nodes with multiple interfaces (e.g. using NetFPGAs) will not require the above configs, unless you want more links than you have interfaces. For nodes with multiple physical interfaces, the steps describing 'eth0.xxx' can be replaced by the names of each unique interface. Keep in mind, however, that if the interface is connected to a switchport configured as a trunk, it must also be made VLAN aware even if it does not hold multiple virtual interfaces.     
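As a quick sketch of what a finished relay config might look like, the VLAN interfaces can be addressed and brought up with `ifconfig` (the addresses reuse the hypothetical blocks from the Overview; substitute your own scheme):
{{{
 # give the relay an address on each link, one per VLAN interface
 ifconfig eth0.111 10.10.1.1 netmask 255.255.255.0 up
 ifconfig eth0.222 10.10.2.1 netmask 255.255.255.0 up
}}}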
    7380
    74 == 1.3 non-OpenFlow Methods == #basic
    75 These methods should work on any *nix machine, so they can serve as "sanity checks" for the system you are using as the network node.
     81== 1.3 non-!OpenFlow Methods == #basic
     82The methods in this section should work on any *nix machine, so they can serve as "sanity checks" for the system you are using as the network node.
    7683
    7784=== 1.3.1 Kernel IP routing === #KernIP
    78 Kernel IP routing has the fewest requirements, in that no extra packages are required if you have multiple Ethernet ports on your node. As its name indicates, it works strictly at layer 3. Partitioning occurs across IP blocks; you would need one block per link. It can be combined with VLANs and/or virtual interfaces if you are limited in the number of physical interfaces you have on your relay.   
     85Kernel IP routing has the fewest requirements, in that no extra packages are required if you have multiple Ethernet ports on your node. As its name indicates, it works strictly at layer 3. Partitioning occurs across IP blocks; you would need one block per link.    
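As a rough sketch of the idea (again borrowing the hypothetical 10.10.x.0/24 blocks from the Overview), the relay only needs kernel forwarding enabled, while each end node needs a route to the far block via the relay; the subsections below walk through the full setup:
{{{
 # on the network node: enable kernel IP forwarding
 sysctl -w net.ipv4.ip_forward=1
 # on end node A: reach B's block through the relay's address on A's link
 route add -net 10.10.2.0 netmask 255.255.255.0 gw 10.10.1.1
}}}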
    7986
    8087==== Network node setup ====
     
    157164
    158165== 1.4 !OpenFlow Methods == #of
    159 This section assumes that you have all of the !OpenFLow components (e.g. OVS, NetFPGA drivers) set up and working, and that you have several choices of controller. The controller used primarily in this section is the Big Switch Networks (BSN) controller. 
     166This section assumes that you have all of the !OpenFlow components (e.g. OVS, NetFPGA drivers) set up and working, and that you have several choices of controller. The controller used primarily in this section is the Big Switch Networks (BSN) controller, which has a decent CLI and RESTful API for pushing flows.
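For instance, with a Floodlight-derived controller such as BSN's, a static flow can typically be pushed over REST with a single `curl` call. The endpoint, port, and field names below follow Floodlight's static flow pusher and are assumptions for illustration, not values configured elsewhere on this page:
{{{
 curl -d '{"switch":"00:00:00:00:00:00:00:01","name":"a-to-b","ingress-port":"1","active":"true","actions":"output=2"}' http://172.16.0.14:8080/wm/staticflowentrypusher/json
}}}
This would install a flow on the switch with DPID 00:00:00:00:00:00:00:01 that forwards traffic arriving on port 1 out of port 2.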
     167 
    160168=== 1.4.1 !OpenvSwitch === #OVS
    161169!OpenvSwitch (OVS) is a software switch with !OpenFlow support, complete with its own implementation of a controller. Its datapath can be, and throughout this page is assumed to be, built as a kernel module.
    162170
    163 ==== Initialization ====
    164 OVS has three main components that must be initialized:
    165  * openvswitch_mod.ko, the OVS kernel module
    166  * ovsdb, the database containing configurations
    167  * ovs-vswitchd, the OVS switch daemon
    168 The switch daemon configures itself using the data stored in the database; `ovs-vsctl` is used to modify the contents of the database in order to configure the OVS switch.
    169 
    170  1. Load the !OpenvSwitch kernel module:
    171 {{{
    172  cd datapath/linux/
    173  insmod openvswitch_mod.ko
    174 }}}
    175 Note, OVS and Linux bridging may not be used at the same time. This step will fail if the bridge module (bridge.ko) is loaded. You may need to reboot the node in order to unload bridge.ko.[[BR]]   
    176 If this is the first time OVS is being run, make an openvswitch directory in /usr/local/etc/ and run `ovsdb-tool` to create the database file:
    177 {{{
    178  mkdir -p /usr/local/etc/openvswitch
    179  ovsdb-tool create /usr/local/etc/openvswitch/conf.db vswitchd/vswitch.ovsschema
    180 }}}
    181  2. Start ovsdb-server:
    182 {{{
    183  ovsdb/ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock \
    184         --remote=db:Open_vSwitch,manager_options \
    185         --pidfile --detach
    186 }}}
    187  3. Initialize the database:
    188 {{{
    189  utilities/ovs-vsctl --no-wait init
    190 }}}
    191 The `--no-wait` flag allows the database to be initialized before ovs-vswitchd is invoked.
    192  4. Start ovs-vswitchd:
    193 {{{
    194  vswitchd/ovs-vswitchd unix:/usr/local/var/run/openvswitch/db.sock --pidfile --detach
    195 }}}
    196 The 'unix:...db.sock' argument specifies that the process attach to the socket opened by ovsdb-server. 
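As a quick sanity check (assuming the default socket path used in the steps above), `ovs-vsctl` can query the freshly initialized database; if both daemons came up correctly, it returns without complaint and prints whatever configuration is stored so far:
{{{
 utilities/ovs-vsctl show
}}}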
    197 
    198 ==== Configuring OVS ====
    199171The following only needs to be done once, during initial configuration.
    200172 1. Add ports:
     
    220192ovs-vsctl set-controller br0 tcp:172.16.0.14:6633
    221193}}}
    222 In this example, the OVS process is pointed to a BSN controller (kvm-big) on 172.16.0.14, listening on port 6633^1^. With a properly initialized and configured database, `ovs-vswitchd` will spit out a bunch of messages as it attempts to connect to the controller. Its output should look something like this:
     194In this example, the OVS process is pointed to a BSN controller (kvm-big) on 172.16.0.14, listening on port 6633^2^. With a properly initialized and configured database, `ovs-vswitchd` will spit out a bunch of messages as it attempts to connect to the controller. Its output should look something like this:
    223195{{{
    224196root@node1-4:/opt/openvswitch-1.2.2# vswitchd/ovs-vswitchd unix:/usr/local/var/run/openvswitch/db.sock --pidfile --detach
     
    310282[[BR]]
    311283----
    312 ^1. This specific example requires a bit of network reconfiguration and comes with substantial risk of disconnecting your node from the network if done carelessly.^
     284^1. This, of course, depends on what you are trying to demonstrate.^
     285^2. This specific example requires a bit of network reconfiguration and comes with substantial risk of disconnecting your node from the network if done carelessly.^