Building Network Topologies (with OpenFlow)

This is a rough overview of the steps needed to simulate point-to-point links between nodes sharing a single switch (i.e. within the same broadcast domain), using OpenFlow-controlled nodes to move traffic from one VLAN to another. The steps described here are incomplete; they will be updated as methods are refined and improved.

Prerequisites

VLANs are good for breaking up broadcast domains. If each node is placed on a separate VLAN and given a choice of a few "gateways" out of its VLAN, it has no choice but to communicate through the gateway(s). How a gateway node moves packets/frames from one VLAN to another depends on which network layer(s) are involved.

This page assumes that you have a setup similar to SB9, as well as a node with a working install of the NetFPGA drivers or OpenvSwitch, depending on how links are being set up. For the OpenFlow methods, you also need an OpenFlow controller that allows you to push flows to your software-defined switch. You should also have access to the switch that the nodes share, since you need to slice it into VLANs. The following links describe the setup and use of these components (internal links):

  • OpenvSwitch - A software-defined virtual switch with OpenFlow support; no special hardware required.
  • NetFPGA - An FPGA-based network device with OpenFlow support.
  • Quanta LB9A - The shared-medium switch. On this page it is used in XorPlus (normal) mode.
  • As for the OpenFlow controller, there is a collection to choose from.

The system used here is Ubuntu 10.10 (kernel 2.6.35-30-generic). Command syntax may vary depending on your distribution.

Contents

We first describe some base "sanity-test" setups that do not involve any OpenFlow elements. These are:

I Basic Methods

1.1 Kernel IP routing (Layer 3)
1.2 Linux Bridge (Layer 2)

Then we describe the (ongoing) process of topology setup using OpenFlow-related elements, such as:

II OpenFlow Methods

2.1 OpenvSwitch
2.2 NetFPGA OpenFlow switch

OpenFlow is rather layer-agnostic: traffic rules are defined by matching on any combination of the 12 packet header fields specified by the OpenFlow standard. These fields span layers 1 through 4.
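
As a purely illustrative sketch of what such rules look like, the two entries below match on fields from different layers - the first on a VLAN ID (layer 2), the second on a source IP block (layer 3). The syntax is that of ovs-ofctl, the OVS flow-table utility; the bridge name br0 and the port numbers are assumptions made for the sake of the example:

 # match on VLAN 111 (a layer 2 field), rewrite the tag to 222 and send out port 2
 ovs-ofctl add-flow br0 "dl_vlan=111,actions=mod_vlan_vid:222,output:2"
 # match on IPv4 traffic from 192.168.1.0/24 (a layer 3 field) and send out port 2
 ovs-ofctl add-flow br0 "ip,nw_src=192.168.1.0/24,actions=output:2"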

All but the very last method require that the network node be VLAN-aware. Before moving on to Section I, we will quickly describe how to add VLAN awareness to the system.

  1. Install and load VLAN module:
     apt-get install vlan
     modprobe 8021q
    
  2. Add VLAN interfaces using vconfig:
     vconfig add eth0 111
     vconfig add eth0 222
    
    This creates two VLAN interfaces, eth0.111 and eth0.222. The 8021q module can be made to load at boot time by appending '8021q' to /etc/modules.
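
    As a quick check (an addition to the original steps), the kernel's view of the new VLAN interfaces can be inspected:
     cat /proc/net/vlan/config
     ifconfig eth0.111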

I Basic Methods

These two methods should work on any *nix machine, so they can serve as "sanity checks" for the system you are using as the network node.

1.1 Kernel IP routing

Kernel IP routing is the simplest, in that no extra packages are required if you have multiple Ethernet ports on your node.

1.1.1 Network node setup

  1. This setup assumes a 1-to-1 mapping of VLANs to subnets. Choose IP blocks, one for each VLAN. For example, if you have two clients connected across your node, you need two IP blocks, one for each VLAN:
    • VLAN 111: 192.168.1.0/24, gateway 192.168.1.13
    • VLAN 222: 192.168.2.0/24, gateway 192.168.2.23

The gateway IPs chosen above will be the IP addresses assigned to the VLAN interfaces you have set up earlier on your network node.

  2. Bring up the VLAN interfaces with the IP addresses you have chosen:
     ifconfig eth0 0.0.0.0 up
     ifconfig eth0.111 inet 192.168.1.13 broadcast 192.168.1.255 netmask 0xffffff00 up
     ifconfig eth0.222 inet 192.168.2.23 broadcast 192.168.2.255 netmask 0xffffff00 up
    
    This configuration can be made permanent by modifying /etc/network/interfaces:
    auto eth0.111
    iface eth0.111 inet static
        address 192.168.1.13
        netmask 255.255.255.0
        vlan-raw-device eth0
    
    auto eth0.222
    iface eth0.222 inet static
        address 192.168.2.23
        netmask 255.255.255.0
        vlan-raw-device eth0
    
  3. Enable routing on the network node:
     route add -net 192.168.1.0 netmask 255.255.255.0 gw 192.168.1.13
     route add -net 192.168.2.0 netmask 255.255.255.0 gw 192.168.2.23
     echo 1 >  /proc/sys/net/ipv4/ip_forward
    
    The last line in the above block is equivalent to running the command:
    sysctl -w net.ipv4.ip_forward=1
    
    The ip_forward flag resets itself after a reboot. To make it permanent, add the line
     net.ipv4.ip_forward=1
    
    to /etc/sysctl.conf.
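
    To confirm that forwarding is enabled, the flag can be read back (a quick check, not part of the original steps):
     sysctl net.ipv4.ip_forward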

1.1.2 End node setup

Unless you have set up DHCP, you must manually assign an IP address and default gateway to each node. The former should be consistent with the subnet associated with the VLAN to which the end host belongs. For example, the following host is connected to a switch port associated with VLAN 222, so it is assigned an address from the 192.168.2.0/24 block:

 ifconfig eth0 inet 192.168.2.4

Then you must add reachability information to the node's routing table, i.e. the gateway IP addresses it must send traffic to in order to reach remote subnets. Since there is only one other subnet in this example, a single entry is added, specifying the destination subnet (192.168.1.0/24 - VLAN 111) and the gateway in/out of the current node's subnet:

 route add -net 192.168.1.0 netmask 255.255.255.0 gw 192.168.2.23

Do this for each remote subnet that the node should be able to communicate with. Once all of the nodes are configured, you should be able to ping end-to-end.
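
For completeness, here is the mirrored configuration for a host on VLAN 111 (a sketch; the address 192.168.1.4 is only an example, chosen to match the address that appears in the NetFPGA flow entries later on):

 ifconfig eth0 inet 192.168.1.4
 route add -net 192.168.2.0 netmask 255.255.255.0 gw 192.168.1.13

Once both hosts are configured, a ping from this host to 192.168.2.4 should traverse the network node.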

1.2 Linux Bridge

In terms of implementation, this is probably the simplest method.

A bridge will ignore VLAN tags, so if you have two VLAN interfaces, e.g. eth0.111 and eth0.222, sitting on a trunk, packets will come in tagged. The bridge abstraction (br0) strips the tag on ingress, and the packet is re-tagged as appropriate on the way out. Unlike kernel IP forwarding, bridging works purely at Layer 2, so you do not need to worry about IP addressing.

The first three steps refer to the network node.

  1. Configure and bring the VLAN interfaces up as before, sans IP addresses.
  2. Install bridge-utils:
     apt-get install bridge-utils
    
  3. Create the bridge interface and add ports (see the sketch after this list for bringing the bridge up):
     brctl addbr br0
     brctl addif br0 eth0.111
     brctl addif br0 eth0.222
    
  4. Set all hosts on the bridged VLANs to the same IP block.
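
The steps above do not bring the bridge interface up. As a small addition (not in the original notes), the following should complete the setup and let you confirm that both VLAN interfaces are attached to the bridge:

 ifconfig br0 up
 brctl show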

II OpenFlow Methods

This section assumes that you have all of the OpenFlow components (e.g. OVS, NetFPGA drivers) set up and working, and that you have several choices of controller. The controller used primarily in this section is the Big Switch Networks (BSN) controller.

2.1 OpenvSwitch

OpenvSwitch (OVS) is a software-defined switch with OpenFlow support, complete with its own controller implementation. It can be built with its kernel module, and is assumed to be built that way throughout this page.

2.1.1 Initialization

OVS has three main components that must be initialized:

  • openvswitch_mod.ko, the OVS kernel module
  • ovsdb, the database containing configurations
  • ovs-vswitchd, the OVS switch daemon

ovs-vswitchd configures itself using the data stored in the database; ovs-vsctl is used to modify the contents of the database in order to configure the OVS switch.

  1. Load the OpenvSwitch kernel module:
     cd datapath/linux/
     insmod openvswitch_mod.ko
    

Note that OVS and Linux bridging cannot be used at the same time: this step will fail if the bridge module (bridge.ko) is loaded. You may need to reboot the node in order to unload bridge.ko.
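
As a quick check (an addition to the original notes), you can see whether the bridge module is currently loaded before inserting the OVS module:

 lsmod | grep bridge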

  2. Start ovsdb-server:
     ovsdb/ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock \
            --remote=db:Open_vSwitch,manager_options \
            --pidfile --detach
    
  3. Initialize the database:
     utilities/ovs-vsctl --no-wait init
    

The --no-wait flag allows the database to be initialized before ovs-vswitchd is invoked.

  4. Start ovs-vswitchd:
     vswitchd/ovs-vswitchd unix:/usr/local/var/run/openvswitch/db.sock --pidfile --detach
    

The 'unix:…db.sock' argument specifies that the process attach to the socket opened by ovsdb-server.
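
At this point, a quick sanity check (added here, not part of the original steps) is to query the database through ovs-vsctl; if both daemons are running correctly, this prints the current (still empty) switch configuration without errors:

 utilities/ovs-vsctl show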

2.1.2 Configuring OVS

The following only needs to be done once, during initial configuration.

  1. Create the bridge and add ports:
     ovs-vsctl add-br br0
     ovs-vsctl add-port br0 eth0.111
     ovs-vsctl add-port br0 eth0.222
    

Once the 'add-port' commands have been issued, you should no longer be able to ping across the two VLANs, even with correct route table entries and packet forwarding enabled in the kernel. Here, br0 is a virtual interface analogous to the bridge interface created with brctl earlier. There should be one virtual interface per virtual switch to be instantiated. By default, ports added to the switch are trunked. Using the option tag=<VLAN ID> makes an interface behave as an access port for the VLAN ID specified:

 ovs-vsctl add-port br0 eth0.111 tag=111
 ovs-vsctl add-port br0 eth0.222 tag=222

However, this is unrelated to what needs to happen here so we will not explore its uses any further (for now).
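
To confirm which ports are attached to the OVS bridge (a quick check added here, not in the original notes):

 ovs-vsctl list-ports br0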

  2. If it has not been done already, initialize the OpenFlow controller. The procedures for this step differ according to the controller in use, and are discussed in the pages for each respective controller.

A sanity check for this step is to test your virtual switch with the OVS built-in controller, ovs-controller, which may be initialized on the same node running OVS:

ovs-controller -v ptcp:6633

When ovs-controller is used, the controller IP is, unsurprisingly, 127.0.0.1.
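
For example, with ovs-controller running locally as above, the switch can be pointed at it over the loopback address (this mirrors the set-controller command shown in the next step):

 ovs-vsctl set-controller br0 tcp:127.0.0.1:6633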

  3. Point ovs-vswitchd to the OpenFlow controller:
    ovs-vsctl set-controller br0 tcp:172.16.0.14:6633
    

In this example, the OVS process is pointed to a BSN controller (kvm-big) on 172.16.0.14, listening on port 6633 [1]. With a properly initialized and configured database, ovs-vswitchd will print a series of messages as it attempts to connect to the controller. Its output should look similar to this:

root@node1-4:/opt/openvswitch-1.2.2# vswitchd/ovs-vswitchd unix:/usr/local/var/run/openvswitch/db.sock --pidfile --detach
Nov 07 17:37:02|00001|reconnect|INFO|unix:/usr/local/var/run/openvswitch/db.sock: connecting...
Nov 07 17:37:02|00002|reconnect|INFO|unix:/usr/local/var/run/openvswitch/db.sock: connected
Nov 07 17:37:02|00003|bridge|INFO|created port br0 on bridge br0
Nov 07 17:37:02|00004|bridge|INFO|created port eth0.101 on bridge br0
Nov 07 17:37:02|00005|bridge|INFO|created port eth0.102 on bridge br0
Nov 07 17:37:02|00006|ofproto|INFO|using datapath ID 0000002320b91d13
Nov 07 17:37:02|00007|ofproto|INFO|datapath ID changed to 000002972599b1ca
Nov 07 17:37:02|00008|rconn|INFO|br0<->tcp:172.16.0.14:6633: connecting...

The OpenvSwitch OpenFlow switch should be functional as soon as it finds and connects to the controller. As you can see above, a DPID is chosen automatically; if the automatically chosen DPID does not suit your needs, one may be specified manually using ovs-vsctl:

ovs-vsctl set bridge <mybr> other-config:datapath-id=<datapathid>

Where <datapathid> is a 16-digit hex value. For our network node, this becomes:

ovs-vsctl set bridge br0 other-config:datapath-id=0000009900113300
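
The value can be read back to confirm that it took effect (a quick check; the column name in the database is datapath_id):

 ovs-vsctl get bridge br0 datapath_id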

2.2 NetFPGA OpenFlow switch

The following are the flow configurations applied to the first (trunked) setup.

switch 00:00:00:00:00:10:10:10

#strip tag from any incoming traffic on port 1
  flow-entry port1
    active True
    ingress-port 1
    vlan-id 111
    actions strip-vlan,output=4

#re-apply VLAN 222 tag to ARP packets bound for port 4, from 1
  flow-entry port1-2
    active False
    ingress-port 1
    ether-type 2054
    actions set-vlan-id=222,output=4

#re-apply tag to IP packets bound for 192.168.1.4
  flow-entry port1-3
    active False
    ether-type 2048
    src-ip 192.168.1.1
    actions set-vlan-id=222,output=4

#re-apply VLAN 111 tag to ARP packets bound for port 1, from 4
  flow-entry port2-2
    active False
    ingress-port 4
    ether-type 2054
    actions set-vlan-id=111,output=1

#re-apply tag to IP packets bound for 192.168.1.1
  flow-entry port2-3
    active False
    ether-type 2048
    src-ip 192.168.1.4
    actions set-vlan-id=111,output=1

#strip tag from any incoming traffic on port 4
  flow-entry port4
    active True
    ingress-port 4
    vlan-id 222
    actions strip-vlan,output=1
!




[1] This specific example requires a bit of network reconfiguration and comes with substantial risk of disconnecting your node from the network if done carelessly.
