Changes between Version 11 and Version 12 of Internal/OpenFlow/ofTopology


Timestamp: Jul 5, 2012, 6:56:53 PM
Author: akoshibe

= Building Network Topologies =
This page aims to describe some things to consider when setting up topologies for experimentation.
 I. [#p2p Simulating Point-to-Point Links] [[BR]]
 II. [#tdescr Topology Description Methods]

== Prerequisites ==
     
The system used here is Ubuntu 10.10 (kernel 2.6.35-30-generic). Command syntax may vary depending on your distribution.

First and foremost, the shared switch should be split into several VLANs according to your topology. Two interconnected nodes should be on the same VLAN, i.e. the switch ports connected to them should be associated with the same VLAN ID. A node connected to more than one element should sit on a trunked port open to all of the VLANs that the node associates with.
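The VLAN partitioning itself is ordinary switch configuration. As a rough sketch, assuming a Cisco IOS-style CLI (ports and VLAN IDs are illustrative, and the exact syntax differs between vendors):
{{{
! ports facing end nodes A and B become access ports in separate VLANs
vlan 111
vlan 222
interface GigabitEthernet0/1
 switchport mode access
 switchport access vlan 111
interface GigabitEthernet0/2
 switchport mode access
 switchport access vlan 222
! the port facing the network node [r] trunks both VLANs
interface GigabitEthernet0/3
 switchport mode trunk
 switchport trunk allowed vlan 111,222
}}}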
= I. Simulating Point-to-Point Links = #p2p

This section aims to provide a rough overview of the steps one needs to take in order to simulate point-to-point links between nodes sharing a single switch (i.e. within the same broadcast domain), using standard and !OpenFlow-controlled nodes. In general, we want to partition the shared switch so that the nodes are isolated from each other, and then introduce relays that can move traffic between these partitions in a controlled manner. The way the traffic is relayed produces the topology. The general topology we use to describe our methods is the following:
     
{{{
 A-[r]-B
}}}
Where A and B are nodes 'trapped' in their partitions, and [r] is a relay node that straddles the partitioning on the shared switch. We call A and B ''end nodes'' and [r] a ''network node''; from this logic it follows that the partition is the ''link'' (despite it actually being a logical setup on the shared switch rather than a wire). Given that the partitions have identifiers, such as IP block addresses, nodes on the same link must share the identifier, and a relay must know the identifiers of all of the partitions that it connects. Most of the configuration occurs on the network node.
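For example, if the partitions are distinguished by IP subnets, the identifiers might be laid out like this (addresses are illustrative):
{{{
 A (10.10.1.2/24) --[VLAN 111]-- [r] (10.10.1.1/24, 10.10.2.1/24) --[VLAN 222]-- B (10.10.2.2/24)
}}}
Here A and [r] share the 10.10.1.0/24 partition, [r] and B share 10.10.2.0/24, and only the relay [r] knows both.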
== Contents ==
We first describe some base "sanity-test" setups that do not involve any !OpenFlow elements. These are:
 1.1 [#pre1 Some Considerations] [[BR]]
 1.2 [#basic Non-OpenFlow Methods] [[BR]]
  1.2.1 [#KernIP Kernel IP routing] (Layer 3) [[BR]]
  1.2.2 [#brctl Linux Bridge] (Layer 2) [[BR]]
  1.2.3 [#filt Packet filters] (Layers 2-4)
Then we describe the (ongoing) process of topology setup using !OpenFlow-related elements, such as:
 1.3 [#of OpenFlow Methods]
  1.3.1 [#OVS OpenvSwitch] [[BR]]
  1.3.2 [#nfpga NetFPGA OpenFlow switch] [[BR]]
  1.3.3 [#pt Prototyping] - With Mininet [[BR]]
 1.4 [#morals1 Summary]
!OpenFlow is rather layer-agnostic, defining traffic rules based on a combination of any of the 12 packet header fields that may be used for matching under the !OpenFlow standard. These fields correspond to layers 1-4.
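For example, a single rule can combine fields from several of these layers. With Open vSwitch's ovs-ofctl, such a flow might look like the following (bridge name, ports, and addresses are illustrative):
{{{
# match ingress port (L1), EtherType (L2), IP destination (L3) and TCP port (L4)
ovs-ofctl add-flow br0 "priority=100,in_port=1,dl_type=0x0800,nw_proto=6,nw_dst=10.10.2.0/24,tp_dst=80,actions=output:2"
}}}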
     
This creates two virtual LAN interfaces, eth0.111 and eth0.222, on eth0. The module can be made to load at boot time by appending '8021q' to /etc/modules.
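For reference, commands along these lines produce such interfaces (a sketch using the vconfig tool from the vlan package; VLAN IDs 111 and 222 as above):
{{{
modprobe 8021q           # load the 802.1Q VLAN module
vconfig add eth0 111     # creates eth0.111
vconfig add eth0 222     # creates eth0.222
ifconfig eth0.111 up
ifconfig eth0.222 up
}}}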
Note: virtual interfaces are a workaround for being restricted to one physical interface. A setup in which nodes have multiple physical interfaces (e.g. using NetFPGAs) will not require the above configuration, unless you want more interfaces than you physically have. For nodes with multiple physical interfaces, the steps describing 'eth0.xxx' can be replaced by the names of each unique interface. Keep in mind, however, that an interface connected to a switchport configured as a trunk must still be made VLAN-aware, even if it does not hold multiple virtual interfaces.

== 1.2 Non-OpenFlow Methods == #basic
These methods should work on any *nix machine, so they can serve as "sanity checks" for the system you are using as the network node.
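As a preview of the Layer 2 approach ([#brctl 1.2.2]), relaying between two VLAN partitions with a Linux bridge boils down to something like this (bridge and interface names are illustrative):
{{{
brctl addbr br0            # create a bridge on the network node
brctl addif br0 eth0.111   # attach the VLAN-facing interfaces
brctl addif br0 eth0.222
ifconfig br0 up
}}}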
     
This set of flows basically implements VLAN stitching based on source MAC address. Unlike with the Linux bridge, one cannot see the VLAN-stripped packets on the virtual interface (tap0 on the NetFPGA, br0 on the bridge); they will already have the proper tag, since the processing presumably occurs in the FPGA and not in the kernel.
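As an illustration, flows of this sort look roughly like the following (the MAC addresses, port numbers, and dpctl connection string are made up for the example):
{{{
# stitch A's partition (VLAN 111) to B's (VLAN 222), keyed on source MAC
dpctl add-flow tcp:127.0.0.1:6634 dl_vlan=111,dl_src=00:00:00:00:00:0a,actions=mod_vlan_vid:222,output:2
dpctl add-flow tcp:127.0.0.1:6634 dl_vlan=222,dl_src=00:00:00:00:00:0b,actions=mod_vlan_vid:111,output:1
}}}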
== 1.4 Morals of the story == #morals1
For quick setup of a network topology using nodes sharing a medium, point-to-point links should be defined at as low a layer as possible. The next best thing to actually going in and connecting up the topology with cables (and even better, because of its flexibility) is to carve up the shared switch into VLANs. This lets you restrict the broadcast domain however you want, without hard-wiring everything, even when you are restricted to just one physical interface per node.

As for !OpenFlow switching, OVS nodes (or a Mininet prototype topology) controlled by a controller that supports flow pushing (e.g. BSN or Floodlight) are the most flexible, least-hassle choice for this task.
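For example, attaching an OVS instance on the network node to such a controller takes only a few commands (bridge name and controller address are illustrative; OVS itself is covered in [#OVS 1.3.1]):
{{{
ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth0.111
ovs-vsctl add-port br0 eth0.222
ovs-vsctl set-controller br0 tcp:192.168.1.5:6633
}}}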
----
= II. Topology Description Methods = #tdescr
The previous section described some methods to manually configure a link. The ultimate goal of topology configuration is to be able to do this automatically, for multiple links in various arrangements. This implies a process along the following lines:
{{{
 [Topology File] -> [parse] -> [configure] -> [topology in network]
}}}
The topology file must be able to describe not only the topology, but also various other factors that will change between experiments. This section explores some topology description formats that may be decent candidates for such an automated system.
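As a taste of what such a file might contain, here is a minimal sketch of the A-[r]-B topology from Section I expressed in GEXF (see [#gexf 2.2.1]); experiment-specific details such as VLAN IDs or link parameters would have to be added as attributes:
{{{
<?xml version="1.0" encoding="UTF-8"?>
<gexf xmlns="http://www.gexf.net/1.2draft" version="1.2">
  <graph defaultedgetype="undirected">
    <nodes>
      <node id="A" label="end node"/>
      <node id="r" label="network node"/>
      <node id="B" label="end node"/>
    </nodes>
    <edges>
      <edge id="e0" source="A" target="r"/>
      <edge id="e1" source="r" target="B"/>
    </edges>
  </graph>
</gexf>
}}}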
== Contents ==
 2.1 [#pre2 Some Considerations] [[BR]]
 2.2 [#fmts Formats] [[BR]]
  2.2.1 [#gexf GEXF] [[BR]]
  2.2.2 [#gdf GDF] [[BR]]
<!-- 2.2.3 [#] [[BR]] -->
 2.3 [#morals2 Summary]
[[BR]]
[[BR]]