
Mininet

This page documents the usage and installation of Mininet, as well as its interaction with OEDL. The main reference is the Mininet homepage at http://mininet.org/.

0. Installation

While Mininet can be installed via an apt package, the version in the repository is very old and lacks some important features, so we will instead install from source. References can be found here

  1. Install build prerequisite packages:
    apt-get install build-essential git-core
    
  2. Clone the Repository:
    git clone git://github.com/mininet/mininet
    
  3. According to the note located here, the default install comes with more packages than we need, so instead we will call the install script with the -nv flag
    mininet/util/install.sh -nv
    

This will install the base Mininet dependencies with only Open vSwitch support. Once installed, you can run the pingall test to ensure the installation succeeded:

root@node1-1:~# sudo mn --test pingall
*** Creating network
*** Adding controller
*** Adding hosts:
h1 h2 
*** Adding switches:
s1 
*** Adding links:
(h1, s1) (h2, s1) 
*** Configuring hosts
h1 h2 
*** Starting controller
c0 
*** Starting 1 switches
s1 
*** Waiting for switches to connect
s1 
*** Ping: testing ping reachability
h1 -> h2 
h2 -> h1 
*** Results: 0% dropped (2/2 received)
*** Stopping 1 controllers
c0 
*** Stopping 1 switches
s1 ..
*** Stopping 2 links

*** Stopping 2 hosts
h1 h2 
*** Done
completed in 0.976 seconds
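
If a test run is interrupted and leaves stale interfaces, namespaces, or switch processes behind, Mininet ships a cleanup command that can be run before retrying:

sudo mn -c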

Example Usage

see here

Mininet Cluster Edition

Mininet supports running a network across a set of nodes (a cluster). This currently relies on SSH tunnels and is experimental. To enable cluster support, on each node that will be part of the cluster:

  1. Generate and exchange SSH keys. If we are running a tiny cluster of node1-1 and node1-2:
    ssh-keygen -t rsa
    ssh-copy-id root@node1-1
    ssh-copy-id root@node1-2
    
    Note that each node also needs its own key in its authorized_keys file, so run ssh-copy-id against every node in the cluster.
  2. Edit /etc/ssh/sshd_config, adding the following two lines:
    AllowTcpForwarding yes
    PermitTunnel yes
    
  3. Restart sshd:
    service ssh restart
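
Before starting Mininet, it is worth confirming that passwordless SSH works between the nodes; a quick sanity check from node1-1 (BatchMode makes ssh fail instead of prompting for a password):

ssh -o BatchMode=yes root@node1-2 hostname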
    

If all goes well, you should be able to run the following to start a tree topology across node1-1 and node1-2:

# mn --cluster node1-1,node1-2 --topo tree,2,3

The virtual hosts and switches are distributed across the two nodes, as indicated in the command's output:

*** Placing nodes
h1:node1-1 h2:node1-1 h3:node1-1 h4:node1-2 h5:node1-2 h6:node1-2 h7:node1-2 h8:node1-2 h9:node1-2
s1:node1-1 s2:node1-1 s3:node1-2 s4:node1-2

If you have a controller running remotely, you can append the --controller remote,ip=x.x.x.x argument as in a regular mn command.
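
For example, assuming a controller reachable at 10.0.0.100 (the address is illustrative), the cluster command above becomes:

mn --cluster node1-1,node1-2 --topo tree,2,3 --controller remote,ip=10.0.0.100,port=6633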


1.2.1 Using Wireshark

In the above example, tcpdump can be replaced by Wireshark. Wireshark is "friendlier" in that it has a GUI, and an OpenFlow dissector plugin is available for it (see Section 3.5). In order to use Wireshark, you must enable X11 forwarding from your workstation to the node with ssh's -X or -Y flag, e.g.:

ssh -X -l root node1-1
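
Once X11 forwarding is in place, Wireshark can be started with a capture filter so that only OpenFlow control traffic is shown; a minimal sketch (the loopback interface and port 6633 are the usual defaults, adjust to your setup):

sudo wireshark -i lo -k -f "tcp port 6633"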

1.2.2 Using Open vSwitch directly

Mininet's datapaths are backed by Open vSwitch (OVS). Therefore, if you have a Mininet install, you get OVS for "free", and you can use OVS directly for your data plane.
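
For example, the OVS command-line tools installed alongside Mininet can build and inspect a datapath by hand; a minimal sketch (bridge name, interface, and controller address are illustrative):

ovs-vsctl add-br br0                                 # create a bridge
ovs-vsctl add-port br0 eth1                          # attach an interface to it
ovs-vsctl set-controller br0 tcp:10.0.0.100:6633     # point the bridge at a controller
ovs-vsctl show                                       # inspect the configuration
ovs-ofctl dump-flows br0                             # inspect the installed flow entries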


2. More complex examples

It is possible to run multiple controller instances, or several different logical components, together in the same network. This section shows two examples of more complex SDN setups: multiple controller instances, and slicing with FlowVisor, a network hypervisor.


2.1 Multiple Controllers

You may have multiple controllers in the same logical space of the control plane for various reasons - special applications, fail-over, distributed control planes, etc.

  • 2.1.1 On multiple hosts
  • 2.1.2 On the same host

2.1.1 On multiple hosts

If each controller is running on its own host (machine, VM, etc.), there is little to change. If you have hosts A, B, and C, with a Floodlight instance running on each, switches can be pointed to A:6633, B:6633, C:6633, or any combination thereof (a switch can be pointed to multiple controllers).
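
With OVS-backed switches, pointing a switch at several controllers is a single ovs-vsctl call; a sketch for a bridge named s1 (as Mininet names its switches), assuming A, B, and C are reachable at 10.0.0.1 through 10.0.0.3 (addresses illustrative):

ovs-vsctl set-controller s1 tcp:10.0.0.1:6633 tcp:10.0.0.2:6633 tcp:10.0.0.3:6633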

2.1.2 On the same host

The Floodlight configuration file

Multiple instances of Floodlight may be run on the same host, as long as each controller listens on its own set of ports. In this case, all controllers share the same IP address(es), so you must change the ports they listen on. These include the OpenFlow control port (TCP 6633), the REST API port (TCP 8080), and the debug port (TCP 6655).

In Floodlight, these values are set in the file floodlightdefault.properties, located in src/main/resources/ of the Floodlight sources. At the time of writing, it looks like this:

floodlight.modules=\
net.floodlightcontroller.jython.JythonDebugInterface,\
net.floodlightcontroller.counter.CounterStore,\
net.floodlightcontroller.storage.memory.MemoryStorageSource,\
net.floodlightcontroller.core.internal.FloodlightProvider,\
net.floodlightcontroller.threadpool.ThreadPool,\
net.floodlightcontroller.devicemanager.internal.DeviceManagerImpl,\
net.floodlightcontroller.devicemanager.internal.DefaultEntityClassifier,\
net.floodlightcontroller.staticflowentry.StaticFlowEntryPusher,\
net.floodlightcontroller.firewall.Firewall,\
net.floodlightcontroller.forwarding.Forwarding,\
net.floodlightcontroller.linkdiscovery.internal.LinkDiscoveryManager,\
net.floodlightcontroller.topology.TopologyManager,\
net.floodlightcontroller.flowcache.FlowReconcileManager,\
net.floodlightcontroller.debugcounter.DebugCounter,\
net.floodlightcontroller.debugevent.DebugEvent,\
net.floodlightcontroller.perfmon.PktInProcessingTime,\
net.floodlightcontroller.ui.web.StaticWebRoutable,\
net.floodlightcontroller.loadbalancer.LoadBalancer,\
org.sdnplatform.sync.internal.SyncManager,\
org.sdnplatform.sync.internal.SyncTorture,\ 
net.floodlightcontroller.devicemanager.internal.DefaultEntityClassifier
org.sdnplatform.sync.internal.SyncManager.authScheme=CHALLENGE_RESPONSE
org.sdnplatform.sync.internal.SyncManager.keyStorePath=/etc/floodlight/auth_credentials.jceks
org.sdnplatform.sync.internal.SyncManager.dbPath=/var/lib/floodlight/

Several entries can be added to this file to change the TCP port values. Unfortunately, the exact property names may change fairly frequently due to active development.

  • net.floodlightcontroller.restserver.RestApiServer.port = 8080
  • net.floodlightcontroller.core.internal.FloodlightProvider.openflowport = 6633
  • net.floodlightcontroller.jython.JythonDebugInterface.port = 6655

Each entry should be on its own line, with no blank lines in between. For example, to change the port on which Floodlight listens for switches from the default of 6633 to 6634, append the following to the .properties file:

net.floodlightcontroller.core.internal.FloodlightProvider.openflowport = 6634

Then point Floodlight at the configuration file with the -cf flag:

java -jar target/floodlight.jar -cf src/main/resources/floodlightdefault.properties

The file specified after -cf will be read in, and the values in it used to configure the controller instance. You should be able to confirm the change:

# netstat -nlp | grep 6634
...
tcp6       0      0 :::6634                 :::*                    LISTEN      2029/java       
...

Launching multiple controllers

Each controller instance running on the same host can be pointed at its own .properties file with the -cf flag, each file carrying different port values. Begin by making as many copies of the default .properties file as you will have controllers. Continuing the earlier example, you can have one host A and three Floodlight instances 1, 2, and 3, configured as below:

                                   1      2      3
FloodlightProvider.openflowport    6633   6634   6635
RestApiServer.port                 8080   8081   8082
JythonDebugInterface.port          6655   6656   6657

No ports should be shared by the three instances, or they will likely throw errors at startup and exit shortly after. With a .properties file for each instance under resources/ (named 1.properties, 2.properties, and 3.properties for this example), you can launch the controllers in a loop, for example:

for i in `seq 1 3`; do
   java -jar target/floodlight.jar -cf src/main/resources/$i.properties 1>/dev/null 2>&1 &
done

This should launch three backgrounded instances of Floodlight.
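
The three per-instance .properties files used above could, for instance, have been produced from the default file by appending the port overrides from the table; a sketch (file names as in the example):

cd src/main/resources
for i in 1 2 3; do
    cp floodlightdefault.properties $i.properties
    # offset each port by (i-1) so the instances do not collide
    echo "net.floodlightcontroller.core.internal.FloodlightProvider.openflowport = $((6632 + i))" >> $i.properties
    echo "net.floodlightcontroller.restserver.RestApiServer.port = $((8079 + i))" >> $i.properties
    echo "net.floodlightcontroller.jython.JythonDebugInterface.port = $((6654 + i))" >> $i.properties
done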


2.2 Network virtualization/slicing

A more typical case you might encounter is a network that is sliced, or virtualized.

  • 2.2.1 A brief intro to network virtualization
  • 2.2.2 Virtualization with multiple hosts
  • 2.2.3 On the same host

2.2.1 A brief intro to network virtualization

A virtualized network is organized as below:

[controller 1] [controller 2] [controller 3]
             \       |       /
              \      |      /
            [network hypervisor]-[policies]
                     |
                  [network]

A network hypervisor like FlowVisor sits between the control and data planes, intercepting and re-writing the contents of the OpenFlow control channel to one or more controllers running independently of one another. Ultimately, the network hypervisor gives each controller the illusion that it is the only controller in the network. It accomplishes this by

  1. Rewriting the topology information conveyed by OpenFlow (in the form of PORT_STATs and PacketIns triggered by LLDP messages) before it reaches each controller, so that each controller works on only a subset, or slice, of the network, and
  2. Mapping the PacketIns/PacketOuts to and from each controller to the proper sets of switches and switch ports.

How the re-writing occurs depends on a set of admin-defined policies.


2.2.2 Virtualization with multiple hosts

We begin by introducing a simple example of a virtualized topology:

[Floodlight 1] [Floodlight 2]
           \    /
         [FlowVisor]
              |
          [Mininet]    

Each component above will be run on a separate node. Since we need more than two nodes, you may want to reserve either Sandbox 4 or Sandbox 9. The components can also be run on the same node, with the caveats discussed in the next section (2.2.3).

Here, Mininet will be used to emulate a three-switch, three-host data plane:

h1   h2   h3
 |    |    | 
s1---s2---s3 

This data plane will be sliced so that one Floodlight instance controls switches s1 and s2, and the other controls s3.
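
On the FlowVisor node, this split is expressed as slices and flowspace entries. The exact fvctl syntax varies between FlowVisor releases, but with the JSON-based fvctl of FlowVisor 1.x the policy might look roughly like the following sketch (slice names, controller addresses, and the flowspace entries are illustrative):

# one slice per Floodlight instance (controller addresses are assumed)
fvctl -n add-slice fl1 tcp:10.0.0.1:6633 admin@fl1
fvctl -n add-slice fl2 tcp:10.0.0.2:6633 admin@fl2
# give s1 and s2 to slice fl1, and s3 to slice fl2 (7 = full permissions)
fvctl -n add-flowspace fs1 00:00:00:00:00:00:00:01 100 any fl1=7
fvctl -n add-flowspace fs2 00:00:00:00:00:00:00:02 100 any fl1=7
fvctl -n add-flowspace fs3 00:00:00:00:00:00:00:03 100 any fl2=7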


2.2.3 On the same host


As with multiple controllers on the same VM/host, you must ensure that FlowVisor and the controllers do not listen on the same ports. For the controllers, this can be avoided as described in Section 2.1.2. Out of the box, FlowVisor and Floodlight conflict on ports 6633 and 8080.


3.3 Cbench
website: http://docs.projectfloodlight.org/display/floodlightcontroller/Cbench+(New)


dependencies

sudo apt-get install autoconf automake libtool libsnmp-dev libpcap-dev

installation/build

git clone git://gitosis.stanford.edu/openflow.git
cd openflow; git checkout -b mybranch origin/release/1.0.0
git clone git://gitosis.stanford.edu/oflops.git
git submodule init && git submodule update
wget http://hyperrealm.com/libconfig/libconfig-1.4.9.tar.gz
tar -xvzf libconfig-1.4.9.tar.gz
cd libconfig-1.4.9
./configure
sudo make && sudo make install
cd ../oflops/
sh ./boot.sh ; ./configure --with-openflow-src-dir=${OF_PATH}/openflow/
make install

where OF_PATH is the directory into which you cloned the OpenFlow repository.


run

Run from the cbench directory under oflops:

cd cbench 
cbench -c localhost -p 6633 -m 10000 -l 10 -s 16 -M 1000 -t 
  • -c localhost : controller at loopback
  • -p 6633 : controller listening on port 6633
  • -m 10000 : 10000 ms (10 sec) per test
  • -l 10 : 10 loops (trials) per test
  • -s 16 : 16 emulated switches
  • -M 1000 : 1000 unique MAC addresses (hosts) per switch
  • -t : throughput testing (see the latency-mode comparison below)

For the complete list of options, use the -h flag.

The output for the above command looks like this:

cbench: controller benchmarking tool
   running in mode 'throughput'
   connecting to controller at localhost:6633 
   faking 16 switches offset 1 :: 3 tests each; 10000 ms per test
   with 10 unique source MACs per switch
   learning destination mac addresses before the test
   starting test with 0 ms delay after features_reply
   ignoring first 1 "warmup" and last 0 "cooldown" loops
   connection delay of 0ms per 1 switch(es)
   debugging info is off
16:53:14.384 16  switches: flows/sec:  18  18  18  18  18  18  18  18  18  18  18  18  18  18  18  18   total = 0.028796 per ms 
16:53:24.485 16  switches: flows/sec:  20  20  20  20  20  20  20  20  20  20  20  20  20  20  20  20   total = 0.031999 per ms 
16:53:34.590 16  switches: flows/sec:  24  24  24  24  24  24  24  24  24  24  24  24  24  24  24  24   total = 0.038380 per ms 
RESULT: 16 switches 2 tests min/max/avg/stdev = 32.00/38.38/35.19/3.19 responses/s
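
For comparison, omitting the -t flag runs cbench in its default latency mode, where each emulated switch sends one packet-in at a time and waits for the response before sending the next:

cbench -c localhost -p 6633 -m 10000 -l 10 -s 16 -M 1000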

3.4 liboftrace (ofdump/ofstats)

docs:

https://github.com/capveg/oftrace/blob/master/README
http://www.openflow.org/wk/index.php/Liboftrace


dependencies

sudo apt-get install libpcap-dev swig libssl-dev

installation/build

git clone git://github.com/capveg/oftrace.git
cd oftrace
./boot.sh
./configure --with-openflow-src-dir=${OF_PATH}/openflow/
make && make install

run

There are two tools pre-packaged with liboftrace (as per a mailing-list entry):

  1. ofstats: a program that calculates the controller processing delay, i.e., the difference in time between a packet_in message and the corresponding packet_out or flow_mod message.
  2. ofdump: a program that simply lists OpenFlow message types with timestamps, per switch/controller pair.

Both take the same arguments:

[ofstats|ofdump] [pcap file] [controller IP] [OF port]

If the controller IP and port are omitted, they default to localhost:6633.

For example, with a pcap file named sample.pcap from a tcpdump session sniffing for traffic from a controller at 192.168.1.5, port 6637:
ofdump:

# ofdump sample.pcap 192.168.1.5 6637
DBG: tracking NEW stream : 192.168.1.5:6637-> 192.168.1.6:47598 
DBG: tracking NEW stream : 192.168.1.6:47598-> 192.168.1.5:6637 
FROM 192.168.1.5:6637           TO  192.168.1.6:47598   OFP_TYPE 0      LEN 8   TIME 0.000000
FROM 192.168.1.6:47598          TO  192.168.1.5:6637    OFP_TYPE 0      LEN 8   TIME 0.026077
FROM 192.168.1.5:6637           TO  192.168.1.6:47598   OFP_TYPE 5      LEN 8   TIME 0.029839
FROM 192.168.1.6:47598          TO  192.168.1.5:6637    OFP_TYPE 6      LEN 128 TIME 0.1070415

...

FROM 192.168.1.6:47598          TO  192.168.1.5:6637    OFP_TYPE 10     LEN 60  TIME 0.2038485
 --- 2 sessions:  0 0
FROM 192.168.1.5:6637           TO  192.168.1.6:47598   OFP_TYPE 13     LEN 24  TIME 0.2038523
FROM 192.168.1.6:47598          TO  192.168.1.5:6637    OFP_TYPE 10     LEN 60  TIME 0.2038573
FROM 192.168.1.5:6637           TO  192.168.1.6:47598   OFP_TYPE 13     LEN 24  TIME 0.2038614
FROM 192.168.1.6:47598          TO  192.168.1.5:6637    OFP_TYPE 10     LEN 60  TIME 0.2038663
FROM 192.168.1.5:6637           TO  192.168.1.6:47598   OFP_TYPE 13     LEN 24  TIME 0.2038704
Total OpenFlow Messages: 20015

ofstats:

# ofstats sample.pcap 192.168.1.5 6637  
Reading from pcap file 1.pcap for controller 192.168.1.5 on port 6637
DBG: tracking NEW stream : 192.168.1.5:6637-> 192.168.1.6:47598 
DBG: tracking NEW stream : 192.168.1.6:47598-> 192.168.1.5:6637 
0.008088        secs_to_resp buf_id=333 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 0 queued
0.000454        secs_to_resp buf_id=334 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 2 queued
0.000437        secs_to_resp buf_id=335 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 1 queued
0.000534        secs_to_resp buf_id=336 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 0 queued
0.000273        secs_to_resp buf_id=337 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 2 queued
0.000486        secs_to_resp buf_id=338 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 2 queued
0.000379        secs_to_resp buf_id=339 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 1 queued
0.000275        secs_to_resp buf_id=340 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 0 queued
...
0.000135        secs_to_resp buf_id=10330 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 1 queued
0.000132        secs_to_resp buf_id=10331 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 1 queued
0.000131        secs_to_resp buf_id=10332 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 0 queued

Since the output is dumped to stdout, it is probably best to redirect it to a file for later parsing, like so:

# ofstats sample.pcap 192.168.1.5 6637 > outfile

3.5 Wireshark

website(wireshark): http://www.wireshark.org
docs(plugin): https://bitbucket.org/barnstorm/of-dissector


dependencies
wireshark(source):

sudo apt-get install libpcap-dev bison flex libgtk2.0-dev build-essential 

plugin:

sudo apt-get install scons mercurial

installation/build

You need the Wireshark source to build the plugin. At the time of this writing, Wireshark is at v1.10.

wget http://wiresharkdownloads.riverbed.com/wireshark/src/wireshark-1.10.0.tar.bz2
tar -xjf wireshark-1.10.0.tar.bz2 
cd wireshark-1.10.0/
./configure

The above is sufficient for building the plugin. Installing Wireshark itself from source (e.g., with make; make install) can take a while, so you may choose to install the binary package instead:

apt-get install wireshark

If you decide to build from source, also install libwiretap1.

Next fetch and build the plugin:

hg clone https://bitbucket.org/barnstorm/of-dissector
cd of-dissector/
export WIRESHARK=${WS_ROOT}/wireshark-1.10.0/
cd src
scons install
cp openflow.so /usr/lib/wireshark/libwireshark1/plugins/

where ${WS_ROOT} is the directory into which you untarred the Wireshark source. The plugin directory may also differ depending on whether you installed Wireshark from source; if you did, the path will be something like /usr/local/lib/wireshark/plugins/1.10.0/.


run

Run Wireshark as root:

sudo wireshark

You should see openflow.so in the list of plugins under Help > About Wireshark > Plugins.


References

Floodlight

Mininet

FlowVisor website: http://onlab.us/flowvisor.html
