= Orbit-PlanetLab Integration Demo =

This demo illustrates an ORBIT node (acting as an AP) fetching a video stream from a remote PlanetLab node and feeding it to a client node. The basic steps involved in the demo are:

-> First, a key pair has to be generated:
{{{
ssh-keygen -t rsa
}}}
This generates two files, id_rsa and id_rsa.pub, which hold the private and public keys respectively.

-> Next, the public key needs to be transferred to the slice on the PlanetLab node. Winlab has a permanent slice, orbit_pkamat:
{{{
ssh-copy-id -i ~/.ssh/id_rsa.pub orbit_pkamat@pli1-pa-3.hpl.hp.com
}}}

-> After logging into the grid console, image the nodes with planetlab-biggrid.ndz (which resides in repository2:/export/orbit/image):
{{{
imageNodes [13,5],[13,7] planetlab-biggrid.ndz
}}}
Changes might be needed to the script file 'startitall.rb'. For this, the nodes have to be powered up:
{{{
wget -O - -q 'http://cmc:5012/cmc/on?x=13&y=5'
}}}
Then ssh into the node to get root access:
{{{
ssh root@node13-5
}}}
The script file can then be modified accordingly.

-> The new version of the nodehandler script and the experiment script 'planetlab.rb' are to be downloaded from nodehandler-planetlabdemo.tar.gz, unpacked, and run:
{{{
tar xzf nodehandler-planetlabdemo.tar.gz
cd nodehandler/src/ruby
ruby handler/nodeHandler.rb -k test:exp:planetlab
}}}
Changes might be needed to the script files 'nodehandler.rb' and 'planetlab.rb'.

== Setting up the PlanetLab node as an Apache server ==

You might need to configure an Apache server on the PlanetLab nodes. This can be done as follows:

1. ssh to the appropriate PlanetLab node from the grid console:
{{{
ssh -v -l orbit_pkamat pli1-pa-3.hpl.hp.com
}}}
2. Check whether httpd is already installed:
{{{
rpm -q httpd
}}}
3. If it is not installed, use yum to install it:
{{{
sudo yum update
sudo yum -y install httpd
}}}
4. Configure the Apache server through httpd.conf (it resides in /etc/httpd/conf/httpd.conf). Search for Listen; the default port is 80. Change this to 8080, since port 80 cannot be used:
{{{
#Listen 80
Listen 8080
}}}
5. Copy the video file onto the PlanetLab node using scp. Supposing the video file is named inc.avi:
{{{
scp inc.avi orbit_pkamat@pli1-pa-3.hpl.hp.com:
}}}
6. Edit httpd.conf once again. Search for DocumentRoot, which sets the directory Apache serves files from, and change the path to the directory where the video file is located.
7. The video may still not be accessible. In that case the following command may be needed on the PlanetLab node to grant execute (directory search) permission:
{{{
sudo chmod +x /
}}}

= VLC commands =

The VLC player operates in 3 modes:
1. VLC server: streams the video to one (unicast) or more (multicast) clients.
2. VLC relay: receives the stream from a server and forwards it to one or more clients. In the demo, the Access Point acts as the VLC relay.
3. VLC client: receives a unicast/multicast stream.

Some extended capabilities of the VLC player:
1. VLC streams in unicast and multicast on an IPv4 or IPv6 network everything that VLC is able to read, via UDP, RTP or HTTP.
2. VLC supports various formats/muxes such as AVI, MPEG (PS/TS) and OGG.
3. VLC stream output has a modular architecture. This demo uses the 'standard' module to send the stream. The module takes:
-> access: udp, rtp or http
-> mux/format: avi, mpeg or ogg
-> url: the unicast/multicast address to stream to.
4. The VLC commands for the different roles are:
-> Streaming server: vlc --sout '#standard{access=udp,mux=ts,url=192.168.13.7:1234}' (a complete invocation is sketched below).
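For illustration, a complete server-side invocation might look like the following. This is only a sketch: it assumes the video file inc.avi from step 5 above as input and reuses the destination address from the command above; the single quotes keep the shell from treating the '#' in the --sout chain as a comment.
{{{
vlc inc.avi --sout '#standard{access=udp,mux=ts,url=192.168.13.7:1234}'
}}}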
In the demo, however, the server side was set up as an Apache server rather than a VLC streaming server.
-> Relay: vlc -d 'http://.....' --sout '#standard{access=udp,mux=ts,url=192.168.13.7:1234}'. This reads the video in from the Apache server over HTTP and streams it to the client over UDP.
-> Client: vlc udp:@:1234

= To configure the wireless interface of an ORBIT node manually =

1. Type lspci on the command line to check whether the Atheros module is present. If not, run 'modprobe ath_pci'.
2. Then bring the interface up with 'ifconfig ath0 up'.
3. Use 'ifconfig ath0 <ip-address>' to set the IP address of the wireless interface.
4. After that, iwconfig can be used to set the ESSID (a consolidated sketch of these commands is given in the note at the bottom of this page).

= PSSH and GEXEC =

PSSH
-- Mainly for controlling large collections of nodes in the wide area.
-- Has proven to be a better option for large collections of nodes such as PlanetLab. I have contacted the developer asking for more information.
-- pscp, prsync, pnuke and pslurp are included in the pssh package.

I ran pssh from the grid console to simultaneously send commands to 3 PlanetLab nodes:
{{{
pssh -h ips.txt -l orbit_pkamat ls -all -o o
}}}
ips.txt contains:
{{{
pli1-pa-3.hpl.hp.com
pli1-pa-4.hpl.hp.com
pli1-pa-5.hpl.hp.com
}}}
After the command was executed, the output was:
{{{
Error on pli1-pa-5.hpl.hp.com
Success on pli1-pa-3.hpl.hp.com
Success on pli1-pa-4.hpl.hp.com
}}}
and the directory 'o' was created with 3 files, namely pli1-pa-3.hpl.hp.com, pli1-pa-4.hpl.hp.com and pli1-pa-5.hpl.hp.com. The files pli1-pa-3.hpl.hp.com and pli1-pa-4.hpl.hp.com contained:
{{{
index.html
inc.avi
}}}
but the file pli1-pa-5.hpl.hp.com was empty.

GEXEC
-- Faster than PSSH.
-- GEXEC operates by building an n-ary tree of TCP sockets and threads between gexec daemons and propagating control information up and down the tree.
-- By using hierarchical control, GEXEC distributes both the work and resource usage associated with massive amounts of parallelism across multiple nodes, thereby eliminating problems associated with single-node resource limits (e.g., limits on the number of file descriptors on front-end nodes).
-- It uses a client-server model.

== [wiki:Internal/VirtualPL/IntegratedExpt/Status_Updates] ==
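Note: the manual wireless-configuration steps above, gathered into one sketch. The IP address, netmask and ESSID below are placeholders for illustration only, not values taken from the demo.
{{{
modprobe ath_pci                                    # load the Atheros driver if it is not already present
ifconfig ath0 up                                    # bring the wireless interface up
ifconfig ath0 192.168.13.5 netmask 255.255.255.0    # assign an address (placeholder values)
iwconfig ath0 essid "demo-essid"                    # set the ESSID (placeholder name)
}}}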