Configuring SPADE to connect to a peer deployment of SPADE.

This article explains how to connect two SPADE deployments so that they can transfer files between each other.

Pre-requisites

It is assumed that the nest-spade-war project has been installed and is running as outlined here, and that the log output of the JBoss server can be seen in a second terminal.

It is also helpful if you have at least read the "Local Warehouse" scenario in order to familiarize yourself with the concepts discussed there, as they will be re-used here. As in that scenario, the following environment variables need to be set. They are shown here being set to their standard values.

export WILDFLY_HOME=${HOME}/server/wildfly-9.0.2.Final
export SPADE_VERSION=3.0.1
export SPADE_HOME=${HOME}/nest-spade-war-${SPADE_VERSION}
export SPADE_WAR=${SPADE_HOME}/target/spade-${SPADE_VERSION}.war

Configuration of the Shipping Instance

You now have two instances of SPADE to configure. The instance that initially fetches the file is referred to here as the shipping instance, while the instance that will receive the file from the shipping instance is referred to as the receiving instance. This section lays out what you need to do to configure the shipping instance.

To configure the shipping instance you need to update its ssh configuration and add a new outbound transfer to the spade.xml file. The following command will do just that; it assumes that RECEIVING_HOST has already been set to the receiving host's name and that you are using the same user id on both hosts.

cat >> ~/.ssh/config << EOF

Host ${RECEIVING_HOST}
    User `whoami`
    IdentityFile ~/.ssh/`hostname`_scptransfer
EOF

You can now use the following command to transfer files over to the receiving host using scp. Accept the RSA key when asked and use the remote account's password, as the appropriate key is not yet installed on the receiving host.

scp ~/bin/spade_scp_wrapper.py ~/.ssh/`hostname`_scptransfer.pub ${RECEIVING_HOST}:.

The following XML declares the new outbound transfer on the shipping host. It should be added to the spade.xml file immediately below the existing <outboundTransfer> element, with remote_user@remote_mail_host replaced by the address you've selected for your remote, i.e. receiving, SPADE instance, receiving_host replaced by the name of the receiving host, and shipping_host replaced by the name of the shipping host.

    <outboundTransfer>
        <name>Ship to Receiver</name>
        <neighbor>remote_user@remote_mail_host</neighbor>
        <location>receiving_host:spade/receiving/shipping_host</location>
        <shipper>receiving_host</shipper>
        <class>gov.lbl.nest.spade.services.impl.SCPTransfer</class>
    </outboundTransfer>

You can then set up a registration that uses this new outbound transfer by using the following commands.

mkdir -p ~/spade/registrations/local
cp ${SPADE_HOME}/src/main/extras/examples/registration.3.xml ~/spade/registrations/local

You can now turn your attention to the receiving instance.

Configuration of the Receiving Instance

Assuming that you are starting with a cleanly installed SPADE deployment on the receiving host, you are going to need to set up both the scp and SPADE configurations.

The following commands, where SHIPPING_HOST has already been set to the shipping host's name, set up the receiving of files from the shipping host using scp.

mkdir -p ~/spade/receiving/${SHIPPING_HOST}
mkdir -p ~/bin
mv ~/spade_scp_wrapper.py ~/bin
cd ~/.ssh
read PUBLIC_KEY < ~/${SHIPPING_HOST}_scptransfer.pub
echo from=\"${SHIPPING_HOST}\",command=\"$HOME/bin/spade_scp_wrapper.py \
-d $HOME/spade/receiving/${SHIPPING_HOST}\",\
no-agent-forwarding,no-port-forwarding,no-pty,no-user-rc,no-X11-forwarding \
$PUBLIC_KEY >> authorized_keys
chmod 644 authorized_keys
unset PUBLIC_KEY
cd -

The following element should now be added to the spade.xml file below the existing warehouse element, with local_user@local_mail_host being replaced by the address you've selected for your local, i.e. shipping, SPADE instance, and shipping_host being replaced with the name of the host shipping the file.

    <inboundTransfer>
        <name>Receive From Shipper</name>
        <neighbor>local_user@local_mail_host</neighbor>
        <location>~/spade/receiving/shipping_host</location>
    </inboundTransfer>

Finally, you need to set up this instance to be able to send confirmations. If this instance is going to use email for that (see the "Emailing Confirmations" scenario), then executing the following commands will provide a template file for that.

cp ${SPADE_HOME}/src/main/extras/examples/mail.properties ~/spade/mail.properties
chmod 600 ~/spade/mail.properties

You will need to edit this file, in which the receiving host is considered to be "local", and provide the appropriate information for all values enclosed by angle brackets.
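As an illustrative sketch only, a filled-in mail.properties might look like the following. The property names shown are standard JavaMail session keys, but the actual placeholders in the copied template may differ, so treat every value here as a hypothetical example rather than the template's real contents.

```properties
# Hypothetical example values; consult the template copied above for
# the actual placeholders to fill in.
mail.smtp.host=smtp.example.org
mail.smtp.port=25
mail.from=local_user@local_mail_host
```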

Otherwise you will need to add the following to the <neighborhood> element as a sibling to, and following, its <home> element, with webserver replaced by the name of the web server that is exposing the local SPADE instance's interface, local_user@local_mail_host replaced by the address you've selected for your local, i.e. shipping, SPADE instance, and shipping_host replaced by the name of the host shipping the file.

        <neighbor>
            <neighbor>local_user@local_mail_host</neighbor>
            <verifyUrl>http://webserver/spade/shipping_host/command/verify</verifyUrl>
        </neighbor>

Execution

Execution of this scenario is similar to the scenario upon which it is built, namely the "Emailing Confirmations" scenario, except that the commands are now spread between the shipping and receiving hosts. To begin with, the following commands should be run on the shipping host.

${WILDFLY_HOME}/bin/jboss-cli.sh --connect --command="deploy \
    --name=spade.war ${SPADE_WAR}"
mkdir -p ~/spade/dropbox/scenario/connected
cat > ~/spade/dropbox/scenario/connected/mock.A.data << EOF
put some junk in here
EOF
touch ~/spade/dropbox/scenario/connected/mock.A.sem
${SPADE_HOME}/src/main/python/spade-cli local_scan

Inspect the shipping host's log file and wait until you see the "finisher" complete. Then execute the following commands on the receiving host.

${WILDFLY_HOME}/bin/jboss-cli.sh --connect --command="deploy \
    --name=spade.war ${SPADE_WAR}"
${SPADE_HOME}/src/main/python/spade-cli inbound_scan
# Wait for the "finisher" for this file to stop, e.g. 5 seconds
sleep 5
${SPADE_HOME}/src/main/python/spade-cli send_confirmations
find ~/spade/warehouse -name "mock.A.*"
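In both of the steps above you wait on the "finisher" by watching the server log. The following is a self-contained sketch of picking the relevant line out of a log with grep; it uses a mock log file, and the line format shown is illustrative only. In a real deployment you would read the WildFly log, typically ${WILDFLY_HOME}/standalone/log/server.log, and SPADE's actual message text may differ.

```shell
# Create a mock log standing in for the WildFly server log; the line
# format here is illustrative, not SPADE's actual log output.
printf '%s\n' \
    'INFO  [spade] fetcher  started  mock.A.data' \
    'INFO  [spade] finisher complete mock.A.data' > /tmp/mock_server.log

# Print the first line mentioning the finisher, then stop.
grep -m1 -i 'finisher' /tmp/mock_server.log
```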

Now, if you are using email to exchange confirmations, you can make sure the confirmations have been sent and returned to the shipping host by executing the following command.

${SPADE_HOME}/src/main/python/spade-cli read_email

The transfer between two instances of SPADE is now complete.

You should remember that all of the ${SPADE_HOME}/src/main/python/spade-cli commands you have been executing are only to force the examples through the system. Left to itself, SPADE runs all of these commands periodically, so you do not need to actively do anything other than drop files and their semaphores into the appropriate dropbox.
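For example, in normal operation the manual scans above are unnecessary; dropping a new file and then its semaphore into the dropbox used earlier is enough. The file names below are illustrative, following the mock.A pattern used in this scenario.

```shell
# Normal operation: write the data file first, then create its
# semaphore; SPADE's periodic scans will pick the pair up on their own.
DROPBOX=~/spade/dropbox/scenario/connected
mkdir -p ${DROPBOX}
echo "put some junk in here" > ${DROPBOX}/mock.B.data
touch ${DROPBOX}/mock.B.sem
```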

Cleanup

Having successfully completed this scenario you should now undeploy the application using the following command.

${WILDFLY_HOME}/bin/jboss-cli.sh --connect --command="undeploy spade.war"