BitTorrent Sync as geo-replication for storage

update: In early 2016, Resilio was spun out of BitTorrent to bring distributed technology to the enterprise. This is awesome news, and I’ll be posting some updates about what Resilio is up to moving forward. Below is my initial post from 2013, which was syndicated on the BitTorrent Sync blog.

What is BitTorrent Sync?

The concept is simple: using a local client on your desktop or laptop, Sync synchronizes the contents of a selected folder to other remote Sync clients sharing the same key. Synchronization is done securely via an encrypted (AES) BitTorrent session. This ends up being effective for moving a lot of data across multiple devices, and while I think it was initially designed for secure, private Dropbox-style replication, I’ve been testing it as an alternative method of geo-replication between GlusterFS clusters on Fedora.

Right off the bat there were a few things that got my gears turning:

  • a known and proven P2P protocol (monthly BitTorrent users are estimated at something insane like a quarter of a billion)
  • encrypted transfers
  • multi platform
  • KISS oriented configuration

What is GlusterFS?

GlusterFS is an open source project leveraging commodity hardware and the network to create scale-out, fault tolerant, distributed and replicated NAS solutions that are flexible and highly available. It supports native clients, NFS, CIFS, HTTP, FTP, WebDAV and other protocols. (more info here)

GlusterFS has native Geo Replication. Why not use it?

Leveraging the native GlusterFS geo-replication for a single volume is a one-way street today. I’m not sure if this will change moving forward, but today a replicated volume is configured in a traditional master/slave arrangement.


In addition to simple failover configurations, it can also be configured for cascading configurations that allow for more interesting archival type configurations.


or even:


While I’m sure this works for replication and certain disaster recovery scenarios, I’m looking at multi-master configurations, or multi-datacenter configurations that are all “hot”, possibly removing the need for a centralized repository. I’d also like a scenario that allows all sites to serve as DR locations for any other participant while leveraging the closest cluster as the data endpoint for writes. Something that looks a bit more like this…


This type of configuration also allows for a more easily grown environment and a quick way to bring another site online.


Leveraging BitTorrent, one of the more interesting features is the optional use of a tracker service. This helps with peer discovery, letting the tracker announce SHA2(secret):IP:port so peers can connect directly. The tracker service also acts as a STUN server, helping with NAT traversal for peers that can’t see each other directly behind firewalls. This is interesting for environments where you don’t want to deal with reconfiguring firewalls. It’s important to note, though, that even when leveraging the tracker service, all data is encrypted in flight.

Getting Started

For quick testing, find a couple of boxes that you want to get replication moving between. These could be minimal-install Linux boxes, Samba servers for your SMB, webservers (backup replication?), or in my case, a single node of a gluster cluster. If you’re interested in getting started with gluster, here’s a good place to start.

A quick note if you’re using gluster: on one of the nodes, make sure the GlusterFS client is installed. Create a directory and, using the GlusterFS client, mount the volume you want to have replicated. There are more complicated ways to do this, but for testing this will work fine.
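As a minimal sketch (assuming a gluster volume named myvol served by the local node; substitute your own volume name, and mount it wherever you plan to point Sync):

$ sudo mkdir /replication
$ sudo mount -t glusterfs localhost:/myvol /replication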

Download the client

Identify the directory you want to replicate, and download the client from BitTorrent Labs for your platform. For me it was the x64 Linux client.


First we’ll need to untar the download and get some config files ready to go. Additionally, we’ll want to build an init.d script to ensure the client runs at startup. You don’t have to do all of this, but I wanted to be able to manage it as a service.

$ tar -xf btsync.tar.gz

We’ll want to move the binary to a better location

 $ sudo mv btsync /usr/bin

Next, create a directory for the configuration and generated storage files

 $ sudo mkdir /etc/btsync

We should also identify or create the directory we want to use as a replication target. As an example, I’ll create a new directory…

$ sudo mkdir /replication

With our directories created and in place it’s time to generate the initial config file and edit it appropriately.

$ btsync --dump-sample-config | sudo tee /etc/btsync.conf

Using your favorite text editor, edit the following lines. Change

"device name": "My Sync Device",

to

"device name": "whateveryourhostnameis",

change

"storage path" : "/home/user/.sync",

to

"storage path" : "/etc/btsync",

and change

// "pid_file" : "/var/run/syncapp/",

to

"pid_file" : "/var/run/",

Since we’re going to define the replicated folders via the conf file, it’s important to note that the web UI that’s normally available for the Linux client will be disabled. The first thing you’ll need to do is generate a “secret” to use for your share. From the command line:

$ sudo btsync --generate-secret

will give you a secret you can use, but I find it easier to just dump the secret at the bottom of the conf file I’m going to use and move it around from there.

$ btsync --generate-secret | sudo tee -a /etc/btsync.conf

In the shared folder section look for the following line:

"secret" : "MY_SECRET_1", // * required field

and replace MY_SECRET_1 with the secret you generated. As an example:

"secret" : "GYX6MWA67INIBN5XRHBQZRTGYX6MWA67XRHPJOO6ZINIBN5OQA", // * required field

You’ll want to change the directory line as well. Change

"dir" : "/home/user/bittorrent/sync_test", // * required field

to

"dir" : "/replication", // * required field

In the shared folders section, either edit or comment out the known hosts entries. The easiest thing is to comment out the examples provided so they end up looking like this:

// "",
// ""

IMPORTANT: You’ll need to remove the leading /* and trailing */ of the shared folders section.
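Putting it all together, the shared folders section of my conf file ended up looking roughly like the sketch below (the sample config your btsync build dumps is the authoritative template; leave any other per-folder options it includes at their defaults):

"shared_folders" :
[
  {
    "secret" : "GYX6MWA67INIBN5XRHBQZRTGYX6MWA67XRHPJOO6ZINIBN5OQA", // * required field
    "dir" : "/replication", // * required field
    "known_hosts" :
    [
      // "",
      // ""
    ]
  }
]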

With the config file in place, start BitTorrent Sync pointing at it:

btsync --config /etc/btsync.conf

sync init script

I’m by no means claiming this is a work of art, but it gets the job done. You’ll want to create a file /etc/init.d/btsync with the following content:

#!/bin/sh
#
# chkconfig: - 27 73
# description: Starts and stops the btsync BitTorrent Sync client
#
# pidfile: /var/run/btsync.pid
# config: /etc/btsync.conf

# Source function library.
. /etc/rc.d/init.d/functions

# Avoid using root's TMPDIR
unset TMPDIR

# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ "${NETWORKING}" = "no" ] && exit 1

# Check that btsync.conf exists.
[ -f /etc/btsync.conf ] || exit 6

KIND="btsync"
BTSYNCOPTIONS="--config /etc/btsync.conf"

start() {
    echo -n $"Starting $KIND services: "
    daemon btsync $BTSYNCOPTIONS
    RETVAL=$?
    echo
    [ $RETVAL -eq 0 ] && touch /var/lock/subsys/btsync || RETVAL=1
    return $RETVAL
}

stop() {
    echo -n $"Shutting down $KIND services: "
    killproc btsync
    RETVAL=$?
    [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/btsync
    echo ""
    return $RETVAL
}

restart() {
    stop
    start
}

rhstatus() {
    status btsync
    return $?
}

# Allow status as non-root.
if [ "$1" = status ]; then
    rhstatus
    exit $?
fi

case "$1" in
  start)
    start
    ;;
  stop)
    stop
    ;;
  restart)
    restart
    ;;
  status)
    rhstatus
    ;;
  condrestart)
    [ -f /var/lock/subsys/btsync ] && restart || :
    ;;
  *)
    echo $"Usage: $0 {start|stop|restart|status|condrestart}"
    exit 2
esac

exit $?

Testing the sync service out

With that done, you’ll want to change the mode of that file to 755 so it can be run as a service.

chmod 755 /etc/init.d/btsync

and ensure it’s run at startup:

chkconfig --add btsync
chkconfig btsync on
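With the service registered, start it and make sure it stays up; then drop a test file into /replication on one node and watch it show up on the others:

service btsync start
service btsync status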

Other nodes and additional thoughts

With the above in place, you’ll want to configure additional btsync clients on gluster nodes (or whatever test systems you’re using) at your remote locations using the same secret you used above. The mount point / local folder can be different, but the secret must be the same. This will allow replication to start amongst the identified folders. Thanks for reading, and check out other cool use cases for BitTorrent Sync on the BitTorrent Sync forums.

Converged Infrastructure prototyping with Gluster 3.4 alpha and QEMU 1.4.0

I just wrapped up my presentation at the Gluster Workshop at CERN, where I discussed open source advantages in tackling converged infrastructure challenges. Here is my slide deck. Just a quick heads up: there’s some animation that’s lost in the PDF export, as well as color commentary during almost every slide.

During the presentation I demoed the new QEMU/GlusterFS native integration leveraging libgfapi. For those of you wondering what that means: in short, there’s no need for FUSE anymore, and QEMU leverages GlusterFS natively on the back end. Awesome.

So for my demo I needed two boxes running QEMU/KVM/GlusterFS. This would provide the compute and storage hypervisor layers. As I only have a single laptop to tour Europe with, I obviously needed a nested KVM environment.

If you’ve got enough hardware, feel free to skip the Enable Nested Virtualization section and jump ahead to the base OS installation.

This wasn’t an easy environment to get up and running; this is alpha code, boys and girls, so expect to roll your sleeves up. OK, with that out of the way, I’d like to walk through the steps I took to get my demo environment up and running. This installation assumes you have Fedora 18 installed and updated, with virt-manager and KVM installed.

Enable Nested Virtualization

Since we’re going to want to install an OS on a VM running on the Gluster/QEMU cluster we’re building, we’ll need to enable nested virtualization. Let’s first check whether nested virtualization is enabled. If the command below returns N, it’s not; if it returns Y, skip ahead to the install.

$ cat /sys/module/kvm_intel/parameters/nested

If it’s not, we’ll need to load the KVM module with the nested option enabled. The easiest way to change this is using the modprobe configuration files:

$ echo "options kvm-intel nested=1" | sudo tee /etc/modprobe.d/kvm-intel.conf

Reboot your machine once the changes have been made and check again to see if the feature is enabled:

$ cat /sys/module/kvm_intel/parameters/nested

That’s it; we’re done prepping the host.

Install VMs OS

Starting with my base Fedora laptop, I’ve installed virt-manager for VM management. I wanted to use Boxes, but it’s not designed for this type of configuration. So: create your new VM. I selected the Fedora HTTP install option, as I didn’t have an ISO lying around. Also, HTTP install = awesome.


To do this, select the http install option and enter the nearest available location.


For me this was Masaryk University, Brno (where I happened to be sitting during Dev Days 2013).

I went with an 8 GB base disk to start (we’ll add another one in a bit), gave the VM 1 GB of RAM and the default vCPU count. Start the VM build and install.


The install will take a bit longer than usual as it downloads the install files during the initial boot.


Select the language you want to use and continue to the installation summary screen. Here we’ll want to change the software selection option.


and select the minimal install:


during the installation, go ahead and set the root password:


Once the installation is complete, the VM will reboot. Once it’s done, power it down. Although we’ve enabled nested virtualization, we still need to pass the CPU flags on to the VM.

In the virt-manager window, right-click on the VM and select Open. In the VM window, select View > Details. Rather than guessing the CPU configuration, select the copy-from-host option and click OK.


While you’re here, go ahead and add an additional 20 GB virtual drive. Make sure you select virtio for the drive type!


Boot your VM up and let’s get started.

Base installation components

You’ll need to install some base components before you get started installing GlusterFS or QEMU.

After logging in as root,

yum update
yum install net-tools wget xfsprogs binutils

Now we’re going to create the mount point and format the additional drive we just installed.

mkdir -p /export/brick1
mkfs.xfs -i size=512 /dev/vdb

We’ll need to edit our fstab and add this as well, so that the mount persists across reboots. Add the following line to /etc/fstab:

/dev/vdb /export/brick1 xfs defaults 1 2 

Once you’re done with this, let’s go ahead and mount the drive.

mount -a && mount

Firewalls. YMMV

It may be just me (I’m sure it is), but I struggled getting gluster to work with firewalld on Fedora 18. This is not recommended in production environments, but for our all-VMs-on-one-laptop deployment I just disabled and removed firewalld.

yum remove firewalld

Gluster 3.4.0 Alpha Installation

First thing we’ll need to do on our VM is configure and enable the gluster repo.


and move it to /etc/yum.repos.d/

mv glusterfs-alpha-fedora.repo /etc/yum.repos.d/

Now we enable the repo and install glusterfs:

yum update
yum install glusterfs-server glusterfs-devel

It’s important to note here that we need the glusterfs-devel package for the QEMU integration we’ll be testing. Once done, we’ll start the glusterd service and verify that it’s working.

break break 2nd VM

OK folks, if you’ve made it here, get a coffee and do the install again on a second VM. You’ll need the second replication VM target before you proceed.

</end coffee break>

break break Network Prepping both VMs

As we’re on the private NAT’d network on our laptop that virt-manager is managing, we’ll need to update the VMs we created to assign static addresses, as well as edit the /etc/hosts file to add both servers with their addresses. We’re not proud here, people; this is a test environment. If you want to use proper DNS instead, I won’t judge.

1) Change both VMs to use static addresses in the NAT range.
2) Change the VMs’ hostnames.
3) Update both VMs’ /etc/hosts to include both nodes.

This is hacky but expedient; a sample /etc/hosts is sketched below.
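With hostnames ci01.local and ci02.local (the names used when creating the volume below) and made-up addresses from the default libvirt NAT range, both VMs’ /etc/hosts might gain a couple of lines like these; substitute whatever static addresses you actually assigned:

192.168.122.101   ci01.local   ci01
192.168.122.102   ci02.local   ci02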

back to Gluster

Start and verify the gluster services on both VMs:

service glusterd start
service glusterd status
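Before creating a replicated volume across the two nodes, they need to be in the same trusted storage pool. If you haven’t already peered them, probe the second node from the first and verify:

gluster peer probe ci02.local
gluster peer status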

On either host, we’ll need to create the gluster volume and set it for replication.

gluster volume create vmstor replica 2 ci01.local:/export/brick1 ci02.local:/export/brick1

Now we’ll start the volume we just created

gluster volume start vmstor

Verify that everything is good; if this returns cleanly, you’re up and running with GlusterFS!

gluster volume info
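If everything is healthy, the output will look roughly like this (abridged; the exact fields vary by release):

Volume Name: vmstor
Type: Replicate
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: ci01.local:/export/brick1
Brick2: ci02.local:/export/brick1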

building QEMU dependencies

Let’s get some prerequisites for getting the latest QEMU up and running:

yum install lvm2-devel git gcc-c++ make glib2-devel pixman-devel

Now we’ll download QEMU:

git clone git://

The rest is pretty standard compiling from source. You’ll start by configuring your build. I’ll trim the target list to save time, as I know I’m not going to use many of the QEMU-supported architectures.

./configure --enable-glusterfs --target-list=i386-softmmu,x86_64-softmmu,x86_64-linux-user,i386-linux-user
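Once configure finishes, the rest of the build is the usual routine (adjust the job count to your VM’s vCPU count, and expect it to take a while inside a nested VM):

make -j2
make install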

With the build complete, everything on this host is done, and we’re ready to start building VMs using GlusterFS natively, bypassing FUSE and leveraging thin provisioning. W00!

Creating Virtual Disks on GlusterFS

qemu-img create gluster://ci01:0/vmstor/test01?transport=socket 5G

Breaking this down: we’re using qemu-img to create a disk image natively on GlusterFS that’s five gigs in size. I’m looking for more information about what the transport socket parameter means; expect an answer soonish.
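As a quick sanity check that the gluster support actually made it into your build, you can ask qemu-img to read the image back over the same gluster URI:

qemu-img info gluster://ci01/vmstor/test01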

Build a VM and install an OS onto the GlusterFS mounted disk image

At this point you’ll want something to actually install on your image. I went with TinyCore because as it is I’m already pushing up against the limitations of this laptop with nested virtualization. You can download TinyCore Linux here.

qemu-system-x86_64 --enable-kvm -m 1024 -smp 4 -drive file=gluster://ci01/vmstor/test01,if=virtio -vnc 0.0.0.0:0 --cdrom /home/theron/CorePlus-current.iso

This is the quickest way to get this moving; I skipped using virsh for the demo and am assigning the VNC IP and port manually. Once the VM starts up, you should be able to connect to it from your external host and start the install process.
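For example, from the laptop hosting the nested VMs, point any VNC client at the address and display you passed to -vnc; with the made-up address from earlier that would be something like:

vncviewer 192.168.122.101:0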


To get the install going, select the hard drive that was built with qemu-img and follow the OS install procedures.


At this point you’re done, and you can start testing and submitting bugs! I’d expect to see some interesting things with OpenStack in this space, as well as tighter oVirt integration moving forward. Let me know what you think about this guide and whether it was useful.

side note

Also, something completely related: I’m pleased to announce that I’ve joined the Open Source and Standards team at Red Hat, working to promote and assist in making upstream projects wildly successful. If you’re unsure what that means, or you’re wondering why Red Hat cares about upstream projects, PLEASE reach out and say hello.


Tom's Brown ale

American Brown Ale

I initially brewed this brown ale for my Dad’s return from Iraq. This is hands down my favorite brew. With 10 gallons on tap I invited some friends over to try it, and 10 gallons of this goes FAST. I did manage to save some for my Dad, and all parties agreed that this was one of the best I’d brewed. I’m going to be brewing this again shortly, quite possibly as the brew I’ll make down at Raccoon River for teach-a-friend-to-homebrew day…

BJCP Style and Style Guidelines

10-D Brown Ale, American Brown Ale

Min OG: 1.040 Max OG: 1.060

Min IBU: 25 Max IBU: 60

Min Clr: 15 Max Clr: 22

Color in SRM, Lovibond

Recipe Specifics

Batch Size (Gal): 10.00

Wort Size (Gal): 10.00

Total Grain (Lbs): 20.75

Anticipated OG: 1.055 Plato: 13.64

Anticipated SRM: 20.0

Anticipated IBU: 40.9

Brewhouse Efficiency: 75 %

Wort Boil Time: 75 Minutes

Pre-Boil Amounts

Evaporation Rate: 15.00 Percent Per Hour

Pre-Boil Wort Size: 12.31 Gal

Pre-Boil Gravity: 1.045 SG 11.18 Plato

Formulas Used

Brewhouse Efficiency and Predicted Gravity based on Method #1, Potential Used.

Final Gravity Calculation Based on Points.

Hard Value of Sucrose applied. Value for recipe: 46.2100 ppppg

% Yield Type used in Gravity Prediction: Fine Grind Dry Basis.

Color Formula Used: Morey

Hop IBU Formula Used: Rager


%     Amount      Name                 Origin    Potential   SRM
86.7  18.00 lbs.  Pale Malt (2-row)    America   1.036       1
 4.8   1.00 lbs.  Crystal 80L                    1.033       80
 4.8   1.00 lbs.  CaraPilsner          France    1.035       10
 2.4   0.50 lbs.  Chocolate Malt       America   1.029       350
 1.2   0.25 lbs.  Roasted Barley       America   1.028       600

Potential represented as SG per pound per gallon.


Amount     Name              Form    Alpha   IBU    Boil Time
1.50 oz.   Northern Brewer   Whole   9.00    31.4   75 min.
0.50 oz.   Northern Brewer   Whole   9.00    5.0    30 min.
1.50 oz.   Cascade           Whole   6.80    4.4    10 min.
0.50 oz.   Cascade           Whole   6.80    0.0    Dry Hop


Amount Name Type Time

0.10 Oz Irish Moss Fining 15 Min.(boil)


Wyeast 1187 Ringwood Ale

Mash Schedule

Mash Type: Single Step

Grain Lbs: 20.75

Water Qts: 22.75 - Before Additional Infusions

Water Gal: 5.69 - Before Additional Infusions

Qts Water Per Lbs Grain: 1.10 - Before Additional Infusions

Saccharification Rest Temp : 155 Time: 60

Mash-out Rest Temp : 167 Time: 5

Sparge Temp : 170 Time: 0

Total Mash Volume Gal: 7.35 - Dough-In Infusion Only

All temperature measurements are degrees Fahrenheit.


Primary Fermentation: 1 week @ 65-70 degrees

Secondary Fermentation: 1 week @ 65 degrees

Lagering: 2 weeks, added .50 oz of Cascade hops @ 55 degrees

Yeast: Wyeast 1187 Ringwood Ale or White Labs WLP007 Dry English Ale

21% ABV Cause of Death Ale

I’m going to stop by and see Mark, and see about getting a beer going that I’ve been thinking about for a while now. Cause of Death is a recipe from Johnny Max of the Brewcrazy podcast. I’ll list the recipe below as well. It’s going to be a monster. If you’re interested in helping out, let me know, and I’ll try and nail down the dates that this will take place.

Cause of Death 21% ABV Old Ale

Recipe from Johnny Max (written by Johnny Max on 12/07/06). For what it is worth, here is the procedure I used to brew my 21% ABV all-grain beer. It is still fermenting (when this was first written), but it hit 21% ABV last Thursday.

  • Make a 1 gallon starter @ 1.066 gravity in a 6.5 gallon carboy (keep track of the gravity and volume of your starter, as you will have to factor it in with your wort to calculate your OG accurately). I used WLP099 High Gravity yeast; it can ferment to 25% ABV according to White Labs.
  • Mash 31 lbs. of Maris Otter at 146 F overnight (or until conversion is complete)
  • Sparge very slowly until all sugar is extracted (I collected 18 gallons in two kettles).
  • Boil down the wort to 4 gallons (I used two large pots); boil slowly to reduce caramelization. I also put a clip-on fan on each pot, blowing on the surface of the wort. This eliminated boil-overs (it really did; I don’t brew without one now) and made the wort boil down much faster by blowing the steam away. An even better way to boil down: if you have a way to pull a vacuum while boiling, it will take less than an hour and cause zero caramelization. A friend of mine is a beekeeper and is going to be getting one eventually. I can’t wait.
  • Add hops at last 60 Minutes of boil.

The final wort had an OG of 1.246, but combined with the Starter I had a calculated OG of 1.212

  • Added 1 gallon of wort to the 2 gallon starter
  • Oxygenate for a minimum of 15 minutes with O2 and affix an air-lock. I use a welding oxygen cylinder I bought to use just for brewing (if you are just using air, aerate for over 40 minutes).
  • Can the remaining 3 gallons of wort in 1 quart mason jars. Just siphon the wort into the jars (it’s so thick it siphons slowly), set the lids on loose, set the jars in a water bath, and boil for 15 minutes. Then tighten the lids.
  • Let ferment until fermentation slows.
  • After fermentation slows, add one quart of wort each day and oxygenate for a minimum of 3 to 5 minutes with O2, or 9 to 15 minutes with air. (The oxygenation is essential to keep the yeast population up to get past 20%.)
  • Let ferment out.
  • When fermentation stops short (and it will), add 8 crushed Beano tablets to convert the nonfermentable sugars into fermentable ones. When fermentation slows again, add 5 more crushed Beano tablets if needed. This is what I did; next time I will add the Beano much sooner, probably one day after I add the last quart of wort, which will shorten the overall time. I’m not sure exactly when, but I did rack some time after I added the last quart of wort but before I added the Beano.


This list is the original list provided by Johnny; I’ll repost my changes and ingredient list later.

  • 31 lbs. Maris Otter Pale
  • 2 oz. Warrior (Pellets, 16.3 %AA) boiled 60 minutes.
  • 2 oz. Amarillo (Pellets, 9 %AA) boiled 60 minutes.
  • Yeast: White Labs WLP099 Super High Gravity Ale

Now, there is nothing special about the recipe. What is special is the procedure used to get to 21% alcohol. You may want to use different grains to add more flavor, and different hops; just pick a style. I picked Old Ale, but I hopped the hell out of it, hoping to balance the hop bitterness against the sweetness and high alcohol. My calculated IBUs were 184. I used Warrior and Amarillo because the AAs are so high and I like the hop flavor of Dogfish Head’s IPA. When you taste it, it does not taste too hoppy at all. I hope it comes out with age.

Things I would do differently:

  • Add Beano a little sooner.
  • Add some grains that are a little more roasty.

quick hard cider

During the last IBU meeting, one of the members, Mike, brought in a hard cider for a few of us to try. I’m not usually a fan of hard ciders, but I gave it a try. It was fantastic, and with a dead-simple recipe it’s easy for brewers (anyone, really) of all skill levels to try. I started a batch of this today. So, without further ado, here’s the recipe:

Hard Cider


  • 5 Gallons Apple Cider (Cub Foods brand, Indian Summer; to quote Mike, “the cheapest stuff you can find”); just make sure it doesn’t have added potassium sorbate!
  • 2 lbs Light or Dark Brown Sugar
  • 2 Split, Scraped, and Chopped Vanilla Beans (for secondary)
  • 2 cans of Apple Juice Concentrate (to keg)
  • Dry Ale Yeast (I used Safale S-04; I’m told it doesn’t matter)


  • Sanitize fermenter of your choice
  • Add Brown Sugar to the fermenter
  • Pour Apple Cider over the sugar which will mix it up and aerate
  • Add Yeast
  • Ferment until done (usually goes to 1.000 or a little higher)
  • Sanitize secondary fermenter
  • Add 2.5 tsp of Potassium Sorbate to prevent further yeast growth
  • Add 2 split, scraped, and chopped vanilla beans
  • Rack cider on top of Potassium Sorbate and Vanilla Beans to mix
  • Let it sit for a week or so


  • Sanitize a keg
  • Add concentrate to keg - If you have frozen concentrate, heat it in a pot on the stove until it’s slightly warm, and then pour it into the bottom of the keg. If it’s the liquid kind let it come to room temperature, and then pour it into the keg.
  • Rack the cider on top of the concentrate
  • Hit the keg with 30psi to seal the lid and vent headspace.

From fermenter to glass, this cider takes as little as four weeks; however, Mike recommends at least six. Also of note, adjusting the amount of concentrate will affect the sweetness of the final product. Typically this cider weighs in at around 7.5% ABV. A note on the vanilla beans: here in Des Moines they can be hard to find, and when found, they are often expensive. It was recommended that I look on eBay of all places, and the Arizona Vanilla Company often has great prices online. I took the recommendation and I’m happy with the quality of their product.