The importance of backups


Although it is extremely rare for data to be lost on our machines, it can never be ruled out entirely - whether you’ve got a dedicated host or a virtual machine.

With that in mind it is extremely important that you consider configuring some backups, ideally backups that store the data off the machine itself.

We provide backup space that may be easily accessed over rsync (preferred), or over NFS (deprecated).

If you’re unfamiliar with backups, we have an example configuration using the minimal backup2l tool.
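For anyone who hasn’t seen backup2l before, the heart of it is just a handful of variables in /etc/backup2l.conf. Here’s a minimal sketch - the variable names are backup2l’s own, but the paths and values are placeholder assumptions, not our actual example configuration:

```shell
# Hypothetical minimal /etc/backup2l.conf excerpt.
# Paths and values below are placeholders, not the official example.
VOLNAME="vm-backup"            # prefix used for the archive file names
SRCLIST="/etc /home /var/www"  # what to back up
BACKUP_DEV="/var/backup2l"     # local staging area for the archives
MAX_LEVEL=3                    # depth of differential levels
MAX_PER_LEVEL=8                # differentials per level before stepping up
GENERATIONS=1                  # full-backup generations to keep
```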

If you have any questions about backups, please don’t hesitate to get in touch.


Yeah, “get VM backup sorted out” has been sat at the top of my todo list for quite a while now!

Something I’ve previously mentioned to Matthew over Twitter: I’d love it if you guys offered larger backup options, something competitive with Dropbox’s pricing. I currently keep about 30GB of photos on Dropbox as backup rather than for cloud access, and I’d feel much happier having you guys host it instead and just using Dropbox for unimportant stuff.

Maybe some sort of Amazon Glacier support (or your own version thereof) would be good - their pricing is super cheap!


[quote]… I’d feel much happier having you guys host it instead …[/quote]… but you must guarantee geographical separation from the VM itself! :wink:


backup2l is super easy to get going.

What about on BigV though?


I’m trying to access my backups via NFS but mount is failing:
mount.nfs: Connection reset by peer

I’m using rsync for the backups, but I wanted to see where they were up to. Is the backup server down, or just inaccessible over NFS?



NFS not working for me at the moment. I’ve had a few failures of my daily backup run over the last week or so, but not paid much attention (so long as it works the next day).

I’m still using NFS and automount as Bytemark used to recommend this combination, and I only find time to ‘fiddle’ with my vm when necessary.

I see they now state that rsync is preferred and NFS is ‘deprecated’. The latest example backup2l script now uses rsync after making the backup locally. A disadvantage of this is that the entire backup tree is maintained locally as well as on the backup server, as opposed to just in one place with direct NFS access. Who knows if/when I’ll find time to change over if needed; rsync is fine at the moment, but NFS is still not working.


I also get “mount.nfs: Connection reset by peer” when trying to mount. I updated /etc/fstab to use “proto=tcp,mountvers=3,nfsvers=3,intr,nolock”, but that didn’t help.

I don’t mind switching to rsync, but I’d like to access the existing backup to use that to begin with. Is that still possible?


I’m using the following in /etc/fstab and it’s pretty solid for me at the moment. Are you using a different backup server perhaps?

/mnt/backup nfs proto=tcp,nolock,noauto,mountvers=3,nfsvers=3,intr 0 0

The ‘noauto’ forces the mounting/unmounting to be done by backup2l in my case. Although it seems to be permanently mounted in practice.
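In case it helps anyone replicating this, the mounting/unmounting backup2l does is handled through its hook functions. A sketch of what that looks like in /etc/backup2l.conf, assuming a matching noauto entry for /mnt/backup in /etc/fstab (the mount point here is my setup, not necessarily yours):

```shell
# Hypothetical PRE_BACKUP/POST_BACKUP hooks for /etc/backup2l.conf,
# so backup2l only mounts the backup space for the duration of a run.
# Relies on a noauto entry for /mnt/backup existing in /etc/fstab.
PRE_BACKUP () {
    mount /mnt/backup
}

POST_BACKUP () {
    umount /mnt/backup
}
```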

Although it’s sometimes nice to have multiple copies of backups around the place, I don’t like the idea of having a ‘local’ backup, and then having to rsync it to the backup server. It forces you to have disk space just hanging around for this purpose.


I’ve just noticed the same issue with mounting over NFS. My backups are all done by rsync, but whenever I’ve needed to restore a file in the past I’ve always gone in via NFS and grabbed it. As it happens I do have a backup also on the main server but now I realise I’ve no idea how to restore from the backup space if I needed to!


Same problem here since mid-Oct. I use the fstab entry mentioned in the post above and have not altered it recently.

I notice this coincides with an upgrade of the backup server, Medium?


Anthony, adding the noauto did not help. I still get “mount.nfs: Connection reset by peer” every time I try to mount it.

For the curious, the IP of the backup I’m on is


Mine is


I’ve been holding off posting, but have been following this thread. I noticed the problem almost the instant people were transitioned from the Tiny to the Medium backup servers. After multiple emails back and forth with the guys at Bytemark I eventually bit the bullet and spent some time moving my backup mechanism to backup2l.

However I’ve still got some issues.

Prior to this I was using a home-grown bash script which just piped tar through bzip2, writing the relevant directories directly onto the backup server. This would total ~1.5GB and take around 30 minutes.
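For the curious, the old script boiled down to something like the following. This is a reconstruction from memory with stand-in paths so it runs anywhere; on the real setup the destination was the mounted backup space:

```shell
# Hypothetical reconstruction of the home-grown scheme: tar the
# relevant directories and bzip2 them straight onto the backup space.
# SRC and DEST are local stand-in directories; in the real setup DEST
# would have been the NFS-mounted /mnt/backup.
SRC=$(mktemp -d)
DEST=$(mktemp -d)
echo "dummy config" > "$SRC/site.conf"   # stand-in for real content

ARCHIVE="$DEST/backup-$(date +%F).tar.bz2"
tar -cjf "$ARCHIVE" -C "$SRC" .          # -j compresses with bzip2
tar -tjf "$ARCHIVE"                      # list contents to verify
```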

Now backup2l is doing differential backups and therefore only transferring ~70MB, but the time taken to rsync the files over to the backup server is getting ridiculous. The initial transfer (of the ~1.5GB) took some 12 hours, but as that was a one-off I figured no problem.

Several times a 70MB transfer has taken over 4 hours (last night’s took nearly 6 hours to transfer 70MB).

backup2l v1.5 by Gundolf Kiefer
Tue Nov 13 01:00:02 GMT 2012
  syncing with backup server
Tue Nov 13 06:36:29 GMT 2012
all.1035     2012-11-13 01:06 |   66.6M |     158   181965 |   65    63 |    0

That is an average transfer rate of 3.4 KB/s, which is worse than the last modem I had (and my current record for longest backup/slowest transfer speed).
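For anyone who wants to check my arithmetic, the 3.4 KB/s figure falls straight out of the timestamps in the backup2l output above (this assumes GNU date for the `-d` flag):

```shell
# Effective throughput of the 66.6 MiB transfer logged above.
start=$(date -d '2012-11-13 01:00:02' +%s)
end=$(date -d '2012-11-13 06:36:29' +%s)
elapsed=$((end - start))              # 20187 seconds, about 5.6 hours
bytes=$((666 * 1024 * 1024 / 10))     # 66.6 MiB expressed in bytes
echo "$((bytes / elapsed)) bytes/sec" # works out to roughly 3.4 KB/s
```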

The quickest I have seen this transfer complete is ~30 minutes, which is still orders of magnitude slower per byte than before, given the old method was transferring 22 times more data. That indicates to me that the machine hosting our backups is either severely overloaded or has some other issue with it.

The issue is certainly with the transfer and not with anything else: my database dump takes ~6 minutes (and that has remained constant throughout the change from custom script to backup2l), and it is the only other major time factor in the process.