I’ve been holding off posting, but have been following this thread. I noticed the problem almost the instant people were transitioned from the Tiny to the Medium backup servers. After multiple emails back and forth with the guys at Bytemark, I eventually bit the bullet and spent some time moving my backup mechanism over to backup2l.
However, I’ve still got some issues.
Prior to this I was using a home-grown bash script which just tar’d and bzip2’d the relevant directories straight onto the backup server. The result totalled ~1.5GB and took around 30 minutes.
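For context, the old script was essentially of this shape (a rough sketch from memory, not the real thing; the directory and destination below are placeholders):

```shell
#!/bin/sh
# Hypothetical reconstruction of the old one-shot backup script:
# tar + bzip2 the relevant directories in a single pass.
# In the real script the archive was streamed straight onto the
# backup server over SSH, e.g.
#   tar -cjf - /etc /home /var/www | ssh backuphost "cat > backup.tar.bz2"
set -e

# Local destination and a tiny stand-in source so the sketch runs as-is.
DEST="/tmp/backup-$(date +%Y%m%d).tar.bz2"
tar -cjf "$DEST" /etc/hostname

ls -lh "$DEST"
```

Simple, but it shipped the full ~1.5GB every night, which is exactly what backup2l’s differential scheme is meant to avoid.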
Now, with backup2l doing differential backups, only ~70MB is transferred each night, yet the time taken to rsync the files over to the backup server is getting ridiculous. The initial transfer (of the full ~1.5GB) took some 12 hours to rsync over, but as that was a one-off I figured no problem.
Several times a 70MB transfer has taken over 4 hours (last night’s took nearly 6 hours).
backup2l v1.5 by Gundolf Kiefer
Tue Nov 13 01:00:02 GMT 2012
syncing with backup server
Tue Nov 13 06:36:29 GMT 2012
all.1035 2012-11-13 01:06 | 66.6M | 158 181965 | 65 63 | 0
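For what it’s worth, the “syncing with backup server” step is my own hook rather than backup2l itself: backup2l.conf is just sourced as shell, and the rsync runs from its POST_BACKUP function. Something along these lines (hostname and paths are placeholders, not my actual config):

```shell
# Fragment of /etc/backup2l.conf (backup2l sources this file as shell).
# Hostname and paths below are placeholders.
VOLNAME="all"                 # matches the "all.1035" archive names above
BACKUP_DIR="/var/backup2l"    # where the archives land locally

# Hook run by backup2l after the archives are written; this is where
# the timestamps in the log excerpt above come from.
POST_BACKUP ()
{
    echo "syncing with backup server"
    date
    rsync -az "$BACKUP_DIR/" backuphost.example:backup2l/
    date
}
```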
That is an average transfer rate of about 3.4KB/s, which is worse than the last modem I had (and a new personal record for longest backup/slowest transfer speed).
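For anyone checking my arithmetic, that figure falls straight out of the log above: 66.6MB between 01:06 and 06:36 is roughly five and a half hours.

```shell
# 66.6 MiB over ~5h30m, expressed in KB/s.
awk 'BEGIN {
  bytes   = 66.6 * 1024 * 1024   # ~66.6 MiB transferred
  seconds = 5.5 * 3600           # ~5h30m window from the timestamps
  printf "%.1f KB/s\n", bytes / seconds / 1024
}'
# prints: 3.4 KB/s
```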
The quickest I have seen this transfer complete is ~30 minutes, which is still more than twenty times slower per megabyte than before, given the old run moved 22 times as much data in the same time. That indicates to me that the machine hosting our backups is either severely overloaded, or has some other issue with it.
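One way to check that it’s the link or the host, rather than anything rsync/backup2l is doing, would be to time a raw transfer of a fixed amount of data. A sketch (piped into a local cat here so the snippet runs as-is; the hostname in the comment is a placeholder):

```shell
#!/bin/sh
# Crude link-speed check, independent of rsync and backup2l:
# time how long a fixed amount of data takes to reach the far end.
# Against the real server the last stage would be
#   ssh yourhost.backup.bytemark.co.uk "cat > /dev/null"
# so the remote disk stays out of the measurement.
START=$(date +%s)
dd if=/dev/zero bs=1M count=10 2>/dev/null | cat > /dev/null
END=$(date +%s)
echo "10MB took $((END - START)) seconds"
```

At last night’s 3.4KB/s, that 10MB would take the best part of an hour; anything in that ballpark against the backup server would point squarely at the machine or the link.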
The issue is certainly with the transfer and not with anything else: my database dump takes ~6 minutes (and that has remained constant throughout the change from custom script to backup2l), and it is the only other major time factor in the process.