I have a filesystem mounted with 9p virtio through KVM, and I am backing it up with duplicity to a remote SSH server. I'm trying to speed up the backup process, which seems unreasonably slow to me.
The source size is 20GB in 107,651 files, which are on an ext4 filesystem on the virtual machine host running Ubuntu 14.04, on top of a RAID10 array on a 3ware controller using 15K disks (WD VelociRaptors), no BBWC. The virtual machine itself is Ubuntu 12.04.5, mounting the files with 9p over virtio, driver "path", mode "mapped", write policy "immediate". The destination over SSH is an HP server with 512MB BBWC enabled and 12x 2TB SAS disks, confirmed to be blazingly fast.
If all else fails, I'll just run duplicity directly on the virtual machine host, eliminating the 9p middle layer when accessing the files, to see whether 9p is the issue (which I'm slowly starting to suspect it is).
Here are the duplicity backup statistics:
--------------[ Backup Statistics ]--------------
StartTime 1483275839.07 (Sun Jan 1 14:03:59 2017)
EndTime 1483332365.62 (Mon Jan 2 05:46:05 2017)
ElapsedTime 56526.55 (15 hours 42 minutes 6.55 seconds)
SourceFiles 107651
SourceFileSize 21612274293 (20.1 GB)
NewFiles 24
NewFileSize 69952 (68.3 KB)
DeletedFiles 11
ChangedFiles 38
ChangedFileSize 6825600 (6.51 MB)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 73
RawDeltaSize 47509 (46.4 KB)
TotalDestinationSizeChange 103051 (101 KB)
Errors 0
According to a Python cProfile run, the following functions took the longest execution time:
29225254 function calls (29223127 primitive calls) in 56578.118 seconds
ncalls tottime percall cumtime percall filename:lineno(function)
107700 28238.712 0.262 28238.712 0.262 {posix.lstat}
107650 28016.367 0.260 28016.367 0.260 {posix.access}
892 190.827 0.214 190.827 0.214 {posix.listdir}
2 49.552 24.776 49.552 24.776 {method 'readline' of 'file' objects}
82 11.113 0.136 11.113 0.136 {open}
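The profile points at per-file metadata syscalls (lstat/access) at roughly 0.26 s each, not at data transfer. To confirm this independently of duplicity, here is a minimal sketch (my own test script, not part of duplicity) that walks a tree and times the same two syscalls per file; running it once against the 9p mount inside the guest and once against the same files on the host should show whether 9p is responsible for the overhead:

```python
#!/usr/bin/env python
# bench_stat.py - time per-file lstat/access calls over a directory tree.
# Usage: python bench_stat.py /path/to/tree
import os
import sys
import time

def bench(root):
    """Walk root, timing os.lstat and os.access for every file."""
    lstat_time = access_time = 0.0
    count = 0
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            t0 = time.time()
            os.lstat(path)              # same call as {posix.lstat} in the profile
            t1 = time.time()
            os.access(path, os.R_OK)    # same call as {posix.access} in the profile
            t2 = time.time()
            lstat_time += t1 - t0
            access_time += t2 - t1
            count += 1
    return count, lstat_time, access_time

if __name__ == "__main__":
    count, lstat_time, access_time = bench(sys.argv[1])
    print("files:  %d" % count)
    print("lstat:  %.2f s total, %.6f s/call" % (lstat_time, lstat_time / count))
    print("access: %.2f s total, %.6f s/call" % (access_time, access_time / count))
```

If the per-call averages on the 9p mount are orders of magnitude above the host's, the bottleneck is the 9p transport itself rather than duplicity.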