I have two HP DL380 G8 servers, each with 4x 1TB drives on an HP P420 RAID controller in a RAID 1+0 setup. Eth0 on each host is connected to the router, and Eth3 & Eth4 are bonded (LACP) and connected directly between the two machines.
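For reference, this is roughly how I verify the bond on each host (just a sketch; I am assuming the default Open vSwitch backend and that the bond members show up as eth3/eth4, so adjust to your naming):

# XAPI view of the bond: mode should be "lacp" and links-up should be 2
xe bond-list params=uuid,mode,links-up
# Open vSwitch view of LACP negotiation and per-member state
ovs-appctl bond/show
# both members should have negotiated 1000Mb/s full duplex
ethtool eth3 | grep -E 'Speed|Duplex'
ethtool eth4 | grep -E 'Speed|Duplex'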
If I run
#!/bin/bash
clear
echo 'Starting disk speed analysis..'
echo -e '\n Reading different size files (1M, 100M, 1G):\n \e[93m'
# sequential reads of ~1 GB straight from the raw device with 1M / 100M / 1G
# block sizes; iflag=direct bypasses the page cache, output is discarded
dd if=/dev/sda of=/dev/zero iflag=direct bs=1M count=1000 &> test-results.log
tail -1 test-results.log
dd if=/dev/sda of=/dev/zero iflag=direct bs=100M count=10 &> test-results.log
tail -1 test-results.log
dd if=/dev/sda of=/dev/zero iflag=direct bs=1G count=1 &> test-results.log
tail -1 test-results.log
echo -e '\n \e[39mWriting different size files (1M, 100M, 1G):\n \e[93m'
# sequential writes of ~1 GB to a file on the root filesystem with the same
# block sizes; oflag=direct bypasses the page cache, but the P420's own write
# cache can still buffer these
dd if=/dev/zero of=/root/testfile oflag=direct bs=1M count=1000 &> test-results.log
tail -1 test-results.log
dd if=/dev/zero of=/root/testfile oflag=direct bs=100M count=10 &> test-results.log
tail -1 test-results.log
dd if=/dev/zero of=/root/testfile oflag=direct bs=1G count=1 &> test-results.log
tail -1 test-results.log
rm test-results.log
echo -e '\e[39m'
I get:
Reading different size files (1M, 100M, 1G):
1048576000 bytes (1.0 GB) copied, 2.81374 s, 373 MB/s
1048576000 bytes (1.0 GB) copied, 1.98058 s, 529 MB/s
1073741824 bytes (1.1 GB) copied, 1.88088 s, 571 MB/s
Writing different size files (1M, 100M, 1G):
1048576000 bytes (1.0 GB) copied, 0.871918 s, 1.2 GB/s
1048576000 bytes (1.0 GB) copied, 3.08039 s, 340 MB/s
1073741824 bytes (1.1 GB) copied, 3.2694 s, 328 MB/s
and
Reading different size files (1M, 100M, 1G):
1048576000 bytes (1.0 GB) copied, 2.80229 s, 374 MB/s
1048576000 bytes (1.0 GB) copied, 2.50451 s, 419 MB/s
1073741824 bytes (1.1 GB) copied, 2.136 s, 503 MB/s
Writing different size files (1M, 100M, 1G):
1048576000 bytes (1.0 GB) copied, 1.64036 s, 639 MB/s
1048576000 bytes (1.0 GB) copied, 3.48586 s, 301 MB/s
1073741824 bytes (1.1 GB) copied, 4.5464 s, 236 MB/s
And these seem like fair speeds, but if I try to migrate a 100 GB VM to the other machine over the bonded network, I only get ~60 MB/s network transfer speed, with a short burst of ~120 MB/s if that VM is running at the time of the transfer.
[graph: network vs. storage speed of a single VM transfer]
However, storage I/O rates can go quite high, way above the network speed, so I presume storage speed is not the problem.. right?
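To sanity-check that, my plan is to watch disk reads and network sends side by side on the sending host while a migration is running (just a sketch; it assumes sysstat is installed in dom0 and that the bond members are eth3/eth4):

# terminal 1: per-device disk throughput in MB/s, sampled every 2 s
iostat -xm 2 sda
# terminal 2: per-interface network throughput, sampled every 2 s
sar -n DEV 2 | grep -E 'eth3|eth4'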
I am using XCP-ng Center, connected over VPN. It's a fresh install; XCP-ng is v7.6.
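Related to that, this is how I can see which network is the management one (which, as far as I understand, migration traffic uses by default) versus the bonded link; again just a sketch with my naming:

# list physical and bond interfaces with their role and addresses
xe pif-list params=device,management,IP,network-name-label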
Ideally I would expect around 2x 125 MB/s transfer speed between the servers. Any ideas why this is not happening?
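To rule out the raw network path, I also intend to run something like this between the hosts over their bond addresses (a sketch; it assumes iperf3, or plain iperf, can be installed in dom0):

# on the receiving host
iperf3 -s
# on the sending host: one stream, then four parallel streams
iperf3 -c <bond IP of the other host> -t 30
iperf3 -c <bond IP of the other host> -t 30 -P 4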
Maybe someone with a similar stack could share their experience? Thanks!