Take the following 3 commands:

fio --name=write_throughput --numjobs=8 \
--size=10G --time_based --runtime=60s --ramp_time=2s --ioengine=libaio \
--direct=1 --verify=0 --bs=1M --iodepth=64 --rw=write \
--group_reporting=1

This results in a write speed of just over 1 GB/s.

Difference: 1 job instead of 8.

fio --name=write_throughput --numjobs=1 \
--size=10G --time_based --runtime=60s --ramp_time=2s --ioengine=libaio \
--direct=1 --verify=0 --bs=1M --iodepth=64 --rw=write \
--group_reporting=1

This results in a write speed of around 255 MB/s.

Difference: 1 job, and a 4K block size instead of 1M.

fio --name=write_throughput --numjobs=1 \
--size=10G --time_based --runtime=60s --ramp_time=2s --ioengine=libaio \
--direct=1 --verify=0 --bs=4K --iodepth=64 --rw=write \
--group_reporting=1

This results in 8 MB/s.
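
To put rough numbers on these results: 8 MB/s at a 4K block size works out to roughly 2,000 write operations per second, 255 MB/s at a 1M block size is only about 255 operations per second, and the first run keeps 8 jobs × 64 = 512 one-megabyte writes in flight at once. So I can see the three runs are stressing the disk very differently, but I don't know which one reflects my use case.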

I find this confusing. I'm basically looking for a test that answers "how fast is writing a single 10GB file?", and these different options give me wildly different results.

I'm not looking for a theoretical maximum; I'm looking for true-to-life performance, for an application that writes a file as it is generating it. So not first preallocating a region on the filesystem and then filling it up with bytes, either.
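
Concretely, what I imagined a "true to life" run would look like is something along these lines (just a sketch, and I'm not at all sure these are the right options: buffered I/O instead of --direct=1, a single sequential writer, an fsync at the end so the page cache actually gets flushed, and no preallocation):

fio --name=single_file_write --size=10G --bs=1M --rw=write \
--ioengine=psync --direct=0 --end_fsync=1 --fallocate=none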

What am I misunderstanding here? Is fio not the tool for this?
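
For comparison, the closest real-world baseline I can think of is a plain dd run (again just a sketch; testfile is a placeholder path on the disk being tested):

dd if=/dev/zero of=testfile bs=1M count=10240 conv=fsync status=progress

If fio isn't meant for this, is something like that the better way to measure it?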

KdgDev