
I'm trying to understand this iostat output:

      tps    kB_read/s    kB_wrtn/s    kB_dscd/s    kB_read    kB_wrtn    kB_dscd Device
   500.30        58.9M        19.4k         0.0k     329.1G     108.5M       0.0k sdc
   500.26        58.9M        19.5k         0.0k     329.1G     109.1M       0.0k sdd
   500.40        58.9M        19.3k         0.0k     329.1G     107.8M       0.0k sde
  3027.72         1.4G        15.3k         0.0k       7.9T      85.6M       0.0k md3

md3 is an mdadm RAID5 array with 3 disks, namely sdc, sdd and sde. After rebooting the system I noticed that kB_read for md3 is way too high, and kB_read/s doesn't make much sense either.

Can anyone explain what I'm seeing here? Thanks
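
One thing I did check: if I understand iostat correctly, with no interval argument it reports averages since boot, so kB_read/s should simply be kB_read divided by the uptime. Rough arithmetic on the (rounded) human-readable numbers above, using awk just as a calculator:

$ awk 'BEGIN { printf "sdc: %.0f s   md3: %.0f s\n", 329.1*1024/58.9, 7.9*1024/1.4 }'
sdc: 5722 s   md3: 5778 s

Both point to roughly the same ~1.6 hours since boot, so the per-second rates simply follow from the totals; it's the md3 totals themselves that look inflated compared to the member disks.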

I tried rebooting to test whether it was something transient, but I see the same behavior. Here's another set of outputs:

$ iostat -h
      tps    kB_read/s    kB_wrtn/s    kB_dscd/s    kB_read    kB_wrtn    kB_dscd Device
     4.72       567.4k         1.3k         0.0k      13.7G      32.6M       0.0k sde
     4.71       567.1k         1.3k         0.0k      13.7G      32.6M       0.0k sdd
     4.74       568.0k         1.3k         0.0k      13.7G      32.7M       0.0k sdc
    26.95        14.3M         0.1k         0.0k     352.5G       3.6M       0.0k md3

$ iostat -x -k
Device            r/s     rkB/s   rrqm/s  %rrqm r_await rareq-sz     w/s     wkB/s   wrqm/s  %wrqm w_await wareq-sz     d/s     dkB/s   drqm/s  %drqm d_await dareq-sz  aqu-sz  %util
sde              4.62    567.55     4.39  48.73    6.80   122.84    0.10      1.31     0.16  62.47   31.47    13.56    0.00      0.00     0.00   0.00    0.00     0.00    0.03   0.41
sdd              4.61    567.26     4.39  48.77    7.40   123.03    0.10      1.32     0.16  62.23   30.94    13.45    0.00      0.00     0.00   0.00    0.00     0.00    0.04   0.41
sdc              4.64    568.11     4.39  48.62    9.21   122.46    0.10      1.32     0.16  62.54   34.15    13.58    0.00      0.00     0.00   0.00    0.00     0.00    0.05   0.45
md3             26.95  14609.87     0.00   0.00   11.97   542.02    0.00      0.15     0.00   0.00   57.84    36.75    0.00      0.00     0.00   0.00    0.00     0.00    0.32   0.36
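
For what it's worth, summing the members' rkB/s and comparing against md3 (a quick awk sketch over the layout above, where rkB/s is the 3rd column and sdc/sdd/sde are the only sd[cde] devices on this box):

$ iostat -x -k | awk '$1 ~ /^sd[cde]$/ { members += $3 } $1 == "md3" { md = $3 } END { printf "members: %.1f kB/s  md3: %.1f kB/s  ratio: %.1f\n", members, md, md/members }'

With the numbers above that works out to roughly 1702.9 kB/s for the three members versus 14609.9 kB/s for md3, a ratio of about 8.6, whereas for RAID5 reads I would expect the md device to read at most as much as its members combined.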

The system is a Synology NAS:

Linux myhost 4.4.59+ #25426 SMP PREEMPT Mon Dec 14 18:48:50 CST 2020 x86_64 GNU/Linux synology_apollolake
mdadm - v3.4 - 28th January 2016

Added details of the array:

$ sudo mdadm --detail /dev/md3
/dev/md3:
        Version : 1.2
     Raid Level : raid5
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent
          State : clean 
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 64K
    Number   Major   Minor   RaidDevice State
       0       8       35        0      active sync   /dev/sdc3
       1       8       51        1      active sync   /dev/sdd3
       2       8       67        2      active sync   /dev/sde3

$ cat /proc/mdstat 
md3 : active raid5 sdc3[0] sde3[2] sdd3[1]
      27335120896 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]

Adding another test comparing iostat vs /proc/diskstats:

$ iostat -h | grep -E "sdc|sdd|sde|md3"
     0.42        41.7k        18.2k         0.0k      13.7G       6.0G       0.0k sde
     0.41        41.7k        18.2k         0.0k      13.7G       6.0G       0.0k sdd
     0.42        41.9k        18.2k         0.0k      13.8G       6.0G       0.0k sdc
     2.54         1.0M        35.6k         0.0k     352.5G      11.7G       0.0k md3

$ iostat | grep -E "sdc|sdd|sde|md3"
sde               0.42        41.74        18.15         0.00   14386484    6255981          0
sdd               0.41        41.69        18.15         0.00   14372460    6256545          0
sdc               0.42        41.85        18.15         0.00   14425589    6257257          0
md3               2.54      1072.39        35.55         0.00  369661368   12255776          0

$ cat /proc/diskstats | grep -E "sdc |sdd |sde |md3 " 
   8      64 sde 118510 112430 28772968 809485 24872 1535475 12512035 547302 0 288110 1356601
   8      48 sdd 117848 112258 28744920 875289 24860 1535638 12513163 538100 0 271284 1413110
   8      32 sdc 120234 112230 28851178 1103193 24846 1535822 12514587 746998 0 319872 1849980
   9       3 md3 683014 0 739322736 8168833 191860 0 24511552 14120302 0 131259 22257749
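
Cross-checking the two myself (assuming the standard /proc/diskstats layout, where the 6th field is sectors read in 512-byte sectors, so kB = sectors/2):

$ awk '$3 ~ /^(sd[cde]|md3)$/ { printf "%-4s sectors_read=%-10s kB_read=%d\n", $3, $6, $6/2 }' /proc/diskstats

That turns sde's 28772968 sectors into 14386484 kB and md3's 739322736 sectors into 369661368 kB, i.e. exactly the kB_read values iostat prints, so iostat seems to be faithfully reporting what the kernel exposes and the inflated md3 counters come from /proc/diskstats itself.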
JoeSlav
  • Can you show the output of `iostat -x -k` ? – shodanshok Jan 16 '21 at 13:07
  • I modified the question with that output too, thanks for helping @shodanshok – JoeSlav Jan 16 '21 at 15:50
  • On which system (exact version) do you encounter this? – Nikita Kipriyanov Jan 16 '21 at 16:00
  • Thanks @NikitaKipriyanov Added more info above -- do you need other info? – JoeSlav Jan 16 '21 at 17:22
  • I agree that the ratio between `md3` and the various component devices does not seem to make much sense. As it seems to be a Synology NAS, I would write on their forum to ask for clarifications. – shodanshok Jan 16 '21 at 17:58
  • It's not only the ratio; I don't believe 3 hard drives could give 1.4G/s of total read speed. It is worth comparing `iostat -h` with plain `iostat`. I'm starting to think there is a bug in the iostat bundled with Synology DSM, i.e. the `-h` conversion is wrong. – Nikita Kipriyanov Jan 16 '21 at 19:55
  • Thanks for the pointers, will open a ticket. – JoeSlav Jan 16 '21 at 22:02
  • by default, `iostat` [parses](https://github.com/sysstat/sysstat/blob/v12.5.2/common.h#L78) `/proc/diskstats` for disk statistics (format is space-separated [values](https://www.kernel.org/doc/Documentation/ABI/testing/procfs-diskstats)), probably check if `/proc/diskstats` is populated correctly? – mforsetti Jan 20 '21 at 03:02
  • Thanks for the pointer @mforsetti, I added to the question a comparison between `iostat` and `/proc/diskstats`. Doesn't make much sense either but I need to look at it carefully. – JoeSlav Jan 20 '21 at 08:36
  • Can you compare the output of `dstat` too? Should be available with `ipkg` – Mark R. Jan 20 '21 at 09:14
  • Sorry, can't install new stuff on these prod systems. – JoeSlav Jan 20 '21 at 12:56

0 Answers