
The proc(5) manpage describes iowait as "time waiting for IO to complete". This was mostly explained in an earlier question. My question is: while waiting in blocking IO, does this include waiting on blocking network IO, or only local IO?

Alex J

4 Answers


It means waiting for "file I/O"; that is, any read/write call on a file in a mounted filesystem. It also probably counts time spent waiting to swap in or demand-load pages into memory, e.g. libraries not yet in memory, or pages of mmap()'d files which aren't in RAM.

It does NOT count time spent blocked on IPC objects such as sockets, pipes, and ttys, or in calls like select(), poll(), sleep(), or pause().

Basically, it's time that a thread spends waiting for synchronous disk I/O: during this time it is theoretically able to run, but can't because some data it needs isn't there yet. Such processes usually show up in the "D" state and contribute to the box's load average.

Confusingly, I think this probably includes file I/O on network filesystems.
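For reference, the iowait figure that tools like top report comes from the fifth numeric field of the `cpu` line in /proc/stat, measured in USER_HZ ticks. A minimal sketch of parsing it (the sample line below is made up for illustration; on a live system you would read /proc/stat itself):

```python
# Parse the aggregate "cpu" line of /proc/stat.
# Fields after "cpu": user nice system idle iowait irq softirq steal guest guest_nice
def parse_iowait(stat_line):
    fields = stat_line.split()
    assert fields[0] == "cpu"
    values = list(map(int, fields[1:]))
    iowait = values[4]        # 5th numeric field is iowait, in USER_HZ ticks
    total = sum(values)       # total ticks across all states
    return iowait, total

# Sample line, invented for illustration only.
line = "cpu  10132153 290696 3084719 46828483 16683 0 25195 0 0 0"
iowait, total = parse_iowait(line)
print(iowait)  # 16683
```

Dividing iowait by the total gives the percentage shown in top's %wa column.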

MarkR
  • As nfs IO is File I/O too, I guess you are right ;-) – wzzrd Jul 08 '09 at 09:22
  • What about loopback interfaces? How does linux treat this kind of interfaces? – Jalal Mostafa Aug 01 '17 at 13:44
  • Re "Confusingly I think this probably includes file IO on network filesystems": according to https://unix.stackexchange.com/a/203322/120198, network I/O isn't counted in iowait. – Stu Jan 24 '20 at 21:56

The iowait time is the amount of time a process spends waiting in the kernel I/O scheduler. As far as I know, this doesn't have anything to do with network I/O as far as regular socket connections are concerned. However, it does include time spent waiting for network filesystems like NFS.

Kamil Kisiel

It does.

Incidentally, one of the servers I manage is experiencing high iowait caused by a bad NFS mount.

top - 06:19:03 up 14 days, 10:15,  3 users,  load average: 9.67, 11.83, 12.31
Tasks: 135 total,   1 running, 134 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.2%us,  0.2%sy,  0.0%ni,  0.0%id, 99.7%wa,  0.0%hi,  0.0%si,  0.0%st

top - 06:22:55 up 14 days, 10:19,  3 users,  load average: 10.58, 11.13, 11.89
Tasks: 137 total,   1 running, 136 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.2%sy,  0.0%ni,  0.0%id, 99.8%wa,  0.0%hi,  0.0%si,  0.0%st

And look at the processes in the D state:

root     27011  0.0  0.0      0     0 ?        S    03:12   0:00 [nfsd4]
root     27012  0.0  0.0      0     0 ?        S    03:12   0:00 [nfsd4_callbacks]
root     27013  0.0  0.0      0     0 ?        D    03:12   0:01 [nfsd]
root     27014  0.0  0.0      0     0 ?        D    03:12   0:01 [nfsd]
root     27015  0.0  0.0      0     0 ?        D    03:12   0:01 [nfsd]
root     27016  0.0  0.0      0     0 ?        D    03:12   0:01 [nfsd]
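The D-state processes above can also be found programmatically: the state letter is the field right after the parenthesized command name in /proc/&lt;pid&gt;/stat. A sketch (assuming a Linux /proc layout; these helper names are my own, not a standard API):

```python
import os

def proc_state(stat_contents):
    """Extract the state letter (R, S, D, ...) from a /proc/<pid>/stat line.
    The comm field is in parentheses and may itself contain ')' or spaces,
    so split on the LAST ')' rather than on whitespace from the start."""
    after_comm = stat_contents.rsplit(")", 1)[1]
    return after_comm.split()[0]

def d_state_pids():
    """Return PIDs currently in uninterruptible sleep (state D)."""
    pids = []
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue
        try:
            with open(f"/proc/{entry}/stat") as f:
                if proc_state(f.read()) == "D":
                    pids.append(int(entry))
        except OSError:
            pass  # process exited between listdir() and open()
    return pids
```

The try/except matters: processes can disappear between listing /proc and opening their stat file.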
slm
Sreeraj

iowait includes these network calls. I say this because, from the kernel's point of view, NFS is handled just like a local Linux filesystem:

$ vim linux-2.6.38.2/fs/nfs/file.c 

const struct file_operations nfs_file_operations = {
        .llseek         = nfs_file_llseek,
        .read           = do_sync_read,
        .write          = do_sync_write,
        .aio_read       = nfs_file_read,
        .aio_write      = nfs_file_write,
        .mmap           = nfs_file_mmap,
        .open           = nfs_file_open,
        .flush          = nfs_file_flush,
        .release        = nfs_file_release,
        .fsync          = nfs_file_fsync,
        .lock           = nfs_lock,
        .flock          = nfs_flock,
        .splice_read    = nfs_file_splice_read,
        .splice_write   = nfs_file_splice_write,
        .check_flags    = nfs_check_flags,
        .setlease       = nfs_setlease,
};

When a process calls write on file descriptor 5, something like this happens:

files->fd_array[5]->f_op->write(argv.......)

So the process doesn't know what kind of filesystem it is using (VFS magic), and iowait is accounted the same way as for a local filesystem.
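That dispatch can be sketched as a toy model: each filesystem registers its own operations table, and the generic write path calls through it without knowing whether the backend is a local disk or the network. (All names below are illustrative stand-ins, not kernel APIs.)

```python
# Toy model of VFS dispatch. The caller only sees file->f_op->write;
# which implementation runs depends on the filesystem the fd lives on.
class FileOps:
    def __init__(self, write):
        self.write = write

def ext4_write(data):   # stand-in for a local-disk write path
    return f"ext4 wrote {len(data)} bytes"

def nfs_write(data):    # stand-in for a write that goes over the wire
    return f"nfs wrote {len(data)} bytes"

class File:
    def __init__(self, f_op):
        self.f_op = f_op

fd_array = {5: File(FileOps(nfs_write))}  # fd 5 happens to be on NFS

def vfs_write(fd, data):
    # Equivalent of files->fd_array[fd]->f_op->write(...)
    return fd_array[fd].f_op.write(data)

print(vfs_write(5, b"hello"))  # nfs wrote 5 bytes
```

The caller's code path is identical either way, which is why time blocked here is charged to iowait regardless of whether the bytes end up on a local disk or on an NFS server.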

slm
c4f4t0r