
I'm trying to construct an experiment to measure the effect of ionice. What I'd like to do (per another answer on serverfault) is generate I/O frequently enough that a sufficiently "niced" process is starved of any I/O.

Based on that answer, I think I need to cause at least one actual I/O operation to a common cfq-scheduled device every 250ms. My thought was to write a trivial program with a loop (a rough sketch is below) that

  • writes to a (configurable) file on the common device,
  • does an fsync() (to force a definite I/O operation),
  • uses usleep() to delay a configurable amount of time,
  • periodically uses lseek() to rewind to the start of the file (so that I don't fill the file system).
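
For reference, here is a minimal sketch of the loop I have in mind (the file name, buffer size, rewind interval, and default delay below are arbitrary placeholders):

/* ioloop.c -- minimal sketch of the test loop described above.
 * Build: gcc -o ioloop ioloop.c
 * Usage: ./ioloop <file-on-common-device> <delay-in-microseconds>
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    const char *path = (argc > 1) ? argv[1] : "testfile";
    unsigned long delay_us = (argc > 2) ? strtoul(argv[2], NULL, 10) : 250000;
    char buf[4096];
    unsigned long i;
    int fd;

    memset(buf, 'x', sizeof(buf));
    fd = open(path, O_WRONLY | O_CREAT, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    for (i = 0; ; i++) {
        if (write(fd, buf, sizeof(buf)) < 0) {  /* dirty the file */
            perror("write");
            return 1;
        }
        fsync(fd);                      /* force an actual I/O to the device */
        if (i % 1000 == 0) {
            lseek(fd, 0, SEEK_SET);     /* rewind so the file doesn't grow without bound */
            printf("%lu iterations\n", i);  /* progress indication for comparing instances */
            fflush(stdout);
        }
        usleep(delay_us);               /* configurable delay between I/Os */
    }
}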

I then start up one instance of the program using ionice -c3 (idle scheduling class) against one file on the common device. I simultaneously run various instances with the default (best-effort) scheduling class, specifying a different file on the common device (varying the delay values).
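
For completeness: my understanding is that ionice is a thin wrapper around the ioprio_set(2) syscall, so the idle class could also be requested from inside the test program itself. A rough, untested sketch (constant values copied from linux/ioprio.h, since as far as I can tell glibc provides no wrapper):

/* set_idle_ioprio.c -- rough, untested sketch of the equivalent of `ionice -c3`
 * done from inside the process via the ioprio_set(2) syscall.
 */
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

#define IOPRIO_WHO_PROCESS  1   /* target is a single process */
#define IOPRIO_CLASS_IDLE   3   /* the "idle" scheduling class */
#define IOPRIO_CLASS_SHIFT  13  /* class lives in the top bits of the priority value */

int main(void)
{
    /* pid 0 means "the calling process"; priority data within the class is 0 */
    if (syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0,
                (IOPRIO_CLASS_IDLE << IOPRIO_CLASS_SHIFT) | 0) < 0) {
        perror("ioprio_set");
        return 1;
    }
    /* ... then run the I/O loop sketched above ... */
    return 0;
}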

My hypothesis was that for delay values of 250ms or more on the "best-effort" process, I would see progress made by the "idle" process; for values less than 250ms, I would see little to no progress made by the "idle" process.

My observation was that there was no difference in performance between the two processes; they both made similar progress. Just to be sure (in case I was misreading the wall-clock indications that the "best-effort" process was performing I/O much more often than every 250ms), I started multiple simultaneous instances of the "best-effort" process, specifying no (zero) delay. Still, I saw no difference in performance between the processes in the two scheduling classes.

I also double-checked which I/O scheduler the device is using:

$ cat /sys/block/xvda/queue/scheduler
noop anticipatory deadline [cfq] 

What is it that I'm missing about how the cfq scheduler works?

If it matters, this is on a 2.6.18 kernel.

jhfrontz

1 Answer


I'd try measuring the effect with a load generator like stress -i n or stress -d n, where "n" is the number of worker processes. Run that in one window, run nmon or iostat in another, and run a representative application process against the same block device. Then see how the service time reported by iostat changes with various ionice settings (or measure response time from within your app).

As for cfq, its behavior seemed to change throughout the RHEL5 lifecycle (which remained on 2.6.18). The changes were noticeable enough on my application servers that I had to move to the noop and deadline elevators because of contention issues.

ewwhite
  • OK, I'll give it a try -- though at first glance, `stress` appears to be doing what I'm doing (but in a less deterministic fashion) -- `stress -i` simply calls `sync(2)` in quick succession, which I think may or may not induce I/O, depending on other activity in the system -- unlike my attempt, which always makes sure there is something "dirty" to be flushed via `fsync(2)`. Similarly, `stress -d` appears to be doing a bunch of `write(2)` operations -- which I think can be buffered in the kernel until it calls `close(2)`. – jhfrontz Mar 14 '12 at 17:27
  • 1
  • I tried running two instances of my test app (one ionice'd to *idle*), each in its own window. I started up a `stress -d 1` in a third window. Prior to starting stress, I got similar (and marked) progress from each test instance. After starting `stress`, both slowed appreciably but made essentially identical progress. I tried running `stress` ionice'd to *idle* and observed the same: even the *idle* `stress` slowed down both test app instances similarly (I would have expected the *best-effort* instance to have been at least less affected). – jhfrontz Mar 14 '12 at 19:40