ReadyNAS Duo v2 has slow read and write performance


My ReadyNAS Duo v2 has slow read and write speeds, despite being on a gigabit LAN. I am using 2 x 2 TB Western Digital Green drives. I'm seeing read speeds of 3 MB/sec and write speeds of 1 MB/sec.

Any pointers or suggestions would be most appreciated.

Fidel

Posted 2017-02-19T12:23:32.883

Reputation: 398

Answers


Western Digital Green drives are known to be affected by an issue with a firmware setting called IDLE3, which tells the drive to park its heads after only a short period of inactivity, i.e. far too frequently. I changed this setting and noticed an improvement in my ReadyNAS Duo v2: the read speed increased from 3 MB/sec to 30 MB/sec and the write speed from 1 MB/sec to 20 MB/sec.

There's a program called idle3ctl that can be used to change the setting. The easiest approach is to take the drives out of the NAS, connect them to a computer running Linux, and change the setting with the following commands:

sudo apt-get install idle3-tools

sudo idle3ctl -d /dev/sda

sudo idle3ctl -d /dev/sdb

Then put the drives back into the NAS and power it off and back on; the new setting only takes effect after the drives have been power-cycled.
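If you want to confirm what the drives are actually set to, idle3ctl can also read the timer back. A minimal sketch, assuming the drives show up as /dev/sda and /dev/sdb on the Linux machine (adjust the device names to suit your system):

    # read the stored IDLE3 timer value before making any change
    sudo idle3ctl -g /dev/sda
    sudo idle3ctl -g /dev/sdb

    # after running the -d commands above, re-read to confirm the timer is now reported as disabled
    sudo idle3ctl -g /dev/sda
    sudo idle3ctl -g /dev/sdb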

If you'd like to change the setting directly from within the NAS itself, it's a bit more involved. In the past, adjusting the IDLE3 value was possible using these steps. However, now that Debian has stopped supporting the 'Squeeze' release, a few more steps are required:

  1. Enable SSH by installing the app called Enable Root SSH Access, available on the Netgear website.
  2. SSH into your NAS (if you're on Windows, you can use PuTTY). The username and password are the same ones you use to log in through the web page.
  3. Check how many times the drive heads have been parked. If the count is already in the thousands, the heads are probably being parked too often (my values were around 2.2 million):

    smartctl -A /dev/sda | grep Load_Cycle_Count

    smartctl -A /dev/sdb | grep Load_Cycle_Count

  4. The NAS runs Debian 6, codenamed 'Squeeze'. That release is no longer supported, so you need to tell apt-get where to fetch packages from. Use the following steps to add new entries to /etc/apt/sources.list (a non-interactive alternative is sketched after this list):

    vi /etc/apt/sources.list

    press 'i' to go into insert mode, then add the following lines:

    deb http://archive.debian.org/debian squeeze main

    deb http://archive.debian.org/debian squeeze-lts main

    now press Escape to exit insert mode, then type ':wq' to save the file and quit vi

    The URLs came from here

  5. Install Aptitude, which helps resolve missing dependencies and conflicts:

    apt-get install aptitude

  6. Tell apt-get to trust the archived packages (these commands came from here):

    sudo apt-get update -o Acquire::Check-Valid-Until=false

    aptitude install debian-archive-keyring

  7. Update the package lists:

    apt-get update

  8. Run the following command to install gcc and the rest of the build tools. Important: don't accept the first solution aptitude offers you; the second one is better because it performs the downgrades that are required.

    aptitude install build-essential

  9. Finally, we can download idle3-tools, which provides the idle3ctl program used to change the value in the drive firmware.

    cd ~

    wget https://downloads.sourceforge.net/project/idle3-tools/idle3-tools-0.9.1.tgz

  10. General instructions for how to use it can be found here

  11. After downloading the tarball, for example the 0.9.1 release, uncompress it:

    tar xzvf idle3-tools-0.9.1.tgz

  12. Change to the source directory, and compile the tool:

    cd idle3-tools-0.9.1

    make

  13. You should now have the idle3ctl executable.

    ls idle3ctl

  14. Check the version

    ./idle3ctl -V

  15. Work out which drive to apply it to:

    cat /proc/partitions

  16. If you have two WD Green drives, they will probably be:

    /dev/sda

    /dev/sdb

  17. Read the IDLE3 value using the following command. This tells you how many seconds the drive waits before parking the heads:

    ./idle3ctl -g105 /dev/sda

  18. To set it to 5 minutes (300 seconds), use the following commands; 138 is the raw value that corresponds to 300 seconds (see the conversion sketch after this list):

    ./idle3ctl -s 138 /dev/sda

    ./idle3ctl -s 138 /dev/sdb

  19. In fact, it might be worth turning the timer off altogether (as suggested here by Daniel Mauerhofer, a WD employee):

    ./idle3ctl -d /dev/sda

    ./idle3ctl -d /dev/sdb

    I disabled the setting on mine and things work very well. The drives now get spun down by the ReadyNAS software rather than by the drives themselves.

  20. Important: shut the NAS down using the normal admin page, do NOT just restart it; the new value only takes effect after the drives have been fully powered off. When it starts back up, performance should be better (the verification sketch below shows how to confirm the change).
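A note on step 4: if you'd rather not edit the file interactively with vi, the same two lines can be appended in one go. This is just a sketch and assumes you want to append to the existing /etc/apt/sources.list rather than replace it:

    # append the archive repositories to the existing sources list
    echo 'deb http://archive.debian.org/debian squeeze main' >> /etc/apt/sources.list
    echo 'deb http://archive.debian.org/debian squeeze-lts main' >> /etc/apt/sources.list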
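On the raw value used in step 18: the IDLE3 timer isn't stored directly in seconds. The usual encoding (as reported by wdidle3/idle3ctl, and an assumption here rather than something read off your particular drive) is tenths of a second for raw values up to 128, and 30-second units above that, which is why 138 corresponds to 300 seconds:

    # raw values above 128 count in 30-second units:
    echo $(( (138 - 128) * 30 ))   # prints 300 (seconds), i.e. 5 minutes

    # going the other way, from a desired timeout in whole 30-second steps to a raw value:
    echo $(( 300 / 30 + 128 ))     # prints 138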
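And on step 20: after the full power-off, it's worth confirming the new value stuck and that the Load_Cycle_Count from step 3 has stopped climbing. A minimal sketch, assuming the drives are still /dev/sda and /dev/sdb and that you run the idle3ctl lines from the idle3-tools-0.9.1 directory built earlier:

    # confirm the stored IDLE3 value after the power cycle
    ./idle3ctl -g105 /dev/sda
    ./idle3ctl -g105 /dev/sdb

    # note the head-park counts now, then check again a few hours later;
    # the numbers should be climbing far more slowly, or not at all
    smartctl -A /dev/sda | grep Load_Cycle_Count
    smartctl -A /dev/sdb | grep Load_Cycle_Count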

PS: Western Digital provides its own tool, wdidle3.exe, which can also be used to change the setting.

Fidel

Posted 2017-02-19T12:23:32.883

Reputation: 398


Western Digital Green drives aren't really made to be used in a NAS. Apart from the IDLE3 setting, there's also a feature called TLER which controls how long the drive may spend trying to recover from an error. On proper NAS drives this time limit is kept short: if a drive takes too long to respond (because it is busy trying to repair an error), the RAID layer can decide the drive is malfunctioning and drop it from the array or initiate a rebuild. You can check whether a drive exposes this setting with smartctl (see the sketch after the quote). Sultana does a good job of describing the issue:

As I've recently come across this very subject I can attempt to explain what most people mean by "RAID Capable".

All of Western Digital's hard drives can be placed in a RAID array, but not all of them support the features, such as Error Recovery Control and double motor head drivers, that make the RE (RAID Edition) drives better suited for use with RAID controllers, whether those are full hardware add-in cards (Adaptec, LSI, Areca, Intel PCIe and higher-end HighPoint) or onboard firmware controllers (like Intel ICHxR, SiliconImage and Marvell).

TLER is Time-Limited Error Recovery, WD's version of Error Recovery Control (Seagate's and Samsung's is called CCTL), which only really comes into play when a drive in the array hits an error while attempting to read or write a sector/block/page/etc. For drives on a hardware RAID controller, the controller has its own level of error recovery for rectifying conflicts in the same file/block/page/sector that's supposed to be mirrored (in RAID 1) or stored with parity (in RAID 5).

When a normal desktop drive comes across a read or write error, it will retry as many times as it can to read, write, recover and remap the bad sector/page/block/etc, sometimes taking up to a few minutes to do so. In that span of time the RAID controller sees the hard drive as unresponsive; this conflicts with the controller's own error recovery method, and it will usually drop an "unresponsive" drive from the array if it takes longer than the time set in the card's firmware (usually 10 seconds), even if the drive itself is still in good health. In a simple RAID mirror, the array then goes through a rebuild process, which is pretty much just copying data from the undropped drive to the dropped drive to restore a full mirror; when you factor in both the rebuild and the reverification pass, that can take a few hours, depending on the amount of data and the size of the mirrored drives. In a RAID 5 array, the rebuild can take significantly longer.

RAID edition drives (WD's RE2/3/4 and Seagate's Constellation drives), in addition to the hardware and warranty differences, have a setting in the firmware that stops a read or write recovery attempt after 7 to 10 seconds and lets the RAID controller recover by copying the data from the other drive (in RAID 1) or from parity information (in RAID 5). Even on firmware RAID controllers like Intel's onboard ICHxR ROM, the ERC timeout is 10-14 seconds, if I'm not mistaken.

That being said, certain desktop-class hard drives can have error recovery control enabled using tools in Linux or Windows (smartmontools, for example), which makes them better suited to use in a RAID array. In fact, WD used to offer a tool called "TLER.exe" that allowed one to change the ERC setting in the drive firmware (although it applied the change to every WD drive the tool detected at once), but most WD Green drives made after 2008/2009 no longer support the function in their firmware. Seagate Barracuda drives can support enabling CCTL, but revert to factory firmware settings when the drives are powered down (in other words, if the system is warm-restarted the setting sticks, but after a shutdown and cold boot CCTL goes back to disabled; the setting is volatile in firmware).

That said, it's the TLER/CCTL Error Recovery Control settings that sometimes make RAID edition drives not really suited for single desktop use on their own, because if they ever come across a similar read/write error, the drive will simply stop the attempt after 7 to 10 seconds, rather than keep attempting as many times as it can like regular desktop drives do.

Phrased another way, desktop drives work as well in RAID arrays as enterprise drives do, as long as the desktop drives never encounter a read/write error or bad sector, which is an unrealistic expectation. The only case in which it wouldn't be a problem is software RAID native to Windows, since the OS is natively aware of Dynamic Disks and the mirror/stripe-with-parity configuration is stored on the disks themselves, rather than in firmware ROM or a hardware BIOS.

Your mileage may vary, in the end: there are people who've built RAID 5 arrays on their onboard (firmware) RAID controllers with regular desktop drives and had no issues, and others who've built RAID 5 arrays on an LSI PCIe card with battery backup and 256 MB of onboard cache using WD RE4 drives and had issues. RE drives can fail and take out an entire array just as easily as desktop drives in the same position, depending on the type of RAID array they're configured in. In the end, using desktop-class drives in any array other than a simple mirror is not recommended, and not supported in any case, by any known drive manufacturer.

If I'm missing anything, please feel free to chime in
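As the quoted answer mentions, smartmontools can query, and on drives that support it change, the error recovery timeout via SCT ERC. A minimal sketch; note that many desktop drives (including most later WD Greens) will simply report that the command is not supported, and on some drives the setting is volatile and resets after a power cycle:

    # show the current SCT Error Recovery Control (TLER/CCTL) read/write timeouts, if supported
    smartctl -l scterc /dev/sda

    # set both read and write recovery timeouts to 7.0 seconds (values are in tenths of a second)
    smartctl -l scterc,70,70 /dev/sda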

Fidel

Posted 2017-02-19T12:23:32.883

Reputation: 398