Western Digital Green drives aren't really made to be used in a NAS. Apart from the IDLE3 setting, there's also a feature called TLER which controls how long the drive can spend repairing errors. On proper NAS drives, this duration is kept low. The reason is that if a drive takes too long to respond (because it is repairing an error), the RAID can decide the drive is malfunctioning and take it out of the RAID or initiate reconstruction. Sultana does a good job of describing the issue:
As I've recently come across this very subject I can attempt to
explain what most people mean by "RAID Capable".
All of Western Digital's hard drives can be placed in a RAID array,
but not all of them support the features that the RE (RAID Edition)
drives are capable and somewhat-better-suited for when connected to
RAID controllers, whether they be full-hardware add-in cards (Adaptec,
LSI, Areca, Intel PCIe and higher-end HighPoint) or onboard firmware
controllers (like Intel ICHxR, SiliconImage and Marvell controllers),
like Error Recovery Control and double motor head drivers.
TLER is Time-Limited Error Recovery, WD's version of Error Recovery
Control (Seagate's and Samsung's is called CCTL), which only really
comes into play when a drive in the array comes across an error when
attempting to read or write to a sector/block/page/etc. For drives on
a hardware RAID controller, the controller has its own level of error
recovery when attempting to rectify conflicts between the same
file/block/page/sector that's supposed to be mirrored (in RAID 1) or
stored in parity (in RAID 5).
When a normal desktop drive comes across a read or write error, it
will retry as many times as it can to read, recover, and remap the bad
sector/page/block/etc, sometimes taking up to a few minutes to do so.
In that span of time, the RAID controller sees the hard drive as
unresponsive; this conflicts with the controller's own error recovery
method, and the controller will usually drop an "unresponsive" drive
from the array if it takes longer than the time set in the card's
firmware (usually 10 seconds), even if the drive itself is still in
"good health". In a simple RAID mirror, the array will go through a
rebuild process which is pretty much just copying data from the
undropped drive to the dropped drive to maintain a full mirror, which,
when you factor in both the rebuild and reverification process, can
take a few hours -- depending on the amount of data and the size of
the drives that are mirrored. In a RAID 5 array, it can take
significantly longer to rebuild.
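The timeout conflict described above can be sketched in a few lines. The 10-second controller threshold comes from the description; the per-drive recovery times below are illustrative assumptions, not measured values:

```python
# Toy model of the timeout conflict between a drive's internal error
# recovery and a RAID controller's responsiveness check. The 10-second
# threshold is the typical firmware value mentioned in the text; the
# recovery times below are illustrative assumptions.

CONTROLLER_TIMEOUT_S = 10.0  # firmware drop threshold (per the text)

def controller_keeps_drive(recovery_time_s: float) -> bool:
    """Return True if the drive answers before the controller's
    timeout, False if the controller would drop it as unresponsive."""
    return recovery_time_s <= CONTROLLER_TIMEOUT_S

# A desktop drive may grind on a bad sector for minutes.
print(controller_keeps_drive(120.0))  # dropped from the array -> False

# A TLER/ERC drive gives up after ~7 seconds and reports the error,
# letting the controller rebuild the data from the mirror or parity.
print(controller_keeps_drive(7.0))    # stays in the array -> True
```

The point of the model is that the drive is healthy in both cases; only the time it spends before answering decides whether the array keeps it.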
RAID edition drives (WD's RE2/3/4s and Seagate's Constellation
drives), in addition to the hardware and warranty differences, have a
setting in the firmware to stop a read or write recovery attempt after
7 to 10 seconds and let the RAID controller recover by
copying the data from the other drive (in RAID 1) or from parity
information (RAID 5). Even on firmware RAID controllers like Intel's
onboard ICHxR ROM, the ERC timeout is 10-14 seconds, if I'm not
mistaken.
That being said, certain desktop class hard drives can have error
recovery control enabled using certain tools in Linux or Windows
(smartmontools, for example) and make them better suited for use in a
RAID array -- as a matter of fact, WD had a tool available called
"TLER.exe" that actually allowed one to change the ERC setting in the
drive firmware (however, it would apply the change to every WD drive
the tool detected at once), but most WD Green drives (made after
2008/2009) no longer support the function in their firmware, and Seagate
Barracuda drives can support enabling CCTL, but will revert back to
factory firmware settings if the drives are powered down (in other
words, if the system is warm restarted, the settings stick, but if one
shuts down and cold-boots, then CCTL goes back to disabled -- the
setting is volatile in firmware).
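On Linux, the modern equivalent of the old TLER.exe approach is smartmontools' SCT ERC interface. A minimal sketch, assuming a drive at /dev/sda that supports SCT commands (the device path is a placeholder; timer values are in units of 100 ms):

```shell
# Query the current Error Recovery Control timers (SCT ERC).
smartctl -l scterc /dev/sda

# Set read and write recovery limits to 7.0 seconds (70 x 100 ms),
# the usual RAID-friendly value. On many drives this setting is
# volatile, as described above, and must be reapplied after every
# power cycle (e.g. from a boot script).
smartctl -l scterc,70,70 /dev/sda

# Disable ERC again (the drive retries as long as it wants --
# desktop behavior).
smartctl -l scterc,0,0 /dev/sda
```

Drives that have the feature locked out in firmware (like the later WD Greens mentioned above) will simply report that SCT Error Recovery Control is unsupported.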
That said, it's the TLER/CCTL Error Recovery Control settings that
sometimes make RAID edition drives not really suited for single
desktop use on their own, because if they ever come across a similar
read/write error, the drive will simply stop the attempt after 7 to 10
seconds, rather than keep attempting as many times as it can like
regular desktop drives do.
Phrased another way, desktop drives are as fine in RAID arrays as
enterprise drives, as long as the desktop drives never encounter a
read/write error or bad sector, which is an unrealistic expectation.
The only instance in which it wouldn't be a problem is using the
software RAID natively in Windows, as the OS is natively aware of
Dynamic Disks and the mirror/stripe-with-parity configuration
information as it's stored on the disk, rather than in firmware ROM or
in a hardware controller's BIOS.
Your mileage may vary, in the end, as there are people who've made
RAID 5 arrays on their onboard RAID controllers (firmware-RAID) and
have had no issues using regular desktop drives, and those who've
created RAID 5 arrays on an LSI PCIe card with battery backup and
256 MB of onboard cache using WD RE4 drives and have had issues. RE
drives can fail and take out an entire array just as easily as desktop
drives in the same position, depending on the type of RAID array
they're configured in. In the end, it's not recommended to use desktop-class
drives in any array other than a simple mirror, and not supported in
any case, from any known drive manufacturer.
If I'm missing anything, please feel free to chime in.