It works like this:
Most operating systems have a system call (or file-open flag) that allows a so-called "synchronous write". This means that when the write call returns, the data is guaranteed to have been committed to disk.
A synchronous write is therefore non-cached: it blocks the application until the data is on disk. This kind of operation is obviously slower than a cached write, where the OS keeps the data in memory and flushes it to disk once the disk is idle enough.
Some critical software, such as database engines, performs synchronous writes for critical data, because a half-written update in case of a power loss can be detrimental to database integrity.
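As a concrete illustration, here is how an application might request synchronous writes on a POSIX system. Both variants below do the same job; the filename and the "transaction" text are made up for the example:

```python
import os

# Variant 1: open the file with O_SYNC, so every write() blocks until
# the kernel has committed the data to stable storage.
fd = os.open("journal.log", os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o644)
os.write(fd, b"BEGIN TRANSACTION 42\n")  # blocks until on disk
os.close(fd)

# Variant 2 (more common): buffered writes followed by an explicit sync.
with open("journal.log", "ab") as f:
    f.write(b"COMMIT TRANSACTION 42\n")
    f.flush()             # push Python's own buffer down to the OS
    os.fsync(f.fileno())  # ask the OS to commit its cache to disk
```

With a lying RAID controller underneath, both `O_SYNC` and `fsync()` return as soon as the data reaches the controller's cache, not the platters.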
RAID controllers are notoriously slow at RAID-5 writes (each small write forces the controller to read the old data and old parity before it can compute and write the new parity), so this becomes a problem if your application software performs a lot of synchronous writes. For this reason, RAID-5 controllers are equipped with their own caches.
What the RAID controller does is write the data to its own cache instead and LIE to the OS, telling it that the data was committed to disk when it is actually still sitting in the RAID cache.
But what if power is lost while the data is still in the RAID controller's cache? You'd be left with half-written and probably inconsistent data on your disks.
You may say that this behaviour defeats the purpose of a synchronous write... if a cached write were acceptable, the application wouldn't have asked for a synchronous write in the first place.
The compromise is this: the RAID controller still lies to the OS that it has committed the data to disk, but to protect this critical data against a power failure, the controller has a battery that keeps the cache alive for some time, until power can be restored.
So after power comes back and the disks spin up and initialize, the controller still has that data in its cache thanks to the battery, and can finish writing your transaction to disk.
Everyone's happy.
This is why RAID controllers usually won't let you enable write cache unless you have a functional and charged battery unit.
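That policy can be sketched as a tiny model. This is not any real controller's firmware, just a hypothetical illustration of the rule described above: write-back caching (acknowledging a write while it's still in cache) is only allowed when a healthy battery can carry the cache through a power loss.

```python
def choose_cache_policy(battery_present: bool, battery_charged: bool) -> str:
    """Hypothetical sketch of a controller's cache-policy decision."""
    if battery_present and battery_charged:
        # Battery can preserve the cache across an outage, so it is
        # safe to acknowledge writes before they reach the disks.
        return "write-back"
    # No safety net: every write must be committed to disk before
    # it is acknowledged to the OS.
    return "write-through"
```

A controller in this model silently degrades to write-through (slow but safe) the moment its battery fails or discharges.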