I think there are two possible definitions of what a RAID array "write hole" is.
The page you mention takes "write hole" to mean RAID array inconsistency. To understand this, you should consider how a RAID array works. Write operations are sent to the different discs of the array. But as the discs are independent, there is no guarantee about the order in which the writes are actually committed to the physical media by the discs. In other words, when you write blocks to a RAID array, the write operations are not atomic. This is not a problem during normal operation of the array, but it can be after a power-loss event or any other critical failure.
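To make the non-atomicity concrete, here is a toy Python sketch (not how any real RAID implementation works, just the idea): one logical write fans out into independent per-disc writes, and a crash can interrupt the sequence at any point, leaving the stripe's parity stale.

```python
import random

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Three toy "discs" holding one RAID 5 stripe: data0, data1, parity.
discs = [bytearray(b"AAAA"), bytearray(b"BBBB"), bytearray(b"\x03" * 4)]

def write_stripe(d0, d1, crash_after=None):
    ops = [(0, d0), (1, d1), (2, xor(d0, d1))]
    random.shuffle(ops)  # independent discs: no guaranteed commit order
    for n, (idx, payload) in enumerate(ops):
        discs[idx][:] = payload
        if crash_after is not None and n == crash_after:
            raise RuntimeError("power loss")  # remaining writes never happen

try:
    write_stripe(b"CCCC", b"DDDD", crash_after=0)  # torn write
except RuntimeError:
    pass

# The parity no longer matches the data blocks: the array is inconsistent.
print(xor(bytes(discs[0]), bytes(discs[1])) == bytes(discs[2]))  # False
```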
Internal inconsistency of a RAID array can happen at any RAID level that has some sort of data redundancy: RAID 1, 4, 5, 6, etc. RAID 0 is not subject to inconsistency issues, as there is no redundant data that needs to be kept synchronized among the discs of the array.
There are several possible strategies to deal with RAID array inconsistency issues:
Linux MD software RAID uses, by default, a "sync" strategy when assembling a RAID array that is marked as "dirty". For RAID 1 arrays, one of the discs is taken as the master and its data is copied to the other discs. For RAID 4/5/6, the data blocks are read, then the parity blocks are regenerated and written back to the discs. This sync process can be very lengthy. To make it much faster, there is a feature called the write-intent "bitmap", which keeps track of the hot chunks of the array. The bitmap reduces the duration of the sync significantly, in exchange for some performance loss during write operations.
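Here is a minimal sketch of what the write-intent bitmap buys you (the class and chunk size are invented for illustration; MD's real bitmap lives in the array metadata):

```python
CHUNK_SIZE = 64 * 1024  # made-up chunk granularity

class WriteIntentBitmap:
    """Toy write-intent bitmap: the set of dirty chunks survives a crash."""

    def __init__(self):
        self.dirty = set()

    def before_write(self, offset):
        # Persisted *before* the data hits any disc; this extra write
        # is where the performance cost comes from.
        self.dirty.add(offset // CHUNK_SIZE)

    def after_write_completes(self, offset):
        # Cleared (lazily, in real MD) once all discs have the data.
        self.dirty.discard(offset // CHUNK_SIZE)

    def chunks_to_resync(self):
        # After a crash, only these chunks need syncing, not the whole array.
        return sorted(self.dirty)

bm = WriteIntentBitmap()
bm.before_write(5 * CHUNK_SIZE + 100)  # this write was in flight at crash time
bm.before_write(9 * CHUNK_SIZE)
bm.after_write_completes(9 * CHUNK_SIZE)
print(bm.chunks_to_resync())  # [5] -> resync one chunk instead of terabytes
```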
Hardware RAID controllers with battery-backed memory use a 2-step strategy. First, the data blocks to be written are committed to the memory, which acts as a journal. Only after this step are the data blocks sent to the discs. In case of a power-loss event or any other failure, the RAID controller will make sure that all the data blocks recorded in the memory are actually committed to the discs.
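A sketch of the 2-step idea, assuming the journal itself is persistent (the battery-backed memory plays that role here; all names are made up):

```python
journal = []   # stands in for the battery-backed memory
discs = {}     # stripe number -> blocks committed to the discs

def write_to_discs(stripe_no, blocks):
    discs[stripe_no] = blocks  # real code: one independent write per disc

def raid_write(stripe_no, blocks):
    journal.append((stripe_no, blocks))  # step 1: commit to the journal
    write_to_discs(stripe_no, blocks)    # step 2: send data + parity to discs
    journal.remove((stripe_no, blocks))  # retire the entry once on disc

def recover_after_power_loss():
    # Anything still journaled may be only partially on the discs;
    # replaying the whole stripe makes data and parity consistent again.
    for stripe_no, blocks in list(journal):
        write_to_discs(stripe_no, blocks)
    journal.clear()
```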
There is also a CoW (Copy on Write) strategy, which I will explain a bit later.
The other possible definition of "write hole" refers to data-loss issues in RAID 4/5/6 under certain circumstances (RAID levels 1 and 10 are not subject to this kind of "write hole"). I'm quoting Neil Brown's definition of the problem in question:
"The write hole is a simple concept that applies to any stripe+parity RAID layout like RAID4, RAID5, RAID6 etc. The problem occurs when the array is started from an unclean shutdown without all devices being available, or if a read error is found before parity is restored after the unclean shutdown."
Say, for example, that you have a RAID 5 array and there is a power-loss event. The RAID will try to bring the array back to a consistent state. But one of the discs no longer works, or some of its sectors cannot be read. Therefore, the parity cannot be regenerated from the data blocks, as some of them are missing. You could say: yes, but we have redundancy in the array, so we could use the parity to regenerate the missing data blocks, no? The answer is no. If you do this, you could get garbage in some data blocks: because of the interrupted write, the surviving data and the parity may belong to different points in time, so combining them reconstructs a block that never existed. This is a very serious issue. It's not that some data blocks were written and others were not (modern journaled filesystems have no real problem with that). It's that some data blocks of the array are lost or, if regenerated, are garbage. Either way, there is a serious issue here.
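Here is the failure as a small worked example (a toy 3-disc RAID 5 with XOR parity; the values are made up):

```python
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

d0, d1 = b"AAAA", b"BBBB"
parity = xor(d0, d1)        # consistent stripe

# A write updates d0, but power fails before the matching parity
# update reaches the parity disc: the parity on disc is now stale.
d0 = b"CCCC"

# Disc 1 then fails. Rebuilding its block from d0 and the stale parity:
d1_rebuilt = xor(d0, parity)

print(d1_rebuilt)  # b'@@@@' -- neither the old b'BBBB' nor anything
                   # that was ever written to the array: garbage
```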
If we take this stricter definition of "write hole", we see that it is a corner case that only happens under specific circumstances. There must be a critical failure like a power-loss event, and, additionally, some disc has to fail (either completely or partially). But for RAID 4/5/6 (the levels with parity blocks), the risk is there.
This risk can be prevented by using a 2-step write strategy (the write-with-journal technique explained previously). With the help of the journal, all data blocks can be safely written to the discs, even in those corner cases. Hardware RAID with battery-backed memory, if well implemented, is not subject to any "write hole" issues. Linux MD software RAID also gained a write-journal feature some years ago (the --write-journal option of mdadm), which effectively prevents the "write hole" issue.
I'm not so familiar with ZFS, but I think it uses a CoW (Copy on Write) technique in RAID-Z arrays to avoid any "write hole" issues. It writes all the data plus parity to some unused space, and then updates the virtual reference to those physical blocks. Thanks to this 2-step process, the write operations are effectively atomic, so the write hole issue is prevented.
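A sketch of the CoW idea as I understand it (the structures here are invented for illustration; ZFS's actual mechanism involves its block-pointer tree and uberblock):

```python
# Physical space, plus one logical reference to the live copy of a stripe.
space = {"pba_old": ("old data", "old parity")}
live = {"stripe": "pba_old"}

# Step 1: write new data + parity to *unused* space. The old copy is
# never overwritten, so a crash at this point loses nothing.
space["pba_new"] = ("new data", "new parity")

# Step 2: atomically flip the reference. A crash before this line leaves
# the old stripe fully live; after it, the new one. There is no state in
# which data and parity come from different versions.
live["stripe"] = "pba_new"

print(space[live["stripe"]])  # ('new data', 'new parity')
```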