According to this article, a CSV volume is actually a CsvFs layer that hides and controls access to the underlying NTFS volume. It provides synchronization services that let multiple CSV-aware actors write to the filesystem without conflict.
Meanwhile, DFS-R is tied to NTFS because it works directly with low-level structures (the NTFS USN change journal, as I understand it) to catch and respond to file creation and change events.
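To give a feel for how tightly that couples a consumer to NTFS, here's a rough sketch of reading the USN change journal through the public Windows API. This is not DFS-R's actual code (DFS-R's internals aren't published), just an illustration of the kind of volume-level access involved; it opens the raw volume handle for `C:`, which requires administrator rights, and the volume path is an assumption for the example.

```c
/* Sketch: enumerate recent NTFS change-journal records on C:.
 * Illustrative only -- assumes an NTFS volume with an active USN
 * journal and a process running with admin rights. */
#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

int main(void)
{
    /* Open the volume itself, not a file on it -- this is the kind of
     * "beneath the covers" access CsvFs is designed to mediate. */
    HANDLE hVol = CreateFileW(L"\\\\.\\C:", GENERIC_READ,
                              FILE_SHARE_READ | FILE_SHARE_WRITE,
                              NULL, OPEN_EXISTING, 0, NULL);
    if (hVol == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFileW failed: %lu\n", GetLastError());
        return 1;
    }

    USN_JOURNAL_DATA jd;
    DWORD bytes;
    if (!DeviceIoControl(hVol, FSCTL_QUERY_USN_JOURNAL, NULL, 0,
                         &jd, sizeof(jd), &bytes, NULL)) {
        fprintf(stderr, "FSCTL_QUERY_USN_JOURNAL failed: %lu\n",
                GetLastError());
        CloseHandle(hVol);
        return 1;
    }

    /* Ask for create/overwrite/close records from the start of the
     * journal's retained range. */
    READ_USN_JOURNAL_DATA rd = {0};
    rd.StartUsn      = jd.FirstUsn;
    rd.ReasonMask    = USN_REASON_FILE_CREATE |
                       USN_REASON_DATA_OVERWRITE |
                       USN_REASON_CLOSE;
    rd.UsnJournalID  = jd.UsnJournalID;

    BYTE buf[4096];
    if (DeviceIoControl(hVol, FSCTL_READ_USN_JOURNAL, &rd, sizeof(rd),
                        buf, sizeof(buf), &bytes, NULL)) {
        /* The buffer starts with the next USN, then packed records. */
        USN_RECORD *rec = (USN_RECORD *)(buf + sizeof(USN));
        while ((BYTE *)rec < buf + bytes) {
            wprintf(L"USN %lld reason 0x%08lx: %.*s\n",
                    rec->Usn, rec->Reason,
                    (int)(rec->FileNameLength / sizeof(WCHAR)),
                    (WCHAR *)((BYTE *)rec + rec->FileNameOffset));
            rec = (USN_RECORD *)((BYTE *)rec + rec->RecordLength);
        }
    }

    CloseHandle(hVol);
    return 0;
}
```

The point of the sketch is that everything here goes straight at the NTFS volume with `DeviceIoControl`; there is no CsvFs-aware path for it, which is the heart of the incompatibility described above.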
DFS-R is cluster aware because it can use an old-style LUN that fails over from the active node to the failover node, but the whole volume has to fail over at once, so the DFS-R database and the filesystem move together. It doesn't support CSV because it doesn't support CsvFs: it wants raw access to vanilla NTFS so it can peek beneath the covers. Those are exactly the covers CSV layers on top to allow what DFS-R can't stand, namely someone else writing to the volume without notice.
I suppose they could write it for CsvFs someday, but why add that complexity when it gains you nothing but heartache at the DFS-R level? If the CsvFs coordinator wound up hosted on a different node than DFS-R, DFS-R would constantly have to ask that other node to examine the low-level structures and pass the results back, and it would end up reacting late to every event.
I'm no expert on this low-level stuff, but they seem to be pretty incompatible forms of FS magic to me!