Just a bit of info, adding to the confusion:
If you are planning to deploy S2D - now or at some point in the future - you can NOT, I repeat NOT, use consumer SSDs!
MS decided (wisely enough) that S2D should always ensure that every bit is written correctly AND SECURELY before the next one is sent. So S2D will only use a disk's onboard cache if it is fully protected against power loss (read: full PLP). If so, the disk - regardless of type - is as fast as its cache, at least until that cache is exhausted.
BUT if you are using consumer SSDs (no PLP), S2D will by design write through the cache and wait for the data to be confirmed as written directly to the NAND itself. Which, by design, results in write latency measured in seconds rather than microseconds, even at relatively low loads!
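If you want to see what your disks actually report before committing to S2D, here is a minimal sketch (my own quick check, not an official tool): it shells out to PowerShell's Get-StorageAdvancedProperty, which on Windows Server 2016 and later exposes IsPowerProtected and IsDeviceCacheEnabled per physical disk.

```python
# Minimal sketch: list which physical disks report power-loss protection.
# Assumes Windows with the Storage PowerShell module available; the
# Get-StorageAdvancedProperty cmdlet exposes IsPowerProtected / IsDeviceCacheEnabled.
import json
import subprocess

PS_COMMAND = (
    "Get-PhysicalDisk | Get-StorageAdvancedProperty | "
    "Select-Object FriendlyName, SerialNumber, IsDeviceCacheEnabled, IsPowerProtected | "
    "ConvertTo-Json"
)

def list_plp_status():
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", PS_COMMAND],
        capture_output=True, text=True, check=True,
    )
    disks = json.loads(result.stdout)
    if isinstance(disks, dict):  # ConvertTo-Json returns a single object for one disk
        disks = [disks]
    for disk in disks:
        print(
            f"{disk.get('FriendlyName')}: "
            f"PowerProtected={disk.get('IsPowerProtected')}, "
            f"DeviceCacheEnabled={disk.get('IsDeviceCacheEnabled')}"
        )

if __name__ == "__main__":
    list_plp_status()
```

If IsPowerProtected comes back False for the drives you planned to use, assume S2D will write through their cache.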
I have seen a lot of discussions on the subject, but I have never seen anyone actually find a workaround for this. One could argue that dual PSUs and a UPS would provide sufficient protection, at least for non-critical workloads, especially if they are replicated. So in specific use cases it would be relevant to be able to "cheat" S2D into using an on-disk cache that has no PLP.
But overruling basic data integrity like that is NOT up for discussion - it is PLP or no S2D, period!
I learned this the hard way in a heavily overdimensioned 4-node cluster (256 cores, 1.5 TB RAM, 16x4 TB Samsung 860 QVO, 20 relatively small Hyper-V VMs), where performance started out acceptable. When replication was set up, performance dropped first to poor and then to really bad. The VMs went from somewhat slow to completely unresponsive, and it ended with the whole pool crashing beyond repair. Studying the logs revealed a bunch of errors - all related to write latency, sometimes with values beyond 15 SECONDS...!
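For what it's worth, a quick way to spot this kind of latency yourself is to watch the built-in "\PhysicalDisk(*)\Avg. Disk sec/Write" performance counter. The snippet below is just a rough sketch of that idea (not the exact diagnostics we ran back then, and the 5 ms warning threshold is my own assumption): it samples the counter through typeperf and flags any disk whose worst sample looks suspicious.

```python
# Rough sketch: sample per-disk average write latency via the built-in Windows
# "\PhysicalDisk(*)\Avg. Disk sec/Write" counter (values are in seconds) using typeperf.
import csv
import io
import subprocess

COUNTER = r"\PhysicalDisk(*)\Avg. Disk sec/Write"
WARN_SECONDS = 0.005  # assumed threshold: healthy SSD writes should sit far below 5 ms

def sample_write_latency(samples: int = 5, interval_s: int = 1):
    out = subprocess.run(
        ["typeperf", COUNTER, "-sc", str(samples), "-si", str(interval_s)],
        capture_output=True, text=True, check=True,
    ).stdout
    rows = list(csv.reader(io.StringIO(out.strip())))
    header = rows[0]
    # Keep only well-formed data rows; column 0 is the timestamp, the rest are disk instances.
    data = [r for r in rows[1:] if len(r) == len(header)]
    for col, name in enumerate(header[1:], start=1):
        values = []
        for row in data:
            try:
                values.append(float(row[col]))
            except ValueError:
                continue
        worst = max(values, default=0.0)
        flag = "  <-- suspicious" if worst > WARN_SECONDS else ""
        print(f"{name}: worst Avg. Disk sec/Write = {worst * 1000:.2f} ms{flag}")

if __name__ == "__main__":
    sample_write_latency()
```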
We suspected network errors or just bottlenecks (2x10 Gbit without RDMA), but no matter what we did to tweak performance (we even tried 4x10 Gbit with RDMA), we ended up with the same result. So I studied some more and stumbled upon an article explaining why you should NOT use consumer SSDs with S2D.
Being cheap (and having already bought two sets of 16x4 TB consumer disks!), I studied some more, trying to bypass this by-design obstacle. I tried a lot of different solutions. With no luck...
So I ended up buying 16x1 TB real datacenter SSDs (Kingston DC500M, the cheapest PLP disks I could find) for testing. And sure enough, all the problems disappeared and HCI was suddenly as fast, robust and versatile as claimed. Damn!
Now the same setup is running twice the load with the original network configuration, half as many cores and half as much RAM, but write latency rarely exceeds 200 microseconds. Furthermore, all the VMs are responsive as h..., users are reporting a sublime experience and we have no more errors in backups or synchronization or anywhere else, for that matter.
The only difference is that the disks are now 16x4 TB Kingston DC500M.
So take this hard-learned lesson as advice: do NOT use disks without PLP in HCI...!