No one talks about the major problem on non-SSD drives: fragmentation.
Each 64KiB block is allocated where it would be without compression, but if it can be compressed (so it takes <=60KiB), less than 64KiB is actually written; the next block still starts where it would as if the previous one weren't compressed, so a lot of gaps appear.
Test it with a multi-gigabyte virtual machine disk file of any Windows system (they tend to shrink to about 50%, but end up with a huge number of fragments, often more than 10,000).
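The gaps can be modeled with a few lines of code (a toy model; the 36KiB compressed sizes below are made-up example values, and real NTFS allocation is more involved):

```python
# Toy model of NTFS-compression fragmentation (hypothetical sizes).
# Each 64KiB compression unit is allocated at its uncompressed offset,
# but only the compressed clusters are actually used, leaving gaps.

UNIT = 64 * 1024      # NTFS compression unit
CLUSTER = 4 * 1024    # NTFS cluster size required for compression

def layout(compressed_sizes):
    """Return (used_bytes, gap_bytes) for a list of per-unit compressed sizes."""
    used = gaps = 0
    for size in compressed_sizes:
        # Compressed data is rounded up to whole clusters.
        clusters = -(-size // CLUSTER)          # ceil division
        stored = min(clusters * CLUSTER, UNIT)  # can never exceed one unit
        used += stored
        gaps += UNIT - stored                   # hole left before the next unit
    return used, gaps

# Example: four units that each compress to 36KiB.
used, gaps = layout([36 * 1024] * 4)
print(used, gaps)  # 147456 114688
```

With four units compressing to 36KiB each, more than 112KiB of holes appear inside what would otherwise be one contiguous 256KiB run; on a spinning disk every hole is a potential fragment boundary.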
And for SSDs there is something no one tells you: how on earth does it write? If it writes the data uncompressed and then overwrites it with the compressed version (for each 64KiB block), SSD life is cut a lot; but if it writes directly in compressed form, SSD life could be longer or shorter... longer if the 64KiB is written only once, but much shorter if the 64KiB arrives in 4KiB pieces, because the 64KiB (in compressed form) gets rewritten as many as 64/4 = 16 times.
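That 64/4 = 16 figure is just arithmetic; here is a quick sketch of the worst case (the assumption that every 4KiB append recompresses and rewrites the whole unit is mine, not documented NTFS behavior):

```python
UNIT = 64 * 1024   # NTFS compression unit
PIECE = 4 * 1024   # size of each incoming write

# Number of times the compression unit gets rewritten when a file is
# appended in 4KiB pieces and each append recompresses the whole unit.
rewrites = UNIT // PIECE
print(rewrites)  # 16

# Total bytes physically written in that worst case, assuming each
# rewrite flushes everything accumulated so far (no compression gain):
total = sum(i * PIECE for i in range(1, rewrites + 1))
print(total // 1024)  # 544 (KiB), versus 64 KiB for a single write
```

So in this pessimistic model the drive absorbs roughly 8.5x the logical data, which is exactly the kind of write amplification that shortens SSD life.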
The performance penalty comes from the CPU time needed to compress/decompress being greater than the time saved by not writing some 4KiB blocks... so with a very fast CPU and a very slow disk, compression reduces write and read times; but if the SSD is very fast and the CPU is relatively slow, writes get much slower.
When I talk about a fast or slow CPU, I mean at that moment: the CPU may be busy with 'maths' or other processes, so always think in terms of free CPU, not CPU specs on paper. The same goes for the disk/SSD; it can be in use by multiple processes.
Say you have 7-Zip writing a huge file from another disk with LZMA2; it will use a lot of CPU, so if at the same time you are copying an NTFS-compressed file, there is no free CPU and the copy will go slower than without NTFS compression. But as soon as 7-Zip stops using the CPU, NTFS compression gets the CPU back and can do its work faster.
Personally, I never use NTFS compression. I prefer Pismo File Mount PFO containers (with compression; it also allows encryption, both on the fly and transparent to apps). It gives a much better compression ratio with less CPU impact, and it is still read/write on the fly: no need to decompress before use, just mount the container and use it in read/write mode.
Since Pismo compresses in RAM before writing to disk, it can make an SSD last longer; my tests of NTFS compression make me think it sends data to disk twice: first uncompressed, then, if it compresses, overwritten in compressed form.
Why is NTFS-compressed write speed on my SSD near half of the uncompressed speed, with files that compress to about half their size or less? On my AMD Threadripper 2950 (32 cores and 64 threads) with 128GiB of RAM, at less than 1% CPU use, there is plenty of free CPU to compress faster than the SSD's maximum sequential speed, so maybe NTFS compression only starts after the 64KiB blocks are sent to disk uncompressed, which are then overwritten with the compressed version. And indeed: if I do this in a virtual machine running Windows as a guest on a Linux host, the Linux cache shows those clusters being written twice, and the speed is much, much faster (Linux caches the uncompressed NTFS writes sent by the Windows guest, and since they soon get overwritten with compressed data, Linux never sends the uncompressed version to the disk; Linux write cache!).
My recommendation: do not use NTFS compression, except inside Windows guests of virtual machines whose host is Linux; and never if you use the CPU a lot or your CPU is not fast enough.
Modern SSDs have a large internal RAM cache, so the write+overwrite caused by NTFS compression can be mitigated by the SSD's internal cache system.
My tests were done on "plain" SSDs with no internal RAM cache; when I repeat them on ones with a RAM cache, write speed is faster, but not as much as one would expect.
Do your own tests, and use huge file sizes (bigger than the total RAM installed, to avoid cached results hiding the truth).
By the way, something some people do not know about NTFS compression: any file of 4KiB or less will never be NTFS-compressed, because there is no way to reduce its size by at least 4KiB.
NTFS compression takes blocks of 64KiB, compresses them, and if the result saves at least one cluster (4KiB) the block is written compressed; 64KiB is 16 consecutive 4KiB clusters.
If an 8KiB file compresses to a final result of more than 4KiB, it cannot save a cluster, so it is written uncompressed... and so on: compression must save at least 4KiB.
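My reading of that rule can be written as a small check (a sketch of the behavior described above, not Microsoft's actual implementation):

```python
CLUSTER = 4 * 1024   # NTFS cluster size (compression requires 4KiB)
UNIT = 64 * 1024     # NTFS compression unit = 16 clusters

def stored_compressed(unit_bytes, compressed_bytes):
    """Decide whether one compression unit is stored compressed.

    A unit is stored compressed only if rounding the compressed data
    up to whole clusters saves at least one cluster versus the original.
    """
    orig_clusters = -(-unit_bytes // CLUSTER)       # ceil division
    comp_clusters = -(-compressed_bytes // CLUSTER)
    return comp_clusters <= orig_clusters - 1

# A 4KiB file occupies one cluster: nothing can be saved, never compressed.
print(stored_compressed(4 * 1024, 1))             # False
# An 8KiB file compressed to just over 5KiB still needs 2 clusters.
print(stored_compressed(8 * 1024, 5 * 1024 + 1))  # False
# A full 64KiB unit compressed to 60KiB saves exactly one cluster.
print(stored_compressed(64 * 1024, 60 * 1024))    # True
```

Note how the decision is per 64KiB unit and per whole clusters; saving 3.9KiB inside a unit gains nothing.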
Ah, and for NTFS compression to work at all, the NTFS volume must use a 4KiB cluster size.
Try a test: use a 128KiB cluster size on an NTFS volume on an SSD and you will see a huge improvement in write and read speeds.
Filesystems on SSDs with 4KiB clusters lose a lot of their speed, in most cases more than 50%... look at any benchmark out there that tests different block sizes, from 512 bytes up to 2MiB: most SSDs write at double the speed with a 64KiB (or 128KiB) cluster size compared to 4KiB.
Want a real improvement on your SSD? Do not use 4KiB clusters on the filesystem; use 128KiB.
Only use 4KiB clusters if more than 99% of your files are smaller than 128KiB.
Etc, etc, etc... test, test and test your own case.
Note: create the system NTFS partition with diskpart in console mode while installing Windows (or from another Windows) with a 128KiB cluster size; do not let Windows format it in the graphical part of the installer (it will always format it as 4KiB-cluster NTFS).
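For example, during Windows Setup you can press Shift+F10 to open a console and run diskpart along these lines (a sketch: the disk number is a placeholder for your own setup, and `clean` wipes the whole disk, so verify with `list disk` first):

```
diskpart
list disk
select disk 0
clean
create partition primary
format fs=ntfs unit=128K quick
assign
exit
```

The key part is `unit=128K`, which sets the NTFS cluster size; the graphical installer offers no equivalent option.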
All my Windows installations are now on 128KiB-cluster NTFS partitions on >400GiB SSDs (SLC).
Hope this makes things clearer. Microsoft does not say how it writes NTFS-compressed data; my tests tell me it writes twice (64KiB uncompressed, then <=60KiB compressed), not just once, so beware of that on an SSD.
Beware: Windows tries to NTFS-compress some internal directories no matter whether you tell it not to. The only way to really avoid that is to use an NTFS cluster size other than 4KiB, since NTFS compression only works on 4KiB-cluster NTFS partitions.
Many software suites have files you never use. Files which are frequently used, are cached in ram anyway. LZW is actually a very simple algorithm so don't expect it to hog the CPU that much. – Uğur Gümüşhan – 2019-02-01T14:44:35.560
@UğurGümüşhan: exactly, I didn't notice any extra CPU utilization even when working with large compressed files off of fast SSDs at high data rates. – Violet Giraffe – 2019-02-01T14:46:28.617