
LTO tape drives have, from their very first generation, offered hardware compression that theoretically allows up to 2–2.5× the rated capacity of each cartridge to be stored, with only a slight penalty to read/write rates.

I'm having difficulty finding out what algorithm this hardware compression uses, and what its characteristics are. Specifically, what I'd like to know is:

  • Is this compression based on a standard algorithm (DEFLATE/bzip/gzip/etc)?
  • How is it operating on the incoming data (blocks/files/streams)?
  • Are these characteristics identical across tape standard generations, hardware vendors, or individual drives?
Mikey T.K.
  • 1,367
  • 2
  • 15
  • 29

1 Answer

  • The compression is part of the LTO standard. It is called SLDC (Streaming Lossless Data Compression, standardized as ECMA-321) and is a variant of the LZS algorithm.

  • It operates on the data in a block fashion. LTO-6 and onward apply this compression to larger data blocks to achieve higher compression ratios.

  • And, since it's part of the standard, it behaves the same across the entire LTO ecosystem (aside from the change in LTO-6 and later).
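To illustrate why the size of the history buffer matters for an LZS/LZ77-family scheme like SLDC, here is a minimal, generic sliding-window compressor sketch. This is not the actual SLDC encoding (SLDC defines its own token format and control symbols); the function names and the greedy matching strategy are illustrative assumptions. The point it demonstrates is that a larger history window lets the encoder find back-references over longer distances, which is how a bigger buffer can raise the achievable compression ratio:

```python
# Illustrative LZ77-style sliding-window compressor (NOT real SLDC).
# Tokens are either ("lit", byte) or ("copy", distance, length),
# where distance reaches back into the history window.

def lz77_compress(data: bytes, history_size: int = 1024):
    """Greedily compress `data`, searching at most `history_size`
    bytes back for the longest match."""
    tokens = []
    i = 0
    while i < len(data):
        start = max(0, i - history_size)
        best_len, best_dist = 0, 0
        for j in range(start, i):
            length = 0
            # Overlapping matches (j + length >= i) are allowed, as in LZ77.
            while (i + length < len(data)
                   and data[j + length] == data[i + length]
                   and length < 255):
                length += 1
            if length > best_len:
                best_len, best_dist = length, i - j
        if best_len >= 3:  # very short matches cost more than literals
            tokens.append(("copy", best_dist, best_len))
            i += best_len
        else:
            tokens.append(("lit", data[i]))
            i += 1
    return tokens

def lz77_decompress(tokens) -> bytes:
    out = bytearray()
    for tok in tokens:
        if tok[0] == "lit":
            out.append(tok[1])
        else:  # copy bytes one at a time so overlapping copies work
            _, dist, length = tok
            for _ in range(length):
                out.append(out[-dist])
    return bytes(out)

# Data with a 16-byte repeat period: a 4-byte window finds no matches,
# while a 1024-byte window collapses the repeats into a few copy tokens.
data = b"0123456789abcdef" * 20
small_window = lz77_compress(data, history_size=4)
large_window = lz77_compress(data, history_size=1024)
```

Running this, `lz77_decompress` recovers the original bytes from either token stream, but the large-window run emits far fewer tokens, which mirrors the "larger compression history buffer" change discussed in the comments below.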

Mikey T.K.
  • 1,367
  • 2
  • 15
  • 29
  • I've seen "larger data blocks" mentioned in a few places but this does not actually make sense in the context of SLDC, as it is a streaming compression format, nor in the context of LTO, as block size is already configurable and the maximum has not changed. I suspect what they have actually done is increase the history buffer size, but I do not own an LTO-6+ drive to confirm this. Also if this is in fact what they have done, it is interesting they would deviate from the public standard, as SLDC, unlike ALDC, does not define a history buffer size other than 1024 bytes. – Dark Mar 08 '20 at 15:00
  • Found a small piece of info that confirms my previous comment in https://www.quantum.com/iqdoc/doc.aspx?id=15146 - under figure 9 (regarding the 2.5:1 vs 2:1 compression ratio of LTO6+): "achieved with larger compression history buffer" – Dark Aug 28 '20 at 16:33