
Correct me if I'm wrong: When encrypting a file, GPG creates a one-off AES encryption key and encrypts that key using RSA. This is supposedly to take advantage of AES's superior performance when handling larger amounts of data.

If that's true, then why is gpg --encrypt so much slower than, for example, p7zip's AES-256 encryption?

Jon Staryuk

1 Answer


Let's try...

First, I create a 500 MBytes file full of random bytes:

dd if=/dev/urandom of=/tmp/foo bs=1000000 count=500

then I encrypt it using GnuPG, measuring the time taken by that process ("keyID" is the UID of the public key I am using):

time gpg -r "keyID" --cipher-algo AES256 --compress-algo none -o /tmp/bar --encrypt /tmp/foo

Total time on my machine (Intel i7 at 2.7 GHz, 64-bit mode, GnuPG 2.0.22):

3.77s user 0.55s system 99% cpu 4.328 total

so an encryption bandwidth of 115 MBytes per second: that's not so bad! Now let's try again, but this time without deactivating the compression (i.e. let's remove the "--compress-algo none" option):

20.99s user 0.79s system 98% cpu 22.038 total

which is 5 times slower. So there you have it: it is not the encryption which is "slow", it is the compression (although more than 20 MB/s can still be decently fast for many uses).
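This is easy to verify: random bytes are essentially incompressible, so the default compression pass burns CPU without shrinking anything. A quick sketch, using gzip as a stand-in for GnuPG's internal compression (the /tmp paths and 10 MB sample size are just illustrative):

```shell
# Random data does not compress: gzip a 10 MB random sample and compare sizes.
dd if=/dev/urandom of=/tmp/sample bs=1000000 count=10 2>/dev/null
orig=$(wc -c < /tmp/sample)
gzip -c /tmp/sample > /tmp/sample.gz
comp=$(wc -c < /tmp/sample.gz)
echo "original: $orig bytes, gzipped: $comp bytes"
```

The gzipped file comes out marginally *larger* than the original (compressed formats add framing overhead around stored blocks), so on this benchmark the compression step is pure wasted work. On compressible real-world data the trade-off can, of course, go the other way.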


115 MB/s is consistent with a "portable" implementation of AES. Code which uses the specialized AES opcodes (which are available on my i7) would be faster, up to, say, 300 to 400 MB/s (though AES-NI opcodes have a very high throughput, they also have a non-negligible latency, which means that the best performance requires parallel processing, i.e. CTR mode; but the OpenPGP standard mandates CFB, which is sequential).
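Whether those opcodes are even available can be checked from the CPU flags. A Linux-specific sketch (on other platforms the flag is exposed differently, e.g. via sysctl on macOS):

```shell
# Rough check for the AES-NI opcodes on Linux: the CPU flag is "aes".
# grep -c counts matching lines (one per CPU core); 0 means no AES-NI.
aesni=$(grep -c '\baes\b' /proc/cpuinfo 2>/dev/null || :)
aesni=${aesni:-0}
if [ "$aesni" -gt 0 ]; then
    echo "AES-NI advertised: hardware-accelerated AES is possible"
else
    echo "no AES-NI flag: expect a portable (slower) AES implementation"
fi
```

Note that the flag only tells you the hardware supports it; whether your GnuPG build actually uses the opcodes is a separate question.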

Anyway, since a good mechanical hard disk tops out at about 120 MB/s, while a very good Internet access (optic fiber) will be below 10 MB/s, one can say that 115 MB/s of raw encryption speed is sufficient.
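As a back-of-the-envelope check (shell integer arithmetic, using the 500 MB test file and the rates quoted above):

```shell
# Seconds to move 500 MB at each rate; integer division, so these are rough.
enc=$((500 / 115))   # raw AES-256 encryption at ~115 MB/s
dsk=$((500 / 120))   # a good mechanical hard disk
net=$((500 / 10))    # a very good (fiber) Internet link
echo "encrypt: ~${enc}s, disk read: ~${dsk}s, network transfer: ~${net}s"
```

Encryption and disk I/O take about the same ~4 seconds, while the network transfer dominates at ~50 seconds, which is why the raw encryption speed is rarely the bottleneck.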

Thomas Pornin
  • Note that you need GnuPG 2.x to take advantage of these AES opcodes. GnuPG 1.4 (or rather the underlying libgcrypt) does not support this. – jlh Oct 03 '17 at 15:16
  • Retrospectively, my assumption on Internet access speed is a bit outdated. A number of end users have gigabit now, i.e. about 110 MB/s. Still not enough to make encryption the bottleneck. – Thomas Pornin Oct 03 '17 at 18:23
  • What's the rationale in having `--compress-algo` enabled by default? The plaintext/ciphertext will be the same size, won't they? So why compress it, rather than leaving it up to the user? (for instance I'm using `xz` first, and was wondering why my process was slow) Does this vary with `--symmetric`? – Luciano May 20 '19 at 11:33