2

There have been wars fought over RSA, DSA, and I'm sure other public-key encryption algorithms, and the arguments usually boil down to "algorithm A is faster to encrypt, but algorithm B is faster to decrypt".

However, from what I understand, the slow asymmetric encryption is only used to encrypt a single symmetric session key, after which point symmetric encryption is used. See Public Key, Private Key, Secret Key: Everyday Encryption.

This leads me to wonder: in practice, does it actually matter which encryption algorithm is slower or faster than the other, when the slow asymmetric step happens only once, in a fraction of a second, before the session proceeds with a fast symmetric encryption algorithm?
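
For concreteness, here is a minimal sketch of that hybrid pattern in Python, using the `cryptography` package (`pip install cryptography`). The RSA-2048 / AES-256-GCM choices are illustrative assumptions, not a recommendation:

```python
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# One slow asymmetric operation: wrap a random symmetric session key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

session_key = AESGCM.generate_key(bit_length=256)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)

# Everything after that is fast symmetric encryption.
aesgcm = AESGCM(session_key)
nonce = os.urandom(12)
ciphertext = aesgcm.encrypt(nonce, b"the actual session traffic", None)

# Receiver: one slow unwrap, then fast symmetric decryption.
recovered_key = private_key.decrypt(wrapped_key, oaep)
plaintext = AESGCM(recovered_key).decrypt(nonce, ciphertext, None)
assert plaintext == b"the actual session traffic"
```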

IQAndreas
  • 6,557
  • 8
  • 32
  • 51
  • I think it depends on the use. When we think about a VPN/SSH with limited rekeying I think you're right. When we think about SSL/TLS where you can have dozens of sessions opening and closing at a time (per user) then I think the speed can matter. It also depends upon the hardware that's running these things, and sometimes networks don't upgrade devices when they should. – RoraΖ Sep 04 '14 at 12:11
  • It sure matters when you have billions of connections... The computational overhead adds up. –  Sep 04 '14 at 12:32
  • @raz How often are SSL/TLS shared keys "re-generated"? Do you know a good article that details the process? – IQAndreas Sep 04 '14 at 12:39
  • Tor had performance problems with their handshake which were improved by switching to a faster algorithm (Curve25519). – CodesInChaos Sep 04 '14 at 12:57
  • Well, if you're using ephemeral DH, then they're regenerated for each handshake. SSL/TLS supports session resumption for just this reason; web servers need the time savings. It's not so much the renegotiations that are the trouble as the sheer volume. A single webpage can open up 4-6 sessions at a time. For a server, multiply that by millions of connections per minute and it really adds up. – RoraΖ Sep 04 '14 at 12:58
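
The scale argument in the comments above can be checked with a rough benchmark. This is a sketch assuming the Python `cryptography` package; the absolute numbers depend entirely on the machine, but one RSA private-key operation typically costs far more than symmetrically encrypting a whole buffer of session data:

```python
import os
import timeit
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pub = priv.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped = pub.encrypt(os.urandom(32), oaep)

aes = AESGCM(AESGCM.generate_key(bit_length=256))
nonce = os.urandom(12)          # nonce reuse is fine here only because
block = os.urandom(16 * 1024)   # the ciphertexts are discarded

t_rsa = timeit.timeit(lambda: priv.decrypt(wrapped, oaep), number=100) / 100
t_aes = timeit.timeit(lambda: aes.encrypt(nonce, block, None), number=100) / 100

print(f"RSA-2048 private-key op: {t_rsa * 1e3:.3f} ms")
print(f"AES-GCM on 16 KiB:       {t_aes * 1e3:.3f} ms")
# One handshake is cheap in isolation, but multiply t_rsa by millions
# of new connections per minute and it dominates server CPU time.
```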

2 Answers

2

Yes, it matters when:

1) the hardware is constrained and there are constraints on the time the whole operation (protocol) that includes authentication/encryption may take (e.g., waving a smartcard at a turnstile when paying to enter a metro). The operation might include several encryption and decryption operations that take place in the smartcard: e.g., one decryption to authenticate the smartcard to the turnstile, and several encryptions to authenticate the turnstile (there might be a chain of certificates that must be verified).

Another example is a passport with a chip that is checked at state borders.

Conclusion: one crypto operation might be fine, but when several crypto operations are required, the resulting protocol on constrained hardware might have usability issues (see the budget sketch after this list).

2) the number of operations is large (as already mentioned in the comments), e.g., the number of SSL sessions a busy server handles.
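
For point 1, a back-of-the-envelope budget shows how several operations accumulate on a slow card. All timings below are hypothetical placeholders, not measurements of any real smartcard:

```python
# Latency budget for the turnstile example (made-up per-op costs).
RSA_PRIVATE_OP_MS = 400   # one decryption/signature on the card
RSA_PUBLIC_OP_MS = 30     # one verification on the card
CERT_CHAIN_LENGTH = 3     # certificates the card must verify

total_ms = RSA_PRIVATE_OP_MS + CERT_CHAIN_LENGTH * RSA_PUBLIC_OP_MS
print(f"Handshake time on the card: {total_ms} ms")
# 490 ms is already noticeable when someone waves a card at a turnstile;
# a faster algorithm (or fewer operations) directly improves usability.
```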

sta
  • 136
  • 3
0

As said in the comments, it mainly matters on web servers: an HTTPS connection means each visitor opens multiple connections per page, so encryption becomes a non-negligible part of the resources used on the server.

Since in this case the server receives far less data than it sends, and client-side resource consumption is less critical, the server can handle more visitors with algorithm A than with algorithm B while leaving the users' experience unchanged.
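
A hypothetical capacity calculation makes the point; the per-operation costs below are placeholders (roughly the shape of RSA-2048 on a commodity CPU), not benchmarks of any specific library:

```python
# Capacity arithmetic for the server-side view (placeholder costs).
SERVER_PRIVATE_OP_MS = 1.0   # expensive half of the handshake (server)
CLIENT_PUBLIC_OP_MS = 0.05   # cheap half of the handshake (client)

handshakes_per_core_per_sec = 1000 / SERVER_PRIVATE_OP_MS
print(f"~{handshakes_per_core_per_sec:.0f} new handshakes/sec per core")
# Halving the server-side cost roughly doubles that ceiling, while the
# client-side cost stays far below anything a visitor would notice.
```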

ThoriumBR
  • 50,648
  • 13
  • 127
  • 142
GaelFG
  • 31
  • 3