94

If I have a website or mobile app that speaks to the server through a secure SSL/TLS connection (i.e. HTTPS), and I also encrypt the messages sent and received between user and server on top of that already-secure connection, am I doing something unnecessary? Or is double encryption a common practice? If so, why?

Anders
Lighty
  • 128
    One reason would be to utilize the highly effective double `ROT13` encryption algorithm. – Keavon Mar 15 '16 at 23:02
  • 24
    You might want different levels of encryption - you want to encrypt the user's text message so that only the recipient can read it, and you also want to encrypt the protocol message that carries that, so only your server can read it. – user253751 Mar 16 '16 at 04:37
  • 15
    Some legacy applications might have been using their own encryption on different channels than TLS long ago. When the developers convert them to TLS, they probably don't want to introduce new bug possibilities by removing the old encryption layer. – Guntram Blohm Mar 16 '16 at 09:24
  • 1
    @Keavon The ultimate meet-in-the-middle attack! – Dmitry Grigoryev Mar 16 '16 at 12:15
  • 1
    @DmitryGrigoryev How can you have a meet-in-the-middle attack with no key? – user253751 Mar 17 '16 at 02:41
  • 1
    There might be a predefined security requirement on the encryption of all messages from user to server. This requirement might be higher or in some way not match perfectly with HTTPS. To gain more control and to be able to point towards a piece of code that **exactly** meets the requirement introduced in paragraph X of the specification for a review document this might be the easiest choice for a developer. – Simply G. Mar 17 '16 at 07:20
  • 3
    @immibis How can you apply ROT13 twice and pretend the text is encrypted? – Dmitry Grigoryev Mar 17 '16 at 07:33
  • 8
    @DmitryGrigoryev - that's an _old_ joke about ineffective security for exactly the reason you're asking - it _sounds_ good but is totally useless. – FreeMan Mar 17 '16 at 13:29
  • Why depends on the problem. If the problem is interception in transit, HTTPS is arguably sufficient. If the problem is interception at either endpoint where HTTPS is terminated, HTTPS doesn't provide protection. If a mobile app stores sensitive information and it's compromised, so is the data stored in the app. If that data is encrypted, it adds a layer of protection for the data even if the app is compromised. I'm glossing over key management and its concerns here because that didn't seem to be the topic, double encryption was. – Paraplastic2 Mar 17 '16 at 13:50
  • @DmitryGrigoryev That's like saying "ROT13 is vulnerable to DROWN attacks because I can pour water into the recipient's airway". – user253751 Mar 17 '16 at 18:32
  • Request parameters can end up in the server access logs; this may be undesirable, so encrypting them might help. – flup Mar 17 '16 at 18:50
  • I read somewhere that double-encrypting DES actually *decreases* the security of the encryption, which is why Triple-DES became standard. – SplashHit Mar 17 '16 at 20:11
  • @Keavon Pshaw, I do 10,001 rounds of ROT13 so attackers have to take more time to get the plaintext back. IFNKOVHGROGHPRM! – ErikE Mar 18 '16 at 00:10
  • TLS only encrypts from you to the server. The server gets the plaintext. If you were storing data that you encrypted on your end, then the server would not be able to decrypt your data. They can only store it for you. So it's not "double encryption" to encrypt inside of TLS, because from the standpoint of protecting you from the server admin, it's not encrypted at all by TLS. – Rob Mar 18 '16 at 01:38
  • In addition to these comments, and alluded to in some of the answers, SSL/TLS is only _point to point_ security not _end to end_ security. While there is nuance there, a malicious entity can MitM the process (Bluecoat for example), decode your SSL stream, and then re-initiate a connection to the server. – Brian Redbeard Mar 18 '16 at 05:42
  • Some communications through an untrusted space (aka the Internet) must be double wrapped via an IPSEC tunnel within another IPSEC using two different implementations of IPSEC with different keys so that a successful attack against the outer wrapper implementation only reveals a further layer of encryption that is not attackable using the same exploit. – Randall Mar 18 '16 at 21:09
  • To be sure, to be sure... – copper.hat Mar 19 '16 at 22:01
  • Historically, one of the weak links in the Enigma machine, used by the Germans in WWII, was when they decided to double encrypt everything. The second encryption introduced a pattern that was visible to the human brain, making the encryption easier to break. – pojo-guy Mar 19 '16 at 23:09
  • Consider using a VPN tunnel for HTTPS traffic while sending a PGP-encrypted email with a small truecrypt volume attachment containing an encrypted backup of your server which holds your database of users (where sensitive values are encrypted rather than freely visible in plaintext) that is normally stored on an HDD with full-disk encryption. Lots of examples of legitimate "double encryption" here, though admittedly real life will rarely be as convoluted as this example containing at least six layers of encryption. – kwah Mar 20 '16 at 10:27
  • @SplashHit [triple-des](https://en.wikipedia.org/wiki/Triple_DES) was to provide forward and backward compatibility with regular DES, not because "two isn't enough". – JDługosz Mar 21 '16 at 06:14
  • @pojo-guy got a link for that? – JDługosz Mar 21 '16 at 06:15
  • @JDługosz The note about the pattern weakness of using enigma for double encryption was from a television interview with one of the surviving cryptologists from the WWII team. It was long enough ago that I'm not sure internet connections were a common consumer product at the time I saw the interview. – pojo-guy Mar 21 '16 at 15:29
  • @JDługosz The technical explanation is that the enigma algorithm was managed by gear driven machines, and the decryption algorithm was sufficiently close to the encryption algorithm (functionally) that the second encryption layer was a partial decryption. From that they could tell the length of the keys, and use common german words (social engineering) to finally decrypt the actual message. For example, a 5 character key was almost invariably "hitler" – pojo-guy Mar 21 '16 at 19:31
  • Maybe you are thinking of how the session key was listed twice before changing to it for the body of the message? That was the critical flaw that allowed it to be cracked! It wasn't double encoded; it was repeated in the same stream. Encryption and decryption were actually the *same* process, as in rot13. So yes, I imagine double encoding with the same plugboard settings but different wheel set-up would create a wealth of the kind of cycle info they used. I don't recall that in the histories I've read. – JDługosz Mar 21 '16 at 23:08
  • Maybe they fear that the channel might be downgraded and thus want an additional safeguard – BlueWizard Apr 04 '16 at 13:37

15 Answers

121

It's not uncommon, but it may not be required. A lot of developers seem to forget that HTTPS traffic is already encrypted - just look at the number of questions about implementing client side encryption on this website - or feel that it can't be trusted due to well-publicised issues such as the Lenovo SSL MitM mess.

However, most people weren't affected by this, and there aren't any particularly viable attacks against TLSv1.2 around at the moment, so it doesn't really add much.

On the other hand, there are legitimate reasons for encrypting data before transmission in some cases. For example, if you're developing a storage application, you might want to encrypt using an app on the client side with a key known only to the user - this would mean that the server would not be able to decrypt the data at all, but it could still store it. Sending over HTTPS would mean that an attacker also shouldn't be able to grab the client-encrypted data, but even if they did, it wouldn't matter - they still couldn't read it. This pattern is often used by cloud-based password managers.
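A minimal sketch of that pattern (Python, using the `cryptography` and `requests` packages; the upload endpoint and field names are made-up placeholders): the key is derived from the user's password on the client, so the server only ever stores ciphertext.

```python
import base64
import os

import requests
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

STORAGE_URL = "https://storage.example.com/upload"  # hypothetical endpoint


def derive_key(password: bytes, salt: bytes) -> bytes:
    """Derive a Fernet key from the user's password; the server never sees either."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(password))


salt = os.urandom(16)
key = derive_key(b"correct horse battery staple", salt)

# Encrypted on the client before it ever touches the network.
ciphertext = Fernet(key).encrypt(b"my secret notes")

# The already-encrypted blob then travels inside the TLS tunnel; the server
# can store it, but cannot read it.
requests.post(STORAGE_URL, data={"salt": salt.hex(), "blob": ciphertext})
```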

Essentially, it depends on what you're defending against - if you don't trust SSL/TLS, though, you probably can't trust the encryption code you're sending (in the web application case) either!

Matthew
  • 7
    Come to think of it, there's a reason to not jettison all those stackoverflow questions involving client-side web browser encryption in javascript. They render superfish and friends useless, and packaged corporate interceptors easily defeated (the problem converts to an arms race). – Joshua Mar 15 '16 at 18:06
  • 48
    If the HTTPS layer is compromised via a MITM attack, surely the client-side JavaScript encryption would be made... less secure. – wizzwizz4 Mar 15 '16 at 22:22
  • @Joshua What many people fail to realize is that SuperFish is not an SSL/TLS/HTTPS attack. It is an attack on the key sharing part of the infrastructure. So in reality almost any "secure" implementation of a public key code on your computer would be compromised. – Aron Mar 16 '16 at 00:26
  • @wizzwizz: Hence my reference to arms race. The re-tooling would immediately defeat all dumb capture tools and all pre-existing tools the author bothered to break, so they have to move again. And small dedicated companies can push out new updates every day or so. Your IT shop not so much. – Joshua Mar 16 '16 at 01:14
  • There are VPN-based MiTM attacks that can be used to snoop on HTTPS packets (on Android, at least, though I assume other platforms would be equally vulnerable) if you have physical access to the client device/endpoint. In theory adding your own encryption into the mix would defeat that kind of thing, as long as the person isn't able to exploit their physical client access in a way that gives them your internal encryption keys. – aroth Mar 16 '16 at 12:28
  • 7
    If you work for a large company, your IT department may be intentionally MITM-ing your SSL connections -- several companies sell ["SSL inspection" proxies](https://insights.sei.cmu.edu/cert/2015/03/the-risks-of-ssl-inspection.html) -- this usually involves installing "compromised" root certs on corporate devices. These "inspection" tools wouldn't be able to do anything to compromise custom per-application encryption, however. – Frank Farmer Mar 17 '16 at 22:34
  • 1
    If you work for a large company and circumvent filters or traffic inspection, your IT department may come after you. – ASA Mar 21 '16 at 09:54
  • 1
  • based on @FrankFarmer there is now also scope for malicious browser plugins to perform a MITM by replacing the XMLHttpRequest or Fetch prototypes in the browser with their own, sitting on top of the native browser versions, and capturing all input before it is sent to the browser's HTTPS handler, or after the HTTPS handler has decrypted the data. – Martin Barker Apr 22 '20 at 18:26
100

HTTPS only provides encryption while the message is in transit over the network/internet.

If the message is stored or processed by an intermediary (e.g. a message queue) at some point between the client and the server that finally processes it, then it will not remain encrypted while it sits in that intermediary.

Also, if the TLS/SSL is terminated at the service perimeters (e.g. on a load balancer) then it may not be encrypted on the internal network. This may be a problem where high security is required, for example in some regulated environments.

In both of these cases, message-level encryption will ensure that the data is encrypted at all points between the client and the final receiver.

As @honze said, this is called defense in depth and it is intended to ensure that even if a system is partially compromised (e.g. they manage to do a man-in-the-middle attack to compromise the SSL/TLS, or exploit a vulnerability in the message queue to get at the data at rest) the attacker cannot get at the protected data.
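A rough sketch of what message-level encryption can look like (PyNaCl sealed boxes; the queue URL and payload are made up, and a real deployment would distribute the receiver's public key out of band):

```python
import requests
from nacl.public import PrivateKey, SealedBox

# Key pair belonging to the *final* receiver; intermediaries never hold the
# private half.
receiver = PrivateKey.generate()

# Encrypted by the client before transmission, so a load balancer that
# terminates TLS, or a message queue that stores the message, only ever
# sees ciphertext.
sealed = SealedBox(receiver.public_key).encrypt(b'{"account": 42, "note": "sensitive"}')
requests.post("https://queue.example.com/submit", data=sealed)

# Only the final receiver can open it.
plaintext = SealedBox(receiver).decrypt(sealed)
```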

Mike Goodwin
  • 19
    This seems a more clear answer about why you'd want to encrypt something sent by SSL. – Steve Sether Mar 15 '16 at 19:37
  • 2
  • Note to add: this also allows the client to control who is allowed to decrypt the final package. For example, with backups that are sent to a cloud, you wouldn't need to share the decryption key with the cloud provider, meaning that even if the cloud provider is "hacked" or legally forced to divulge their data, the client is still the only one with the keys. – NotMe Mar 17 '16 at 16:49
30

I'd like to share my experience on the title question. It's not really related to the complete question itself, but this answers the question "why would someone double-encrypt?"

In the past I worked for an organization that handles the communication between care providers (doctors, hospitals, etc.) and insuring organizations (mutualities). We kind of acted like a router.

The schema was roughly the following:

care provider 1 \                   / insuring organization 1
care provider 2 ---- router (us) ---- insuring organization 2
care provider 3 /                   \ insuring organization 3

We had the following protection:

  1. End-to-end encryption: Care provider 1 needs to send patient info to insuring organization 1. This info is privacy-sensitive and therefore needs to be encrypted. At our level we had no right to know what data was being sent to the insuring organization.
  2. Care-provider - router encryption: The care provider sends information as metadata for us to be able to handle it. This information needs to be encrypted. The contract stated that the messages still had to be encrypted even inside our network, so that only one of our servers ever knows the metadata of the information being sent. Since we had several pipes (load balancers, firewalls, etc.), encryption was required at this level as well.
  3. HTTPS to avoid MITM attacks: Not only did our data need to be protected, but the HTTP metadata needed to be protected as well, therefore HTTPS. (A rough sketch of how these layers nest follows this list.)
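A toy sketch of how the two application layers can nest inside the HTTPS connection (illustrative only, using PyNaCl sealed boxes and made-up names, not the actual system described above):

```python
import json

import requests
from nacl.public import PrivateKey, SealedBox

insurer = PrivateKey.generate()  # layer 1: only the insurer can read the payload
router = PrivateKey.generate()   # layer 2: only the router can read the metadata

# End-to-end layer: the patient data is sealed for the insuring organization.
patient_record = SealedBox(insurer.public_key).encrypt(b"patient data ...")

# Router layer: routing metadata plus the opaque payload, sealed for the router.
envelope = SealedBox(router.public_key).encrypt(json.dumps({
    "destination": "insuring-organization-1",
    "payload": patient_record.hex(),
}).encode())

# Layer 3: the doubly wrapped envelope travels to the router over HTTPS.
requests.post("https://router.example.com/messages", data=envelope)
```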

I hope this sheds some light on why several layers of encryption can be required.

Michael
15

You are right. This is a multi-layer security concept known as defense in depth.

The encrypted messages are likely to address end-to-end encryption, and the SSL/TLS addresses the encryption of the metadata. This is a useful pattern.

honze
  • 1
    It does kinda answer the question, but it's a little thin for what I'm looking for in an answer... +1 still. – Lighty Mar 15 '16 at 09:21
9

HTTPS is encrypted in transit and decrypted at the ends. So the obvious situation where you might want to double-encrypt is where you don't want one (or possibly both!) of the ends to see the cleartext.

Some situations I can think of off the top of my head:

  • encrypted email through webmail providers. If I send a GPG-encrypted message through Gmail, which I access over an HTTPS connection, it's encrypted twice, because I don't want Gmail to read the contents (a small sketch of this case follows the list).

  • encrypted backup services. I want to use HTTPS to stop my login credentials being stolen, but I don't want the backup service to see "inside" the backups.

  • payment gateways. You could imagine one where an encrypted message is sent between a secure payment hardware token and a bank, via a user's insecure device and a merchant's site. The link in the middle should be HTTPS, but that's not sufficient: the payment message needs to stay encrypted as it passes through the insecure PC and the less-secure merchant's website.
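For the webmail case in the first bullet, a small sketch (assuming a local GnuPG keyring driven through the python-gnupg wrapper; the webmail API and addresses are made up):

```python
import gnupg
import requests

gpg = gnupg.GPG()  # uses the local keyring

# Layer 1: the body is encrypted to the recipient's GPG key, so the mail
# provider only ever stores ciphertext.
encrypted = gpg.encrypt("the actual message", "recipient@example.org")

# Layer 2: the already-encrypted body travels to the provider over HTTPS.
requests.post(
    "https://mail.example.com/api/send",
    json={"to": "recipient@example.org", "body": str(encrypted)},
)
```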

Note that S/MIME provides for "triple wrap" (sign/encrypt/sign): https://www.rfc-editor.org/rfc/rfc2634 - so if you consider signing as well as encryption, even more possibilities may make sense.

pjc50
8

I wanted to give an additional reason: Standardization.

I have an application in which, for security reasons, all data flowing into and out of it must be encrypted. Because it's already encrypted once, the data is permitted to flow over both http (legacy) and https (current) connections. It makes much more sense to always encrypt twice than to maintain one version of the application that adds its own encryption over http and another that sends plain data over https.

DKATyler
  • 3
  • An argument could be made for desupporting the http protocol; however, maintaining SSL certs and current SSL software is tricky and has lower reliability than http for our environment. – DKATyler Mar 16 '16 at 05:39
  • I understand your point, but someone building an application where security is an important aspect will force HTTPS and disable HTTP, or in the case of a web application, publish nothing over HTTP and support only HTTPS. – Lighty Mar 16 '16 at 08:23
  • @Lighty Trust me, the real world doesn't operate on sensible principles like that. Reality leads to much harsher situations. – Chris Hayes Mar 17 '16 at 00:42
3

It is best practice when dealing with highly sensitive information such as financial, medical, military, or psychological data. The basic idea of multiple encryption is to prevent any unauthorized user from retrieving the data. Suppose the first encryption method allows 1 billion possible keys. Applying a second, independent encryption method on top of it multiplies the possibilities - roughly 1 billion × 1 billion combinations - so it would take an unauthorized user far longer to decrypt the data. The encryption is still not perfect, but it is better.
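As a back-of-the-envelope illustration of that multiplication (the numbers are made up; as the double-DES comments above note, meet-in-the-middle attacks can do better than naive brute force given enough memory):

```python
# With independent keys, a naive brute-force attacker has to try every
# combination of the two key spaces, so the work is roughly the product.
layer1_keys = 10**9
layer2_keys = 10**9
print(f"{layer1_keys * layer2_keys:.1e} combinations instead of {layer1_keys:.1e}")
# -> 1.0e+18 combinations instead of 1.0e+09
```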

At one of the organizations I worked at we utilized multiple layers of encryption. This is a simplification of the flow:

  • Audit both client and server devices and components for software clearance
  • Encrypt the data on storage
  • Compress data
  • Encrypt data with proprietary software
  • Begin connection with server
  • Audit both client and server devices and components for connection clearance
  • Compress transmission
  • Send data over the encrypted connection; if the connection is dropped, restart the entire process
  • Upon successful completion of the file transfer, audit the data for consistency

If you aren't dealing with an environment that moves a lot of sensitive data over the network, then this is overkill.

The strategy behind this method ensures that the devices, components (MAC addresses), and IP addresses have been authenticated. Encrypting data is standard procedure, and so is sending over HTTPS. Some organizations go beyond the basic security and also require darknet-like networks utilizing Freenet, I2P, IPsec/VPN, or Tor to connect. Regardless of encryption, the data compression will reduce the required storage and network resources; however, it shifts that cost onto RAM and processing. Finally, the reason we restarted the connection after a disconnect is that we had discovered a way to hijack the data stream via a man-in-the-middle attack.

Ultimately, there is no perfect way to encrypt data forever, but focus your efforts on keeping the data encrypted until the data or information becomes irrelevant or you produce a superior way to encrypt it.

LJones
3

There are a number of reasons for sending encrypted data over an encrypted connection:

  • even if the encryption of the connection is broken (e.g. MitM, possible but challenging with HTTPS, and leading to interception of all transmitted data), the data is still encrypted
  • the HTTPS server may not be trusted, and is responsible for relaying the data to another server
  • similarly, the HTTPS server may relay the data to another server over an unencrypted connection, and having the client encrypt the data before transmission reduces the load on the HTTPS server, which would otherwise have to encrypt the data from all clients instead of being able to pass it straight through
Micheal Johnson
2

API obfuscation

Even if all communication is encrypted via HTTPS, the user can still inspect their own traffic before encryption with various debugging tools, especially if you use a browser environment or an app whose HTTPS is provided by the underlying system.

In this case you could encrypt your data with a static key, so the client cannot easily read and manipulate the traffic. Of course this is only obfuscation, since the key needs to be stored somewhere on the client's machine (at least in RAM), but with software source code it is always just obfuscation. The user would have to spend considerable effort to recover your key and decrypt your traffic in order to read and manipulate his requests.

An example could be a web-based game which submits the player's high score.
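A minimal sketch of what that could look like (the endpoint and field names are made up; inside TLS this adds no confidentiality against the server or the network, it only raises the bar for users tampering with their own requests):

```python
import json

import requests
from cryptography.fernet import Fernet

# In a real client this key would be a constant baked into the shipped build,
# which is exactly why this is only obfuscation - a determined user can dig it out.
STATIC_KEY = Fernet.generate_key()
GAME_API = "https://game.example.com/api/highscore"  # hypothetical endpoint

payload = json.dumps({"player": "alice", "score": 200}).encode()
obfuscated = Fernet(STATIC_KEY).encrypt(payload)

# The obfuscated blob rides inside the normal HTTPS connection.
requests.post(GAME_API, data=obfuscated)
```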

Falco
  • Or figure out how to access the game's admin/super-user methods. – Armfoot Mar 17 '16 at 12:10
  • A better method for securing APIs is authentication. If the user must be authenticated using a strong authentication system (e.g. public key) before being able to use administrative features, then the API doesn't need to be obfuscated. Similarly, features such as score submission can use a temporary "signature" (YouTube uses something like this) which must be generated at least once and passed with every request - while this can always be reverse-engineered, it can be made very complicated (and could even be based on the content of the request, making it different for every request). – Micheal Johnson Mar 17 '16 at 14:03
  • @MichealJohnson this won't help at all: if you send `authId=746788553 score=200`, the user just needs to change the score value via man-in-the-middle (easy on his own device); if you obfuscate, he will have a hard time – Falco Mar 17 '16 at 18:14
  • 2
    You also send `signature=21a87c7b0a6005df838b17b9aafd0dc1`, where `signature` is generated by an obfuscated algorithm (preferably changed every few weeks) from `authId`, `score`, and the current time. You don't have to obfuscate the transmission of the signature via a second layer of encryption, because even though the user knows it's a signature and knows its value, if they change the score then the signature won't be valid anymore (because the score is used in the calculation) and by the time they've reverse-engineered the obfuscation the algorithm has hopefully been changed. – Micheal Johnson Mar 18 '16 at 14:44
  • @MichealJohnson This is a really good point and will achieve the same benefit. But your previous statement didn't fit this: `which must be generated at least once and passed with every request` sounds more like a static value per session. Your second comment is much better – Falco Mar 18 '16 at 16:09
  • I said "at least once", thus meaning "may require re-generation, possibly for every request". I was also fighting with the character limit on comments. – Micheal Johnson Mar 18 '16 at 16:47
1

If I understood correctly, the Tor network works this way:

Alice writes a letter to Dave and encrypts it three times: first she encrypts it with Dave's key, then adds Dave's address and encrypts the package with Craig's key, then adds Craig's address and encrypts the package with Bob's key.

She then sends the letter to Bob, who decrypts it, finds Craig's address, and forwards it to him.

Craig decrypts it, finds Dave's address, and forwards it to him. Dave decrypts it and finds that the letter is for him.

In a perfect world, no one except Alice and Dave could now tell that Dave is indeed the recipient of that letter, because it COULD BE that he had found Emily's address inside the envelope and forwarded it.
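A toy version of that wrapping (illustrative only - real Tor builds circuits differently - using PyNaCl sealed boxes and the hop names from the story):

```python
import json

from nacl.public import PrivateKey, SealedBox

bob, craig, dave = (PrivateKey.generate() for _ in range(3))

# Alice wraps the letter inside-out: first for Dave, then for Craig, then for Bob.
letter = SealedBox(dave.public_key).encrypt(b"Dear Dave, ...")
letter = SealedBox(craig.public_key).encrypt(
    json.dumps({"next_hop": "dave", "payload": letter.hex()}).encode())
letter = SealedBox(bob.public_key).encrypt(
    json.dumps({"next_hop": "craig", "payload": letter.hex()}).encode())

# Each hop peels exactly one layer and learns only the next address.
hop1 = json.loads(SealedBox(bob).decrypt(letter))                             # Bob learns only "craig"
hop2 = json.loads(SealedBox(craig).decrypt(bytes.fromhex(hop1["payload"])))   # Craig learns only "dave"
final = SealedBox(dave).decrypt(bytes.fromhex(hop2["payload"]))               # b"Dear Dave, ..."
```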

A second application would be that you encrypt a message with both your private key and the recipient's public key. The recipient decrypts the message with your public key and his private key, and can thus tell that the message is from you and for him. But usually an HMAC or signature is used to make sure the message is indeed from a certain sender and has not been tampered with.

forest
Alexander
1

The main reason for multiple levels of encryption is separation of concerns.

Often a set of data may be processed by multiple servers, possibly controlled by multiple organizations, not all of whom are completely trusted with the entire data. In most cases, these intermediate servers only need to act on parts of the data, so if they don't need to see some part of it, that part can be encrypted. You'd give each intermediary the data it needs to see, encrypted with a key that it holds, plus encrypted blob(s) that it can pass on to other servers for further processing.

The simplest example is email with GPG and TLS encryption. The main job of a mail transfer agent (an email relay) is to transfer email from one hop to the next. It needs to see the mail routing information to do its job, but it shouldn't need to see the message itself. Thus you'd double encrypt: the connection with one key that the mail transfer agent can understand, and the message with another key that only the recipient understands.

Another example is a calendar/notification scheduling service. You put events into your calendar to be notified by your calendar application that something is happening at a certain time. The calendar service has no need to know what the event is, who is involved in it, or where it takes place.

A secondary reason for multiple encryption is as insurance in case one of the encryption layers is broken. IMO, this is a much weaker reason, because every unnecessary additional layer increases the implementation complexity, and complexity is the enemy of security.

Lie Ryan
1

I don't see this mentioned here, but I think it's slightly more important than a comment. They might do that for perfect forward secrecy. An attacker might not know the key to your HTTPS connection, but they might record every single byte and store it for years. Then they may hack you down the line, discover a vulnerability, or compel you to reveal your server's private key later, then go back in history and decrypt your messages. By having a temporary ephemeral key encrypting messages underneath the HTTPS connection, the attacker would still be unable to read the messages, or would at the very least be significantly delayed.
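A rough sketch of an ephemeral key agreement underneath the application layer (using the Python `cryptography` package; how the public halves are exchanged and authenticated is omitted, so this only illustrates keys that live for one session and are then thrown away):

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Fresh key pairs per session, kept only in memory and never written to disk.
client_eph = X25519PrivateKey.generate()
server_eph = X25519PrivateKey.generate()

shared = client_eph.exchange(server_eph.public_key())
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"app-layer session").derive(shared)

nonce = os.urandom(12)
ciphertext = ChaCha20Poly1305(session_key).encrypt(nonce, b"the message", None)
# 'ciphertext' is what actually travels inside the HTTPS connection; once the
# ephemeral keys are discarded, recorded traffic cannot be decrypted later.
```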

Chloe
  • That's not really double encryption, though; it's only (mostly) about how the session key is derived, agreed upon or shared. Basically, if the session key is derived in such a way that someone who has a complete record of the plain text of all the data that was transferred back and forth during the key exchange cannot later recreate the session key, then you have PFS; if someone with such a record can recreate the session key, then you don't. PFS is awfully nice to have, but it (potentially) solves a very different problem, as indicated by examples of untrusted intermediary *endpoints*. – user Oct 07 '17 at 10:51
0

To avoid problems with PCI compliance, where the developer wishes to use a payment gateway and put the compliance onus on the third party.

The fields in the form post can be encrypted client-side, so the developer doesn't have any unencrypted card details pass through their systems (a step further than merely not storing them).

Notably, this is on top of HTTPS. Thus the website doesn't even see the unencrypted data, only the user and the payment gateway.

Example with the Braintree payment gateway: https://www.braintreepayments.com/blog/client-side-encryption/

Alex KeySmith
0

Flaws in both algorithms and implementations likely exist right now that have yet to be discovered. Ideally these flaws wouldn't exist, but they do.

If you encrypt with two different algorithms and only one of them is flawed, you're still okay and your data is safe. If the outer layer is broken, an attacker only gets the inner ciphertext. If the inner layer is broken, an attacker still cannot get through the outer layer.

Double-encrypting (or triple, or quadruple, or...) can be a good way to avoid putting all of your eggs in one basket.
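A minimal sketch of such a cascade, using two unrelated ciphers with independent keys (Python `cryptography` package; purely illustrative):

```python
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

inner_key = Fernet.generate_key()            # layer 1: AES-CBC + HMAC (Fernet)
outer_key = ChaCha20Poly1305.generate_key()  # layer 2: ChaCha20-Poly1305
nonce = os.urandom(12)

inner = Fernet(inner_key).encrypt(b"eggs, not all in one basket")
outer = ChaCha20Poly1305(outer_key).encrypt(nonce, inner, None)

# Decryption peels the layers in reverse order; breaking one algorithm still
# leaves the attacker facing the other.
assert Fernet(inner_key).decrypt(
    ChaCha20Poly1305(outer_key).decrypt(nonce, outer, None)
) == b"eggs, not all in one basket"
```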

Shelvacu
  • 1
    You are assuming that the double encryptions don't interact in a way that means the combination is easier to break than either algorithm on its own. Do you have a proof of that? – Martin Bonner supports Monica Mar 18 '16 at 09:02
  • @MartinBonner [No, I don't](http://crypto.stackexchange.com/questions/33797/can-double-encrypting-act-in-a-way-that-means-the-combination-is-easier-to-break) – Shelvacu Mar 18 '16 at 09:25
  • It seems a plausible assumption to me - but it smells of developing your own crypto algorithm (even if it is from well respected building blocks). – Martin Bonner supports Monica Mar 18 '16 at 10:15
  • Yes, it is proven that combining two cyphers correctly will be at least as strong as the better of the two. If the passes are separate (not combined), a middle conditioning pass can be used to prevent "known plaintext" packet metadata from the outer cypher. – JDługosz Mar 21 '16 at 06:27
  • @JDługosz What is your definition of a cipher? Does ROT13 count? – Shelvacu Mar 21 '16 at 06:52
  • As defined in [this book](http://www.amazon.com/Cryptography-Engineering-Principles-Practical-Applications/dp/0470474246) in particular, which explains the principle in detail. Yes, rot13 qualifies as a *stream cypher* in the most general definition. Repeating keys and trivial keys would be disqualified if you used [a definition](https://en.wikipedia.org/wiki/Stream_cipher) that demanded a pseudorandom key stream with a long period. ... – JDługosz Mar 21 '16 at 07:03
  • ... But look at [Caesar Salad^h^h^h^h^hCypher](https://en.wikipedia.org/wiki/Caesar_cipher) to see that it does commonly apply to [anything](https://en.wikipedia.org/wiki/Substitution_cipher) – JDługosz Mar 21 '16 at 07:03
0

Not exactly an HTTPS problem, but another valid use case of double encryption is commonly found in Tor, for the case of "I don't trust the delivery guy and want to stay anonymous by using more steps".

Every "delivery guy" decrypts only the envelope, to find out who the next delivery guy is. The communication in this case is encapsulated and encrypted through a SOCKS proxy.

Jakuje