
CVE-2022-22720 (a vulnerability in Apache HTTP Server 2.4.52 and earlier) states that the risk is HTTP Request Smuggling.

My understanding of HTTP Request Smuggling is that a front server A transmits a request to a back server B. That request can be "enriched" with extra content that gets interpreted by server B.

I see how this can be a problem when server A has some intelligence about how to process the request. An example would be that it terminates a call, makes some authorization decisions and requests something from B, which B provides blindly.

But what if A is only a load balancer and the request that reaches B is fully authorized by B itself? Would HTTP Request Smuggling still be an issue? (This scenario is broadly similar to B being on the front; it is pushed to the back for performance/availability reasons.)

WoJ

3 Answers


Request smuggling attacks involve placing both the Content-Length header and the Transfer-Encoding header into a single HTTP request and manipulating these so that the front-end and back-end servers process the request differently.

Whenever more than one server interprets the same request stream, you are at risk of HTTP Request Smuggling. Here is an example:

POST / HTTP/1.1
Host: yourwebsite.com
Transfer-Encoding: chunked
Content-Length: 53

0

GET /secret.txt HTTP/1.1
Host: yourwebsite.com   
Foo: x     

Because the Content-Length header is set to 53, the front end interprets the request completely normally. But if the back end honors the Transfer-Encoding: chunked header, it reads the request entirely differently: it sees the body end at the "0" (the terminal chunk), and anything beyond that is interpreted as the start of a new request. That leftover is usually prepended to the next request that arrives on the same connection, which is why the "Foo: x" header is included. The back-end server will see the next request like so:

GET /secret.txt HTTP/1.1
Host: yourwebsite.com   
Foo: xGET / HTTP/1.1Host: yourwebsite.com

As for the next user, their request line is swallowed into the "Foo" header and effectively ignored.
At that point, because the smuggled request has already made it past the front end (in your situation, the load balancer), the back end trusts it and either answers it or passes it on further.
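To make the ambiguity concrete, here is a minimal sketch in Python. These are hypothetical toy parsers, not the code of any real server; they only illustrate how a parser that trusts Content-Length and a parser that trusts Transfer-Encoding: chunked disagree about the same bytes from the example above:

```python
# Toy sketch: two naive HTTP/1.1 body parsers disagreeing about the same
# byte stream. Neither is the code of any real server.

raw = (b"POST / HTTP/1.1\r\n"
       b"Host: yourwebsite.com\r\n"
       b"Transfer-Encoding: chunked\r\n"
       b"Content-Length: 53\r\n"
       b"\r\n"
       b"0\r\n"
       b"\r\n"
       b"GET /secret.txt HTTP/1.1\r\n"
       b"Host: yourwebsite.com\r\n"
       b"Foo: x")

def split_head(stream):
    head, body = stream.split(b"\r\n\r\n", 1)
    headers = dict(line.split(b": ", 1) for line in head.split(b"\r\n")[1:])
    return headers, body

def body_by_content_length(stream):
    """What an actor trusting Content-Length takes as the body."""
    headers, body = split_head(stream)
    return body[:int(headers[b"Content-Length"])]

def body_by_chunked(stream):
    """Where an actor trusting Transfer-Encoding thinks the body ends."""
    _, body = split_head(stream)
    out, pos = b"", 0
    while True:
        nl = body.index(b"\r\n", pos)
        size = int(body[pos:nl], 16)        # chunk-size line, in hex
        if size == 0:                       # terminal chunk: body is over
            return out, body[nl + 4:]       # (body, leftover bytes)
        out += body[nl + 2:nl + 2 + size]
        pos = nl + 2 + size + 2             # skip chunk data + trailing CRLF

front_body = body_by_content_length(raw)    # 53 bytes, as declared
back_body, leftover = body_by_chunked(raw)  # empty body, plus a leftover
print(leftover.decode())                    # the hidden GET /secret.txt request
```

The bytes left over after the terminal "0" chunk are exactly the smuggled request that the back end glues onto the next request arriving on the same connection.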

ex7lted
  • Just to complement: it's not just Content-Length vs. chunked, it's anything that alters the perceived size of the messages. TE-CL is one way, but there are others (like the double Content-Length of the old days, or a lot of parsing issues). – regilero Aug 29 '22 at 09:42

Here is a nice explanation of request smuggling: https://portswigger.net/web-security/request-smuggling

I'd say that request smuggling would be possible even if the front server is just a load balancer.

From the linked document:

Most HTTP request smuggling vulnerabilities arise because the HTTP specification provides two different ways to specify where a request ends: the Content-Length header and the Transfer-Encoding header.

For example: the attacker sends both a Content-Length and a Transfer-Encoding: chunked header. The front-end server uses the Content-Length header while the back-end server uses the Transfer-Encoding header. The headers are passed as-is to the back end, and here we go.

Marcel

Request smuggling is about altering the number of messages in the HTTP protocol. You think there's only one message (one request, or one response) and another actor thinks there are more (1, 2, or 1 and a half, etc.). It could also be that the load balancer counts 2 while the backend counts 3, and so on.

There are a lot of ways to make this happen. Usually you try to make the information about the size of the message ambiguous. And to do that you exploit strange HTTP syntax (doubled headers, conflicting headers, oversized attributes, control characters, bad timings, ...).

Whether the flaw sits in the load balancer or in the backend server is sometimes not very important. What usually matters for a successful attack is having different software on the load balancer and the backend server; that is what produces different interpretations of the same stream by the two actors.

Also, very importantly, HTTP pipelining is a key ingredient (introduced in HTTP/1.1; it's not just keep-alive, it's having several chained messages on one connection). With pipelining, the stream of HTTP content is allowed to contain more than one message. This explains why a load balancer can be impacted by several responses when only one is expected, or why a backend can think there is more than one request while the load balancer thinks only one was sent. Without pipelining, either actor would simply cut the stream after the first message.
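The message-counting point can be sketched in a few lines of Python. These are hypothetical toy parsers and a made-up request, for illustration only: the same pipelined stream contains one message for an actor that honors Content-Length and two for an actor that honors Transfer-Encoding:

```python
# Toy sketch: count how many HTTP/1.1 messages two actors see in the same
# byte stream when they frame messages differently. Hypothetical parsers.

stream = (b"POST / HTTP/1.1\r\n"
          b"Host: yourwebsite.com\r\n"
          b"Transfer-Encoding: chunked\r\n"
          b"Content-Length: 51\r\n"   # 51 covers everything after the headers
          b"\r\n"
          b"0\r\n"
          b"\r\n"
          b"GET /admin HTTP/1.1\r\n"
          b"Host: yourwebsite.com\r\n"
          b"\r\n")

def count_messages(data, honor_chunked):
    count = 0
    while data:
        head, _, rest = data.partition(b"\r\n\r\n")
        headers = dict(line.split(b": ", 1)
                       for line in head.split(b"\r\n")[1:] if b": " in line)
        if honor_chunked and headers.get(b"Transfer-Encoding") == b"chunked":
            while True:                    # consume chunks up to the "0" chunk
                nl = rest.index(b"\r\n")
                size = int(rest[:nl], 16)
                rest = rest[nl + 2 + size + 2:]
                if size == 0:
                    break
        else:                              # trust Content-Length instead
            rest = rest[int(headers.get(b"Content-Length", b"0")):]
        count += 1
        data = rest
    return count

print(count_messages(stream, honor_chunked=False))  # load balancer's view: 1
print(count_messages(stream, honor_chunked=True))   # backend's view: 2
```

A load balancer counting one request while the backend counts two is exactly the desynchronization described above.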

One of the protections in HTTP servers against this abuse of HTTP (altering the number of messages) is that it usually generates bad queries and errors, producing either 400 Bad Request or some 50x responses, because the attacker is playing with parsing issues. And the HTTP RFC states that any error response MUST also close the connection (with a Connection: close header, and with the TCP/IP socket really closed after that). It also states several times that strange syntax should generate errors.

This prevents some of the smuggling attacks, where the hidden query or hidden response would come after an error. If you close the communication channel, any remaining content cannot harm either of the actors. So: send the error, and close the channel after sending it.

The linked CVE is about not implementing this protection on error messages (closing the HTTP pipeline by closing the keep-alive channel when emitting errors), or not doing it well enough (maybe a read buffer was not flushed of the remaining data, for example).

I'm not sure this flaw was chained into real attacks, but it could facilitate them: you can generate HTTP activity after an error in the response stream. Not enforcing Connection: close on errors is a flaw that a lot of HTTP servers have, not only previous versions of Apache. I think what we see here is a way of preventing future issues by being more strict on the HTTP RFC, and that's something Apache has been doing more and more recently, like with the HttpProtocolOptions Strict directive.
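For reference, that stricter parsing is enabled with a single directive in recent Apache 2.4.x releases (exact effects depend on the version; check the httpd documentation for yours):

```apache
# httpd.conf — apply strict, RFC-conformant HTTP request parsing,
# rejecting ambiguous framing that smuggling attacks rely on
HttpProtocolOptions Strict
```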

regilero