This is a follow-up to another topic (Is allowing unfiltered curl requests from a website a vulnerability?) on which I am doing some private research.
Given:
A publicly reachable web service that accepts any URL and performs a curl GET request on it. The service operates without authentication.
The linked topic already establishes that unfiltered access is a security issue. But something about it keeps resurfacing in my thoughts, and it took me a while to pin the question down: can such a service be made sufficiently secure against SSRF and similar attacks?
Obvious steps:
- cURL is mighty; restrict the allowed schemes to http(s) and ftp (this takes care of the file, gopher, dict, etc. issues)
- prevent access to the entire loopback range: localhost and 127.0.0.0/8 (I was totally unaware that the whole 127.x.x.x network points to your own machine 0_o)
- prevent access to 0.0.0.0
- disallow the broadcast IP 255.255.255.255 (although it is unlikely that anything serves the allowed schemes above there)
- prevent private IPs to avoid access to internal networks (impersonation of a server that is part of the private network?) -> 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16 (thanks wiki)
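The steps above can be sketched as a simple validator. This is only an illustrative Python sketch, not a complete blocklist, and the names (`url_is_allowed`, `ip_is_blocked`) are my own:

```python
import ipaddress
from urllib.parse import urlsplit

ALLOWED_SCHEMES = {"http", "https", "ftp"}

BLOCKED_NETS = [
    ipaddress.ip_network("127.0.0.0/8"),         # entire loopback range
    ipaddress.ip_network("0.0.0.0/32"),
    ipaddress.ip_network("255.255.255.255/32"),  # broadcast
    ipaddress.ip_network("10.0.0.0/8"),          # private ranges
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
    ipaddress.ip_network("::1/128"),             # IPv6 loopback
]

def ip_is_blocked(ip_str):
    ip = ipaddress.ip_address(ip_str)
    return any(ip in net for net in BLOCKED_NETS)

def url_is_allowed(url):
    parts = urlsplit(url)
    if parts.scheme not in ALLOWED_SCHEMES:
        return False
    host = parts.hostname
    if not host or host.lower() == "localhost":
        return False
    try:
        return not ip_is_blocked(host)  # literal IP: check the blocklist
    except ValueError:
        # A hostname, not an IP literal: it still has to be resolved
        # and the resolved addresses checked (see the DNS part below).
        return True
```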
But there is more, right?
- if cURL is configured to follow redirects, each redirect target must be validated the same way as the original URL, since forging a redirect is trivial
- IPv6: everything done for the good old IPv4 address space must be redone there too, right?
- Are there ports to filter on? Remember, schemes are restricted to http(s) and ftp. The service may still technically work as a port scanner, which is not necessarily malicious; fetching a website from, say, port 675 might be fine.
- Prevent DoS against remote URLs: implement some sort of token (similar to a CSRF token), introduce timeouts between consecutive requests (say, 1 second), or ban IPs that keep hammering. (This does not solve DDoS, of course, but preventing DDoS is probably out of scope here.)
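For the redirect point, one approach is to disable cURL's automatic redirect following and walk the chain yourself, re-validating every hop. A rough sketch, where `fetch` and `validate` are stand-ins (my own, hypothetical names) for the actual HTTP client and the URL checks discussed above:

```python
from urllib.parse import urljoin

MAX_REDIRECTS = 5

def fetch_with_validation(url, fetch, validate):
    """fetch(url) must return (status_code, location_header_or_None, body)."""
    for _ in range(MAX_REDIRECTS + 1):
        if not validate(url):
            raise ValueError("blocked URL: " + url)
        status, location, body = fetch(url)
        if status in (301, 302, 303, 307, 308) and location:
            # Resolve relative Location headers against the current URL,
            # then loop so the new target is validated like the original.
            url = urljoin(url, location)
            continue
        return body
    raise ValueError("too many redirects")
```

The key design point is that validation happens inside the loop, before every request, not just once up front.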
One last thing I cannot fully get my head around:
What about DNS? Is it possible to register a DNS entry that points to localhost or a private network?
On my machine I can technically perform a GET of http://my.box and get my router.
Now how can somebody mitigate that risk?
Is performing an nslookup a solution? If I get an IP back, validate the IP; if not, it may be anything, so deny.
I keep forgetting what my NAS does so that I can reach it via hostname on my local network, but being paranoid is probably the right approach here.
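To answer my own sub-question in part: yes, anyone can publish a DNS record that resolves to 127.0.0.1 or a private address, so the service has to resolve the name itself and validate every returned address before fetching. A sketch of that resolve-then-validate step, where `ip_is_blocked` is a stand-in for a blocklist check like the steps above:

```python
import socket

def resolved_ips(hostname):
    # getaddrinfo returns both A and AAAA records where available
    try:
        infos = socket.getaddrinfo(hostname, None)
    except socket.gaierror:
        return set()  # no answer: deny, per the reasoning above
    return {info[4][0] for info in infos}

def host_is_safe(hostname, ip_is_blocked):
    ips = resolved_ips(hostname)
    if not ips:
        return False  # "if not, it may be anything, deny"
    return not any(ip_is_blocked(ip) for ip in ips)
```

One caveat: even this leaves a window, because cURL performs its own lookup when it makes the request, and the answer may have changed in between (DNS rebinding). Pinning the validated IP for the request, e.g. via cURL's CURLOPT_RESOLVE / --resolve, closes that gap.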