
Here's some background about my problem:

  • I have a web service running on Heroku, with a dynamic IP address. Static IPs on Heroku are not an option.
  • I need to connect to an external web service which is behind a firewall. The people who operate the external web service will only open their firewall to a specific static IP.

My attempted solution is to use Squid on a separate server with a static IP to forward-proxy requests from Heroku to the external service. That way, the external service always sees the proxy server's static IP, instead of the Heroku service's dynamic IP.
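
For reference, the Heroku app would send its outbound requests through such a proxy by pointing its HTTP client at the proxy URL, for example via the standard proxy environment variables. This is only a rough sketch with placeholder credentials and hostnames; the scheme and port would match whichever listener the proxy ends up exposing:

$ export http_proxy="http://username:password@my-proxy-server.example:3128"
$ export https_proxy="http://username:password@my-proxy-server.example:3128"
$ curl http://external-service.example/api/status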

Since my proxy server can't rely on an IP address for authentication (that's the problem to begin with!), it must rely on a username and password. Further, the username and password cannot be transmitted in clear text, because if an attacker were to intercept that clear text, then they could connect to my proxy pretending to be me, make outbound requests using my proxy's static IP, and thus evade the external web service's firewall.

Therefore, the Squid proxy must only accept connections over HTTPS, not HTTP. (The connection to the external web service might be HTTP or HTTPS.)

I'm running Squid 3.1.10 on CentOS 6.5.x, and here's my squid.conf so far. For troubleshooting purposes only, I have temporarily enabled both HTTP and HTTPS proxying, but I only want to use HTTPS.

#
# Recommended minimum configuration:
#
acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7       # RFC 4193 local private network range
acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80      # http
acl Safe_ports port 21      # ftp
acl Safe_ports port 443     # https
acl Safe_ports port 70      # gopher
acl Safe_ports port 210     # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280     # http-mgmt
acl Safe_ports port 488     # gss-http
acl Safe_ports port 591     # filemaker
acl Safe_ports port 777     # multiling http
acl CONNECT method CONNECT

# Authorization

auth_param digest program /usr/lib64/squid/digest_pw_auth -c /etc/squid/squid_passwd
auth_param digest children 20 startup=0 idle=1
auth_param digest realm squid
auth_param digest nonce_garbage_interval 5 minutes
auth_param digest nonce_max_duration 30 minutes
auth_param digest nonce_max_count 50

acl authenticated proxy_auth REQUIRED

#
# Recommended minimum Access Permission configuration:
#
# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager

# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
#http_access deny to_localhost

#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#

# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
#http_access allow localnet
#http_access allow localhost
http_access allow authenticated

# And finally deny all other access to this proxy
http_access deny all

# Squid normally listens to port 3128
http_port 3128

https_port 3129 cert=/etc/squid/ssl/cert.pem key=/etc/squid/ssl/key.pem

# We recommend you to use at least the following line.
hierarchy_stoplist cgi-bin ?

# Disable all caching
cache deny all

# Uncomment and adjust the following to add a disk cache directory.
#cache_dir ufs /var/spool/squid 100 16 256

# Leave coredumps in the first cache dir
coredump_dir /var/spool/squid

# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp:       1440    20% 10080
refresh_pattern ^gopher:    1440    0%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .       0   20% 4320
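
The /etc/squid/squid_passwd file referenced by the auth_param line is populated separately. As far as I can tell from the helper's documentation, digest_pw_auth with the -c flag expects pre-hashed entries of the form user:realm:MD5(user:realm:password) rather than plaintext passwords, so roughly (placeholder credentials; the realm must match "auth_param digest realm squid"):

$ user=exampleuser; realm=squid; pass=examplepass
$ hash=$(printf '%s:%s:%s' "$user" "$realm" "$pass" | md5sum | awk '{print $1}')
$ echo "$user:$realm:$hash" >> /etc/squid/squid_passwd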

Using this setup, HTTP proxying works fine, but HTTPS proxying does not.

Here's an HTTP proxy request from a local box:

$ curl --proxy http://my-proxy-server.example:3128 \
  --proxy-anyauth --proxy-user redacted:redacted -w '\n' \
  http://urlecho.appspot.com/echo?body=OK
OK

Good, that's what I expected. This results in a line in /var/log/squid/access.log:

1390250715.137     41 my.IP.address.redacted TCP_MISS/200 383 GET http://urlecho.appspot.com/echo? redacted DIRECT/74.125.142.141 text/html

Here's another request, this time with HTTPS:

$ curl --proxy https://my-proxy-server.example:3129 \
  --proxy-anyauth --proxy-user redacted:redacted -w '\n' \
  http://urlecho.appspot.com/echo?body=OK

curl: (56) Recv failure: Connection reset by peer

Nothing in access.log after this one, but in cache.log:

2014/01/20 20:46:15| clientNegotiateSSL: Error negotiating SSL connection on FD 10: error:1407609C:SSL routines:SSL23_GET_CLIENT_HELLO:http request (1/-1)

Here's the above again, more verbosely:

$ curl -v --proxy https://my-proxy-server.example:3129 \
  --proxy-anyauth --proxy-user redacted:redacted -w '\n' \
  http://urlecho.appspot.com/echo?body=OK
* Adding handle: conn: 0x7f9a30804000
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7f9a30804000) send_pipe: 1, recv_pipe: 0
* About to connect() to proxy my-proxy-server.example port 3129 (#0)
*   Trying proxy.server.IP.redacted...
* Connected to my-proxy-server.example (proxy.server.IP.redacted) port 3129 (#0)
> GET http://urlecho.appspot.com/echo?body=OK HTTP/1.1
> User-Agent: curl/7.30.0
> Host: urlecho.appspot.com
> Accept: */*
> Proxy-Connection: Keep-Alive
> 
* Recv failure: Connection reset by peer
* Closing connection 0

curl: (56) Recv failure: Connection reset by peer

Looks like an SSL error. However, I'm reusing a subdomain-wildcard SSL certificate, shown in the above config as cert.pem and key.pem, that I've successfully deployed on other web servers. Moreover, accessing the proxy server directly with curl works, or at least establishes a connection past the SSL stage:

$ curl https://my-proxy-server.example:3129
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html><head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>ERROR: The requested URL could not be retrieved</title>

[--SNIP--]

<div id="content">
<p>The following error was encountered while trying to retrieve the URL: <a href="/">/</a></p>

<blockquote id="error">
<p><b>Invalid URL</b></p>
</blockquote>

<p>Some aspect of the requested URL is incorrect.</p>

<p>Some possible problems are:</p>
<ul>
<li><p>Missing or incorrect access protocol (should be <q>http://</q> or similar)</p></li>
<li><p>Missing hostname</p></li>
<li><p>Illegal double-escape in the URL-Path</p></li>
<li><p>Illegal character in hostname; underscores are not allowed.</p></li>
</ul>

[--SNIP--]

Any ideas what I'm doing wrong? Is what I'm attempting even possible? Thanks in advance.

David
  • I do not think that is how Squid is supposed to work. I should be able to make either an HTTP or an HTTPS request proxied over an HTTPS connection. I do not see anything in the documentation to suggest otherwise. Regardless, I tried what you suggested anyway, and it did not work (same result as above). – David Jan 21 '14 at 19:54
  • My previous comment was in reply to comments from another user that appear to have been deleted. Just wanted to note that I cross-posted this question to the Squid mailing list: http://www.mail-archive.com/squid-users@squid-cache.org/msg93592.html – David Jan 23 '14 at 20:43
  • As someone who had a similar scenario: we tried the proxy approach and it worked, but we ultimately chose to move the application away from Heroku to a provider offering a virtual/dedicated machine with a static IP. There is extra overhead in maintaining a forward proxy server just for this purpose. – Shyam Sundar C S Dec 28 '14 at 20:21

2 Answers


@David, as per your thread on the Squid mailing list, I would suggest going with the stunnel solution. Authentication is handled by the SSL certificates on both ends of the tunnel; the rest travels "clear text" inside that tunnel, or you can keep Digest auth as well if you wish.

I have used a similar solution to "authenticate" NFS endpoints with great success.

A sample use of this kind of authentication can be seen in LinuxGazette's Secure Communication with stunnel.
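
A rough sketch of such a setup, with placeholder hostnames, ports, and certificate paths (not a drop-in config): stunnel on the proxy box terminates TLS and hands the traffic to Squid's plain http_port, while stunnel on the client side exposes a local plain-HTTP proxy endpoint; each side verifies the other's certificate.

On the proxy server (e.g. /etc/stunnel/squid-server.conf):

; accept TLS from authenticated clients, forward to the local Squid
cert   = /etc/stunnel/server.pem
key    = /etc/stunnel/server.key
CAfile = /etc/stunnel/client-ca.pem
; require a valid client certificate
verify = 2

[squid-tls]
accept  = 0.0.0.0:3130
connect = 127.0.0.1:3128

On the client side (e.g. /etc/stunnel/squid-client.conf):

; local proxy endpoint that tunnels to the remote Squid over TLS
client = yes
cert   = /etc/stunnel/client.pem
key    = /etc/stunnel/client.key
CAfile = /etc/stunnel/server-ca.pem
verify = 2

[squid-tls]
accept  = 127.0.0.1:3128
connect = my-proxy-server.example:3130

The application then talks ordinary HTTP to 127.0.0.1:3128 on its own side, and everything between the two stunnel instances is encrypted and mutually authenticated by the certificates.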

Droopy4096

You can see how it's done in this small Docker image: yegor256/squid-proxy. The problem with your configuration is that the configuration comes after the acl instruction; just swap them and it all starts working.
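
Roughly, the relevant part of the ordering should look like this (a trimmed sketch, not a complete squid.conf): the auth_param lines have to appear before any acl line that uses proxy_auth, because Squid reads the file top to bottom.

auth_param digest program /usr/lib64/squid/digest_pw_auth -c /etc/squid/squid_passwd
auth_param digest realm squid

acl authenticated proxy_auth REQUIRED
http_access allow authenticated
http_access deny all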

yegor256