
I use Squid on my network to cache content. I launch Chrome with a command-line switch that tells it to use the proxy.
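
The switch is basically this (a sketch; the binary name depends on your install, and the proxy address is my Squid box, which shows up again further down):

google-chrome --proxy-server=192.168.12.95:3127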

For the most part this works great: Squid caches a large amount of content, and with a limited number of users it performs fine.

When I visit an HTTPS website using Chrome, the first page loads very slowly. The status bar in Chrome says "Waiting for proxy tunnel...". Chrome uses the CONNECT verb to tunnel through the proxy and establish HTTPS with the server. Subsequent pages are fast because Chrome keeps the connection open.

I checked my squid3 logs. Here are some of the CONNECT entries. The second column is the response time in milliseconds:

1416064285.231   2926 192.168.12.10 TCP_MISS/200 0 CONNECT www.google.com:443 - DIRECT/74.125.136.105 -
1416064327.076  49702 192.168.12.10 TCP_MISS/200 1373585 CONNECT r2---sn-q4f7sn7l.googlevideo.com:443 - DIRECT/173.194.141.152 -
1416064345.018  63250 192.168.12.10 TCP_MISS/200 545 CONNECT mtalk.google.com:5228 - DIRECT/74.125.136.188 -
1416064372.020  63038 192.168.12.10 TCP_MISS/200 1809 CONNECT www.facebook.com:443 - DIRECT/31.13.91.2 -
1416064393.040  64218 192.168.12.10 TCP_MISS/200 25346 CONNECT clients4.google.com:443 - DIRECT/173.194.32.196 -
1416064408.040  63021 192.168.12.10 TCP_MISS/200 545 CONNECT mtalk.google.com:5228 - DIRECT/74.125.136.188 -
1416064408.040  62515 192.168.12.10 TCP_MISS/200 619 CONNECT ssl.gstatic.com:443 - DIRECT/173.194.32.207 -
1416064427.019  90301 192.168.12.10 TCP_MISS/200 2663948 CONNECT r2---sn-q4f7sn7l.googlevideo.com:443 - DIRECT/173.194.141.152 -
1416064429.019  63395 192.168.12.10 TCP_MISS/200 1339 CONNECT clients6.google.com:443 - DIRECT/173.194.32.195 -
1416064439.015  63382 192.168.12.10 TCP_MISS/200 764 CONNECT talkgadget.google.com:443 - DIRECT/173.194.32.199 -
1416064446.896 170270 192.168.12.10 TCP_MISS/200 2352814 CONNECT r20---sn-q4f7dm7z.googlevideo.com:443 - DIRECT/208.117.252.121 -
1416064471.010  62969 192.168.12.10 TCP_MISS/200 516 CONNECT mtalk.google.com:5228 - DIRECT/74.125.136.188 -
1416064524.010  63389 192.168.12.10 TCP_MISS/200 764 CONNECT talkgadget.google.com:443 - DIRECT/173.194.32.195 -
1416064534.014  63003 192.168.12.10 TCP_MISS/200 545 CONNECT mtalk.google.com:5228 - DIRECT/74.125.136.188 -
1416064542.010  63387 192.168.12.10 TCP_MISS/200 2114 CONNECT www.facebook.com:443 - DIRECT/31.13.91.2 -
1416064553.010  63376 192.168.12.10 TCP_MISS/200 470 CONNECT clients4.google.com:443 - DIRECT/173.194.32.194 -
1416064561.010  63379 192.168.12.10 TCP_MISS/200 1807 CONNECT mail.google.com:443 - DIRECT/173.194.32.213 -
1416064597.019  63003 192.168.12.10 TCP_MISS/200 545 CONNECT mtalk.google.com:5228 - DIRECT/74.125.136.188 -
1416064600.126      0 192.168.12.10 TCP_DENIED/403 3630 CONNECT www.google-analytics.com:443 - NONE/- text/html
1416064610.122  10959 192.168.12.10 TCP_MISS/200 626 CONNECT avatars0.githubusercontent.com:443 - DIRECT/185.31.17.133 -
1416064610.129  10968 192.168.12.10 TCP_MISS/200 576 CONNECT avatars1.githubusercontent.com:443 - DIRECT/185.31.17.133 -
1416064610.130  10968 192.168.12.10 TCP_MISS/200 628 CONNECT avatars1.githubusercontent.com:443 - DIRECT/185.31.17.133 -
1416064610.130  10967 192.168.12.10 TCP_MISS/200 576 CONNECT avatars1.githubusercontent.com:443 - DIRECT/185.31.17.133 -
1416064610.133  10972 192.168.12.10 TCP_MISS/200 576 CONNECT avatars1.githubusercontent.com:443 - DIRECT/185.31.17.133 -
1416064610.133  10970 192.168.12.10 TCP_MISS/200 627 CONNECT avatars0.githubusercontent.com:443 - DIRECT/185.31.17.133 -
1416064610.135  10972 192.168.12.10 TCP_MISS/200 576 CONNECT avatars3.githubusercontent.com:443 - DIRECT/185.31.17.133 -
1416064610.135  10972 192.168.12.10 TCP_MISS/200 628 CONNECT avatars2.githubusercontent.com:443 - DIRECT/185.31.17.133 -
1416064610.260  11098 192.168.12.10 TCP_MISS/200 574 CONNECT avatars3.githubusercontent.com:443 - DIRECT/185.31.17.133 -
1416064610.316  11155 192.168.12.10 TCP_MISS/200 1063 CONNECT avatars3.githubusercontent.com:443 - DIRECT/185.31.17.133 -
1416064611.722  13327 192.168.12.10 TCP_MISS/200 17113 CONNECT github.com:443 - DIRECT/192.30.252.128 -
1416064619.130  19005 192.168.12.10 TCP_MISS/200 141 CONNECT avatars3.githubusercontent.com:443 - DIRECT/185.31.17.133 -
1416064639.016  95397 192.168.12.10 TCP_MISS/200 1037 CONNECT talkgadget.google.com:443 - DIRECT/173.194.32.194 -
1416064643.210   4739 192.168.12.10 TCP_MISS/200 4085 CONNECT dgafology.com:443 - DIRECT/198.74.52.100 -
1416064662.010  64990 192.168.12.10 TCP_MISS/200 545 CONNECT mtalk.google.com:5228 - DIRECT/74.125.136.188 -
1416064665.219  65086 192.168.12.10 TCP_MISS/200 3851 CONNECT collector.githubapp.com:443 - DIRECT/54.236.179.219 -
1416064685.276   4003 192.168.12.10 TCP_MISS/200 3956 CONNECT qa.sockets.stackexchange.com:443 - DIRECT/198.252.206.25 -
1416064689.142   3750 192.168.12.10 TCP_MISS/200 357 CONNECT qa.sockets.stackexchange.com:443 - DIRECT/198.252.206.25 -
1416064709.014  78381 192.168.12.10 TCP_MISS/200 779 CONNECT clients6.google.com:443 - DIRECT/173.194.32.193 -
1416064721.010  63396 192.168.12.10 TCP_MISS/200 764 CONNECT talkgadget.google.com:443 - DIRECT/173.194.32.193 -
1416064725.013  63001 192.168.12.10 TCP_MISS/200 545 CONNECT mtalk.google.com:5228 - DIRECT/74.125.136.188 -

Some of the connections take over 60000 milliseconds!

Here are a few GETs for comparison:

1416064691.281     68 192.168.12.10 TCP_MISS/200 412 GET http://serverfault.com/questions/ticks? - DIRECT/198.252.206.16 text/plain
1416064693.492     70 192.168.12.10 TCP_MISS/200 309 GET http://serverfault.com/search/titles? - DIRECT/198.252.206.16 application/json
1416064693.548    126 192.168.12.10 TCP_MISS/200 739 GET http://serverfault.com/content/img/progress-dots.gif - DIRECT/198.252.206.16 image/gif

Overall Squid performance is excellent, but for CONNECT it is very slow.

I found out you can use echo and nc to request a tunnel from the command line.

I made two connections back-to-back using this technique:

ericu@ericu-desktop:~$ echo -e -n 'CONNECT russiatoday.com:443\r\n\r\n' | nc 192.168.12.95 3127
HTTP/1.0 200 Connection established

ericu@ericu-desktop:~$ echo -e -n 'CONNECT russiatoday.com:443\r\n\r\n' | nc 192.168.12.95 3127
HTTP/1.0 200 Connection established

In the logs:

1416065033.065   3079 192.168.12.10 TCP_MISS/200 0 CONNECT russiatoday.com:443 - DIRECT/62.213.85.4 -
1416065034.090    208 192.168.12.10 TCP_MISS/200 0 CONNECT russiatoday.com:443 - DIRECT/62.213.85.4 -

The first connection took 3079 milliseconds, but the second one only 208. So, it seems that Squid isn't always slow.

Later, I did the same thing again, but used tcpdump to capture the traffic from Squid to the server.

1416070989.180    732 192.168.12.10 TCP_MISS/200 0 CONNECT russiatoday.com:443 - DIRECT/62.213.85.4 -

As you can see, the reported latency is 732 ms.
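
The capture was run on the Squid box with something like this (a sketch; the interface name is just an assumption for your setup):

sudo tcpdump -ni eth0 host 62.213.85.4 and tcp port 443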

Here is the output from the tcpdump capture of the first 3 packets: SYN from my box, SYN-ACK from the remote host, and ACK from my box. My understanding is that this is the 3-way handshake needed to establish the connection.

11:03:08.973995 IP 192.168.12.95.34778 > 62.213.85.4.443: Flags [S], seq 1280719736, win 14600, options [mss 1460,sackOK,TS val 605181173 ecr 0,nop,wscale 7], length 0
11:03:09.180753 IP 62.213.85.4.443 > 192.168.12.95.34778: Flags [S.], seq 614920595, ack 1280719737, win 14480, options [mss 1460,sackOK,TS val 1284340103 ecr 605181173,nop,wscale 7], length 0
11:03:09.180781 IP 192.168.12.95.34778 > 62.213.85.4.443: Flags [.], ack 1, win 115, options [nop,nop,TS val 605181225 ecr 1284340103], length 0

The elapsed time in that exchange is 206.8 ms. So, if my analysis is correct, Squid adds roughly 525 ms on top of that in this example (732 ms reported minus ~207 ms for the handshake). An extra ~500 ms of latency is huge.

But I think my analysis may be flawed, because the "response time" for a CONNECT in Squid just measures the total time the tunnel was kept open.

I edited the logformat directive for Squid to add the DNS resolution time in milliseconds.
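
It is roughly this (a sketch; %dt is Squid's format code for DNS lookup time, and the format name and log path are just what I happened to use):

logformat squiddns %ts.%03tu %dt %6tr %>a %Ss/%03>Hs %<st %rm %ru %un %Sh/%<a %mt
access_log /var/log/squid3/access.log squiddns

With that in place, the new entries look like this: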

1416072432.918 580 776 192.168.12.10 TCP_MISS/200 0 CONNECT russiatoday.com:443 - DIRECT/62.213.85.4 -
1416072446.823 - 185 192.168.12.10 TCP_MISS/200 0 CONNECT russiatoday.com:443 - DIRECT/62.213.85.4 -

The second column is the DNS lookup time and the third is the "response time", which may not mean much for a CONNECT. In the second entry the DNS column shows up as - because Squid has internal DNS caching, so it used its cached answer for that CONNECT. This explains the first page view being slow and subsequent ones being relatively fast, and it also accounts for the extra ~500 ms of latency. I am running bind9 on localhost, forwarding to Google's DNS over IPv4. How am I getting ~500 ms of latency for a simple DNS lookup?
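
The forwarding part of the bind9 config looks roughly like this (a sketch of /etc/bind/named.conf.options; the real file has other options as well):

options {
        forwarders {
                8.8.8.8;
                8.8.4.4;
        };
};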

Running nslookup using 8.8.8.8 directly and bypassing my local server:

ericu@katz:~$ time nslookup russiatoday.com 8.8.8.8
Server:     8.8.8.8
Address:    8.8.8.8#53

Non-authoritative answer:
Name:   russiatoday.com
Address: 62.213.85.4


real    0m0.056s
user    0m0.004s
sys 0m0.008s

This shows 56 ms of latency for the whole lookup. Pinging that server gives a latency of around 50 ms, so this makes sense.

So is there something about Squid and bind9 that doesn't agree?

Eric Urban
  • Are you running some firewall somewhere between the proxy and the public net range, or between the desktop rig and the proxy? – Xavier Lucas Nov 15 '14 at 16:11
  • Yes, I am using another machine that uses `iptables` to act as a NAT + firewall for my internet connection. – Eric Urban Nov 15 '14 at 16:26
  • Make sure your rules use netfilter states (NEW, ESTABLISHED) to allow traffic and not only ip:port couples. A bit of tcpdump to see what's taking time would definitely help (for instance check DNS requests). – Xavier Lucas Nov 15 '14 at 16:29
  • How would that be any different than just having a rule `iptables -A chain_name -j ACCEPT`? I mean sure, I could add it, but I don't see why it would matter. – Eric Urban Nov 15 '14 at 16:35
  • It's definitely faster to check the state of an existing connection than to test a bunch of rules for each packet. In my experience, I saw a dramatic performance loss without such a setup. Best way to analyze your issue: tcpdump. – Xavier Lucas Nov 15 '14 at 16:44
  • It's somehow DNS related; I updated my post. – Eric Urban Nov 15 '14 at 17:34
  • Could be related to Squid behaviour in recent versions regarding IPv6. – Xavier Lucas Nov 15 '14 at 17:50
  • bind9 has IPv6 off. I switched dnssec-validation from 'auto' to 'off' in bind9. I have no idea what the difference is, but it is noticeably faster. – Eric Urban Nov 15 '14 at 17:51
  • It's not related to bind supporting IPv6 or not. The difference is that you won't try to validate records with trusted keys, while you did before. Because DNSSEC is not yet widely supported, it can waste time for "nothing" during the DNS resolution process (for instance on missing RRSIGs). – Xavier Lucas Nov 15 '14 at 18:04
  • This discussion definitely lacks one important test: `time wget -O - https://russiatoday.com` – drookie Nov 18 '14 at 17:18
  • Just an additional data point: I have the same problem. By experimenting I discovered that the slowness (many seconds of delays) only occurs when I use Google Chrome. When I use Apple Safari instead, the long waits disappear. – jochen Jul 21 '15 at 11:26

3 Answers


I know it is an old question, but I had the same problem and solved it by adding this to squid.conf:

dns_v4_first on
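
After adding it, a reload is enough to apply the change (the binary may be squid3 depending on your distro):

squid -k reconfigure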

Regards

Juliano Piassa
  • Awesome, thanks so much! I was blaming Chrome the whole time I was having this annoying issue. Should have thought of this as I do have a problem with the IPv6 networking on my VM. – piit79 Nov 10 '17 at 08:08
  • To the point! Thank you. – Marinos An Apr 16 '20 at 10:34

Posting this as I think it will be helpful to anyone running Squid on a pfSense box. Thanks to Juliano for their answer! The same setting can be found in your pfSense box under Services > Squid Proxy Server > General as "Resolve DNS IPv4 First". Below is a screenshot.

[Screenshot: pfSense Squid proxy settings]

darshanags

I had to set "connect_timeout 2.0" because it was attempting IPv6 DNS resolution first and then falling back to IPv4 after the default 60-second timeout.
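
For reference, a minimal sketch of the relevant squid.conf line (the default connect_timeout is 1 minute, which lines up with the ~60-second delays seen above):

connect_timeout 2 seconds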