
I'm running a Fedora 11 server with Apache 2. I'm trying to optimize so things are as fast as possible on the server side, and I'm noticing (via Firebug for Firefox) that when I load the homepage of one of the sites on the web server, it does a DNS lookup for every file it loads (HTML, CSS, JavaScript, GIF, PNG, JPG, etc.). All of the files it is looking up are local to the server, so I'm surprised to see it do a DNS lookup at all. Also, each of these lookups is in the 150-450ms range, which is far too high for my liking.

I've tried adjusting /etc/resolv.conf to use Google's Public DNS servers. I restarted the network service and hit the page again, but the numbers didn't go down, so I've reverted to the default DNS servers since I didn't see any gain.
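
For reference, the change to /etc/resolv.conf was roughly this (the two addresses are Google's public resolvers):

nameserver 8.8.8.8
nameserver 8.8.4.4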

Any ideas on what is causing it to: a) do the DNS lookup in the first place, and b) take so long when doing the actual lookup?

Thanks in advance.

Travis
  • Just to be clear, you are running firefox from your webserver? – Zoredache Mar 17 '10 at 18:34
  • No. I am running Firefox on my machine, accessing a web site that resides on the web server. I have also had people who do not reside on my network access the site and they are seeing the same DNS Lookup issues that I'm seeing. – Travis Mar 17 '10 at 18:42

9 Answers


Any call to a DNS name requires a lookup, even if it's local, so that part is expected. However, the client should cache the record for as long as the TTL, so as long as you are using the same DNS name for all of the objects on the page, it shouldn't have to do the DNS lookup multiple times. You don't happen to be using unique CNAMEs for each object on the page, do you?

Check the TTL setting for your zone to confirm that it's set to something reasonable.

As for the long lookup times, the cause could be either the DNS server or the DNS client. Try using nslookup to run DNS queries directly against the DNS server and see if you get the same response time. You may want to walk the domain name path from the TLD down to your domain name (or CNAMEs) to see where it slows down.
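
For example, a quick way to time a single lookup against a specific server (the domain and server below are placeholders) is dig, which reports the lookup duration on its ";; Query time:" line:

dig @ns1.example.com www.example.com

Repeating the query against whatever is in /etc/resolv.conf versus the zone's authoritative servers should show where the delay creeps in.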

A way to rule your DNS client out (or in) is to watch a public site like google.com with Firebug and see if it is also slow.

Scott Forsyth
  • I ran dig (on the domain in question) in 2 locations: directly on the web server, and on my machine (a Mac). Both query times were in the 0-1ms range, but I did come across something interesting. On the web server, it displayed an authority section and an additional section: ;; AUTHORITY SECTION: com. 87426 IN NS B.GTLD-SERVERS.NET. com. 87426 IN NS L.GTLD-SERVERS.NET. etc. I did not receive the authority section when querying from my Mac. Also, I did check some other sites that I knew weren't using CDNs and they didn't have the DNS issues I'm having. – Travis Mar 17 '10 at 18:55
  • To answer your other questions: I'm not referring to any other CNAMEs; I'm using absolute paths in the HTML to refer to the files in question. The TTL is set to 86400. – Travis Mar 17 '10 at 19:10
  • The authority section is probably fine. Your web server must have DNS installed on it. If DNS queries are slow from outside the server then it sounds like it's not related to the server itself. You've narrowed it down to a DNS issue. You mentioned a CDN. Are you using a CDN's DNS? If so, is that different than the 0-1ms range results you got from testing the domain? A CDN is going to have some unique DNS settings that will attempt to have minimal or no DNS caching. Have you tested all DNS records for the CDN? – Scott Forsyth Mar 17 '10 at 19:15
  • I mentioned CDNs because I wanted to test sites that were similar to mine, in that they didn't have CDNs. The server is a cloud server with Rackspace. Could it be something relative to cloud hosting? – Travis Mar 17 '10 at 19:34
  • Your description seemed like a DNS issue. Just to confirm, in Firebug you can tell that it's the DNS lookup for the objects that is taking 150-450ms? What I don't understand is why that report shows the high numbers but your direct DNS test is 0-1ms. Are you sure that the "dns lookup" is the issue, or is it downloading each of the objects? If it's truly DNS related, then the cloud shouldn't come into play, but if it's object related then the cloud may be a factor. Try to narrow the issue down to the smallest part possible, i.e. can you view a single image by itself and repro? – Scott Forsyth Mar 18 '10 at 01:39
  • I'm attaching a screenshot of my Firebug. The teal section is the DNS Lookup section of the file request: http://www.mediafire.com/imageview.php?quickkey=2z22n5mzydj – Travis Mar 18 '10 at 13:38
  • You're right, it does look like DNS is coming into play on each request, and that it's taking a long time to load. I can't even guess. What happens if you test the images individually? Can you repro with a single image? – Scott Forsyth Mar 19 '10 at 04:15
  • When I do it with a single image, I'm getting roughly 50ms per request. I think the biggest concern is that I'm encountering a DNS lookup for every file request. I can deal with 150ms lookup times if it only happens once, but it shouldn't be happening any more than that. In my Apache vhost I have gzip compression turned on for all text-based files, and expires headers for gif, png, jpg, css, and js set to 30 days (the layout doesn't change often for the site in question). I've turned off both sets of directives, but no change. Is my machine making the DNS requests or is the server? – Travis Mar 19 '10 at 13:14
  • Since Firebug can see them, the DNS requests are from your client. From the command prompt, type the following on 3 different lines: "nslookup" "set d2" "yourdomain.com". See if there is an odd TTL or other expiry time. It seems that your DNS server is forcing a fresh lookup on each request, and isn't as fast as it could be. Is your domain name a 2nd-level domain (i.e. domain.com) or a third or fourth (i.e. images.domain.com)? To resolve DNS, the DNS server must traverse the whole path from the top-level domain down to the final domain. Somewhere in that process may be slow. – Scott Forsyth Mar 19 '10 at 15:20
  • Since I am making the request from my machine, I'm assuming that takes the blame I've been placing on my own web server and places it onto the DNS servers I'm using. Is that correct? I ran the nslookup you mentioned and it gave me a lot of output, but I didn't see much useful data. I ran another one by doing: nslookup, set type=any, problemdomain.com. and I got the following pieces of information: serial = 1267985218, refresh = 3600, retry = 300, expire = 1814400, minimum = 300. Do any of those results throw up red flags? – Travis Mar 23 '10 at 16:55
  • I think it's a DNS issue, but I'm not sure if it's because you're testing on your machine. Can you reproduce the issue if you test from another computer? I assume you can. Those settings don't throw any warning flags. Nothing stands out to me, but why don't you send an email to Rackspace and see if they can tell you what may be different with their environment that would cause that. If you give them the URL, they can hopefully tell you what the issue is. – Scott Forsyth Mar 24 '10 at 14:26

I had a very similar problem and solved it. It was a problem with our iptables configuration, which I understand was a custom in-house setup, so you probably don't have the same problem, but I thought I would link it up just in case.

Only receiving one document at a time from new web server

"Removing -m limit --limit 1/s from our iptables configuraton solved the problem presented."


I've just been troubleshooting this exact problem with one of our servers - multiple DNS lookups per page showing up in Firebug, one for each item that gets loaded. We found that the issue was that in the Apache config, KeepAlive was set to Off. Changing this to On enabled multiple requests to be made per TCP connection and prevented the DNS lookups for each item.

We've found loading times are now a half to a third of what they used to be, and the DNS requests aren't showing up in Firebug anymore.
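
For anyone looking for the relevant httpd.conf directives, the change is along these lines (the request count and timeout values here are just common defaults, not recommendations):

KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 5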

More info:

http://httpd.apache.org/docs/current/mod/core.html#keepalive

Beerey

Are you logging the domain name, or doing some other reverse DNS lookup for the users' IP addresses?

It's also often a problem when you're using domain names instead of IP addresses somewhere in your Apache config files.
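
If it is the logging, check the HostnameLookups directive in your Apache config; reverse lookups for the access log are avoided when it reads:

HostnameLookups Off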

Chris Lercher
  • No. HostnameLookups is set to Off and looking through both the access log and combined logs, they are only logging IPs. – Travis Mar 17 '10 at 18:41

What are your ping times to your DNS servers? If you've got high latency to your DNS servers, then you will get high-latency DNS lookups. If your DNS servers are overloaded, think about adding a caching name server to your network; this will improve performance.
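
As a rough sketch (dnsmasq is just one option; nscd or a local BIND instance would also work), a local caching resolver on Fedora can be set up with:

yum install dnsmasq
chkconfig dnsmasq on
service dnsmasq start

and then putting nameserver 127.0.0.1 at the top of /etc/resolv.conf so lookups hit the local cache first.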

Your gateway can also be configured to give DNS traffic higher priority, thus keeping lookups fast.

I've never used a Windows DNS server before, but maybe you can explore the Unix route. Low-latency, high-concurrency servers are what Unix is really good at.

The Unix Janitor
  • I pinged the following 4 servers (may be just 2 servers, can't tell): the 2 IPs in my resolv.conf (0ms ping) and the 2 nameservers for the domain in question (0ms). Seems like it has to be something system-based... – Travis Mar 18 '10 at 13:09

Check the paths you're using for your content. Are you using absolute or relative paths, and are you using an FQDN for the content?

Example:

<img src="http://mysite.mydomain.com/mypic.png" />

instead of:

<img src="mypic.png" />

Depending on your flavour of DNS, and other factors, the fully qualified domain name might trigger a DNS lookup every time.

Maximus Minimus
  • The site in question was built using WordPress, and WP sets variables for use within your layouts to refer to files within your "template directory", "stylesheet directory", etc. Those variables, by default, have the FQDN built in. I have adjusted those variables and I am now using path-based sources (i.e. /images/blah.png). There has been no change in the DNS times from making that adjustment. – Travis Mar 18 '10 at 13:32
  • Any chance that the setting is cached and hasn't taken effect yet? If they were absolute URLs previously, that sounds like a good lead as to why DNS is used on each request. If you view-source on the webpage, are the URLs relative or absolute? – Scott Forsyth Mar 19 '10 at 04:17
  • The URLs are relative now. The changes took effect immediately after adjusting the script and I have turned caching off on my browser when testing everything, to ensure I am accessing it as a user would when they first visit the site. A note on the caching: I've visited other sites with the same "no-cache" settings and it has not done a DNS lookup for every file, so I know that setting on my browser is not causing the multiple DNS lookups. – Travis Mar 19 '10 at 15:15

Could this be caused by the Keep-Alive issue described here: http://linuxmafia.com/pipermail/sf-lug/2010q2/007698.html ?

For some background, see section 14.10 here: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html and also (especially section 8.1.2) here: http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html

Basically, HTTP version 1.1 uses persistent connections by default. I had the same issue as you, and it turned out that my local web server was forcing the browser to close the connection (issuing a "Connection: close" header in the response).
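
A quick way to check for that (the URL is just a placeholder) is to request only the headers and look for Connection: close in the response:

curl -I http://www.example.com/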

thoughtcriminal

This sounds more like a client issue than a server issue. The server almost certainly isn't doing DNS lookups - it's whatever client you're using to connect to it. If I'm wrong on that point, you should definitely clarify that, because the post reads as if it's the server doing the looking up, and that would simply be creepy. :)

Guess: your client browser is configured for automatic proxy detection, and your WPAD.dat / PAC file uses a method that requires DNS resolution - something like isInNet, dnsResolve, or similar.

This might be combined with a disabled DNS cache on the client.

On Windows, IE used to have a built-in DNS cache, but I think that's subsequently gone away in favour of using the Windows DNS Client (dnscache) service. No idea about other browsers or platforms. (Have you tried other browsers?)

So, my $0.02: turn off all proxy settings, and see what you get. Or turn on an explicit proxy and off autodetection, and the proxy becomes responsible for all name resolution. Fun!

TristanK

I'm seeing the same behavior in Firebug for an external site that I'm analyzing. AFAICT, there's nothing wrong with the DNS server - dig shows a completely reasonable TTL. But for some reason, FF is taking hundreds of milliseconds to do the DNS lookup portion of a request. Maybe a bug in Firebug?

  • Hmm! The site I'm looking at has many images, and it looks like the DNS lookup times get longer in chunks of 6 image requests - I know FF will only open a certain number of connections to a web server. I wonder if the time FF spends waiting for existing connections to close is logged as "DNS Lookup" by Firebug? – Firebus Dec 01 '10 at 01:42
  • Hi Firebus, and welcome to Server Fault. Please keep in mind that this is not a forum, and "me too" posts don't belong in the answer section. If you have an ANSWER, then feel free to put it in the answer section. Otherwise, please use the "add comment" feature to add a comment to the original question or some other answer. Following these basic rules will help to keep Server Fault a great place to find answers to complex questions. Thanks! – Jed Daniels Dec 01 '10 at 02:03