    Aug  9 23:14:45 dnsmasq[11657]: reply registry-1.docker.io is 54.88.231.116
    Aug  9 23:14:45 dnsmasq[11657]: reply registry-1.docker.io is 100.24.246.89
    Aug  9 23:14:45 dnsmasq[11657]: reply registry-1.docker.io is 34.197.189.129
    Aug  9 23:14:45 dnsmasq[11657]: reply registry-1.docker.io is 3.221.133.86
    Aug  9 23:14:45 dnsmasq[11657]: reply registry-1.docker.io is 3.224.11.4
    Aug  9 23:14:45 dnsmasq[11657]: reply registry-1.docker.io is 54.210.105.17
    Aug  9 23:14:50 dnsmasq[11657]: query[A] gitlab.mydomain.com.home from 192.168.1.20
    Aug  9 23:14:50 dnsmasq[11657]: forwarded gitlab.mydomain.com.home to 192.168.1.2
    Aug  9 23:14:50 dnsmasq[11657]: reply gitlab.mydomain.com.home is NXDOMAIN
    Aug  9 23:14:50 dnsmasq[11657]: query[AAAA] gitlab.mydomain.com.home from 192.168.1.20
    Aug  9 23:14:50 dnsmasq[11657]: forwarded gitlab.mydomain.com.home to 192.168.1.2
    Aug  9 23:14:50 dnsmasq[11657]: reply gitlab.mydomain.com.home is NODATA-IPv6
    Aug  9 23:14:51 dnsmasq[11657]: query[A] registry.mydomain.com.home from 192.168.1.20
    Aug  9 23:14:51 dnsmasq[11657]: forwarded registry.mydomain.com.home to 192.168.1.2
    Aug  9 23:14:51 dnsmasq[11657]: query[AAAA] registry.mydomain.com.home from 192.168.1.20
    Aug  9 23:14:51 dnsmasq[11657]: forwarded registry.mydomain.com.home to 192.168.1.2
    Aug  9 23:14:51 dnsmasq[11657]: reply registry.mydomain.com.home is NXDOMAIN
    Aug  9 23:14:51 dnsmasq[11657]: reply registry.mydomain.com.home is NODATA-IPv6
    Aug  9 23:14:51 dnsmasq[11657]: query[AAAA] registry.mydomain.com.home from 192.168.1.21
    Aug  9 23:14:51 dnsmasq[11657]: cached registry.mydomain.com.home is NODATA-IPv6
    Aug  9 23:14:51 dnsmasq[11657]: query[A] gitlab.mydomain.com.home from 192.168.1.21
    Aug  9 23:14:51 dnsmasq[11657]: cached gitlab.mydomain.com.home is NXDOMAIN
    Aug  9 23:14:52 dnsmasq[11657]: query[A] registry.mydomain.com.home from 192.168.1.21
    Aug  9 23:14:52 dnsmasq[11657]: cached registry.mydomain.com.home is NXDOMAIN
    Aug  9 23:14:52 dnsmasq[11657]: query[A] registry-1.docker.io.home from 192.168.1.21
    Aug  9 23:14:52 dnsmasq[11657]: forwarded registry-1.docker.io.home to 192.168.1.2
    Aug  9 23:14:52 dnsmasq[11657]: query[AAAA] registry-1.docker.io.home from 192.168.1.20
    Aug  9 23:14:52 dnsmasq[11657]: forwarded registry-1.docker.io.home to 192.168.1.2
    Aug  9 23:14:52 dnsmasq[11657]: reply registry-1.docker.io.home is NXDOMAIN
    Aug  9 23:14:52 dnsmasq[11657]: reply registry-1.docker.io.home is NODATA-IPv6

These requests come from a Kubernetes pod. Inside the pod, its resolver config is:

bash-4.4$ cat /etc/resolv.conf
nameserver 10.96.0.10
search gitlab-managed-apps.svc.cluster.local svc.cluster.local cluster.local home
options ndots:5
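With `ndots:5`, any name containing fewer than five dots is tried against every entry in the search list before being queried as-is. A minimal sketch of that candidate ordering (this illustrates glibc-style search behavior; it is not what any resolver literally runs):

```shell
# Sketch of the resolver's candidate ordering for a given ndots
# value and search list (illustration only).
candidates() {
  name=$1; ndots=$2; shift 2
  dots=$(printf '%s' "$name" | awk -F. '{print NF-1}')
  # Enough dots: the name is tried as an absolute query first.
  if [ "$dots" -ge "$ndots" ]; then printf '%s\n' "$name"; fi
  # Each search domain is appended in order.
  for d in "$@"; do printf '%s.%s\n' "$name" "$d"; done
  # Too few dots: the absolute query only happens last.
  if [ "$dots" -lt "$ndots" ]; then printf '%s\n' "$name"; fi
}

# The pod's search list from /etc/resolv.conf above:
candidates registry.mydomain.com 5 \
  gitlab-managed-apps.svc.cluster.local svc.cluster.local cluster.local home
# registry.mydomain.com.gitlab-managed-apps.svc.cluster.local
# registry.mydomain.com.svc.cluster.local
# registry.mydomain.com.cluster.local
# registry.mydomain.com.home
# registry.mydomain.com
```

Since registry.mydomain.com has only 2 dots (below the ndots:5 threshold), all four search domains, including home, are tried before the bare name, which matches the registry.mydomain.com.home queries showing up in the Pi-hole log.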

If I do an nslookup, it seems to work:

bash-4.4$ nslookup registry.mydomain.com
nslookup: can't resolve '(null)': Name does not resolve

Name:      registry.mydomain.com
Address 1: 104.18.61.234
Address 2: 104.18.60.234
Address 3: 2606:4700:30::6812:3dea
Address 4: 2606:4700:30::6812:3cea
bash-4.4$

But I still get .home appended:

Aug  9 23:44:13 dnsmasq[11657]: query[AAAA] gitlab.mydomain.com.home from 192.168.1.20
Aug  9 23:44:13 dnsmasq[11657]: cached gitlab.mydomain.com.home is NXDOMAIN
Aug  9 23:44:13 dnsmasq[11657]: query[A] gitlab.mydomain.com.home from 192.168.1.21
Aug  9 23:44:13 dnsmasq[11657]: cached gitlab.mydomain.com.home is NODATA-IPv4

The Kubernetes host's DNS config is:

root@node-a:/etc$ cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
#     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
# 127.0.0.53 is the systemd-resolved stub resolver.
# run "systemd-resolve --status" to see details about the actual nameservers.

nameserver 127.0.0.53
search home

I'm using CoreDNS, with the following config:

apiVersion: v1
data:
  Corefile: |
    mydomain.com {
        log
        forward . 1.1.1.1 1.0.0.1 9.9.9.9
        reload
    }
    .:53 {
        log
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           upstream
           fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        #proxy . /etc/resolv.conf
        forward . 192.168.1.2:53 {
            except mydomain.com
        }
        cache 30
        loop
        reload
    }

I've tried editing configs to point to 1.1.1.1, no dice. For some reason, something somewhere is appending .home to the end of domain names:

tail -f pihole.log | grep alpine
Aug 10 00:03:59 dnsmasq[11657]: query[AAAA] dl-cdn.alpinelinux.org.home from 192.168.1.20
Aug 10 00:03:59 dnsmasq[11657]: cached dl-cdn.alpinelinux.org.home is NXDOMAIN
Aug 10 00:03:59 dnsmasq[11657]: query[A] dl-cdn.alpinelinux.org.home from 192.168.1.20
Aug 10 00:03:59 dnsmasq[11657]: cached dl-cdn.alpinelinux.org.home is NODATA-IPv4
Aug 10 00:03:59 dnsmasq[11657]: query[A] dl-cdn.alpinelinux.org.home from 192.168.1.21
Aug 10 00:03:59 dnsmasq[11657]: cached dl-cdn.alpinelinux.org.home is NODATA-IPv4
Aug 10 00:03:59 dnsmasq[11657]: query[AAAA] dl-cdn.alpinelinux.org.home from 192.168.1.21
Aug 10 00:03:59 dnsmasq[11657]: cached dl-cdn.alpinelinux.org.home is NXDOMAIN

My DNS path is as follows:

Pod -> CoreDNS -> Pi-hole (for ad blocking) -> BIND9 -> cloudflared (1.1.1.1/1.0.0.1)

Given that I see .home being appended (and failing to resolve) in Pi-hole, I don't think the problem is BIND9 or cloudflared; it's either the pod config, CoreDNS, or Pi-hole. Where is this coming from?

I've somewhat worked around the problem (so far) by changing the GitLab Runner deployment to use the following DNS properties:

dnsConfig:
  nameservers:
    - 1.1.1.1
    - 9.9.9.9
  options:
    - name: ndots
      value: "2"
    - name: edns0
dnsPolicy: None
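For reference, dnsPolicy and dnsConfig are siblings in the pod spec: dnsPolicy sits at the same level as dnsConfig, not nested inside it. A sketch of the placement (hypothetical pod name and container, following the Kubernetes PodSpec layout):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-test            # hypothetical pod for illustration
spec:
  dnsPolicy: None           # sibling of dnsConfig, not nested under it
  dnsConfig:
    nameservers:
      - 1.1.1.1
      - 9.9.9.9
    options:
      - name: ndots
        value: "2"
      - name: edns0
  containers:
    - name: test
      image: alpine
      command: ["sleep", "3600"]
```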

Thanks!

Evan R.
    Why `ndots 5` in the first place? It sounds like this environment is deliberately set up to trigger the `search` functionality when resolving basically any name. (Which would appear to be what you are asking about how to stop...?) – Håkan Lindqvist Aug 09 '19 at 23:23
    This is expected behavior; that's how the resolver works. 'ndots:5' means 'treat anything with fewer than 5 dots as potentially not fully qualified' (append your host's search domains if the absolute lookup fails). This is a common complaint/problem people run into with Kubernetes using such a high value. See [Kubernetes pods /etc/resolv.conf ndots:5 option and why it may negatively affect your application performances](https://pracucci.com/kubernetes-dns-resolution-ndots-options-and-why-it-may-affect-application-performances.html) for a detailed explanation and some possible workarounds. –  Aug 09 '19 at 23:25
  • @yoonix, could you post your comment as answer? It will make your answer more visible for people who have similar issue. – PjoterS Aug 12 '19 at 10:00

1 Answer


Posting this answer based on @yoonix's comment, as a community wiki, for better visibility to other users with the same issue.

In the OP's case, ndots was set to 5 (the default value is 1). This means that when a name contains fewer than 5 dots, the resolver will try it sequentially against all local search domains first and, only if none of those succeed, will resolve it as an absolute name last.

From the resolv.conf(5) man page:

    ndots:n

    sets a threshold for the number of dots which must appear in a name before an initial absolute query will be made. The default for n is 1, meaning that if there are any dots in a name, the name will be tried first as an absolute name before any search list elements are appended to it.

In the OP's update, the ndots value was set to 2, and it works now:

  options:
    - name: ndots
      value: "2"
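With ndots:2, a name such as registry.mydomain.com already contains two dots, so it meets the threshold and is queried as an absolute name first; the search list (and therefore .home) is only tried if that absolute query fails. A quick sketch of the dot-count check:

```shell
# A name with at least ndots dots is tried as an absolute query first.
name=registry.mydomain.com
ndots=2
dots=$(printf '%s' "$name" | awk -F. '{print NF-1}')
if [ "$dots" -ge "$ndots" ]; then
  echo "$name: $dots dots >= ndots:$ndots, queried absolutely first"
fi
```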

More detailed information about ndots can be found in the resolv.conf(5) man page.

PjoterS