18 March 2020
CoreDNS cannot resolve external domain names
You have just initialized a Kubernetes cluster, got Weave running, and CoreDNS is up as well. Everything works until you try to reach the external network from within one of the pods: external names do not resolve, yet there is no visible indication that anything is wrong, since all pods are running and behaving normally. If you check the logs of your CoreDNS pod, you will probably see something like this:
$ kubectl logs coredns-84ddd9d996-nrwpn -n kube-system
2020-03-17T15:35:29.849Z [ERROR] plugin/errors: 2 domain.name. A: dial udp [2a02:1800:100::43:1]:53: connect: cannot assign requested address
2020-03-17T15:35:29.850Z [ERROR] plugin/errors: 2 domain.name. AAAA: dial udp [2a02:1800:100::43:1]:53: connect: cannot assign requested address
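You can also reproduce the symptom from inside a pod. A quick way, borrowed from the Kubernetes DNS debugging documentation, is to start the dnsutils pod and try to resolve an external name (the manifest URL is the one from those docs; the exact failure output will vary):
kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
kubectl exec -ti dnsutils -- nslookup google.com
# ;; connection timed out; no servers could be reached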
Googling the error leads to a lot of old, obsolete issues, but the solution is far simpler. If you check the CoreDNS ConfigMap, you will notice that it forwards everything outside the cluster domain to the local /etc/resolv.conf of the node where the pod is running:
kubectl get configmap coredns -n kube-system -o yaml
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2020-03-16T23:30:39Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "180"
  selfLink: /api/v1/namespaces/kube-system/configmaps/coredns
  uid: e43fd88d-2539-45ba-9353-6b6e407e5044
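In this setup the forwarding target turned out to be an IPv6 nameserver that the pods could not use, which matches the dial udp [2a02:1800:100::43:1]:53 error above. You can check what CoreDNS is actually forwarding to by looking at the file on the node itself (the address below is only an illustration; your file will differ):
# run on the node where the CoreDNS pod is scheduled
cat /etc/resolv.conf
nameserver 2a02:1800:100::43:1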
This is surprising and not really maintainable, since changing the upstream resolvers would mean editing resolv.conf on every node. Fortunately, you can change the forwarders in the ConfigMap itself and then force the CoreDNS deployment to pick up the new configuration:
# Store the ConfigMap into yaml file
kubectl get configmap coredns -n kube-system -o yaml > coredns.yaml
# Edit the ConfigMap: strip the metadata down to just name and namespace,
# and change "forward . /etc/resolv.conf" to
# "forward . 8.8.8.8 8.8.4.4"
# (be aware of the two spaces after the dot)
vim coredns.yaml
# Deploy the new ConfigMap
kubectl apply -f coredns.yaml
# Force-update the CoreDNS deployment (review the script below before executing it!)
./force-update-deployment coredns -n kube-system
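After the edit, the forward line inside the Corefile block of coredns.yaml should look like this (Google's public resolvers are just an example; any DNS servers reachable from the pods will do):
        forward . 8.8.8.8 8.8.4.4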
The force-update-deployment helper used above comes from the zlabjp/kubernetes-scripts repository; download it and make it executable first:
wget https://raw.githubusercontent.com/zlabjp/kubernetes-scripts/master/force-update-deployment
chmod +x force-update-deployment
Once the deployment is force-updated, new CoreDNS pods spawn with the corrected configuration and external name resolution works again.
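If your kubectl is recent enough (v1.15 or newer), a plain rollout restart achieves the same result without the helper script:
kubectl rollout restart deployment coredns -n kube-system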
You can test this the same way the official documentation does:
kubectl exec -ti dnsutils -- nslookup the-mori.com
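If the new forwarders are reachable, the lookup should now return an answer, roughly like this (the server is your cluster DNS service IP; the resolved address here is just an illustration):
Server:    10.96.0.10
Address:   10.96.0.10#53

Non-authoritative answer:
Name:   the-mori.com
Address: 203.0.113.10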