I created a 5-node cluster in AWS with kubeadm, initialized as follows:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
Then I applied the bridget manifest, simply downloading the configuration:
curl -O https://raw.githubusercontent.com/kvaps/bridget/master/bridget.yaml
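and applying it as-is, something along the lines of:
kubectl apply -f bridget.yaml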
The bridget pods are running:
kubectl -n kube-system get pod -l app=bridget -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
bridget-49l64 1/1 Running 0 4m37s 10.20.1.62 ip-10-20-1-62 <none> <none>
bridget-4prxh 1/1 Running 0 4m37s 10.20.1.172 ip-10-20-1-172 <none> <none>
bridget-gvz57 1/1 Running 0 4m37s 10.20.1.95 ip-10-20-1-95 <none> <none>
bridget-qz59n 1/1 Running 0 4m37s 10.20.1.157 ip-10-20-1-157 <none> <none>
bridget-z5j2d 1/1 Running 0 4m37s 10.20.1.15 ip-10-20-1-15 <none> <none>
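In case it helps, the bridget logs can be pulled for comparison with something like:
kubectl -n kube-system logs -l app=bridget --tail=50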
But the coredns pods are not passing their health checks:
coredns-66bff467f8-fmc4f 0/1 Running 0 5m10s 10.244.1.2 ip-10-20-1-95 <none> <none>
coredns-66bff467f8-jn4xj 0/1 Running 0 9m39s 10.244.2.2 ip-10-20-1-172 <none> <none>
It looks like they are having trouble reaching the Kubernetes API via the kubernetes service ClusterIP:
kubectl logs coredns-66bff467f8-fmc4f -n kube-system
Output:
I0625 22:08:45.801746 1 trace.go:116] Trace[60780408]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-06-25 22:08:15.801233016 +0000 UTC m=+310.027901299) (total time: 30.000480721s):
Trace[60780408]: [30.000480721s] [30.000480721s] END
E0625 22:08:45.801770 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0625 22:08:45.802757 1 trace.go:116] Trace[340007387]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-06-25 22:08:15.802351838 +0000 UTC m=+310.029020177) (total time: 30.000388737s):
Trace[340007387]: [30.000388737s] [30.000388737s] END
E0625 22:08:45.802772 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0625 22:08:45.803735 1 trace.go:116] Trace[1304066831]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-06-25 22:08:15.803378498 +0000 UTC m=+310.030046784) (total time: 30.000337496s):
Trace[1304066831]: [30.000337496s] [30.000337496s] END
E0625 22:08:45.803753 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
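To narrow this down, the same ClusterIP can be probed from the node itself; assuming curl is available there, a quick check is something like:
# 10.96.0.1:443 is the kubernetes service ClusterIP from the logs above;
# kube-proxy should DNAT this to the real apiserver address
curl -k --connect-timeout 5 https://10.96.0.1:443/version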
kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.4", GitCommit:"c96aede7b5205121079932896c4ad89bb93260af", GitTreeState:"clean", BuildDate:"2020-06-17T11:41:22Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.4", GitCommit:"c96aede7b5205121079932896c4ad89bb93260af", GitTreeState:"clean", BuildDate:"2020-06-17T11:33:59Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
ifconfig on one of the nodes:
ifconfig
cbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.244.0.1 netmask 255.255.0.0 broadcast 0.0.0.0
ether d6:60:82:58:3b:0c txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 3 bytes 270 (270.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
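For reference, the routes and bridge address on the same node can be dumped with:
ip route
ip addr show cbr0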
Describe output for the coredns pod:
kubectl describe po coredns-66bff467f8-fmc4f -n kube-system
Name: coredns-66bff467f8-fmc4f
Namespace: kube-system
Priority: 2000000000
Priority Class Name: system-cluster-critical
Node: ip-10-20-1-95/10.20.1.95
Start Time: Thu, 25 Jun 2020 22:03:02 +0000
Labels: k8s-app=kube-dns
pod-template-hash=66bff467f8
Annotations: <none>
Status: Running
IP: 10.244.1.2
IPs:
IP: 10.244.1.2
Controlled By: ReplicaSet/coredns-66bff467f8
Containers:
coredns:
Container ID: docker://9b763857bff24d1820d23d4e31c42944d93ab4059c647ef4ea4d02a767a586f3
Image: k8s.gcr.io/coredns:1.6.7
Image ID: docker-pullable://k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800
Ports: 53/UDP, 53/TCP, 9153/TCP
Host Ports: 0/UDP, 0/TCP, 0/TCP
Args:
-conf
/etc/coredns/Corefile
State: Running
Started: Thu, 25 Jun 2020 22:03:05 +0000
Ready: False
Restart Count: 0
Limits:
memory: 170Mi
Requests:
cpu: 100m
memory: 70Mi
Liveness: http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/etc/coredns from config-volume (ro)
/var/run/secrets/kubernetes.io/serviceaccount from coredns-token-88rpd (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: coredns
Optional: false
coredns-token-88rpd:
Type: Secret (a volume populated by a Secret)
SecretName: coredns-token-88rpd
Optional: false
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=linux
Tolerations: CriticalAddonsOnly
node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 8m35s default-scheduler Successfully assigned kube-system/coredns-66bff467f8-fmc4f to ip-10-20-1-95
Normal Pulling 8m34s kubelet, ip-10-20-1-95 Pulling image "k8s.gcr.io/coredns:1.6.7"
Normal Pulled 8m32s kubelet, ip-10-20-1-95 Successfully pulled image "k8s.gcr.io/coredns:1.6.7"
Normal Created 8m32s kubelet, ip-10-20-1-95 Created container coredns
Normal Started 8m32s kubelet, ip-10-20-1-95 Started container coredns
Warning Unhealthy 3m28s (x31 over 8m28s) kubelet, ip-10-20-1-95 Readiness probe failed: HTTP probe failed with statuscode: 503
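The readiness and liveness endpoints from the container spec above can also be hit directly from the node to see the raw probe responses:
curl -s -o /dev/null -w '%{http_code}\n' http://10.244.1.2:8181/ready   # readiness endpoint (returning 503 per the events above)
curl -s http://10.244.1.2:8080/health                                   # liveness endpoint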