```
[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [your.hostname1.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 hostname1-ip]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [your.hostname1.com localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [your.hostname1.com localhost] and IPs [hostname1-ip 127.0.0.1 ::1]
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 20.502026 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node your.hostname1.com as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node your.hostname1.com as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "your.hostname1.com" as an annotation
[bootstraptoken] using token: gnafk2.7b1lq8543rhbcsbz
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
```
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
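The commands kubeadm prints at this point (as of v1.12) copy the admin kubeconfig into the current user's home directory so that `kubectl` can find it:

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```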
You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node as root:
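The join command kubeadm prints has the following shape; the token and hash below are placeholders, not values from this cluster:

```shell
# Run as root on each node to be added (substitute the real token and CA hash)
kubeadm join hostname1-ip:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```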
```
[preflight] running pre-flight checks
[discovery] Trying to connect to API Server "hostname1-ip:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://hostname1-ip:6443"
[discovery] Requesting info from "https://hostname1-ip:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "hostname1-ip:6443"
[discovery] Successfully established connection with API Server "hostname1-ip:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "your.hostname3.com" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
```
Seeing "This node has joined the cluster" confirms that the node was added successfully.
Configuring the Nodes
Run `kubectl get nodes` on the master node to see the status of each node:
```
NAME                 STATUS     ROLES    AGE    VERSION
your.hostname3.com   NotReady   <none>   19m    v1.12.2
your.hostname2.com   NotReady   <none>   6s     v1.12.2
your.hostname1.com   Ready      master   110m   v1.12.2
```
You will also see that some Pods are not running properly (here, two kube-proxy and two kube-flannel Pods). Inspect one of them with `kubectl describe pod kube-proxy-96kzl --namespace=kube-system`. The Events section at the bottom of the output records some clues:
```
Events:
  Type     Reason                  Age                  From                         Message
  ----     ------                  ----                 ----                         -------
  Warning  DNSConfigForming        10m (x151 over 80m)  kubelet, your.hostname3.com  Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 127.0.0.1 xx.xx.xx.xx xx.xx.xx.xx
  Warning  FailedCreatePodSandBox  46s (x172 over 80m)  kubelet, your.hostname3.com  Failed create pod sandbox: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.1": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
```
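The second warning means the node cannot reach k8s.gcr.io to pull the pause image. One common workaround is to pull the image from a mirror you can reach and retag it under the expected name; the mirror repository below is an assumption, substitute whichever mirror is available to you:

```shell
# Pull the pause image from a reachable mirror (hypothetical mirror name)
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
# Retag it so the kubelet finds it under the name it expects
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
```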
```
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_sh ip_vs ip_vs_rr ip_vs_wrr] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support
```
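To clear this warning, the missing modules can be loaded before running kubeadm; the module names are taken directly from the warning itself:

```shell
# Load the IPVS-related kernel modules listed in the warning (requires root)
for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
    modprobe $mod
done
# Verify that they are now loaded
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
```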
```
[discovery] Failed to connect to API Server "hostname1-ip:6443": token id "ztsrrn" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
```
This error appeared while adding a node and means the bootstrap token has expired. Run `kubeadm token create` on the master node to generate a new token. You also need to re-fetch the CA hash, otherwise you will hit a `cluster CA found in cluster-info configmap is invalid` error; get it with `openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'`. With the new token and CA hash in hand, simply join the node again.
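The full recovery sequence can be sketched as follows; the token and hash in the join command are placeholders for the values produced by the first two steps:

```shell
# On the master: issue a fresh bootstrap token
kubeadm token create

# On the master: recompute the CA public-key hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'

# On the node: rejoin with the new values
kubeadm join hostname1-ip:6443 --token <new-token> \
    --discovery-token-ca-cert-hash sha256:<new-hash>
```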
```
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
```
The kubelet had not been configured, so the TLS certificates did not match.
The node has manually pulled the required images, but the Pod status still does not change
In this case the Pod is probably stuck somewhere. Delete the stuck Pod with `kubectl delete pod xxx --namespace=kube-system`; the cluster will automatically recreate it, and the new Pod will then use the manually downloaded images.
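For example, for the stuck kube-proxy Pod inspected earlier:

```shell
# Delete the stuck Pod; its controller (here a DaemonSet) recreates it
kubectl delete pod kube-proxy-96kzl --namespace=kube-system
# Watch the replacement come up using the locally available images
kubectl get pods --namespace=kube-system -w
```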