0. Environment

IP Address    Host name      Node
10.0.0.63     k8s-master1    master1
10.0.0.65     k8s-node1      node1
10.0.0.66     k8s-node2      node2

1. Overview

kubeadm is a tool released by the official Kubernetes community for rapidly deploying a Kubernetes cluster.
The environment it deploys is suitable for learning and experimenting with k8s features.

2. Installation requirements

3 clean CentOS virtual machines, version 7.x or above
 Machine configuration: 2 CPU cores and 4 GB RAM or more, x3 machines
 Network connectivity between all servers
 Swap partition disabled
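A quick way to confirm each machine meets these requirements (a hedged sketch, run on every node; not part of the original steps):
nproc            # should print 2 or more
free -h          # total memory should be 4G or more; the Swap line should read 0B once swap is disabled later
cat /proc/swaps  # should list no active swap devices after swap is disabled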

3. Learning goals

Learn how to install a cluster with kubeadm, as a convenient base for studying k8s.

4. Environmental preparation

# 1. Turn off the firewall
systemctl stop firewalld
systemctl disable firewalld

# 2. Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config 
setenforce 0

# 3. Disable swap
swapoff -a  # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab    # permanent


# 4.  Server planning 
cat > /etc/hosts << EOF
10.0.0.63 k8s-master1
#10.0.0.64 k8s-master2
10.0.0.65 k8s-node1
10.0.0.66 k8s-node2
EOF

#5. Hostname configuration (run on each node with its own hostname):
 hostnamectl set-hostname  k8s-master1
 bash

#6.  Time synchronization configuration 
yum install -y ntpdate
ntpdate time.windows.com

# Let bridged traffic pass through iptables
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
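# If sysctl --system warns that the net.bridge keys do not exist, the br_netfilter kernel
# module is probably not loaded yet. A hedged fix (not in the original steps):
modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf   # load the module on boot
sysctl --system
sysctl net.bridge.bridge-nf-call-iptables   # should now print: net.bridge.bridge-nf-call-iptables = 1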

#7. Periodic time synchronization (cron)
echo '*/5 * * * * /usr/sbin/ntpdate -u ntp.api.bz' >>/var/spool/cron/root
systemctl restart crond.service
crontab -l
# Everything above can be copied and pasted as-is, except the hostname, which must be set per node

5. Docker installation [install on all nodes]

# Add yum repositories
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -P /etc/yum.repos.d/ http://mirrors.aliyun.com/repo/epel-7.repo
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum clean all
yum install -y bash-completion.noarch

#  Install the specified version  
yum -y install docker-ce-18.09.9-3.el7

# Or list the available versions first
yum list docker-ce --showduplicates | sort -r

# Start docker
systemctl enable docker
systemctl start docker
systemctl status docker

6. Configure the docker cgroup driver [all nodes]

rm -f /etc/docker/*
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://ajvcw8qn.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
systemctl enable docker.service
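# A quick check (not in the original steps) that the cgroup driver change took effect:
docker info | grep -i "cgroup driver"
# Expected output: Cgroup Driver: systemd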
  
 Pull the flannel image:
  docker pull lizhenliang/flannel:v0.11.0-amd64

7. Registry mirror acceleration [all nodes]

curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io
systemctl restart docker

# Having too many mirror sources can cause problems; if something breaks, remove one (keep a .bak copy) and try again
# Keep this one: curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io

 Alternatively, configure the Alibaba Cloud accelerator: just add your own accelerator address from the console:
https://cr.console.aliyun.com/cn-hangzhou/instances/mirrors

8. Kubernetes yum repository configuration [all nodes]

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

9. Install kubeadm, kubelet and kubectl [all nodes]

yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
systemctl enable kubelet

10. Deploy the Kubernetes master [master, 10.0.0.63]

kubeadm init \
  --apiserver-advertise-address=10.0.0.63 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.18.0 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.244.0.0/16
  
# After a successful init, set up the kubectl config [master]:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Initialization prints a join command containing a token:

kubeadm join 10.0.0.63:6443 --token 2cdgi6.79j20fhly6xpgfud \
    --discovery-token-ca-cert-hash sha256:3d847b858ed649244b4110d4d60ffd57f43856f42ca9c22e12ca33946673ccb4

Remember this token; it is needed later when the nodes join the cluster.

Note: a failed init looks like this:

W0507 00:43:52.681429    3118 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR NumCPU]: the number of available CPUs 1 is less than the required 2
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher

10.1 Error handling

Error 1: the docker cgroup driver must be changed to systemd. Add the following to /etc/docker/daemon.json:   "exec-opts": ["native.cgroupdriver=systemd"]

Error 2: [ERROR NumCPU]: the number of available CPUs 1 is less than the required 2
This error means the VM has too few CPUs; raise the machine to the 2-core / 4 GB configuration and the check passes.

Error 3: joining the cluster fails with:

W0507 01:19:49.406337   26642 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR DirAvailable--etc-kubernetes-manifests]: /etc/kubernetes/manifests is not empty
	[ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
[root@k8s-master2 yum.repos.d]# kubeadm join 10.0.0.63:6443 --token q8bfij.fipmsxdgv8sgcyq4 \
>     --discovery-token-ca-cert-hash sha256:26fc15b6e52385074810fdbbd53d1ba23269b39ca2e3ec3bac9376ed807b595c
>     --discovery-token-ca-cert-hash sha256:26fc15b6e52385074810fdbbd53d1ba23269b39ca2e3ec3bac9376ed807b595c
W0507 01:20:26.246981   26853 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR DirAvailable--etc-kubernetes-manifests]: /etc/kubernetes/manifests is not empty
	[ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher


 Solution:
 Run kubeadm reset on the node, then join again.
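A sketch of the full sequence on the failed node (the token and hash are placeholders; use the values printed by your own kubeadm init):

kubeadm reset -f        # clears the /etc/kubernetes/manifests, kubelet.conf and pki/ca.crt left by the earlier attempt
kubeadm join 10.0.0.63:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>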

10.2. kubectl Command tool configuration [master]

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Get node information 
# kubectl get nodes

[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS     ROLES    AGE     VERSION
k8s-master1   NotReady   master   2m59s   v1.18.0
k8s-node1     NotReady   <none>   86s     v1.18.0
k8s-node2     NotReady   <none>   85s     v1.18.0

# Seeing status from the other hosts shows the cluster is intact. The extra k8s-master2 was not joined because a multi-master setup is not being built here.

10.3 Install the network plug-in [master]

[Run directly on the master] Upload kube-flannel.yaml and apply it:
kubectl apply -f kube-flannel.yaml
kubectl get pods -n kube-system

 Download address :
https://www.chenleilei.net/soft/k8s/kube-flannel.yaml

[All pods must be up and Running; otherwise there is a problem.]
[root@k8s-master1 ~]# kubectl get pods -n kube-system
NAME                                  READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-5dq4s              1/1     Running   0          13m
coredns-7ff77c879f-v68pc              1/1     Running   0          13m
etcd-k8s-master1                      1/1     Running   0          13m
kube-apiserver-k8s-master1            1/1     Running   0          13m
kube-controller-manager-k8s-master1   1/1     Running   0          13m
kube-flannel-ds-amd64-2ktxw           1/1     Running   0          3m45s
kube-flannel-ds-amd64-fd2cb           1/1     Running   0          3m45s
kube-flannel-ds-amd64-hb2zr           1/1     Running   0          3m45s
kube-proxy-4vt8f                      1/1     Running   0          13m
kube-proxy-5nv5t                      1/1     Running   0          12m
kube-proxy-9fgzh                      1/1     Running   0          12m
kube-scheduler-k8s-master1            1/1     Running   0          13m


[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS   ROLES    AGE   VERSION
k8s-master1   Ready    master   14m   v1.18.0
k8s-node1     Ready    <none>   12m   v1.18.0
k8s-node2     Ready    <none>   12m   v1.18.0

11. Join node1 and node2 to the master

node1 and node2 cluster join configuration

 On each node to be joined, run the following command:
kubeadm join 10.0.0.63:6443 --token fs0uwh.7yuiawec8tov5igh \
    --discovery-token-ca-cert-hash sha256:471442895b5fb77174103553dc13a4b4681203fbff638e055ce244639342701d
    
# This join command was printed when the master was initialized; note that the CNI network plugin must be deployed first
# After joining successfully, check from the master node:
[root@k8s-master1 docker]#  kubectl get nodes
NAME          STATUS   ROLES    AGE   VERSION
k8s-master1   Ready    master   14m   v1.18.0
k8s-node1     Ready    <none>   12m   v1.18.0
k8s-node2     Ready    <none>   12m   v1.18.0

12. Token creation and query

 By default a token is valid for 24 hours and cannot be used after it expires. If a new token is needed, it can be regenerated on the master node with the following commands:

kubeadm token create
kubeadm token list
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
 result :
3d847b858ed649244b4110d4d60ffd57f43856f42ca9c22e12ca33946673ccb4


 Joining the cluster with the new token:
kubeadm join 10.0.0.63:6443 --discovery-token nuja6n.o3jrhsffiqs9swnu --discovery-token-ca-cert-hash sha256:3d847b858ed649244b4110d4d60ffd57f43856f42ca9c22e12ca33946673ccb4
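As a shortcut, kubeadm can also generate a fresh token and print the complete join command in one step (an optional alternative to the manual steps above):

kubeadm token create --print-join-command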

13. Install the dashboard UI

wget https://www.chenleilei.net/soft/k8s/dashboard.yaml
kubectl apply -f dashboard.yaml
[root@k8s-master1 ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.1.94.43     <none>        8000/TCP        7m58s
kubernetes-dashboard        NodePort    10.1.187.162   <none>        443:30001/TCP   7m58s


13.1 Access test

The dashboard page can be reached on port 30001 of any node in the cluster, e.g. https://10.0.0.63:30001, https://10.0.0.65:30001 or https://10.0.0.66:30001.
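A quick command-line check of the NodePort (the certificate is self-signed, so curl needs -k; just an illustration):

curl -k -I https://10.0.0.63:30001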

13.2 Obtain the dashboard token: create a service account and bind it to the built-in cluster-admin cluster role

# kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin
# kubectl describe secrets -n kubernetes-dashboard $(kubectl -n kubernetes-dashboard get secret | awk '/dashboard-admin/{print $1}')

 Copy the token into the token field on the dashboard login page, choose token login and sign in.

14. Verify that the cluster is working properly

 Three aspects confirm whether the cluster is healthy:
1. Applications can be deployed normally
2. The cluster network works
3. DNS resolution inside the cluster works

14.1 Verify application deployment and log queries

# Create an nginx application
kubectl create deployment  k8s-status-checke --image=nginx
# Expose port 80
kubectl expose deployment k8s-status-checke --port=80  --target-port=80 --type=NodePort

# Delete this deployment
kubectl delete deployment k8s-status-checke

# Query log :
[root@k8s-master1 ~]# kubectl logs -f nginx-f89759699-m5k5z

14.2 Verify the cluster network

1. Get an application's pod IP
[root@k8s-master1 ~]# kubectl get pods -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
pod/nginx   1/1     Running   0          25h   10.244.2.18   k8s-node2   <none>           <none>

2. Ping the pod IP from any node
[root@k8s-node1 ~]# ping 10.244.2.18
PING 10.244.2.18 (10.244.2.18) 56(84) bytes of data.
64 bytes from 10.244.2.18: icmp_seq=1 ttl=63 time=2.63 ms
64 bytes from 10.244.2.18: icmp_seq=2 ttl=63 time=0.515 ms

3. Access the application from a node
[root@k8s-master1 ~]# curl -I 10.244.2.18
HTTP/1.1 200 OK
Server: nginx/1.17.10
Date: Sun, 10 May 2020 13:19:02 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 14 Apr 2020 14:19:26 GMT
Connection: keep-alive
ETag: "5e95c66e-264"
Accept-Ranges: bytes

4. Check the access log
[root@k8s-master1 ~]# kubectl logs -f nginx
10.244.1.0 - - [10/May/2020:13:14:25 +0000] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36" "-"

14.3 Verify DNS resolution inside the cluster

 Check DNS:
[root@k8s-master1 ~]# kubectl get pods -n kube-system
NAME                                  READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-5dq4s              1/1     Running   1          4d   # coredns occasionally has problems
coredns-7ff77c879f-v68pc              1/1     Running   1          4d   # coredns occasionally has problems
etcd-k8s-master1                      1/1     Running   4          4d
kube-apiserver-k8s-master1            1/1     Running   3          4d
kube-controller-manager-k8s-master1   1/1     Running   3          4d
kube-flannel-ds-amd64-2ktxw           1/1     Running   1          4d
kube-flannel-ds-amd64-fd2cb           1/1     Running   1          4d
kube-flannel-ds-amd64-hb2zr           1/1     Running   4          4d
kube-proxy-4vt8f                      1/1     Running   4          4d
kube-proxy-5nv5t                      1/1     Running   2          4d
kube-proxy-9fgzh                      1/1     Running   2          4d
kube-scheduler-k8s-master1            1/1     Running   4          4d

# If coredns misbehaves, the fix is to recreate it:
1. Export its yaml
kubectl get deploy coredns -n kube-system -o yaml >coredns.yaml
2. Delete coredns
kubectl delete -f coredns.yaml

 Check :
[root@k8s-master1 ~]# kubectl get pods -n kube-system
NAME                                  READY   STATUS    RESTARTS   AGE
etcd-k8s-master1                      1/1     Running   4          4d
kube-apiserver-k8s-master1            1/1     Running   3          4d
kube-controller-manager-k8s-master1   1/1     Running   3          4d
kube-flannel-ds-amd64-2ktxw           1/1     Running   1          4d
kube-flannel-ds-amd64-fd2cb           1/1     Running   1          4d
kube-flannel-ds-amd64-hb2zr           1/1     Running   4          4d
kube-proxy-4vt8f                      1/1     Running   4          4d
kube-proxy-5nv5t                      1/1     Running   2          4d
kube-proxy-9fgzh                      1/1     Running   2          4d
kube-scheduler-k8s-master1            1/1     Running   4          4d

coredns has been deleted

3. Recreate coredns
kubectl apply -f coredns.yaml
[root@k8s-master1 ~]# kubectl get pods -n kube-system
NAME                                  READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-5mmjg              1/1     Running   0          13s
coredns-7ff77c879f-t74th              1/1     Running   0          13s
etcd-k8s-master1                      1/1     Running   4          4d
kube-apiserver-k8s-master1            1/1     Running   3          4d
kube-controller-manager-k8s-master1   1/1     Running   3          4d
kube-flannel-ds-amd64-2ktxw           1/1     Running   1          4d
kube-flannel-ds-amd64-fd2cb           1/1     Running   1          4d
kube-flannel-ds-amd64-hb2zr           1/1     Running   4          4d
kube-proxy-4vt8f                      1/1     Running   4          4d
kube-proxy-5nv5t                      1/1     Running   2          4d
kube-proxy-9fgzh                      1/1     Running   2          4d
kube-scheduler-k8s-master1            1/1     Running   4          4d
 Log review :
coredns-7ff77c879f-5mmjg:
[root@k8s-master1 ~]# kubectl logs coredns-7ff77c879f-5mmjg -n kube-system
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b

coredns-7ff77c879f-t74th:
[root@k8s-master1 ~]# kubectl  logs coredns-7ff77c879f-t74th -n kube-system
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b


# Create a test container in k8s to validate DNS
[root@k8s-master1 ~]# kubectl run -it --rm --image=busybox:1.28.4 sh
/ # nslookup kubernetes
Server:    10.1.0.10
Address 1: 10.1.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.1.0.1 kubernetes.default.svc.cluster.local
# nslookup successfully resolving the kubernetes service shows that DNS is working

15. Cluster certificate handling [solution for kubeadm deployments]

1. Delete the default secret, then create a new one from the self-signed certificates
kubectl delete secret kubernetes-dashboard-certs -n kubernetes-dashboard
kubectl create secret generic kubernetes-dashboard-certs \
--from-file=/etc/kubernetes/pki/apiserver.key --from-file=/etc/kubernetes/pki/apiserver.crt -n kubernetes-dashboard

 For a binary deployment, adjust the certificate paths here to wherever the certificates were stored.
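To confirm the new secret actually contains both certificate files, a quick sanity check (not part of the original steps):

kubectl describe secret kubernetes-dashboard-certs -n kubernetes-dashboard
# the Data section should list apiserver.crt and apiserver.key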

2. After the certificates are in place, modify the dashboard.yaml file and rebuild the dashboard
wget https://www.chenleilei.net/soft/k8s/recommended.yaml
vim recommended.yaml
 Find the 'kind: Deployment' section, then look for the args block containing these two lines:
- --auto-generate-certificates
- --namespace=kubernetes-dashboard

 Change it to the following [the two certificate lines are inserted in the middle]:
- --auto-generate-certificates
- --tls-key-file=apiserver.key
- --tls-cert-file=apiserver.crt
- --namespace=kubernetes-dashboard

[An already-modified copy can be used directly: wget https://www.chenleilei.net/soft/k8s/dashboard.yaml]

3. Reapply recommended.yaml after modifying it
kubectl apply -f recommended.yaml

 After applying, a rolling update of the dashboard is triggered; reopening the browser shows that the certificate is now being served and the page can be accessed again, though the browser still marks it as not secure.
[root@k8s-master1 ~]# kubectl get pods -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-694557449d-r9h5r   1/1     Running   0          2d1h
kubernetes-dashboard-5d8766c7cc-trdsv        1/1     Running   0          93s   <--- rolling update

4. Check the new access port:
 kubectl get svc -n kubernetes-dashboard
 [root@k8s-master1 ~]#  kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.1.187.60    <none>        8000/TCP        6m34s
kubernetes-dashboard        NodePort    10.1.242.240   <none>        443:31761/TCP   6m34s


5. Open it in Chrome and the page now loads
   #1. Note: if the login token has been forgotten, regenerate it
   # kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
   # kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin
  
   
   #2. Or query the previously created token and log in with it
    kubectl describe secrets -n kubernetes-dashboard $(kubectl -n kubernetes-dashboard get secret | awk '/dashboard-admin/{print $1}')

15.1 Browser screenshots before and after the certificate change:

Before replacement :

After replacement :

 After replacing the certificate, the browser shows a "proceed anyway (unsafe)" option that lets you continue to the page; without replacing the certificate that option is not offered.

15.2 Error handling :

15.2.1 Problem 1: a k8s-node fails to join

The node reports the following error when joining:
W0315 22:16:20.123204    5795 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

 Fix:
echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables
 Then rejoin:
kubeadm join 10.0.0.63:6443 --token 0dr1pw.ejybkufnjpalb8k6     --discovery-token-ca-cert-hash sha256:ca1aa9cb753a26d0185e3df410cad09d8ec4af4d7432d127f503f41bc2b14f2a
 The token here is the one generated on the kubeadm master.
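Note that writing to /proc only lasts until reboot. The persistent setting is the k8s.conf created during environment preparation; re-apply it with (a reminder, not a new step):

cat /etc/sysctl.d/k8s.conf    # should contain net.bridge.bridge-nf-call-iptables = 1
sysctl --system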

15.2.2 Problem 2: the dashboard web page cannot be accessed:

 Rebuild the dashboard
# Delete:
kubectl delete -f dashboard.yaml

# Recreate:
kubectl create -f dashboard.yaml
# Create the account:
kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
# Retrieve the login token:
kubectl describe secrets -n kubernetes-dashboard $(kubectl -n kubernetes-dashboard get secret | awk '/dashboard-admin/{print $1}')

 Then open the page and log in again.


 Use the following command to see which node each pod was scheduled to:
[root@k8s-master1 ~]# kubectl get pods -n kubernetes-dashboard -o wide
NAME                                         READY   STATUS    RESTARTS   AGE     IP           NODE       NOMINATED NODE  READINESS GATES
dashboard-metrics-scraper-694557449d-vnrvt   1/1     Running   0          8m56s   10.244.1.13  k8s-node1   <none>         <none>
kubernetes-dashboard-85fc8fbf64-t4cdw        1/1     Running   0          3m8s    10.244.2.18  k8s-node2   <none>         <none>

15.2.3 Problem 3: dashboard deployment fails

 This is usually a network problem; switch to another network (such as ***) and redeploy.
1. Either copy the manifest below, save it as dashboard.yaml, delete the original dashboard and redeploy
2. Or download it from my personal server: wget https://www.chenleilei.net/soft/k8s/recommended.yaml

3. Check the status for problems, e.g. whether the image was downloaded successfully
   kubectl get pods -n kubernetes-dashboard -o wide
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.0-beta8
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --tls-key-file=apiserver.key
            - --tls-cert-file=apiserver.crt
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "beta.kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.1
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "beta.kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

16. Deploy an nginx in k8s

[root@k8s-master1 ~]# kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort
service/nginx exposed
[root@k8s-master1 ~]# kubectl get pod,svc
NAME                        READY   STATUS             RESTARTS   AGE
pod/nginx-f89759699-dnfmg   0/1     ImagePullBackOff   0          3m41s

ImagePullBackOff error:
 Check the k8s events:  kubectl describe pod nginx-f89759699-dnfmg
 Result:
  Normal   Pulling    3m27s (x4 over 7m45s)  kubelet, k8s-node2  Pulling image "nginx"
  Warning  Failed     2m55s (x2 over 6m6s)   kubelet, k8s-node2  Failed to pull image "nginx": rpc error: code = Unknown desc = Get https://registry-1.docker.io/v2/library/nginx/manifests/sha256:cccef6d6bdea671c394956e24b0d0c44cd82dbe83f543a47fdc790fadea48422: net/http: TLS handshake timeout

 The pull fails inside docker, so the docker registry mirror needs to be changed
[root@k8s-master1 ~]# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://ajvcw8qn.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}

 Use docker on one of the nodes to pull nginx directly:
 The same error appears:
[root@k8s-node1 ~]# docker pull nginx
Using default tag: latest
latest: Pulling from library/nginx
54fec2fa59d0: Pulling fs layer 
4ede6f09aefe: Pulling fs layer 
f9dc69acb465: Pulling fs layer 
Get https://registry-1.docker.io/v2/: net/http: TLS handshake timeout  # the registry mirror has not taken effect yet

 After fixing the mirror configuration and restarting docker:
[root@k8s-master1 ~]# docker pull nginx
Using default tag: latest
latest: Pulling from library/nginx
54fec2fa59d0: Pull complete 
4ede6f09aefe: Pull complete 
f9dc69acb465: Pull complete 
Digest: sha256:86ae264c3f4acb99b2dee4d0098c40cb8c46dcf9e1148f05d3a51c4df6758c12
Status: Downloaded newer image for nginx:latest
docker.io/library/nginx:latest


 Run the deployment again:
kubectl delete pod,svc nginx
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort


 To summarize, the process for diagnosing a failed image pull in k8s:
1. The nginx deployment fails; check it with  kubectl get pod,svc
2. Check the k8s events:  Failed to pull image "nginx": rpc error: code = Unknown desc = Get https://registry-
...net/http: TLS handshake timeout [this failure shows the registry mirror has not been changed]
3. Change the docker registry mirror to the Alibaba Cloud one, then restart docker
cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://ajvcw8qn.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
systemctl restart docker.service 
4. Use docker pull to download the nginx image again; it now succeeds
5. Remove the manually pulled nginx image:  docker image rm -f [image name]
6. Delete the failed nginx deployment in k8s:  kubectl delete deployment nginx
7. Recreate the deployment:  kubectl create deployment nginx --image=nginx
8. Expose the application again:  kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort

17. Exposing an application

1. Create the deployment
kubectl create deployment nginx --image=nginx

2. Expose the application
kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort
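To find the NodePort that was assigned and confirm nginx answers (a hedged check; the actual port number will differ per cluster):

kubectl get svc nginx                 # the PORT(S) column shows something like 80:3xxxx/TCP
curl -I http://10.0.0.63:<NodePort>   # any node IP works; expect HTTP/1.1 200 OK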

18. Optimization: kubectl auto-completion

yum install -y bash-completion
source <(kubectl completion bash)
source /usr/share/bash-completion/bash_completion
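The source lines above only affect the current shell; to make completion permanent they can be appended to the shell profile (a common convention, not in the original):

echo 'source /usr/share/bash-completion/bash_completion' >> ~/.bashrc
echo 'source <(kubectl completion bash)' >> ~/.bashrc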

19. Problems covered in this section:

 One. Handling token expiry:
    Every 24 hours a previously created token expires; once expired it can no longer be used and new nodes cannot join the cluster with it. Regenerate a token on the master:
    Generation commands:
   kubeadm token create
   kubeadm token list
   openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
   
   Query the CA certificate hash:
   openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
3d847b858ed649244b4110d4d60ffd57f43856f42ca9c22e12ca33946673ccb4
    
   Then join the new server with the new token:
   kubeadm join 10.0.0.63:6443 --token 0dr1pw.ejybkufnjpalb8k6     --discovery-token-ca-cert-hash sha256:3d847b858ed649244b4110d4d60ffd57f43856f42ca9c22e12ca33946673ccb4
   
   
  Two. Retrieving the dashboard login token:
 kubectl describe secrets -n kubernetes-dashboard $(kubectl -n kubernetes-dashboard get secret | awk '/dashboard-admin/{print $1}')
 
 
  Three. The process for diagnosing a failed image pull in k8s:
 1. The nginx deployment fails; check it with  kubectl get pod,svc
 2. Check the k8s events:  Failed to pull image "nginx": rpc error: code = Unknown desc = Get https://registry-
...net/http: TLS handshake timeout [this failure shows the registry mirror has not been changed]
 3. Change the docker registry mirror to the Alibaba Cloud one, then restart docker
cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://ajvcw8qn.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
systemctl restart docker.service 
 4. Use docker pull to download the nginx image again; it now succeeds
 5. Remove the manually pulled nginx image:  docker image rm -f [image name]
 6. Delete the failed nginx deployment in k8s:  kubectl delete deployment nginx
 7. Recreate the deployment:  kubectl create deployment nginx --image=nginx
 8. Expose the application again:  kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort

20. YAML attachment [please save it with a .yaml suffix]

http://www.chenleilei.net/soft/kubeadm Quickly deploy a Kubernetes colony yaml.zip