Deploying a k8s Cluster with kubeadm

Environment

IP Address   Hostname      Role
10.0.0.63    k8s-master1   master1
10.0.0.64    k8s-master2   master2
10.0.0.65    k8s-node1     node1
10.0.0.66    k8s-node2     node2

1. Overview

kubeadm is a tool released by the official Kubernetes community for rapidly deploying Kubernetes clusters.
The resulting environment is suitable for learning and experimenting with k8s features.

2. Installation requirements

3 clean CentOS virtual machines, version 7.x or above
Machine configuration: at least 2 cores and 4 GB RAM, x3 machines
Network connectivity between all servers
Swap disabled
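The list above can be verified with a short script on each VM; this is an illustrative pre-check (the thresholds mirror the requirements, and the WARN messages are my own):

```shell
# Pre-flight sanity check for the requirements listed above (run on each VM)
cores=$(nproc)                                        # CPU cores
mem_kb=$(awk '/^MemTotal/{print $2}' /proc/meminfo)   # total RAM in kB
swap_kb=$(awk '/^SwapTotal/{print $2}' /proc/meminfo) # swap size in kB
echo "cores=${cores} mem_kb=${mem_kb} swap_kb=${swap_kb}"
[ "${cores}" -ge 2 ]        || echo "WARN: fewer than 2 cores"
[ "${mem_kb}" -ge 3800000 ] || echo "WARN: less than ~4 GB RAM"
[ "${swap_kb}" -eq 0 ]      || echo "WARN: swap is still enabled"
```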

3. Learning goals

Learn how to install a cluster with kubeadm, as a convenient basis for studying k8s.

4. Environment preparation

# 1. Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# 2. Disable selinux
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0

# 3. Disable swap
swapoff -a
# Or have it run automatically at every login
echo "swapoff -a" >>/etc/profile
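Appending swapoff to /etc/profile only re-disables swap at each login. An alternative (not part of the original steps) is to comment out the swap entry in /etc/fstab so swap stays off across reboots:

```shell
# Comment out any active swap lines in /etc/fstab (keeps a .bak backup)
sed -r -i.bak 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' /etc/fstab
grep swap /etc/fstab   # any swap line should now start with '#'
```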

# 4. Server planning: add host entries
cat > /etc/hosts << EOF
10.0.0.63 k8s-master1
10.0.0.64 k8s-master2
10.0.0.65 k8s-node1
10.0.0.66 k8s-node2
EOF
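A quick loop (illustrative) confirms that every entry added above resolves:

```shell
# Verify that each host name from /etc/hosts resolves locally
for h in k8s-master1 k8s-master2 k8s-node1 k8s-node2; do
  getent hosts "$h" >/dev/null && echo "$h OK" || echo "WARN: $h does not resolve"
done
```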

# 5. Set the hostname (run on each node with its own name):
hostnamectl set-hostname k8s-master1
bash

# 6. Time synchronization
yum install -y ntpdate
ntpdate time.windows.com

# Enable bridged traffic to pass through iptables
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
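If the two keys do not appear after `sysctl --system`, the br_netfilter kernel module is usually not loaded yet. A quick check (a sketch, assuming a CentOS 7 kernel that ships the module):

```shell
# Load the bridge netfilter module and confirm both keys are set to 1
modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
```

The error-handling section at the end of this document shows the join failure that occurs when bridge-nf-call-iptables is not set.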

# Everything above can be copied, pasted, and run directly; only the hostname must be set per node

5. Install Docker [all nodes]

# Add the repo
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

# Install a specific version
yum -y install docker-ce-18.09.9-3.el7

# Or list the available versions first and pick one
yum list docker-ce --showduplicates | sort -r

# Start docker
systemctl start docker

6. Configure the Docker cgroup driver and registry mirror [all nodes]

rm -f /etc/docker/*
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://ajvcw8qn.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
# Without the "exec-opts" line Docker keeps its default cgroupfs driver, which
# triggers the IsDockerSystemdCheck warning shown in the join output below.
# Verify with: docker info | grep -i 'Cgroup Driver'

7. Image acceleration [all nodes]

curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io
systemctl restart docker


# Configuring too many mirror sources easily causes errors. If something goes wrong, remove one (keeping a .bak copy) and try again.
# Keep: curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io

This is the accelerator configuration page for Alibaba Cloud:
https://cr.console.aliyun.com/cn-hangzhou/instances/mirrors

8. Kubernetes repo configuration [all nodes]

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

9. Install kubeadm, kubelet and kubectl [all nodes]

yum install -y kubelet-1.17.0 kubeadm-1.17.0 kubectl-1.17.0
systemctl enable kubelet

10. Deploy the Kubernetes master [master, 10.0.0.63]

kubeadm init \
  --apiserver-advertise-address=10.0.0.63 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.17.0 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.244.0.0/16

After initialization completes, a join command containing a token is printed:

kubeadm join 10.0.0.63:6443 --token 0dr1pw.ejybkufnjpalb8k6 \
    --discovery-token-ca-cert-hash sha256:ca1aa9cb753a26d0185e3df410cad09d8ec4af4d7432d127f503f41bc2b14f2a

Save this token; it is used later when joining nodes.

10.1. Configure the kubectl command-line tool [master]:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Get node information 
# kubectl get nodes

[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS     ROLES    AGE     VERSION
k8s-master1   Ready      master   38m     v1.17.0

10.2. Install the network plugin [master]

Upload kube-flannel.yaml and apply it:
kubectl apply -f kube-flannel.yaml
kubectl get pods -n kube-system

[All pods must be up and Running; otherwise there is a problem.]
[root@k8s-master1 ~]# kubectl get pods -n kube-system
NAME                                  READY   STATUS    RESTARTS   AGE
coredns-9d85f5447-6jlmq               1/1     Running   1          14h
coredns-9d85f5447-qfw8w               1/1     Running   1          14h
etcd-k8s-master1                      1/1     Running   3          14h
kube-apiserver-k8s-master1            1/1     Running   3          14h
kube-controller-manager-k8s-master1   1/1     Running   3          14h
kube-flannel-ds-amd64-6lq9g           1/1     Running   1          14h
kube-flannel-ds-amd64-88hwc           1/1     Running   0          13h
kube-flannel-ds-amd64-dgkwm           1/1     Running   0          13h
kube-flannel-ds-amd64-pmh75           1/1     Running   0          13h
kube-proxy-7xhnk                      1/1     Running   3          14h
kube-proxy-kdp8h                      1/1     Running   0          13h
kube-proxy-rg72z                      1/1     Running   0          13h
kube-proxy-xnx5m                      1/1     Running   0          13h
kube-scheduler-k8s-master1            1/1     Running   3          14h

11. Join node1 and node2 to the master

node1 join configuration

On the node to be joined, run the following command:
kubeadm join 10.0.0.63:6443 --token fs0uwh.7yuiawec8tov5igh \
    --discovery-token-ca-cert-hash sha256:471442895b5fb77174103553dc13a4b4681203fbff638e055ce244639342701d

# This join command was printed by kubeadm init on the master. Note that the CNI network must be configured on the master first:

# node1 joins the master:
[root@k8s-node1 ~]# kubeadm join 10.0.0.63:6443 --token v3p5pj.1wsrd7ybs6z14o9f \
>     --discovery-token-ca-cert-hash sha256:ef5fb837f9b66fa9742b236b373203df63b1d71651dfc7211f40150d20516d85
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

# Check the exit code: 0 means success, any other value indicates an error
[root@k8s-node1 ~]# echo $?
0

# After a successful join, check from the master node:
[root@k8s-master ~]#  kubectl get nodes
NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   22m    v1.17.0
k8s-node1    Ready    <none>   2m9s   v1.17.0    <---- node1 joined the cluster successfully




# node2 joins the master:
[root@k8s-node2 docker]# kubeadm join 10.0.0.63:6443 --token v3p5pj.1wsrd7ybs6z14o9f \
>     --discovery-token-ca-cert-hash sha256:ef5fb837f9b66fa9742b236b373203df63b1d71651dfc7211f40150d20516d85
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s-node2 docker]# echo $?
0
# master2 joins with the same command (here it joins as a regular node, as its ROLES column shows).


# After the joins succeed, check from the master node:
[root@k8s-master1 docker]#  kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
k8s-master1   Ready    master   76m     v1.17.0
k8s-master2   Ready    <none>   79s     v1.17.0
k8s-node1     Ready    <none>   8m23s   v1.17.0
k8s-node2     Ready    <none>   6m      v1.17.0

12. Token creation and lookup

By default a token is valid for 24 hours and cannot be used after it expires. If a new token is needed, regenerate one on the master node with the following commands:

kubeadm token create
kubeadm token list
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
63bca849e0e01691ae14eab449570284f0c3ddeea590f8da988c07fe2729e924

kubeadm join 10.0.0.63:6443 --token nuja6n.o3jrhsffiqs9swnu --discovery-token-ca-cert-hash sha256:63bca849e0e01691ae14eab449570284f0c3ddeea590f8da988c07fe2729e924
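The openssl pipeline above is what produces the value for --discovery-token-ca-cert-hash. Wrapped in a small helper function (the name ca_hash is my own) it can be reused for any CA file; on the master, `kubeadm token create --print-join-command` also prints a complete join line in one step:

```shell
# ca_hash: compute the sha256 discovery hash of a CA certificate's public key
ca_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}
echo "sha256:$(ca_hash /etc/kubernetes/pki/ca.crt)"

# One-step alternative on the master:
# kubeadm token create --print-join-command
```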

13. Deploy an nginx service

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort


kubectl get pod,svc

Check the status: nginx has started and is exposed on NodePort 31907.

The service can now be tested for access from any of the cluster's servers:
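A minimal reachability check, assuming the NodePort 31907 reported above:

```shell
# Request the nginx welcome page through the NodePort; the port value comes from `kubectl get svc`
curl -s -o /dev/null -w '%{http_code}\n' http://10.0.0.63:31907
```

An HTTP status of 200 means the service is reachable from that machine.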

14. Deploy the dashboard graphical interface

Upload kubernetes-dashboard.yaml

[root@k8s-master ~]# kubectl apply -f dashboard.yaml 
[root@k8s-master ~]# kubectl get pods -n kubernetes-dashboard
[root@k8s-master ~]# kubectl get pods -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-566cddb686-xjgks   1/1     Running   0          39s
kubernetes-dashboard-7b5bf5d559-8ssvj        1/1     Running   0          39s

15. Obtain the dashboard token: create a service account and bind it to the cluster-admin cluster role

# kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin
# kubectl describe secrets -n kubernetes-dashboard $(kubectl -n kubernetes-dashboard get secret | awk '/dashboard-admin/{print $1}')

Copy the token into the Token field on the login page, select the token option, and sign in.

The interface after logging in:

Error handling:

An error occurred while a k8s-node was joining:
W0315 22:16:20.123204    5795 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Fix:
echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables
Then rejoin:
kubeadm join 10.0.0.63:6443 --token 0dr1pw.ejybkufnjpalb8k6     --discovery-token-ca-cert-hash sha256:ca1aa9cb753a26d0185e3df410cad09d8ec4af4d7432d127f503f41bc2b14f2a
The token here is generated by kubeadm on the master.

Handling an inaccessible web page:

Rebuild the dashboard.
Delete it:
kubectl delete -f dashboard.yaml

Then recreate it:
kubectl create -f dashboard.yaml
Create the account:
kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
Look up the token:
kubectl describe secrets -n kubernetes-dashboard $(kubectl -n kubernetes-dashboard get secret | awk '/dashboard-admin/{print $1}')
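Rather than copying the token out of the describe output by hand, it can also be read straight from the secret (a sketch using the standard Secret field layout):

```shell
# Print only the decoded dashboard-admin token
secret=$(kubectl -n kubernetes-dashboard get secret | awk '/dashboard-admin/{print $1}')
kubectl -n kubernetes-dashboard get secret "$secret" -o jsonpath='{.data.token}' | base64 -d
echo
```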

Then open the login page again.

YAML attachments [please save with a .yaml extension]