
A thirty-thousand-word, pitfall-free guide to building a continuous integration and delivery environment based on Docker + K8S + GitLab/SVN + Jenkins + Harbor!

For high-concurrency scenarios, the author has developed a simple, stable, and extensible delayed message queuing framework with precise timed-task and delay-queue processing capabilities. In the more than half a year since it was open-sourced, it has successfully provided precise timed scheduling for more than ten small and medium-sized enterprises and has withstood the test of production environments. To benefit more readers, here is the open-source framework's address:

https://github.com/sunshinelyz/mykit-delay

PS: You are welcome to Star the source code, and PRs of your brilliant code are welcome too.

Preface

Recently, I built a DevOps environment on a K8S 1.18.2 cluster and ran into all kinds of pitfalls along the way. By now, every pit encountered during the environment setup has been filled in, so I am writing the process down and sharing it with you! The article and the yml files needed to build the environment are available at: https://github.com/sunshinelyz/technology-binghe and https://gitee.com/binghe001/technology-binghe. If they are of any help to you, don't forget to give them a Star!

Server planning

IP               Host name   Node         Operating system
192.168.175.101  binghe101   K8S Master   CentOS 8.0.1905
192.168.175.102  binghe102   K8S Worker   CentOS 8.0.1905
192.168.175.103  binghe103   K8S Worker   CentOS 8.0.1905

Software versions

Software         Version     Description
Docker           19.03.8     Provides the container environment
docker-compose   1.25.5      Defines and runs applications composed of multiple containers
K8S              1.18.2      Open-source platform for managing containerized applications across multiple hosts in a cloud environment; Kubernetes aims to make deploying containerized applications simple and efficient, providing mechanisms for application deployment, scheduling, updating, and maintenance
GitLab           12.1.6      Code repository (install either GitLab or SVN, not both)
Harbor           1.10.2      Private image registry
Jenkins          2.89.3      Continuous integration and delivery
SVN              1.10.2      Code repository (install either GitLab or SVN, not both)
JDK              1.8.0_212   Basic Java runtime environment
maven            3.6.3       Basic build tool for projects

Passwordless login between servers

Execute the following commands on each server.

ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys 

Copy the id_rsa.pub files from the binghe102 and binghe103 servers to the binghe101 server.

[root@binghe102 ~]# scp .ssh/id_rsa.pub binghe101:/root/.ssh/102
[root@binghe103 ~]# scp .ssh/id_rsa.pub binghe101:/root/.ssh/103

Execute the following commands on the binghe101 server.

cat ~/.ssh/102 >> ~/.ssh/authorized_keys
cat ~/.ssh/103 >> ~/.ssh/authorized_keys

Then copy the authorized_keys file to the binghe102 and binghe103 servers.

[root@binghe101 ~]# scp .ssh/authorized_keys binghe102:/root/.ssh/authorized_keys
[root@binghe101 ~]# scp .ssh/authorized_keys binghe103:/root/.ssh/authorized_keys

Delete the 102 and 103 files under ~/.ssh on the binghe101 node.

rm ~/.ssh/102
rm ~/.ssh/103
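One caveat with the steps above: if they are re-run, the `cat >> authorized_keys` pattern appends the same keys again. A minimal idempotent sketch (the helper name `append_key_once` is ours, not part of the original steps), demonstrated on temporary files rather than the real ~/.ssh paths:

```shell
# append_key_once: append a public key line to authorized_keys only if it
# is not already present, so repeated runs do not create duplicate entries.
append_key_once() {
  key_file="$1"; auth_file="$2"
  mkdir -p "$(dirname "$auth_file")"
  touch "$auth_file"
  chmod 600 "$auth_file"
  # -q quiet, -x whole-line match, -F fixed string (no regex)
  grep -qxF "$(cat "$key_file")" "$auth_file" || cat "$key_file" >> "$auth_file"
}

# Demo on temporary files:
tmp=$(mktemp -d)
echo "ssh-rsa AAAATESTKEY root@binghe102" > "$tmp/102"
append_key_once "$tmp/102" "$tmp/authorized_keys"
append_key_once "$tmp/102" "$tmp/authorized_keys"   # second call is a no-op
wc -l < "$tmp/authorized_keys"                      # still 1 line
```

On the real servers you would call it as, for example, `append_key_once ~/.ssh/102 ~/.ssh/authorized_keys`.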

Install the JDK

The JDK needs to be installed on each server. Download the JDK from Oracle's official site; the version used here is 1.8.0_212. After downloading, extract it and configure the system environment variables.

tar -zxvf jdk1.8.0_212.tar.gz
mv jdk1.8.0_212 /usr/local

Next, configure the system environment variables.

vim /etc/profile

The configuration items are as follows.

JAVA_HOME=/usr/local/jdk1.8.0_212
CLASS_PATH=.:$JAVA_HOME/lib
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME CLASS_PATH PATH

Then execute the following command to make the environment variables take effect.

source /etc/profile

install Maven

To Apache The official download Maven, I downloaded it here Maven Version is 3.6.3. After downloading, extract and configure the system environment variables directly .

tar -zxvf apache-maven-3.6.3-bin.tar.gz
mv apache-maven-3.6.3 /usr/local

Next, configure the system environment variables.

vim /etc/profile

The configuration items are as follows.

JAVA_HOME=/usr/local/jdk1.8.0_212
MAVEN_HOME=/usr/local/apache-maven-3.6.3
CLASS_PATH=.:$JAVA_HOME/lib
PATH=$MAVEN_HOME/bin:$JAVA_HOME/bin:$PATH
export JAVA_HOME CLASS_PATH MAVEN_HOME PATH

Then execute the following command to make the environment variables take effect.

source /etc/profile

Next, modify Maven's configuration file (conf/settings.xml), as shown below.

<localRepository>/home/repository</localRepository>

This stores the Jar packages Maven downloads under the /home/repository directory.
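For context, a hedged sketch of the surrounding conf/settings.xml (only the localRepository line comes from the text above; the Alibaba Cloud mirror entry is an optional extra added here for illustration, commonly used to speed up downloads in this kind of setup):

```xml
<!-- conf/settings.xml (fragment): only <localRepository> comes from the text above;
     the <mirror> entry is an optional, illustrative addition. -->
<settings>
  <localRepository>/home/repository</localRepository>
  <mirrors>
    <mirror>
      <id>aliyun</id>
      <mirrorOf>central</mirrorOf>
      <url>https://maven.aliyun.com/repository/public</url>
    </mirror>
  </mirrors>
</settings>
```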

Install the Docker environment

This document builds the Docker environment based on Docker 19.03.8.

Create an install_docker.sh script on all servers with the following content.

export REGISTRY_MIRROR=https://registry.cn-hangzhou.aliyuncs.com
dnf install -y yum*
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
dnf install -y https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.13-3.1.el7.x86_64.rpm
yum install -y docker-ce-19.03.8 docker-ce-cli-19.03.8
systemctl enable docker.service
systemctl start docker.service
docker version

Give the install_docker.sh script executable permission on each server, and execute it.

Install docker-compose

Note: install docker-compose on each server.

1. Download the docker-compose file

curl -L https://github.com/docker/compose/releases/download/1.25.5/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose 

2. Give the docker-compose file executable permission

chmod a+x /usr/local/bin/docker-compose

3. Check the docker-compose version

[root@binghe ~]# docker-compose version
docker-compose version 1.25.5, build 8a1c60f6
docker-py version: 4.1.0
CPython version: 3.7.5
OpenSSL version: OpenSSL 1.1.0l  10 Sep 2019
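With docker-compose in place, a minimal docker-compose.yml sketch of "an application composed of multiple containers" (the nginx/redis services and port choices are purely illustrative, not part of the original setup):

```yaml
# Minimal two-container application: an nginx web server and a redis cache.
version: "3"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"     # host port 8080 -> container port 80
    depends_on:
      - cache
  cache:
    image: redis:alpine
```

Saved to a directory, `docker-compose up -d` starts both containers and `docker-compose down` stops and removes them.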

Install the K8S cluster environment

This document builds the K8S cluster based on K8S 1.18.2.

Install the K8S base environment

Create an install_k8s.sh script file on all servers with the following content.

# Configure the Alibaba Cloud image accelerator
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://zz3sblpi.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload
systemctl restart docker

# install nfs-utils
yum install -y nfs-utils
yum install -y wget

# Start nfs-server
systemctl start nfs-server
systemctl enable nfs-server

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

# Disable swap
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak |grep -v swap > /etc/fstab

# Modify /etc/sysctl.conf
# If a setting already exists, update it in place
sed -i "s#^net.ipv4.ip_forward.*#net.ipv4.ip_forward=1#g"  /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-ip6tables.*#net.bridge.bridge-nf-call-ip6tables=1#g"  /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-iptables.*#net.bridge.bridge-nf-call-iptables=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.disable_ipv6.*#net.ipv6.conf.all.disable_ipv6=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.default.disable_ipv6.*#net.ipv6.conf.default.disable_ipv6=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.lo.disable_ipv6.*#net.ipv6.conf.lo.disable_ipv6=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.forwarding.*#net.ipv6.conf.all.forwarding=1#g"  /etc/sysctl.conf
# If a setting does not exist yet, append it
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.forwarding = 1"  >> /etc/sysctl.conf
# Apply the settings
sysctl -p

# Configure the K8S yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Uninstall old versions of K8S
yum remove -y kubelet kubeadm kubectl

# Install kubelet, kubeadm and kubectl; version 1.18.2 is installed here (1.17.2 also works)
yum install -y kubelet-1.18.2 kubeadm-1.18.2 kubectl-1.18.2

# Change the docker cgroup driver to systemd
# # i.e. in the file /usr/lib/systemd/system/docker.service, change the line ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
# # to ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd
# Without this change, the following error may occur when adding a worker node
# [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". 
# Please follow the guide at https://kubernetes.io/docs/setup/cri/
sed -i "s#^ExecStart=/usr/bin/dockerd.*#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd#g" /usr/lib/systemd/system/docker.service

# Set a docker registry mirror to improve image download speed and stability
# If access to https://hub.docker.io is fast and stable, this step can be skipped
# curl -sSL https://kuboard.cn/install-script/set_mirror.sh | sh -s ${REGISTRY_MIRROR}

# Restart docker, and enable and start kubelet
systemctl daemon-reload
systemctl restart docker
systemctl enable kubelet && systemctl start kubelet

docker version

Give the install_k8s.sh script executable permission on each server, and execute it.
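One detail of the script worth noting: its sed-then-echo handling of /etc/sysctl.conf appends a duplicate line on every re-run once a key already exists. A hedged set-or-append sketch (the helper name `set_sysctl` is ours), demonstrated on a temporary file rather than the real /etc/sysctl.conf:

```shell
# set_sysctl: update a key in place if it is present, otherwise append it once.
set_sysctl() {
  key="$1"; value="$2"; file="$3"
  if grep -q "^${key}[ =]" "$file"; then
    sed -i "s|^${key}[ =].*|${key} = ${value}|" "$file"
  else
    echo "${key} = ${value}" >> "$file"
  fi
}

# Demo on a temp file:
conf=$(mktemp)
set_sysctl net.ipv4.ip_forward 1 "$conf"   # key absent: appended
set_sysctl net.ipv4.ip_forward 1 "$conf"   # key present: updated, no duplicate
cat "$conf"                                # net.ipv4.ip_forward = 1
```

Re-running the whole install script with this pattern leaves /etc/sysctl.conf with exactly one line per key.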

Initialize the Master node

Perform these operations only on the binghe101 server.

1. Initialize the Master node's network environment

Note: the following commands need to be executed manually on the command line.

# Execute only on the master node
# The export command is only valid in the current shell session; if you open a new shell window and want to continue the installation, re-execute these export commands
export MASTER_IP=192.168.175.101
# Replace k8s.master with the dnsName you want to use
export APISERVER_NAME=k8s.master
# The network segment for Kubernetes pods; it is created by kubernetes after installation and does not need to exist in the physical network beforehand
export POD_SUBNET=172.18.0.1/16
echo "${MASTER_IP}    ${APISERVER_NAME}" >> /etc/hosts

2. Initialize the Master node

Create an init_master.sh script file on the binghe101 server with the following content.

#!/bin/bash
#  Terminate execution on script error 
set -e

if [ ${#POD_SUBNET} -eq 0 ] || [ ${#APISERVER_NAME} -eq 0 ]; then
  echo -e "\033[31;1mPlease make sure the environment variables POD_SUBNET and APISERVER_NAME are set\033[0m"
  echo "Current POD_SUBNET=$POD_SUBNET"
  echo "Current APISERVER_NAME=$APISERVER_NAME"
  exit 1
fi


#  See full configuration options  https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2
rm -f ./kubeadm-config.yaml
cat <<EOF > ./kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.2
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
controlPlaneEndpoint: "${APISERVER_NAME}:6443"
networking:
  serviceSubnet: "10.96.0.0/16"
  podSubnet: "${POD_SUBNET}"
  dnsDomain: "cluster.local"
EOF

# kubeadm init
# Depending on the server's network speed, this may take 3 - 10 minutes
kubeadm init --config=kubeadm-config.yaml --upload-certs

#  To configure  kubectl
rm -rf /root/.kube/
mkdir /root/.kube/
cp -i /etc/kubernetes/admin.conf /root/.kube/config

# Install the calico network plugin
# Reference: https://docs.projectcalico.org/v3.13/getting-started/kubernetes/self-managed-onprem/onpremises
echo "Installing calico-3.13.1"
rm -f calico-3.13.1.yaml
wget https://kuboard.cn/install-script/calico/calico-3.13.1.yaml
kubectl apply -f calico-3.13.1.yaml

Give the init_master.sh script file executable permission, and execute it.

3. Check the Master node initialization result

(1) Make sure all pods are in the Running state

# Execute the following command and wait 3 - 10 minutes, until all pods are in the Running state
watch kubectl get pod -n kube-system -o wide

The output is as follows.

[root@binghe101 ~]# watch kubectl get pod -n kube-system -o wide
Every 2.0s: kubectl get pod -n kube-system -o wide                                                                                                                          binghe101: Sun May 10 11:01:32 2020

NAME                                       READY   STATUS    RESTARTS   AGE    IP                NODE        NOMINATED NODE   READINESS GATES          
calico-kube-controllers-5b8b769fcd-5dtlp   1/1     Running   0          118s   172.18.203.66     binghe101   <none>           <none>          
calico-node-fnv8g                          1/1     Running   0          118s   192.168.175.101   binghe101   <none>           <none>          
coredns-546565776c-27t7h                   1/1     Running   0          2m1s   172.18.203.67     binghe101   <none>           <none>          
coredns-546565776c-hjb8z                   1/1     Running   0          2m1s   172.18.203.65     binghe101   <none>           <none>          
etcd-binghe101                             1/1     Running   0          2m7s   192.168.175.101   binghe101   <none>           <none>          
kube-apiserver-binghe101                   1/1     Running   0          2m7s   192.168.175.101   binghe101   <none>           <none>          
kube-controller-manager-binghe101          1/1     Running   0          2m7s   192.168.175.101   binghe101   <none>           <none>          
kube-proxy-dvgsr                           1/1     Running   0          2m1s   192.168.175.101   binghe101   <none>           <none>          
kube-scheduler-binghe101                   1/1     Running   0          2m7s   192.168.175.101   binghe101   <none>           <none>

(2) Check the Master node initialization result

kubectl get nodes -o wide

The output is as follows.

[root@binghe101 ~]# kubectl get nodes -o wide
NAME        STATUS   ROLES    AGE     VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION         CONTAINER-RUNTIME
binghe101   Ready    master   3m28s   v1.18.2   192.168.175.101   <none>        CentOS Linux 8 (Core)   4.18.0-80.el8.x86_64   docker://19.3.8

Initialize the Worker nodes

1. Get the join command

Execute the following command on the Master node (the binghe101 server) to get the join command.

kubeadm token create --print-join-command

The output is as follows.

[root@binghe101 ~]# kubeadm token create --print-join-command
W0510 11:04:34.828126   56132 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join k8s.master:6443 --token 8nblts.62xytoqufwsqzko2     --discovery-token-ca-cert-hash sha256:1717cc3e34f6a56b642b5751796530e367aa73f4113d09994ac3455e33047c0d 

The output contains the following line.

kubeadm join k8s.master:6443 --token 8nblts.62xytoqufwsqzko2     --discovery-token-ca-cert-hash sha256:1717cc3e34f6a56b642b5751796530e367aa73f4113d09994ac3455e33047c0d 

This line is the join command you need.

Note: the token in the join command is valid for 2 hours; within those 2 hours, it can be used to initialize any number of worker nodes.
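If the token is still valid but the hash part of the join command has been lost, it can be recomputed from the cluster CA certificate with the standard openssl pipeline from the kubeadm documentation. The sketch below generates a throwaway certificate so it can be run anywhere; on the real master node you would point it at /etc/kubernetes/pki/ca.crt instead:

```shell
# Throwaway CA certificate standing in for /etc/kubernetes/pki/ca.crt:
ca=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=kubernetes" \
  -keyout "$ca/ca.key" -out "$ca/ca.crt" 2>/dev/null

# Standard kubeadm pipeline: SHA-256 of the DER-encoded public key.
hash=$(openssl x509 -pubkey -in "$ca/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```

The printed value is what goes after `--discovery-token-ca-cert-hash` in the join command.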

2. Initialize the Worker nodes

Execute this on all worker nodes; here, that means on the binghe102 and binghe103 servers.

Execute the following commands manually on the command line.

# Execute only on worker nodes
# 192.168.175.101 is the intranet IP of the master node
export MASTER_IP=192.168.175.101
# Replace k8s.master with the APISERVER_NAME used when initializing the master node
export APISERVER_NAME=k8s.master
echo "${MASTER_IP}    ${APISERVER_NAME}" >> /etc/hosts

# Replace this with the join command output by kubeadm token create on the master node
kubeadm join k8s.master:6443 --token 8nblts.62xytoqufwsqzko2     --discovery-token-ca-cert-hash sha256:1717cc3e34f6a56b642b5751796530e367aa73f4113d09994ac3455e33047c0d 

The output is as follows.

[root@binghe102 ~]# export MASTER_IP=192.168.175.101
[root@binghe102 ~]# export APISERVER_NAME=k8s.master
[root@binghe102 ~]# echo "${MASTER_IP}    ${APISERVER_NAME}" >> /etc/hosts
[root@binghe102 ~]# kubeadm join k8s.master:6443 --token 8nblts.62xytoqufwsqzko2     --discovery-token-ca-cert-hash sha256:1717cc3e34f6a56b642b5751796530e367aa73f4113d09994ac3455e33047c0d 
W0510 11:08:27.709263   42795 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
        [WARNING FileExisting-tc]: tc not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

The output shows that the Worker node has joined the K8S cluster.

Note: kubeadm join… is the join command output by kubeadm token create on the master node.

3. Check the initialization result

Execute the following command on the Master node (the binghe101 server) to check the initialization result.

kubectl get nodes -o wide

The output is as follows.

[root@binghe101 ~]# kubectl get nodes
NAME        STATUS   ROLES    AGE     VERSION
binghe101   Ready    master   20m     v1.18.2
binghe102   Ready    <none>   2m46s   v1.18.2
binghe103   Ready    <none>   2m46s   v1.18.2

Note: adding the -o wide parameter to the kubectl get nodes command outputs more information.

Problems caused by restarting the K8S cluster

1. Worker nodes fail to start

The Master node's IP address changed, so the worker nodes cannot start. The K8S cluster needs to be reinstalled, and all nodes should be given fixed intranet IP addresses.

2. Pods crash or cannot be accessed normally

After restarting the servers, check the Pod status with the following command.

kubectl get pods --all-namespaces

If many Pods are not in the Running state, delete the abnormal Pods with the following command.

kubectl delete pod <pod-name> -n <pod-namespace>

Note: if a Pod was created by a controller such as a Deployment or StatefulSet, K8S will create a new Pod to replace it, and the restarted Pod will usually work normally.
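Picking the abnormal Pods out by eye gets tedious; a hedged sketch that filters non-Running entries from `kubectl get pods --all-namespaces` output (the helper name `filter_abnormal` is ours, and the sample output below is fabricated for the demo; on a real cluster you would pipe kubectl's output in directly):

```shell
# Print "namespace pod-name" for every pod whose STATUS column is not Running.
filter_abnormal() {
  awk 'NR > 1 && $4 != "Running" { print $1, $2 }'
}

# Demo with captured sample output (column 4 is STATUS):
sample='NAMESPACE     NAME                        READY   STATUS             RESTARTS   AGE
kube-system   coredns-546565776c-27t7h    1/1     Running            0          2m
kube-system   calico-node-fnv8g           0/1     CrashLoopBackOff   5          2m'

echo "$sample" | filter_abnormal    # kube-system calico-node-fnv8g
```

On a real cluster, something like `kubectl get pods --all-namespaces | filter_abnormal | while read ns pod; do kubectl delete pod "$pod" -n "$ns"; done` would delete every abnormal Pod in one pass.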

Install ingress-nginx on K8S

Note: execute on the Master node (the binghe101 server)

1. Create the ingress-nginx namespace

Create an ingress-nginx-namespace.yaml file with the following content.

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    name: ingress-nginx

Execute the following command to create the ingress-nginx namespace.

kubectl apply -f ingress-nginx-namespace.yaml

2. Install the ingress controller

Create an ingress-nginx-mandatory.yaml file with the following content.

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: default-http-backend
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: default-http-backend
        app.kubernetes.io/part-of: ingress-nginx
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: default-http-backend
          # Any image is permissible as long as:
          # 1. It serves a 404 page at /
          # 2. It serves 200 on a /healthz endpoint
          image: registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/defaultbackend-amd64:1.5
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 5
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 10m
              memory: 20Mi
            requests:
              cpu: 10m
              memory: 20Mi

---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/nginx-ingress-controller:0.20.0
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1

---

Execute the following command to install the ingress controller.

kubectl apply -f ingress-nginx-mandatory.yaml

3. Install the K8S Service ingress-nginx

It is mainly used to expose the pod nginx-ingress-controller.

Create a service-nodeport.yaml file with the following content.

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
      nodePort: 30080
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
      nodePort: 30443
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

Execute the following command to install it.

kubectl apply -f service-nodeport.yaml

4. Access the K8S Service ingress-nginx

Check the deployments in the ingress-nginx namespace, as shown below.

[root@binghe101 k8s]# kubectl get pod -n ingress-nginx
NAME                                        READY   STATUS    RESTARTS   AGE
default-http-backend-796ddcd9b-vfmgn        1/1     Running   1          10h
nginx-ingress-controller-58985cc996-87754   1/1     Running   2          10h

Enter the following command on the server command line to view the port mapping of ingress-nginx.

kubectl get svc -n ingress-nginx 

The output is as follows.

[root@binghe101 k8s]# kubectl get svc -n ingress-nginx 
NAME                   TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
default-http-backend   ClusterIP   10.96.247.2   <none>        80/TCP                       7m3s
ingress-nginx          NodePort    10.96.40.6    <none>        80:30080/TCP,443:30443/TCP   4m35s

Therefore, ingress-nginx can be accessed through the Master node's (binghe101 server's) IP address and port 30080, as shown below.

[root@binghe101 k8s]# curl 192.168.175.101:30080       
default backend - 404

You can also open http://192.168.175.101:30080 in a browser to access ingress-nginx.

Install the GitLab code repository on K8S

Note: execute on the Master node (the binghe101 server)

1. Create the k8s-ops namespace

Create a k8s-ops-namespace.yaml file with the following content.

apiVersion: v1
kind: Namespace
metadata:
  name: k8s-ops
  labels:
    name: k8s-ops

Execute the following command to create the namespace.

kubectl apply -f k8s-ops-namespace.yaml 

2. Install gitlab-redis

Create a gitlab-redis.yaml file with the following content.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: k8s-ops
  labels:
    name: redis
spec:
  selector:
    matchLabels:
      name: redis
  template:
    metadata:
      name: redis
      labels:
        name: redis
    spec:
      containers:
      - name: redis
        image: sameersbn/redis
        imagePullPolicy: IfNotPresent
        ports:
        - name: redis
          containerPort: 6379
        volumeMounts:
        - mountPath: /var/lib/redis
          name: data
        livenessProbe:
          exec:
            command:
            - redis-cli
            - ping
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          exec:
            command:
            - redis-cli
            - ping
          initialDelaySeconds: 10
          timeoutSeconds: 5
      volumes:
      - name: data
        hostPath:
          path: /data1/docker/xinsrv/redis

---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: k8s-ops
  labels:
    name: redis
spec:
  ports:
    - name: redis
      port: 6379
      targetPort: redis
  selector:
    name: redis

First, execute the following command on the command line to create the /data1/docker/xinsrv/redis directory.

mkdir -p /data1/docker/xinsrv/redis

Execute the following command to install gitlab-redis.

kubectl apply -f gitlab-redis.yaml 

3. Install gitlab-postgresql

Create a gitlab-postgresql.yaml file with the following contents.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgresql
  namespace: k8s-ops
  labels:
    name: postgresql
spec:
  selector:
    matchLabels:
      name: postgresql
  template:
    metadata:
      name: postgresql
      labels:
        name: postgresql
    spec:
      containers:
      - name: postgresql
        image: sameersbn/postgresql
        imagePullPolicy: IfNotPresent
        env:
        - name: DB_USER
          value: gitlab
        - name: DB_PASS
          value: passw0rd
        - name: DB_NAME
          value: gitlab_production
        - name: DB_EXTENSION
          value: pg_trgm
        ports:
        - name: postgres
          containerPort: 5432
        volumeMounts:
        - mountPath: /var/lib/postgresql
          name: data
        livenessProbe:
          exec:
            command:
            - pg_isready
            - -h
            - localhost
            - -U
            - postgres
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          exec:
            command:
            - pg_isready
            - -h
            - localhost
            - -U
            - postgres
          initialDelaySeconds: 5
          timeoutSeconds: 1
      volumes:
      - name: data
        hostPath:
          path: /data1/docker/xinsrv/postgresql
---
apiVersion: v1
kind: Service
metadata:
  name: postgresql
  namespace: k8s-ops
  labels:
    name: postgresql
spec:
  ports:
    - name: postgres
      port: 5432
      targetPort: postgres
  selector:
    name: postgresql

First, execute the following command to create the /data1/docker/xinsrv/postgresql directory.

mkdir -p /data1/docker/xinsrv/postgresql

Next, install gitlab-postgresql, as shown below.

kubectl apply -f gitlab-postgresql.yaml

4. Install gitlab

(1) Configure the user name and password

First, use base64 on the command line to encode the user name and password. In this example, the user name is admin and the password is admin.1231.

The encoding is done as follows.

[root@binghe101 k8s]# echo -n 'admin' | base64 
YWRtaW4=
[root@binghe101 k8s]# echo -n 'admin.1231' | base64 
YWRtaW4uMTIzMQ==

The encoded user name is YWRtaW4= and the encoded password is YWRtaW4uMTIzMQ==.

You can also base64-decode an encoded string; for example, decoding the password string looks like this.

[root@binghe101 k8s]# echo 'YWRtaW4uMTIzMQ==' | base64 --decode 
admin.1231
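
The two encodings above can also be scripted; a minimal sketch (the variable names are illustrative, not part of the original setup):

```shell
# Encode the GitLab credentials for use in a Kubernetes Secret.
# printf (not echo) avoids accidentally encoding a trailing newline.
USER_B64=$(printf '%s' 'admin' | base64)
PASS_B64=$(printf '%s' 'admin.1231' | base64)
echo "username: $USER_B64"   # username: YWRtaW4=
echo "password: $PASS_B64"   # password: YWRtaW4uMTIzMQ==
# Round-trip check: decoding must return the original password.
printf '%s' "$PASS_B64" | base64 --decode   # admin.1231
```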

Next, create a secret-gitlab.yaml file, which configures the GitLab user name and password; its contents are as follows.

apiVersion: v1
kind: Secret
metadata:
  namespace: k8s-ops
  name: git-user-pass
type: Opaque
data:
  username: YWRtaW4=
  password: YWRtaW4uMTIzMQ==

Apply the configuration file, as shown below.

kubectl create -f ./secret-gitlab.yaml

(2) Install GitLab

Create a gitlab.yaml file with the following contents.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitlab
  namespace: k8s-ops
  labels:
    name: gitlab
spec:
  selector:
    matchLabels:
      name: gitlab
  template:
    metadata:
      name: gitlab
      labels:
        name: gitlab
    spec:
      containers:
      - name: gitlab
        image: sameersbn/gitlab:12.1.6
        imagePullPolicy: IfNotPresent
        env:
        - name: TZ
          value: Asia/Shanghai
        - name: GITLAB_TIMEZONE
          value: Beijing
        - name: GITLAB_SECRETS_DB_KEY_BASE
          value: long-and-random-alpha-numeric-string
        - name: GITLAB_SECRETS_SECRET_KEY_BASE
          value: long-and-random-alpha-numeric-string
        - name: GITLAB_SECRETS_OTP_KEY_BASE
          value: long-and-random-alpha-numeric-string
        - name: GITLAB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: git-user-pass
              key: password
        - name: GITLAB_ROOT_EMAIL
          value: 12345678@qq.com
        - name: GITLAB_HOST
          value: gitlab.binghe.com
        - name: GITLAB_PORT
          value: "80"
        - name: GITLAB_SSH_PORT
          value: "30022"
        - name: GITLAB_NOTIFY_ON_BROKEN_BUILDS
          value: "true"
        - name: GITLAB_NOTIFY_PUSHER
          value: "false"
        - name: GITLAB_BACKUP_SCHEDULE
          value: daily
        - name: GITLAB_BACKUP_TIME
          value: 01:00
        - name: DB_TYPE
          value: postgres
        - name: DB_HOST
          value: postgresql
        - name: DB_PORT
          value: "5432"
        - name: DB_USER
          value: gitlab
        - name: DB_PASS
          value: passw0rd
        - name: DB_NAME
          value: gitlab_production
        - name: REDIS_HOST
          value: redis
        - name: REDIS_PORT
          value: "6379"
        ports:
        - name: http
          containerPort: 80
        - name: ssh
          containerPort: 22
        volumeMounts:
        - mountPath: /home/git/data
          name: data
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 180
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          timeoutSeconds: 1
      volumes:
      - name: data
        hostPath:
          path: /data1/docker/xinsrv/gitlab
---
apiVersion: v1
kind: Service
metadata:
  name: gitlab
  namespace: k8s-ops
  labels:
    name: gitlab
spec:
  ports:
    - name: http
      port: 80
      nodePort: 30088
    - name: ssh
      port: 22
      targetPort: ssh
      nodePort: 30022
  type: NodePort
  selector:
    name: gitlab

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gitlab
  namespace: k8s-ops
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: gitlab.binghe.com
    http:
      paths:
      - backend:
          serviceName: gitlab
          servicePort: http

Note: when configuring GitLab, the monitored host cannot be an IP address; you must use a host name or domain name. In the configuration above, I use the host name gitlab.binghe.com.

Execute the following command on the command line to create the /data1/docker/xinsrv/gitlab directory.

mkdir -p /data1/docker/xinsrv/gitlab

install GitLab, As shown below .

kubectl apply -f gitlab.yaml

5. Installation complete

View the deployments in the k8s-ops namespace, as shown below.

[root@binghe101 k8s]# kubectl get pod -n k8s-ops
NAME                          READY   STATUS    RESTARTS   AGE
gitlab-7b459db47c-5vk6t       0/1     Running   0          11s
postgresql-79567459d7-x52vx   1/1     Running   0          30m
redis-67f4cdc96c-h5ckz        1/1     Running   1          10h

You can also use the following command to view .

[root@binghe101 k8s]# kubectl get pod --namespace=k8s-ops
NAME                          READY   STATUS    RESTARTS   AGE
gitlab-7b459db47c-5vk6t       0/1     Running   0          36s
postgresql-79567459d7-x52vx   1/1     Running   0          30m
redis-67f4cdc96c-h5ckz        1/1     Running   1          10h

The output is the same.

Next, view GitLab's port mapping, as shown below.

[root@binghe101 k8s]# kubectl get svc -n k8s-ops
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                     AGE
gitlab       NodePort    10.96.153.100   <none>        80:30088/TCP,22:30022/TCP   2m42s
postgresql   ClusterIP   10.96.203.119   <none>        5432/TCP                    32m
redis        ClusterIP   10.96.107.150   <none>        6379/TCP                    10h

Here you can see that GitLab can be accessed through the Master node (binghe101) host name gitlab.binghe.com and port 30088. Because I use virtual machines to build this environment, when accessing the mapped gitlab.binghe.com from the local machine, you need to configure the local hosts file by adding the following entry to it.

192.168.175.101 gitlab.binghe.com
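
Appending this entry can be made idempotent, so repeated runs do not duplicate it; a sketch against a temporary file (substitute /etc/hosts and run with the required privileges in real use):

```shell
# Append the GitLab hosts entry only if it is not already present.
HOSTS_FILE=$(mktemp)                       # stand-in for /etc/hosts
ENTRY='192.168.175.101 gitlab.binghe.com'
grep -qxF "$ENTRY" "$HOSTS_FILE" || echo "$ENTRY" >> "$HOSTS_FILE"
grep -qxF "$ENTRY" "$HOSTS_FILE" || echo "$ENTRY" >> "$HOSTS_FILE"   # second run is a no-op
grep -c 'gitlab.binghe.com' "$HOSTS_FILE"  # the entry appears exactly once: 1
```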

Note: on the Windows operating system, the hosts file is located in the following directory.

C:\Windows\System32\drivers\etc

Next, you can access GitLab in a browser at http://gitlab.binghe.com:30088, as shown below.

Here, you can log in to GitLab with the user name root and the password admin.1231.

Note: the user name is root rather than admin, because root is GitLab's default superuser.

The login interface is shown below.

At this point, the GitLab installation on K8S is complete.

Installing the Harbor private registry

Note: here the Harbor private registry is installed on the Master node (the binghe101 server); in a real production environment it is recommended to install it on a separate server.

1. Download the Harbor offline installer

wget https://github.com/goharbor/harbor/releases/download/v1.10.2/harbor-offline-installer-v1.10.2.tgz

2. Extract the Harbor installation package

tar -zxvf harbor-offline-installer-v1.10.2.tgz

After extraction, a harbor directory is generated in the server's current directory.

3. Configure Harbor

Note: here I change Harbor's port to 1180; if you do not modify it, the default port is 80.

(1) modify harbor.yml file

cd harbor
vim harbor.yml

The modified configuration items are as follows .

hostname: 192.168.175.101
http:
  port: 1180
harbor_admin_password: binghe123
### Also comment out the https section; otherwise installation fails with: ERROR:root:Error: The protocol is https but attribute ssl_cert is not set
#https:
  #port: 443
  #certificate: /your/certificate/path
  #private_key: /your/private/key/path

(2) Modify the daemon.json file

Modify the /etc/docker/daemon.json file (create it if it does not exist) and add the following content to it.

[root@binghe~]# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://zz3sblpi.mirror.aliyuncs.com"],
  "insecure-registries":["192.168.175.101:1180"]
}

You can also use the ip addr command on the server to list all of its IP address segments and configure them in /etc/docker/daemon.json. Here, the contents of my configured file are as follows.

{
    "registry-mirrors": ["https://zz3sblpi.mirror.aliyuncs.com"],
    "insecure-registries":["192.168.175.0/16","172.17.0.0/16", "172.18.0.0/16", "172.16.29.0/16", "192.168.175.101:1180"]
}
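
A malformed daemon.json (a stray comma, an unquoted key) prevents Docker from starting, so it is worth validating the file before restarting the daemon. A sketch using python3's built-in JSON checker against a temporary copy (jq works equally well if installed):

```shell
# Write the registry configuration and verify it parses as JSON
# before handing it to Docker.
CFG=$(mktemp)                      # stand-in for /etc/docker/daemon.json
cat > "$CFG" <<'EOF'
{
  "registry-mirrors": ["https://zz3sblpi.mirror.aliyuncs.com"],
  "insecure-registries": ["192.168.175.101:1180"]
}
EOF
python3 -m json.tool "$CFG" > /dev/null && echo "daemon.json is valid JSON"
```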

4. Install and start Harbor

When the configuration is complete, enter the following command to install and start Harbor.

[root@binghe harbor]# ./install.sh 

5. Log in to Harbor and add an account

After a successful installation, enter http://192.168.175.101:1180 in the browser address bar to open the login page, as shown in the figure below.

Enter the user name admin and password binghe123 to log in to the system, as shown in the figure below.

Next, go to user management and add an administrator account, in preparation for packaging and uploading Docker images later. The steps for adding the account are as follows.

The password here is Binghe123.

Click OK, as shown below.

At this point, the binghe account is not yet an administrator. Select the binghe account and click "Set as Administrator".

Now the binghe account is an administrator, and the Harbor installation is complete.
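With the registry running, images pushed to Harbor must be tagged as registry/project/image:tag. A sketch of how the target name is built (library is Harbor's default public project; the nginx image here is only an example, not part of the original setup):

```shell
# Build the fully qualified image name for the private registry.
REGISTRY='192.168.175.101:1180'
PROJECT='library'                  # Harbor's default public project
IMAGE='nginx:1.17'                 # example image name
TARGET="$REGISTRY/$PROJECT/$IMAGE"
echo "$TARGET"                     # 192.168.175.101:1180/library/nginx:1.17
# In real use, against the running registry:
#   docker login "$REGISTRY" -u binghe -p Binghe123
#   docker tag nginx:1.17 "$TARGET"
#   docker push "$TARGET"
```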

6. Modify the Harbor port

If you need to change Harbor's port after installation, you can do so as follows; here I take changing port 80 to 1180 as an example.

(1) modify harbor.yml file

cd harbor
vim harbor.yml

The modified configuration items are as follows .

hostname: 192.168.175.101
http:
  port: 1180
harbor_admin_password: binghe123
### Also comment out the https section; otherwise installation fails with: ERROR:root:Error: The protocol is https but attribute ssl_cert is not set
#https:
  #port: 443
  #certificate: /your/certificate/path
  #private_key: /your/private/key/path

(2) modify docker-compose.yml file

vim docker-compose.yml

The modified configuration items are as follows .

ports:
      - 1180:80

(3) modify config.yml file

cd common/config/registry
vim config.yml

The modified configuration items are as follows .

realm: http://192.168.175.101:1180/service/token

(4) restart Docker

systemctl daemon-reload
systemctl restart docker.service

(5) restart Harbor

[root@binghe harbor]# docker-compose down
Stopping harbor-log ... done
Removing nginx             ... done
Removing harbor-portal     ... done
Removing harbor-jobservice ... done
Removing harbor-core       ... done
Removing redis             ... done
Removing registry          ... done
Removing registryctl       ... done
Removing harbor-db         ... done
Removing harbor-log        ... done
Removing network harbor_harbor
 
[root@binghe harbor]# ./prepare
prepare base dir is set to /mnt/harbor
Clearing the configuration file: /config/log/logrotate.conf
Clearing the configuration file: /config/nginx/nginx.conf
Clearing the configuration file: /config/core/env
Clearing the configuration file: /config/core/app.conf
Clearing the configuration file: /config/registry/root.crt
Clearing the configuration file: /config/registry/config.yml
Clearing the configuration file: /config/registryctl/env
Clearing the configuration file: /config/registryctl/config.yml
Clearing the configuration file: /config/db/env
Clearing the configuration file: /config/jobservice/env
Clearing the configuration file: /config/jobservice/config.yml
Generated configuration file: /config/log/logrotate.conf
Generated configuration file: /config/nginx/nginx.conf
Generated configuration file: /config/core/env
Generated configuration file: /config/core/app.conf
Generated configuration file: /config/registry/config.yml
Generated configuration file: /config/registryctl/env
Generated configuration file: /config/db/env
Generated configuration file: /config/jobservice/env
Generated configuration file: /config/jobservice/config.yml
loaded secret from file: /secret/keys/secretkey
Generated configuration file: /compose_location/docker-compose.yml
Clean up the input dir
 
[root@binghe harbor]# docker-compose up -d
Creating network "harbor_harbor" with the default driver
Creating harbor-log ... done
Creating harbor-db   ... done
Creating redis       ... done
Creating registry    ... done
Creating registryctl ... done
Creating harbor-core ... done
Creating harbor-jobservice ... done
Creating harbor-portal     ... done
Creating nginx             ... done
 
[root@binghe harbor]# docker ps -a
CONTAINER ID        IMAGE                                               COMMAND                  CREATED             STATUS                             PORTS

Installing Jenkins (general practice)

1. Install nfs (skip this step if nfs is already installed)

The biggest problem with using nfs is write permissions. One option is to use Kubernetes' securityContext/runAsUser to specify the uid of the jenkins user inside the jenkins container, and then set the permissions on the nfs directory so that the jenkins container can write to it; another is to simply let all users write. For simplicity, here we let all users write.

If nfs has already been installed, this step can be skipped. Find a host and install nfs on it; here I take installing nfs on the Master node (the binghe101 server) as an example.

On the command line, enter the following command to install and start nfs.

yum install nfs-utils -y
systemctl start nfs-server
systemctl enable nfs-server

2. Create the nfs shared directory

On the Master node (the binghe101 server), create the /opt/nfs/jenkins-data directory as the nfs shared directory, as shown below.

mkdir -p /opt/nfs/jenkins-data

Next , edit /etc/exports file , As shown below .

vim /etc/exports

Add the following line to the /etc/exports file.

/opt/nfs/jenkins-data 192.168.175.0/24(rw,all_squash)

The IP range here should be the IP range of the Kubernetes nodes. The all_squash option maps every accessing user to the nfsnobody user: no matter which user accesses the share, it is squashed to nfsnobody. Therefore, as long as the owner of /opt/nfs/jenkins-data is changed to nfsnobody, any accessing user has write permission.

This option is useful when, because of non-uniform uids across machines, processes are started by different users but all of them need write access to the shared directory.

Next, grant permissions on the /opt/nfs/jenkins-data directory and reload nfs, as shown below.

chown -R 1000 /opt/nfs/jenkins-data/
systemctl reload nfs-server

Verify the export with the following command on any node in the K8S cluster:

showmount -e NFS_IP

If you can see /opt/nfs/jenkins-data in the output, everything is fine.

The details are as follows.

[root@binghe101 ~]# showmount -e 192.168.175.101
Export list for 192.168.175.101:
/opt/nfs/jenkins-data 192.168.175.0/24

[root@binghe102 ~]# showmount -e 192.168.175.101
Export list for 192.168.175.101:
/opt/nfs/jenkins-data 192.168.175.0/24

3. Create a PV

Jenkins can actually recover its previous data as long as the corresponding directory is mounted, but because a Deployment cannot define volumeClaimTemplates, a StatefulSet has to be used.

First, create a pv. The pv is used by the StatefulSet: every time the StatefulSet starts, it creates a pvc from the volumeClaimTemplates template, so a pv must already exist for the pvc to bind to.

Create a jenkins-pv.yaml file with the following contents.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins
spec:
  nfs:
    path: /opt/nfs/jenkins-data
    server: 192.168.175.101
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 1Ti

Here I allocated 1Ti of storage; configure it according to your actual situation.

Execute the following command to create pv.

kubectl apply -f jenkins-pv.yaml 

4. Create a serviceAccount

Create a service account: because jenkins needs to be able to dynamically create slaves, it must have the corresponding permissions.

Create a jenkins-service-account.yaml file with the following contents.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkins
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
  - kind: ServiceAccount
    name: jenkins

The configuration above creates a ServiceAccount, a Role, and a RoleBinding that binds the Role to the ServiceAccount. Therefore, the jenkins container must run under this ServiceAccount; otherwise the RoleBinding has no effect.

The Role is easy to understand: jenkins needs to create and delete slave pods, hence the permissions above. As for the secrets permission, it is for the https certificate.

Execute the following command to create serviceAccount.

kubectl apply -f jenkins-service-account.yaml 

5. Install Jenkins

Create a jenkins-statefulset.yaml file with the following contents.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jenkins
  labels:
    name: jenkins
spec:
  selector:
    matchLabels:
      name: jenkins
  serviceName: jenkins
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      name: jenkins
      labels:
        name: jenkins
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: jenkins
      containers:
        - name: jenkins
          image: docker.io/jenkins/jenkins:lts
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
            - containerPort: 32100
          resources:
            limits:
              cpu: 4
              memory: 4Gi
            requests:
              cpu: 4
              memory: 4Gi
          env:
            - name: LIMITS_MEMORY
              valueFrom:
                resourceFieldRef:
                  resource: limits.memory
                  divisor: 1Mi
            - name: JAVA_OPTS
              # value: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1 -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
              value: -Xmx$(LIMITS_MEMORY)m -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
          livenessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 12 # ~2 minutes
          readinessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 12 # ~2 minutes
  # pvc template, corresponding to the pv created earlier
  volumeClaimTemplates:
    - metadata:
        name: jenkins-home
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Ti

Note the replica count when deploying jenkins: you need as many pvs as there are replicas, and storage is consumed accordingly. Here I use only one replica, which is why only one pv was created earlier.
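
The JAVA_OPTS value in the StatefulSet sizes the JVM heap from the container's memory limit: the Downward API exposes limits.memory (4Gi here) divided by the 1Mi divisor as LIMITS_MEMORY, and Kubernetes substitutes it into -Xmx. A sketch of how the value resolves:

```shell
# limits.memory is 4Gi; with divisor 1Mi the Downward API yields 4096.
LIMITS_MEMORY=$((4 * 1024))
echo "-Xmx${LIMITS_MEMORY}m -XshowSettings:vm"   # -Xmx4096m -XshowSettings:vm
```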

Use the following command to install Jenkins.

kubectl apply -f jenkins-statefulset.yaml 

6. Create a Service

Create a jenkins-service.yaml file with the following contents.

apiVersion: v1
kind: Service
metadata:
  name: jenkins
spec:
  # type: LoadBalancer
  selector:
    name: jenkins
  # ensure the client ip is propagated to avoid the invalid crumb issue when using LoadBalancer (k8s >=1.7)
  #externalTrafficPolicy: Local
  ports:
    - name: http
      port: 80
      nodePort: 31888
      targetPort: 8080
      protocol: TCP
    - name: jenkins-agent
      port: 32100
      nodePort: 32100
      targetPort: 32100
      protocol: TCP
  type: NodePort

Use the following command to install Service.

kubectl apply -f jenkins-service.yaml 

7. Install the ingress

Jenkins' web interface needs to be accessible from outside the cluster, so here we use an ingress. Create a jenkins-ingress.yaml file with the following contents.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: jenkins
              servicePort: 31888
      host: jekins.binghe.com

Note that host must be configured as a domain name or host name; otherwise an error is reported, as shown below.

The Ingress "jenkins" is invalid: spec.rules[0].host: Invalid value: "192.168.175.101": must be a DNS name, not an IP address

Use the following command to install ingress.

kubectl apply -f jenkins-ingress.yaml 

Finally, because I use virtual machines to build this environment, when accessing the mapped jekins.binghe.com from the local machine, you need to configure the local hosts file by adding the following entry to it.

192.168.175.101 jekins.binghe.com

Note: on the Windows operating system, the hosts file is located in the following directory.

C:\Windows\System32\drivers\etc

Next, you can access Jenkins in a browser at http://jekins.binghe.com:31888.

Installing SVN on the physical machine

Here, I take installing SVN on the Master node (the binghe101 server) as an example.

1. Install SVN with yum

Execute the following command on the command line to install SVN.

yum -y install subversion 

2. Create the SVN repository

Execute the following commands in turn.

# create the /data/svn directory
mkdir -p /data/svn 
# start svnserve with /data/svn as its root
svnserve -d -r /data/svn
# create the code repository
svnadmin create /data/svn/test

3. To configure SVN

mkdir /data/svn/conf
cp /data/svn/test/conf/* /data/svn/conf/
cd /data/svn/conf/
[root@binghe101 conf]# ll
total 20
-rw-r--r-- 1 root root 1080 May 12 02:17 authz
-rw-r--r-- 1 root root  885 May 12 02:17 hooks-env.tmpl
-rw-r--r-- 1 root root  309 May 12 02:17 passwd
-rw-r--r-- 1 root root 4375 May 12 02:17 svnserve.conf

Configure the authz file.

vim authz

The content after configuration is as follows .

[aliases]
# joe = /C=XZ/ST=Dessert/L=Snake City/O=Snake Oil, Ltd./OU=Research Institute/CN=Joe Average

[groups]
# harry_and_sally = harry,sally
# harry_sally_and_joe = harry,sally,&joe
SuperAdmin = admin
binghe = admin,binghe

# [/foo/bar]
# harry = rw
# &joe = r
# * =

# [repository:/baz/fuz]
# @harry_and_sally = rw
# * = r

[test:/]
@SuperAdmin=rw
@binghe=rw
Configure the passwd file.

vim passwd

The content after configuration is as follows .

[users]
# harry = harryssecret
# sally = sallyssecret
admin = admin123
binghe = binghe123
Configure svnserve.conf.

vim svnserve.conf

The configured file is as follows .

### This file controls the configuration of the svnserve daemon, if you
### use it to allow access to this repository.  (If you only allow
### access through http: and/or file: URLs, then this file is
### irrelevant.)

### Visit http://subversion.apache.org/ for more information.

[general]
### The anon-access and auth-access options control access to the
### repository for unauthenticated (a.k.a. anonymous) users and
### authenticated users, respectively.
### Valid values are "write", "read", and "none".
### Setting the value to "none" prohibits both reading and writing;
### "read" allows read-only access, and "write" allows complete 
### read/write access to the repository.
### The sample settings below are the defaults and specify that anonymous
### users have read-only access to the repository, while authenticated
### users have read and write access to the repository.
anon-access = none
auth-access = write
### The password-db option controls the location of the password
### database file.  Unless you specify a path starting with a /,
### the file's location is relative to the directory containing
### this configuration file.
### If SASL is enabled (see below), this file will NOT be used.
### Uncomment the line below to use the default password file.
password-db = /data/svn/conf/passwd
### The authz-db option controls the location of the authorization
### rules for path-based access control.  Unless you specify a path
### starting with a /, the file's location is relative to the
### directory containing this file.  The specified path may be a
### repository relative URL (^/) or an absolute file:// URL to a text
### file in a Subversion repository.  If you don't specify an authz-db,
### no path-based access control is done.
### Uncomment the line below to use the default authorization file.
authz-db = /data/svn/conf/authz
### The groups-db option controls the location of the file with the
### group definitions and allows maintaining groups separately from the
### authorization rules.  The groups-db file is of the same format as the
### authz-db file and should contain a single [groups] section with the
### group definitions.  If the option is enabled, the authz-db file cannot
### contain a [groups] section.  Unless you specify a path starting with
### a /, the file's location is relative to the directory containing this
### file.  The specified path may be a repository relative URL (^/) or an
### absolute file:// URL to a text file in a Subversion repository.
### This option is not being used by default.
# groups-db = groups
### This option specifies the authentication realm of the repository.
### If two repositories have the same authentication realm, they should
### have the same password database, and vice versa.  The default realm
### is repository's uuid.
realm = svn
### The force-username-case option causes svnserve to case-normalize
### usernames before comparing them against the authorization rules in the
### authz-db file configured above.  Valid values are "upper" (to upper-
### case the usernames), "lower" (to lowercase the usernames), and
### "none" (to compare usernames as-is without case conversion, which
### is the default behavior).
# force-username-case = none
### The hooks-env options specifies a path to the hook script environment 
### configuration file. This option overrides the per-repository default
### and can be used to configure the hook script environment for multiple 
### repositories in a single file, if an absolute path is specified.
### Unless you specify an absolute path, the file's location is relative
### to the directory containing this file.
# hooks-env = hooks-env

[sasl]
### This option specifies whether you want to use the Cyrus SASL
### library for authentication. Default is false.
### Enabling this option requires svnserve to have been built with Cyrus
### SASL support; to check, run 'svnserve --version' and look for a line
### reading 'Cyrus SASL authentication is available.'
# use-sasl = true
### These options specify the desired strength of the security layer
### that you want SASL to provide. 0 means no encryption, 1 means
### integrity-checking only, values larger than 1 are correlated
### to the effective key length for encryption (e.g. 128 means 128-bit
### encryption). The values below are the defaults.
# min-encryption = 0
# max-encryption = 256

Next, copy svnserve.conf from the /data/svn/conf directory to the /data/svn/test/conf/ directory, as shown below.

[root@binghe101 conf]# cp /data/svn/conf/svnserve.conf /data/svn/test/conf/
cp: overwrite '/data/svn/test/conf/svnserve.conf'? y

4. Start the SVN service

(1) Create the svnserve.service unit

Create the svnserve.service file.

vim /usr/lib/systemd/system/svnserve.service

The contents of the document are as follows .

[Unit]
Description=Subversion protocol daemon
After=syslog.target network.target
Documentation=man:svnserve(8)

[Service]
Type=forking
EnvironmentFile=/etc/sysconfig/svnserve
#ExecStart=/usr/bin/svnserve --daemon --pid-file=/run/svnserve/svnserve.pid $OPTIONS
ExecStart=/usr/bin/svnserve --daemon $OPTIONS
PrivateTmp=yes

[Install]
WantedBy=multi-user.target

Next, execute the following command to make the configuration take effect .

systemctl daemon-reload

After the command succeeds, modify the /etc/sysconfig/svnserve file.

vim /etc/sysconfig/svnserve 

The contents of the modified document are as follows .

# OPTIONS is used to pass command-line arguments to svnserve.
# 
# Specify the repository location in -r parameter:
OPTIONS="-r /data/svn"

(2) Start SVN

First, check SVN's status, as shown below.

[root@itence10 conf]# systemctl status svnserve.service
● svnserve.service - Subversion protocol daemon
   Loaded: loaded (/usr/lib/systemd/system/svnserve.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
     Docs: man:svnserve(8)

You can see that SVN has not been started yet; next, start it.

systemctl start svnserve.service

Set the SVN service to start automatically on boot.

systemctl enable svnserve.service

Next, you can download and install TortoiseSVN, enter the URL svn://192.168.0.10/test, and connect to SVN with the user name binghe and the password binghe123.
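
The same connection can also be made from the command line with the svn client; a sketch that only assembles the checkout command for the repository configured above (the commented line shows the real invocation against the running svnserve):

```shell
# Build the checkout command for the test repository.
SVN_URL='svn://192.168.0.10/test'
SVN_USER='binghe'
echo "svn checkout $SVN_URL --username $SVN_USER"
# In real use, against the running svnserve:
#   svn checkout "$SVN_URL" --username binghe --password binghe123 --non-interactive
```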

Installing Jenkins on the physical machine

Note: JDK and Maven must be installed before installing Jenkins. Here, Jenkins is also installed on the Master node (the binghe101 server).

1. Enable the Jenkins repository

Run the following commands to download the repo file and import the GPG key:

wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo
rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key

2. Install Jenkins

Execute the following command to install Jenkins.

yum install jenkins

Next, modify the default Jenkins port, as shown below.

vim /etc/sysconfig/jenkins

The two modified configuration items are as follows.

JENKINS_JAVA_CMD="/usr/local/jdk1.8.0_212/bin/java"
JENKINS_PORT="18080"

Here, the Jenkins port has been changed from 8080 to 18080.

3. Start Jenkins

Enter the following command on the command line to start Jenkins.

systemctl start jenkins

Configure Jenkins to start on boot.

systemctl enable jenkins

Check the Jenkins running status.

[root@itence10 ~]# systemctl status jenkins
● jenkins.service - LSB: Jenkins Automation Server
   Loaded: loaded (/etc/rc.d/init.d/jenkins; generated)
   Active: active (running) since Tue 2020-05-12 04:33:40 EDT; 28s ago
     Docs: man:systemd-sysv-generator(8)
    Tasks: 71 (limit: 26213)
   Memory: 550.8M

This indicates that Jenkins started successfully.

Configuring the Jenkins running environment

1. Log in to Jenkins

After the first installation, the Jenkins running environment needs to be configured. First, visit http://192.168.0.10:18080 in the browser address bar to open the Jenkins interface.

As prompted, use the following command on the server to find the initial admin password, as shown below.

[root@binghe101 ~]# cat /var/lib/jenkins/secrets/initialAdminPassword
71af861c2ab948a1b6efc9f7dde90776

Copy the password 71af861c2ab948a1b6efc9f7dde90776 into the text box and click Continue. Jenkins will jump to the Customize Jenkins page, as shown below.

Here, you can choose "Install suggested plugins". Jenkins will then jump to a plug-in installation page, as shown below.

Some plug-in downloads may fail at this step; such failures can simply be ignored.

2. Install plug-ins

The plug-ins that need to be installed:

  • Kubernetes Cli Plugin: allows kubernetes command-line operations directly in Jenkins.
  • Kubernetes plugin: required in order to use Kubernetes.
  • Kubernetes Continuous Deploy Plugin: a Kubernetes deployment plug-in; use it as needed.

More plug-ins are available: click System Management -> Manage Plugins to manage and add them, and install the corresponding Docker, SSH, and Maven plug-ins there. Other plug-ins can be installed as needed, as shown in the figure below.

3. Configure Jenkins

(1) Configure JDK and Maven

Configure JDK and Maven in Global Tool Configuration. Open the Global Tool Configuration interface, as shown below.

Then configure JDK and Maven.

Because Maven is installed under /usr/local/maven-3.6.3 on the server, this path needs to be filled in under "Maven Configuration", as shown in the figure below.

Next, configure JDK, as shown below.

Note: do not check "Install automatically".

Next, configure Maven, as shown below.

Note: do not check "Install automatically".

(2) Configure SSH

Go to the Jenkins Configure System interface to configure SSH, as shown below.

Find the SSH remote hosts section and configure it.

When the configuration is complete, click the Check connection button; "Successfull connection" will be displayed, as shown below.

At this point, the basic configuration of Jenkins is complete.

Publishing the Docker project to the K8S cluster with Jenkins

1. Adjust the SpringBoot project configuration

The pom.xml of the module containing the SpringBoot startup class needs the configuration for packaging into a Docker image, as shown below.

  <properties>
     <docker.repostory>192.168.0.10:1180</docker.repostory>
        <docker.registry.name>test</docker.registry.name>
        <docker.image.tag>1.0.0</docker.image.tag>
        <docker.maven.plugin.version>1.4.10</docker.maven.plugin.version>
  </properties>

<build>
    <finalName>test-starter</finalName>
  <plugins>
            <plugin>
       <groupId>org.springframework.boot</groupId>
       <artifactId>spring-boot-maven-plugin</artifactId>
   </plugin>
   
   <!-- Docker Maven plug-in; official site: https://github.com/spotify/docker-maven-plugin -->
   <!-- Dockerfile maven plugin -->
   <plugin>
       <groupId>com.spotify</groupId>
       <artifactId>dockerfile-maven-plugin</artifactId>
       <version>${docker.maven.plugin.version}</version>
       <executions>
           <execution>
           <id>default</id>
           <goals>
               <!-- Comment out this goal if you do not want mvn package to build the docker image -->
               <goal>build</goal>
               <goal>push</goal>
           </goals>
           </execution>
       </executions>
       <configuration>
        <contextDirectory>${project.basedir}</contextDirectory>
           <!-- Harbor registry username and password (read from Maven settings.xml) -->
           <useMavenSettingsForAuth>true</useMavenSettingsForAuth>
           <repository>${docker.repostory}/${docker.registry.name}/${project.artifactId}</repository>
           <tag>${docker.image.tag}</tag>
           <buildArgs>
               <JAR_FILE>target/${project.build.finalName}.jar</JAR_FILE>
           </buildArgs>
       </configuration>
   </plugin>

        </plugins>
        
  <resources>
   <!-- Treat all files and folders under src/main/resources as resource files -->
   <resource>
    <directory>src/main/resources</directory>
    <targetPath>${project.build.directory}/classes</targetPath>
    <includes>
     <include>**/*</include>
    </includes>
    <filtering>true</filtering>
   </resource>
  </resources>
 </build>
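With the properties above, the dockerfile-maven-plugin assembles the pushed image coordinate from the repository and tag settings. The sketch below only shows how the name is composed; the artifactId test-starter is an assumption taken from the finalName value.

```shell
# Sketch: how the pushed image coordinate is assembled from the pom properties.
DOCKER_REPOSTORY="192.168.0.10:1180"   # <docker.repostory>
REGISTRY_NAME="test"                   # <docker.registry.name>
ARTIFACT_ID="test-starter"             # assumed project artifactId
IMAGE_TAG="1.0.0"                      # <docker.image.tag>

image_name() {
  echo "${DOCKER_REPOSTORY}/${REGISTRY_NAME}/${ARTIFACT_ID}:${IMAGE_TAG}"
}

image_name   # → 192.168.0.10:1180/test/test-starter:1.0.0
```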

Next, create a Dockerfile in the root directory of the module containing the SpringBoot startup class. An example is shown below.

# Base image: assumes the official Java 8 Docker image has been pulled from
# the official registry and pushed to your own Harbor private registry
FROM 192.168.0.10:1180/library/java:8
# Image maintainer
MAINTAINER binghe
# Mount point for temporary files
VOLUME /tmp
# Copy the built jar from the local target directory into the container
ADD target/*.jar app.jar
# Command executed automatically after the container starts
ENTRYPOINT [ "java", "-Djava.security.egd=file:/dev/./urandom", "-jar", "/app.jar" ]

Modify these values according to your actual environment.

Note: FROM 192.168.0.10:1180/library/java:8 assumes that the following commands have been executed beforehand.

docker pull java:8
docker tag java:8 192.168.0.10:1180/library/java:8
docker login 192.168.0.10:1180
docker push 192.168.0.10:1180/library/java:8

Create a YAML file, here named test.yaml, in the root directory of the module containing the SpringBoot startup class, with the following contents.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-starter
  labels:
    app: test-starter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-starter
  template:
    metadata:
      labels:
        app: test-starter
    spec:
      containers:
      - name: test-starter
        image: 192.168.0.10:1180/test/test-starter:1.0.0
        ports:
        - containerPort: 8088
      nodeSelector:
        clustertype: node12

---
apiVersion: v1
kind: Service
metadata:
  name: test-starter
  labels:
    app: test-starter
spec:
  ports:
    - name: http
      port: 8088
      nodePort: 30001
  type: NodePort
  selector:
    app: test-starter

2. Configure the publishing project in Jenkins

Upload the project to the SVN repository, for example at svn://192.168.0.10/test.

Next, configure automatic publishing in Jenkins. The steps are as follows.

Click New Item.

Enter the description information in the description text box, as shown below.

Next, configure the SVN information.

Note: configuring GitLab is the same as configuring SVN, so it will not be repeated here.

Find the Jenkins "Build" section and use Execute Shell to build and publish the project to the K8S cluster.

The commands to execute are as follows.

# Delete the original local image; this does not affect the image in the Harbor registry
docker rmi 192.168.0.10:1180/test/test-starter:1.0.0
# Compile and build the Docker image with Maven; after this completes, the local Docker image is rebuilt
/usr/local/maven-3.6.3/bin/mvn -f ./pom.xml clean install -Dmaven.test.skip=true
# Log in to the Harbor registry
docker login 192.168.0.10:1180 -u binghe -p Binghe123
# Push the image to the Harbor registry
docker push 192.168.0.10:1180/test/test-starter:1.0.0
# Stop and delete the version running in the K8S cluster
/usr/bin/kubectl delete -f test.yaml
# Republish the Docker image to the K8S cluster
/usr/bin/kubectl apply -f test.yaml
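The steps above can also be wrapped in a small script with basic error handling. This is only a sketch: the DRY_RUN switch is an illustration-only addition (enabled by default here) that echoes each step instead of executing it, so the flow can be inspected without Docker or kubectl present.

```shell
IMAGE="192.168.0.10:1180/test/test-starter:1.0.0"
REGISTRY="192.168.0.10:1180"

# Echo the command when DRY_RUN=1 (the default here, for safe inspection);
# otherwise execute it.
run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

deploy() {
  # The old local image may not exist on a fresh node; ignore that failure.
  run docker rmi "${IMAGE}" || true
  run /usr/local/maven-3.6.3/bin/mvn -f ./pom.xml clean install -Dmaven.test.skip=true
  run docker login "${REGISTRY}" -u binghe -p Binghe123
  run docker push "${IMAGE}"
  run /usr/bin/kubectl delete -f test.yaml
  run /usr/bin/kubectl apply -f test.yaml
}

deploy
```

Setting DRY_RUN=0 before calling deploy runs the real commands in the same order as the Execute Shell steps.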

Okay, that's all for today. I'm Glacier, see you next time~~

This article is from the WeChat official account Glacier Technology (hacker-binghe).


Original publication time: 2020-11-30


Copyright notice
This article was created by [glacier]. Please include the original link when reprinting. Thank you.
https://cdmana.com/2020/12/20201224160737125m.html
