k8s (Kubernetes) Installation and Deployment Example

Environment

  • Ubuntu: 18.04
  • Docker: 20+
  • k8s: 1.21.3

Two hosts, one master and one slave:

  • ubuntu01-master 192.168.2.123
  • ubuntu02-slave 192.168.2.124

Note: each node needs at least 2 CPUs and 2 GB of RAM.
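kubeadm's preflight checks enforce roughly this minimum (2 CPUs and about 1700 MB of RAM), so it is worth checking up front; a small self-check sketch:

```shell
#!/bin/sh
# Preflight sketch: kubeadm init refuses to run with fewer than 2 CPUs
# or less than ~1700 MB of RAM, so check both before starting.
cpus=$(nproc)
mem_mb=$(awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo)
echo "CPUs: ${cpus}, RAM: ${mem_mb} MB"
if [ "$cpus" -ge 2 ] && [ "$mem_mb" -ge 1700 ]; then
  echo "resources OK"
else
  echo "below the kubeadm minimum"
fi
```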

--------------------------------------------- The following steps must be run on BOTH the master and slave nodes ------------------------------------

Install Docker (version 20+)

apt-get update
apt install docker.io

System initialization settings

# Disable the firewall
ufw disable

# Disable SELinux (Ubuntu uses AppArmor by default, so this may be a no-op)
sudo apt install selinux-utils
setenforce 0

# Turn off swap for the current boot
swapoff -a

# Then comment out the swap line so it stays off across reboots
sudo vim /etc/fstab
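Instead of editing /etc/fstab by hand, the swap line can be commented out non-interactively. A sketch, demonstrated on a sample copy; for the real file, run the same sed with sudo and a backup suffix:

```shell
#!/bin/sh
# Demonstrated on a sample file; for the real thing:
#   sudo sed -i.bak '/swap/ s/^/#/' /etc/fstab
# (comments out any line containing the word "swap")
printf 'UUID=abcd / ext4 defaults 0 1\n/swapfile none swap sw 0 0\n' > /tmp/fstab.sample
sed -i '/swap/ s/^/#/' /tmp/fstab.sample
cat /tmp/fstab.sample
```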

# Pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# Apply the settings
sysctl --system

Configure the k8s apt repository

apt install curl

curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -

echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

apt-get update

Install the NFS client

apt-get install nfs-common

Install kubeadm (bootstraps the cluster), kubelet (runs pods), and kubectl (the k8s command-line tool)

apt install -y kubelet=1.21.3-00 kubeadm=1.21.3-00 kubectl=1.21.3-00

Enable kubelet at boot and start it

systemctl enable kubelet 
systemctl start kubelet

--------------------------------------------- End of the steps common to both master and slave nodes ------------------------------------

------------------------------------------------- The following steps run on the master node only -------------------------------------------

The required images are listed below; you can also docker pull them ahead of time:

------------- k8s component images ------------
kube-apiserver:v1.21.3
kube-controller-manager:v1.21.3
kube-scheduler:v1.21.3
kube-proxy:v1.21.3
pause:3.2
etcd:3.4.13-0
coredns:v1.8.0

--------------- images the flannel network plugin needs later --------
flannel-cni-plugin:v1.2.0
flannel:v0.23.0

Create a k8s.sh script to pull the images in bulk

vim k8s.sh

Contents:

#!/bin/bash
images=(
 kube-apiserver:v1.21.3
 kube-controller-manager:v1.21.3
 kube-scheduler:v1.21.3
 kube-proxy:v1.21.3
 pause:3.2
 etcd:3.4.13-0
)
for imageName in ${images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/${imageName}
  docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/${imageName} k8s.gcr.io/${imageName}
  docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/${imageName}
done
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.8.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.8.0 registry.aliyuncs.com/google_containers/coredns:v1.8.0
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.8.0

Save, then make the script executable and run it:

# Add execute permission to k8s.sh
chmod +x k8s.sh

# Run k8s.sh
./k8s.sh
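After the script finishes, each component image should exist locally under its k8s.gcr.io name. A verification sketch (requires docker; prints MISS for anything the pull failed on):

```shell
#!/bin/sh
# Check that each retagged image from the script above is present locally.
status=""
for img in kube-apiserver:v1.21.3 kube-controller-manager:v1.21.3 \
           kube-scheduler:v1.21.3 kube-proxy:v1.21.3 pause:3.2 etcd:3.4.13-0; do
  if docker image inspect "k8s.gcr.io/${img}" >/dev/null 2>&1; then
    status="${status}OK   ${img}\n"
  else
    status="${status}MISS ${img}\n"
  fi
done
printf "%b" "$status"
```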


Initialize the master

kubeadm init --image-repository=registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
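The --pod-network-cidr chosen here must match the "Network" value inside flannel's net-conf.json later on (10.244.0.0/16 in both places), or the flannel pods will fail. A quick consistency-check sketch, assuming kube-flannel.yml sits in the current directory:

```shell
#!/bin/sh
# Compare the CIDR given to kubeadm with the one inside kube-flannel.yml.
POD_CIDR="10.244.0.0/16"
YML="kube-flannel.yml"   # assumed path; adjust to where you saved the manifest
if [ -f "$YML" ]; then
  found=$(grep -o '"Network": "[^"]*"' "$YML")
else
  found="\"Network\": \"${POD_CIDR}\" (file not found, expected value shown)"
fi
echo "kubeadm: ${POD_CIDR}"
echo "flannel: ${found}"
```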

When initialization finishes and the output ends with "successfully!", the installation worked, as in the screenshot below.

(screenshot: successful kubeadm init output)

Take note of two things printed at the end of the successful output:

First, the kubeconfig setup commands (run these on the master):

# Run the commands from the printed output
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Second, the join command (run this on ubuntu02-slave):

Running the printed join command on ubuntu02-slave is all it takes to join it to the master.

# Run on the slave node
kubeadm join 192.168.2.123:6443 --token mqgb74.87li3ca2w48y21lw \
        --discovery-token-ca-cert-hash sha256:7d4159e6dc60fb50b27bdbc58225a7cc51aedadc10ba2125108e9577d67ebc29

  • If you did not record it, regenerate it on the master with: kubeadm token create --print-join-command
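If only the token was saved but not the hash, the --discovery-token-ca-cert-hash value can also be recomputed from the cluster CA with the standard openssl recipe from the kubeadm docs. Sketched here against a throwaway self-signed certificate; on the master, point CERT at /etc/kubernetes/pki/ca.crt instead:

```shell
#!/bin/sh
# Recompute the CA public-key hash used by kubeadm join.
# CERT is a demo certificate here; use /etc/kubernetes/pki/ca.crt on the master.
CERT=/tmp/demo-ca.crt
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out "$CERT" -subj "/CN=kubernetes" -days 1 2>/dev/null
HASH=$(openssl x509 -pubkey -in "$CERT" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:${HASH}"
```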

Install the flannel network plugin

Pull the images flannel needs (on all nodes):

1. Pull them with docker pull, or download them elsewhere:
	flannel-cni-plugin:v1.2.0
	flannel:v0.23.0

2. Save the images as tar archives:
    docker save -o flannel-cni-plugin_v1.2.0.tar flannel/flannel-cni-plugin:v1.2.0
    docker save -o flannel_v0.23.0.tar flannel/flannel:v0.23.0

3. Import the tar archives into containerd's k8s.io namespace, so the kube-flannel.yml below can reference the image paths directly; otherwise the pods fail with Init:ErrImagePull:
	sudo ctr -n k8s.io images import flannel-cni-plugin_v1.2.0.tar
	sudo ctr -n k8s.io images import flannel_v0.23.0.tar

4. Verify the import succeeded and note the image paths, which must match those in kube-flannel.yml:
	sudo ctr -n k8s.io i check | grep flannel


Download kube-flannel.yml, and edit it so its IP configuration matches what was used at kubeadm init above and its image paths match the locally imported images:

wget http://www.chenguanghu.com:88/k8s/kube-flannel.yml

  • If the download fails, open the URL in a browser, copy the contents into a new local kube-flannel.yml, then run sudo kubectl apply -f kube-flannel.yml

  • The full kube-flannel.yml follows; note again that its IP configuration must stay consistent with the kubeadm init settings

    ---
    kind: Namespace
    apiVersion: v1
    metadata:
      name: kube-flannel
      labels:
        k8s-app: flannel
        pod-security.kubernetes.io/enforce: privileged
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      labels:
        k8s-app: flannel
      name: flannel
    rules:
    - apiGroups:
      - ""
      resources:
      - pods
      verbs:
      - get
    - apiGroups:
      - ""
      resources:
      - nodes
      verbs:
      - get
      - list
      - watch
    - apiGroups:
      - ""
      resources:
      - nodes/status
      verbs:
      - patch
    - apiGroups:
      - networking.k8s.io
      resources:
      - clustercidrs
      verbs:
      - list
      - watch
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      labels:
        k8s-app: flannel
      name: flannel
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: flannel
    subjects:
    - kind: ServiceAccount
      name: flannel
      namespace: kube-flannel
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      labels:
        k8s-app: flannel
      name: flannel
      namespace: kube-flannel
    ---
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: kube-flannel-cfg
      namespace: kube-flannel
      labels:
        tier: node
        k8s-app: flannel
        app: flannel
    data:
      cni-conf.json: |
        {
          "name": "cbr0",
          "cniVersion": "0.3.1",
          "plugins": [
            {
              "type": "flannel",
              "delegate": {
                "hairpinMode": true,
                "isDefaultGateway": true
              }
            },
            {
              "type": "portmap",
              "capabilities": {
                "portMappings": true
              }
            }
          ]
        }
      net-conf.json: |
        {
          "Network": "10.244.0.0/16",
          "Backend": {
            "Type": "vxlan"
          }
        }
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds
      namespace: kube-flannel
      labels:
        tier: node
        app: flannel
        k8s-app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                    - linux
          hostNetwork: true
          priorityClassName: system-node-critical
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni-plugin
            image: docker.io/flannel/flannel-cni-plugin:v1.2.0
            command:
            - cp
            args:
            - -f
            - /flannel
            - /opt/cni/bin/flannel
            volumeMounts:
            - name: cni-plugin
              mountPath: /opt/cni/bin
          - name: install-cni
            image: docker.io/flannel/flannel:v0.23.0
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
            image: docker.io/flannel/flannel:v0.23.0
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: false
              capabilities:
                add: ["NET_ADMIN", "NET_RAW"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: EVENT_QUEUE_DEPTH
              value: "5000"
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
            - name: xtables-lock
              mountPath: /run/xtables.lock
          volumes:
          - name: run
            hostPath:
              path: /run/flannel
          - name: cni-plugin
            hostPath:
              path: /opt/cni/bin
          - name: cni
            hostPath:
              path: /etc/cni/net.d
          - name: flannel-cfg
            configMap:
              name: kube-flannel-cfg
          - name: xtables-lock
            hostPath:
              path: /run/xtables.lock
              type: FileOrCreate
    
    

    After saving, apply it (a YAML manifest does not need execute permission):

    sudo kubectl apply -f kube-flannel.yml
    

    Check the component status

    kubectl get cs
    

    Fixing an Unhealthy component status

    # In /etc/kubernetes/manifests, comment out the "- --port=0" line in both
    # kube-controller-manager.yaml and kube-scheduler.yaml
    cd /etc/kubernetes/manifests
    vim kube-controller-manager.yaml
    vim kube-scheduler.yaml
    

    After commenting them out, restart the service

    systemctl restart kubelet
    

    Check the component status again (it takes a moment)

    kubectl get cs
    

Check node status

kubectl get node
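To spot problem nodes quickly, the STATUS column can be filtered with awk. A sketch, shown here on captured sample text; in practice pipe `kubectl get node --no-headers` into the same awk:

```shell
#!/bin/sh
# Flag any node whose STATUS is not "Ready" (sample text stands in for
# real `kubectl get node --no-headers` output).
sample='ubuntu01-master   Ready      control-plane,master   10m   v1.21.3
ubuntu02-slave    NotReady   <none>                 1m    v1.21.3'
bad=$(echo "$sample" | awk '$2 != "Ready" { print $1 " is " $2 }')
echo "$bad"
```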

Check pod status

kubectl get pods -A   (or: kubectl get pod -n kube-system -o wide)

If a pod looks wrong, check its log to find the error, for example:
kubectl -n kube-flannel logs kube-flannel-ds-58zkh
 # -n is the namespace; the argument after logs is the pod name


  • If flannel did not come up, inspect its log:
kubectl -n kube-system logs kube-flannel-ds-g59k5

# Confirm the node was assigned a pod CIDR
kubectl describe node ubuntu02 | grep -i cidr

(screenshot: kubectl describe node output)

-------------------------------------------------- The following steps run on the slave node -----------------------------------------------------

Join the master

If the token was lost, create a new one on the master: kubeadm token create --print-join-command

# Run on the slave node
kubeadm join 192.168.2.123:6443 --token mqgb74.87li3ca2w48y21lw \
        --discovery-token-ca-cert-hash sha256:7d4159e6dc60fb50b27bdbc58225a7cc51aedadc10ba2125108e9577d67ebc29

As shown in the screenshot:

(screenshot: kubeadm join output)

Once it succeeds, the newly joined slave node is visible from the master:

kubectl get node

(screenshot: kubectl get node listing both nodes)

------------------------------------------ Deployment is now complete ----------------------------------

Uninstall k8s

  1. Delete all cluster resources:
kubectl delete --all all --all-namespaces
  2. Stop the Kubernetes services:
sudo systemctl stop kubelet
sudo systemctl stop docker
  3. Remove the Kubernetes packages:
sudo apt-get purge kubeadm kubectl kubelet kubernetes-cni
  4. Remove the Kubernetes configuration files and directories:
sudo rm -rf /etc/kubernetes/
sudo rm -rf /var/lib/etcd/
sudo rm -rf ~/.kube/
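The four steps above can be wrapped into one script. This sketch defaults to a dry run that only prints each command; set DRY_RUN=0 to actually execute them (assumption: run by a user with sudo rights):

```shell
#!/bin/sh
# Uninstall wrapper: prints the commands by default; DRY_RUN=0 executes them.
run() {
  echo "+ $*"
  [ "${DRY_RUN:-1}" = "0" ] && "$@"
  return 0
}
run kubectl delete --all all --all-namespaces
run sudo systemctl stop kubelet
run sudo systemctl stop docker
run sudo apt-get purge -y kubeadm kubectl kubelet kubernetes-cni
run sudo rm -rf /etc/kubernetes/ /var/lib/etcd/ "$HOME/.kube/"
```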