Software versions: Rocky Linux 8.10, Kubernetes v1.30.0, Docker 26.1.3, containerd 1.7.15, crictl v1.30.0, Calico 3.29.4
Required images: calico/pod2daemon-flexvol:v3.29.4, calico/node:v3.29.4, calico/cni:v3.29.4, calico/kube-controllers:v3.29.4, calico/typha:v3.29.4, calico/apiserver:v3.29.4, calico/node-driver-registrar:v3.29.4, calico/csi:v3.29.4
Environment initialization

```shell
hostnamectl set-hostname test && bash
yum install -y device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake \
  libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf \
  automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack telnet
yum -y update
# Disable SELinux and the firewall
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
setenforce 0
systemctl disable firewalld.service --now
# Disable swap (the sed comments out the swap entry on the last line of /etc/fstab)
swapoff -a
sed -i '$ s/^/#/' /etc/fstab
# Kernel settings required by Kubernetes
modprobe br_netfilter
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
# Docker CE and Kubernetes yum repositories
yum -y install yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
cat >> /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/repodata/repomd.xml.key
EOF
# Time synchronization
yum -y install chrony
sed -i 's/^server/#server/g' /etc/chrony.conf
sed -i '1s/^/server cn.pool.ntp.org iburst\n/' /etc/chrony.conf
```
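A quick way to confirm the kernel prerequisites above took effect is to read the keys back from /proc (a minimal sketch; "missing" just means br_netfilter has not been loaded on the current boot):

```shell
# Read back the three sysctl keys set in /etc/sysctl.d/k8s.conf.
for key in net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward; do
  f="/proc/sys/$(echo "$key" | tr '.' '/')"
  if [ -r "$f" ]; then
    echo "$key = $(cat "$f")"
  else
    echo "$key missing (run modprobe br_netfilter, then sysctl -p /etc/sysctl.d/k8s.conf)"
  fi
done
```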
Container runtime installation — Kubernetes 1.30 requires containerd 1.7 or newer; the previously installed 1.6.6 does not work.
```shell
wget https://github.com/containerd/containerd/releases/download/v1.7.15/containerd-1.7.15-linux-amd64.tar.gz
tar -C /usr/local -xzf containerd-1.7.15-linux-amd64.tar.gz
wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service -O /etc/systemd/system/containerd.service
systemctl daemon-reload
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
# Edit /etc/containerd/config.toml and change these three settings:
#   SystemdCgroup = true
#   sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
#   config_path = "/etc/containerd/certs.d"
vim /etc/containerd/config.toml
mkdir -p /etc/containerd/certs.d/docker.io/
echo '[host."https://hub-mirror.c.163.com",host."https://docker.m.daocloud.io",host."https://ghcr.io",host."https://mirror.baidubce.com",host."https://docker.nju.edu.cn"]
  capabilities = ["pull"]
' > /etc/containerd/certs.d/docker.io/hosts.toml
systemctl enable containerd.service --now
systemctl restart containerd
# CNI plugins
mkdir -p /etc/cni/net.d /opt/cni/bin
wget https://mirrors.chenby.cn/https://github.com/containernetworking/plugins/releases/download/v1.5.1/cni-plugins-linux-amd64-v1.5.1.tgz
tar -C /opt/cni/bin -xzvf cni-plugins-linux-amd64-v1.5.1.tgz
# Docker (used here only to pull and save images)
yum install -y docker-ce && systemctl enable docker --now
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://a88uijg4.mirror.aliyuncs.com",
    "https://docker.lmirror.top",
    "https://docker.m.daocloud.io",
    "https://hub.uuuadc.top",
    "https://docker.anyhub.us.kg",
    "https://dockerhub.jobcher.com",
    "https://dockerhub.icu",
    "https://docker.ckyl.me",
    "https://docker.awsl9527.cn",
    "https://docker.laoex.link",
    "https://registry.docker-cn.com"]
}
EOF
systemctl daemon-reload
systemctl restart docker
# crictl
CRICTL_VERSION="v1.30.0"
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-amd64.tar.gz
tar zxvf crictl-${CRICTL_VERSION}-linux-amd64.tar.gz -C /usr/local/bin
crictl -v
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
```
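The three config.toml edits done interactively in vim above can also be scripted with sed. A sketch, demonstrated against a miniature sample of the default values (the sample lines are assumptions for illustration; on a real host set `CFG=/etc/containerd/config.toml` and skip the sample):

```shell
# Stand-in for /etc/containerd/config.toml with the three default values.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
    SystemdCgroup = false
    sandbox_image = "registry.k8s.io/pause:3.8"
      config_path = ""
EOF
# The same three changes made in vim, non-interactively:
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' "$CFG"
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' "$CFG"
sed -i 's#config_path = ""#config_path = "/etc/containerd/certs.d"#' "$CFG"
cat "$CFG"
```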
Installation

```shell
yum -y install kubectl-1.30.0 kubelet-1.30.0 kubeadm-1.30.0
systemctl enable kubelet
kubeadm config print init-defaults > kubeadm.yaml
```

Edit kubeadm.yaml (relative to the defaults, this changes advertiseAddress, the node name, imageRepository, and podSubnet, and appends the KubeProxyConfiguration and KubeletConfiguration sections):

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.10.151
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: test
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.30.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```

Pull the images:

```shell
kubeadm config images pull --config=kubeadm.yaml
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.30.0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.30.0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.30.0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.30.0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.11.3
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.15-0
```

Initialize the cluster and set up kubectl access:

```shell
kubeadm init --config=kubeadm.yaml
...
Your Kubernetes control-plane has initialized successfully!

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes
NAME   STATUS     ROLES           AGE     VERSION
test   NotReady   control-plane   8m25s   v1.30.0
```
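NotReady at this point is expected: no CNI plugin has been installed yet. Separately, if this single node should also run ordinary workloads, the control-plane taint can be removed. An optional sketch, guarded so it degrades gracefully on a machine without kubectl:

```shell
# Optional: allow scheduling ordinary pods on the control-plane node.
if command -v kubectl >/dev/null 2>&1; then
  kubectl taint nodes --all node-role.kubernetes.io/control-plane- || true
else
  echo "kubectl not available on this machine"
fi
```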
Installing with a self-compiled kubeadm (to extend certificate validity)

```shell
kubeadm reset -f
cd kubernetes-release-1.30/
# Extend leaf-certificate validity to 10 years:
vim cmd/kubeadm/app/constants/constants.go
#   CertificateValidity = time.Hour * 24 * 365 * 10
# Extend CA validity to 100 years:
vim staging/src/k8s.io/client-go/util/cert/cert.go
#   NotAfter: time.Now().Add(duration365d * 100).UTC(),
cd build/
./run.sh make kubeadm
cd ..
cp _output/dockerized/bin/linux/amd64/kubeadm /usr/bin/kubeadm
cd ~
kubeadm init --config=kubeadm.yaml
...
Your Kubernetes control-plane has initialized successfully!
```

```shell
kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Jun 15, 2035 06:55 UTC   9y              ca                      no
apiserver                  Jun 15, 2035 06:55 UTC   9y              ca                      no
apiserver-etcd-client      Jun 15, 2035 06:55 UTC   9y              etcd-ca                 no
apiserver-kubelet-client   Jun 15, 2035 06:55 UTC   9y              ca                      no
controller-manager.conf    Jun 15, 2035 06:55 UTC   9y              ca                      no
etcd-healthcheck-client    Jun 15, 2035 06:55 UTC   9y              etcd-ca                 no
etcd-peer                  Jun 15, 2035 06:55 UTC   9y              etcd-ca                 no
etcd-server                Jun 15, 2035 06:55 UTC   9y              etcd-ca                 no
front-proxy-client         Jun 15, 2035 06:55 UTC   9y              front-proxy-ca          no
scheduler.conf             Jun 15, 2035 06:55 UTC   9y              ca                      no
super-admin.conf           Jun 15, 2035 06:55 UTC   9y              ca                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      May 24, 2125 06:55 UTC   99y             no
etcd-ca                 May 24, 2125 06:55 UTC   99y             no
front-proxy-ca          May 24, 2125 06:55 UTC   99y             no
```
Installing Calico — per the official documentation, the Calico versions that support Kubernetes 1.30 are 3.29 and 3.28; we use 3.29.4.
```shell
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.29.4/manifests/tigera-operator.yaml
wget https://raw.githubusercontent.com/projectcalico/calico/v3.29.4/manifests/custom-resources.yaml
vim custom-resources.yaml
```

custom-resources.yaml after editing (the cidr must match the podSubnet set in kubeadm.yaml):

```yaml
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
    - name: default-ipv4-ippool
      blockSize: 26
      cidr: 10.244.0.0/16
      encapsulation: IPIP
      natOutgoing: Enabled
      nodeSelector: all()
---
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
```

```shell
kubectl apply -f custom-resources.yaml
```
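If the ipPool cidr does not equal the podSubnet declared in kubeadm.yaml, pods receive addresses Calico will not manage. A rough grep-based consistency check (a sketch; it assumes both files sit in the current directory):

```shell
# Compare the kubeadm podSubnet with the Calico ipPool cidr.
a=$(grep -m1 'podSubnet:' kubeadm.yaml 2>/dev/null | awk '{print $2}')
b=$(grep -m1 'cidr:' custom-resources.yaml 2>/dev/null | awk '{print $2}')
if [ -n "$a" ] && [ "$a" = "$b" ]; then
  echo "CIDRs match: $a"
else
  echo "MISMATCH: kubeadm podSubnet='$a' calico cidr='$b'"
fi
```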
Preparing the images — here comes the painful part: without a working registry mirror, applying the manifests directly cannot pull all of the Calico images.
```shell
kubectl get pods -A
NAMESPACE          NAME                                       READY   STATUS                  RESTARTS   AGE
calico-apiserver   calico-apiserver-677d9487d4-4rf4s          0/1     Pending                 0          8m51s
calico-apiserver   calico-apiserver-677d9487d4-zzwhr          0/1     Pending                 0          8m51s
calico-system      calico-kube-controllers-64995687c5-wgl2j   0/1     Pending                 0          8m50s
calico-system      calico-node-lw8qs                          0/1     Init:ImagePullBackOff   0          8m50s
calico-system      calico-typha-85c9665d4-fgw48               0/1     ImagePullBackOff        0          8m51s
```

Required images:

```
calico/pod2daemon-flexvol:v3.29.4
calico/node:v3.29.4
calico/cni:v3.29.4
calico/kube-controllers:v3.29.4
calico/typha:v3.29.4
calico/apiserver:v3.29.4
calico/node-driver-registrar:v3.29.4
calico/csi:v3.29.4
```

Pull them manually with docker pull, then bundle them:

```shell
docker save -o calico3.29.4.tar.gz calico/pod2daemon-flexvol:v3.29.4 \
  calico/node:v3.29.4 calico/cni:v3.29.4 calico/kube-controllers:v3.29.4 \
  calico/typha:v3.29.4 calico/apiserver:v3.29.4 \
  calico/node-driver-registrar:v3.29.4 calico/csi:v3.29.4
```

Import the bundle into containerd's k8s.io namespace, where the kubelet (via the CRI) can see the images:

```shell
ctr -n k8s.io images import calico3.29.4.tar.gz
unpacking docker.io/calico/pod2daemon-flexvol:v3.29.4 (sha256:5299a80c2fce746be4f826f46a9fa00c9c49483452b1e9327db01cf76b974a42)...done
unpacking docker.io/calico/node:v3.29.4 (sha256:b26fe2fce2f90024603be3ae01a17686225ec60378e883570b641d8924b2b4f1)...done
unpacking docker.io/calico/cni:v3.29.4 (sha256:da01953361f7c400cb43d41c7909658b1a568f843f8c5d059115687ae432fe93)...done
unpacking docker.io/calico/kube-controllers:v3.29.4 (sha256:a67819621d64e176a2c76e72cc72ba35fb15fa7ed80a2ef2138ff3b62cfc3a45)...done
unpacking docker.io/calico/typha:v3.29.4 (sha256:1ea9ead7ad1ecade3daacc3b2e037196bde41232d59627332ab33c5ed2a1df98)...done
unpacking docker.io/calico/apiserver:v3.29.4 (sha256:11f25cc04d82f42c79f55015240eb99415639b5afadba9618dcd2553cfb468df)...done
unpacking docker.io/calico/node-driver-registrar:v3.29.4 (sha256:f319dbe79fbebb4f896ba0c2e5fdd7812887c6883354b951b484670cd3081f80)...done
```
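The manual pull/save/import sequence above can be collapsed into one loop. A sketch that prints the commands by default (set `DRY_RUN=0` to actually execute them on a host with Docker and containerd):

```shell
# Pull, save, and import all eight Calico images in one pass.
VER=v3.29.4
IMAGES="pod2daemon-flexvol node cni kube-controllers typha apiserver node-driver-registrar csi"
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "+ $*"; else "$@"; fi; }
for i in $IMAGES; do
  run docker pull "calico/$i:$VER"
done
run docker save -o calico3.29.4.tar.gz $(for i in $IMAGES; do printf 'calico/%s:%s ' "$i" "$VER"; done)
run ctr -n k8s.io images import calico3.29.4.tar.gz
```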
```shell
kubectl get pods -A
NAMESPACE          NAME                                       READY   STATUS    RESTARTS   AGE
calico-apiserver   calico-apiserver-677d9487d4-4rf4s          1/1     Running   0          28m
calico-apiserver   calico-apiserver-677d9487d4-zzwhr          1/1     Running   0          28m
calico-system      calico-kube-controllers-64995687c5-plvcw   1/1     Running   0          10m
calico-system      calico-node-9xjc8                          1/1     Running   0          10m
calico-system      calico-typha-85c9665d4-npj9w               1/1     Running   0          10m
calico-system      csi-node-driver-4hfrv                      2/2     Running   0          10m
kube-system        coredns-6d58d46f65-bjtwd                   1/1     Running   0          49m
kube-system        coredns-6d58d46f65-hpkz8                   1/1     Running   0          49m
kube-system        etcd-test                                  1/1     Running   1          49m
kube-system        kube-apiserver-test                        1/1     Running   2          49m
kube-system        kube-controller-manager-test               1/1     Running   2          49m
kube-system        kube-proxy-m698k                           1/1     Running   0          49m
kube-system        kube-scheduler-test                        1/1     Running   2          49m
tigera-operator    tigera-operator-767c6b76db-gzklb           1/1     Running   0          49m
```
Joining a new node

Run the same preparation on the new node as on the control-plane (packages, SELinux/firewall/swap, kernel settings, repos, chrony, containerd, Docker, crictl, kubelet/kubeadm):

```shell
yum install -y device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake \
  libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf \
  automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack telnet
yum -y update
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
setenforce 0
systemctl disable firewalld.service --now
swapoff -a
sed -i '$ s/^/#/' /etc/fstab
modprobe br_netfilter
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
yum -y install yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
cat >> /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/repodata/repomd.xml.key
EOF
yum -y install chrony
sed -i 's/^server/#server/g' /etc/chrony.conf
sed -i '1s/^/server cn.pool.ntp.org iburst\n/' /etc/chrony.conf
wget https://github.com/containerd/containerd/releases/download/v1.7.15/containerd-1.7.15-linux-amd64.tar.gz
tar -C /usr/local -xzf containerd-1.7.15-linux-amd64.tar.gz
wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service -O /etc/systemd/system/containerd.service
systemctl daemon-reload
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
# Same three edits as on the control-plane:
#   SystemdCgroup = true
#   sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
#   config_path = "/etc/containerd/certs.d"
vim /etc/containerd/config.toml
mkdir -p /etc/containerd/certs.d/docker.io/
echo '[host."https://hub-mirror.c.163.com",host."https://docker.m.daocloud.io",host."https://ghcr.io",host."https://mirror.baidubce.com",host."https://docker.nju.edu.cn"]
  capabilities = ["pull"]
' > /etc/containerd/certs.d/docker.io/hosts.toml
systemctl enable containerd.service --now
systemctl restart containerd
yum install -y docker-ce && systemctl enable docker --now
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://a88uijg4.mirror.aliyuncs.com",
    "https://docker.lmirror.top",
    "https://docker.m.daocloud.io",
    "https://hub.uuuadc.top",
    "https://docker.anyhub.us.kg",
    "https://dockerhub.jobcher.com",
    "https://dockerhub.icu",
    "https://docker.ckyl.me",
    "https://docker.awsl9527.cn",
    "https://docker.laoex.link",
    "https://registry.docker-cn.com"]
}
EOF
systemctl daemon-reload
systemctl restart docker
CRICTL_VERSION="v1.30.0"
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-amd64.tar.gz
tar zxvf crictl-${CRICTL_VERSION}-linux-amd64.tar.gz -C /usr/local/bin
crictl -v
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
yum -y install kubectl-1.30.0 kubelet-1.30.0 kubeadm-1.30.0
systemctl enable kubelet
```

Copy over the Calico image bundle, import it, and join the cluster:

```shell
# On the control-plane node: copy the bundle to the new node (192.168.10.231)
scp calico3.29.4.tar.gz root@192.168.10.231:~
# On the new node:
ctr -n k8s.io images import calico3.29.4.tar.gz
kubeadm join 192.168.10.151:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:439a18ace6c78450a8fae8fccff53617605183033e1fe61a0aec2a6ec51b5313
[preflight] Running pre-flight checks
    [WARNING FileExisting-tc]: tc not found in system path
[preflight] Reading configuration from the cluster...
```
```
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.136901ms
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
```

Checking from the control-plane node, the new node is now Ready:

```shell
kubectl get nodes
NAME    STATUS   ROLES           AGE     VERSION
slave   Ready    <none>          5m22s   v1.30.0
test    Ready    control-plane   85m     v1.30.0
kubectl get pods -A -owide | grep slave
calico-system   calico-node-h2lcc       1/1   Running   1               7m48s   192.168.10.231   slave   <none>   <none>
calico-system   csi-node-driver-v2bwf   2/2   Running   3 (2m20s ago)   7m48s   10.244.25.0      slave   <none>   <none>
kube-system     kube-proxy-48dd5        1/1   Running   1               7m48s   192.168.10.231   slave   <none>   <none>
```
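The token used above (abcdef.0123456789abcdef) comes straight from the init-defaults config and expires after its 24h ttl. For nodes joined later, a fresh join command can be printed on the control-plane node; a guarded sketch that degrades gracefully off-cluster:

```shell
# Print a fresh join command (new token plus current CA cert hash).
if command -v kubeadm >/dev/null 2>&1; then
  kubeadm token create --print-join-command 2>/dev/null || echo "no cluster reachable from here"
else
  echo "run this on the control-plane node where kubeadm is installed"
fi
```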