My work computer now runs Ubuntu, so I set up a single-node k8s cluster on it as a test environment.
Environment initialization
# Disable swap: the kubelet refuses to start with swap enabled
sudo swapoff -a
# comment out the swap entry (here assumed to be the last line of /etc/fstab) so it stays off after reboot
sudo sed -i '$ s/^/#/' /etc/fstab
sudo modprobe br_netfilter
sudo tee /etc/sysctl.d/k8s.conf <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl -p /etc/sysctl.d/k8s.conf
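An optional quick sanity check that the module is loaded, the sysctls took effect, and swap is really off (exact output will differ on your machine):

lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
swapon --show   # should print nothing once swap is disabled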
sudo systemctl disable ufw.service --now
sudo apt install -y chrony
sudo systemctl restart chrony
sudo systemctl enable chrony
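To confirm the clock is actually being synchronized, chrony ships a small CLI; something like this should show a selected time source and a small offset:

chronyc tracking
chronyc sources -v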
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
sudo apt-get install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io

sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": [
    "https://hub-mirror.c.163.com",
    "https://docker.m.daocloud.io",
    "https://ghcr.io",
    "https://mirror.baidubce.com",
    "https://docker.nju.edu.cn"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl enable docker --now
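The "native.cgroupdriver=systemd" option matters because kubeadm/kubelet in this setup expect the systemd cgroup driver; a quick way to confirm Docker picked it up after the restart (assuming the daemon came back cleanly):

docker info --format '{{.CgroupDriver}}'   # expect: systemd
docker info | grep -iA5 'registry mirrors' # mirrors from daemon.json should be listed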
Install k8s
sudo tee /etc/apt/sources.list.d/kubernetes.list <<EOF
deb http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial main
EOF

sudo apt-get update
sudo apt-get install -y kubelet=1.23.1-00 kubeadm=1.23.1-00 kubectl=1.23.1-00
sudo systemctl enable kubelet.service --now
sudo apt-mark hold kubelet kubeadm kubectl
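Optionally, confirm the packages are pinned and the expected versions landed before running kubeadm:

apt-mark showhold
kubeadm version -o short
kubelet --version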
sudo kubeadm init --apiserver-advertise-address 10.167.115.99 \
  --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
  --pod-network-cidr=10.244.0.0/16 \
  --ignore-preflight-errors=SystemVerification
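kubeadm prints its own follow-up steps at the end of init; for a single-machine setup the usual ones are copying the admin kubeconfig and, since this is the only node, allowing normal pods to schedule on the control plane. The taint key below is the v1.23-era one (newer releases use node-role.kubernetes.io/control-plane):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# single-node cluster: let regular workloads land on the master node
kubectl taint nodes --all node-role.kubernetes.io/master-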
kubectl get nodes
NAME         STATUS     ROLES                  AGE     VERSION
ws-station   NotReady   control-plane,master   3m33s   v1.23.1

The node stays NotReady because no CNI network plugin is installed yet; that is what the Calico step below takes care of.
The Kubernetes package-repository migration bit me hard here: the old apt repo no longer serves the packages, and the new one was unreachable from my network, which is why I ended up on the USTC mirror above.
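For reference, the current community-hosted repo (pkgs.k8s.io) uses a per-minor-version layout like the sketch below; note it only carries v1.24 and newer, so it would not have helped for 1.23 anyway. The minor version in the URLs is just an example:

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list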
Install Calico
With the calico.yaml manifest saved locally:
kubectl apply -f calico.yaml
kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
ws-station   Ready    control-plane,master   13h   v1.23.1

kubectl get pod -n kube-system
NAME                                       READY   STATUS    RESTARTS      AGE
calico-kube-controllers-677cd97c8d-tjn76   1/1     Running   0             54m
calico-node-m8lk6                          1/1     Running   0             54m
coredns-65c54cc984-hgpvp                   1/1     Running   0             13h
coredns-65c54cc984-xlcdp                   1/1     Running   0             13h
etcd-ws-station                            1/1     Running   0             13h
kube-apiserver-ws-station                  1/1     Running   1 (37m ago)   13h
kube-controller-manager-ws-station         1/1     Running   1 (39m ago)   13h
kube-proxy-mgxrl                           1/1     Running   0             13h
kube-scheduler-ws-station                  1/1     Running   1 (39m ago)   13h
Deploy metrics-server and use Lens
Deploy metrics-server
Without metrics-server, Lens can't even pull per-pod CPU and memory usage, which makes it close to useless, so let's install it.
mkdir ~/metrics-server
cd ~/metrics-server
# pre-pull the image referenced by the Deployment below
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server:v0.6.1
sudo tee metrics-server-rbac.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
EOF
sudo tee metrics-server-svc.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
EOF
sudo tee metrics-server-deployment.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      tolerations:
      - key: "node-role.kubernetes.io/master"
        operator: "Exists"
        effect: "NoSchedule"
      containers:
      - args:
        - /metrics-server
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server:v0.6.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 10Mi
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
EOF
kubectl apply -f .
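Once the metrics-server pod is up, a few quick checks (the numbers will obviously differ on your machine):

kubectl -n kube-system get pods -l k8s-app=metrics-server
kubectl get apiservice v1beta1.metrics.k8s.io   # AVAILABLE should become True
kubectl top nodes
kubectl top pods -n kube-system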
Install Lens
Lens is an IDE-style desktop client for managing Kubernetes clusters, which makes day-to-day operations a lot easier.
curl -fsSL https://downloads.k8slens.dev/keys/gpg | gpg --dearmor | sudo tee /usr/share/keyrings/lens-archive-keyring.gpg > /dev/null
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/lens-archive-keyring.gpg] https://downloads.k8slens.dev/apt/debian stable main" | sudo tee /etc/apt/sources.list.d/lens.list > /dev/null
sudo apt update && sudo apt install lens lens-desktop
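Lens discovers clusters from the local kubeconfig, so as long as ~/.kube/config was set up after kubeadm init it should find this cluster automatically; you can confirm which context it will see with:

kubectl config get-contexts
kubectl config view --minify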
Lens connects to the cluster without issue, and with metrics-server in place it shows the resource usage and status of every pod.
