1. Prepare the offline installation packages
See the companion tutorial on preparing the offline packages.
2. Prepare the environment
2.1. Prepare the hosts
Hostname      IP               OS
k8s-master    192.168.38.128   ubuntu22.04
k8s-node      192.168.38.131   ubuntu22.04
2.2. Set up hosts
Edit the /etc/hosts file and add entries for the master and node; the names must match the hostnames (a minimal sketch follows below).
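A minimal sketch using the addresses from the table above; run it on every node:
cat <<EOF >> /etc/hosts
192.168.38.128 k8s-master
192.168.38.131 k8s-node
EOF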
2.3. Disable swap
kubeadm's preflight checks require swap to be disabled. Edit /etc/fstab and comment out the swap entry (a persistence sketch follows after the commands below).
# Turn off all swap for the current session
swapoff -a
# List active swap devices (empty output means swap is off)
swapon --show
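To persist this across reboots, comment out the swap entry in /etc/fstab; a hypothetical one-liner (hand-editing the file works just as well):
sed -ri 's/^([^#].*\sswap\s.*)$/# \1/' /etc/fstab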
2.4. Install chrony
# Check the current time and timezone
date
# Set the timezone to Asia/Shanghai
timedatectl set-timezone Asia/Shanghai
# Install chrony to sync time over the network
apt install chrony -y
# Enable and start at boot
systemctl enable --now chrony
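Optionally verify that time synchronization is working; chronyc ships with the chrony package:
chronyc tracking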
2.5. Install ipset and ipvsadm
# Install the packages
apt install ipset ipvsadm -y
# Create the IPVS kernel module config file
cat <<EOF | tee /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
# Load the modules manually
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
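Optionally confirm the IPVS modules are loaded:
lsmod | grep -e ip_vs -e nf_conntrack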
2.6. Configure kernel modules
# Create the K8S kernel module config file
cat <<EOF | tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
# Load the modules manually
modprobe overlay
modprobe br_netfilter
# Create the sysctl config file for Kubernetes networking
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF
# Apply the settings
sysctl --system
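Optionally verify that the modules and sysctl settings took effect:
lsmod | grep -e br_netfilter -e overlay
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward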
2.7. Install docker and containerd
# To upgrade Docker on Ubuntu to the latest version, follow these steps.
# Remove old Docker versions: if older versions are installed, uninstall them first:
apt-get remove docker docker-engine docker.io containerd runc
# Install prerequisites: make sure the required dependencies are present:
apt-get update
apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
# Add Docker's official GPG key:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# Add the official Docker repository to the APT source list:
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
# Install Docker CE (Community Edition):
apt-get update
apt-get install docker-ce docker-ce-cli containerd.io
# Verify that the installation succeeded:
docker run hello-world
# A successful installation prints a "Hello from Docker!" message.
# If Docker was already installed and containers are running, restart Docker:
systemctl restart docker
# Enable Docker at boot
systemctl enable docker
2.8. Adjust the containerd configuration
The containerd installed by apt-get ships with the CRI plugin disabled, so the configuration must be regenerated by hand:
# Back up the original config
mv /etc/containerd/config.toml /etc/containerd/config.toml_bak
# Dump containerd's default configuration and save it
containerd config default > /etc/containerd/config.toml
# Restart containerd
systemctl restart containerd
Note: the default containerd 1.7.25 configuration uses pause 3.8 as the sandbox_image; for this Kubernetes install it is recommended to change it to 3.10, which matches the pause version in our offline package. Also set the cgroup driver to systemd (SystemdCgroup = true; see section 4.5). A sketch of both edits follows below.
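A hypothetical sed sketch of both edits; the sandbox_image value should match the registry under which your pause 3.10 image was imported:
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.k8s.io/pause:3.10"#' /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
systemctl restart containerd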
3. Install the Kubernetes nodes
3.1. [All nodes] Install kubeadm, kubelet, and kubectl
# Use the binaries we downloaded earlier
install kubeadm /usr/local/bin/kubeadm
install kubelet /usr/local/bin/kubelet
install kubectl /usr/local/bin/kubectl
kubelet must run as a systemd service, so create the file /etc/systemd/system/kubelet.service by hand with the following content:
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/local/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
# Grant permissions on the unit file
chmod +x /etc/systemd/system/kubelet.service
# Register the service
systemctl daemon-reload
# Start the service
systemctl start kubelet
# Enable at boot
systemctl enable kubelet
# Other frequently used commands:
# Check kubelet status
systemctl status kubelet
# Inspect kubelet startup logs
journalctl -xeu kubelet
# Restart kubelet
systemctl restart kubelet
3.2. [All nodes] Install the Kubernetes images
Before importing the images, confirm the required image names with the commands below:
# List the images required for initialization
kubeadm config images list
# Pull the required images directly
kubeadm config images pull
# Print kubeadm's default configuration
kubeadm config print init-defaults
Note: the listed images vary with the configured image repository, and there is a small gotcha: after switching away from the default repository, some images need to be re-tagged. For example, with the Chinese mirror k8s.mirror.nju.edu.cn the coredns image is pulled as k8s.mirror.nju.edu.cn/coredns/coredns:v1.11.3, but after pulling it must be manually tagged as k8s.mirror.nju.edu.cn/coredns:v1.11.3.
root@k8s-master:/etc/kubernetes/pki# kubeadm config images list
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/pause:3.10
registry.k8s.io/etcd:3.5.16-0
root@k8s-master:/opt/software/kubernetes/K8S/1.32.1# kubeadm config images list --config=init.cn.yaml
k8s.mirror.nju.edu.cn/kube-apiserver:v1.32.1
k8s.mirror.nju.edu.cn/kube-controller-manager:v1.32.1
k8s.mirror.nju.edu.cn/kube-scheduler:v1.32.1
k8s.mirror.nju.edu.cn/kube-proxy:v1.32.1
k8s.mirror.nju.edu.cn/coredns:v1.11.3
k8s.mirror.nju.edu.cn/pause:3.10
k8s.mirror.nju.edu.cn/etcd:3.5.16-0
When importing the Kubernetes container images into containerd, a namespace must be specified; Kubernetes uses the k8s.io namespace by default. Import commands:
ctr -n k8s.io i import conformance_v1.32.1.tar
ctr -n k8s.io i import kubectl_v1.32.1.tar
ctr -n k8s.io i import kube-apiserver_v1.32.1.tar
ctr -n k8s.io i import kube-proxy_v1.32.1.tar
ctr -n k8s.io i import kube-scheduler_v1.32.1.tar
ctr -n k8s.io i import kube-controller-manager_v1.32.1.tar
ctr -n k8s.io i import coredns_v1.11.3.tar
ctr -n k8s.io i import pause_3.10.tar
ctr -n k8s.io i import etcd_3.5.16-0.tar
Besides these, the Flannel and Dashboard images also need to be installed:
ctr -n k8s.io i import fannel-cni-plugin_v1.6.2-flannel1.tar
ctr -n k8s.io i import fannel_v0.26.4.tar
ctr -n k8s.io i import kubernetesui_dashboard_v2.7.0.tar
ctr -n k8s.io i import kubernetesui_metrics-scraper_v1.0.8.tar
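If all the image tarballs sit in one directory, a simple loop (a sketch, assuming the file names above) can import them in one pass:
for tar in *.tar; do ctr -n k8s.io i import "$tar"; done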
3.3. [Master node] Initialize Kubernetes
Generate the default configuration file init.default.yaml:
# Export the default configuration
kubeadm config print init-defaults > init.default.yaml
Modify the configuration file:
1) set localAPIEndpoint.advertiseAddress to this machine's address;
2) set nodeRegistration.name to k8s-master, matching the hostname;
3) set kubernetesVersion to 1.32.1;
4) add podSubnet: 10.244.0.0/16 under networking.
A scripted sketch of these edits follows below.
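A hypothetical sed-based sketch of those four edits against the freshly exported defaults; editing the file by hand works just as well:
sed -i 's/advertiseAddress: .*/advertiseAddress: 192.168.38.128/' init.default.yaml
sed -i 's/name: node/name: k8s-master/' init.default.yaml
sed -i 's/kubernetesVersion: .*/kubernetesVersion: 1.32.1/' init.default.yaml
sed -i 's#serviceSubnet: 10.96.0.0/12#&\n  podSubnet: 10.244.0.0/16#' init.default.yaml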
# Initialize from the config file; many people switch to a Chinese mirror or tune the configuration at this point
kubeadm init --config=init.default.yaml
The method for switching to a Chinese mirror is in the appendix. After initialization completes, kubeadm prints a summary, reproduced here in full:
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using kubeadm config images pull
[certs] Using certificateDir folder /etc/kubernetes/pki
[certs] Generating ca certificate and key
[certs] Generating apiserver certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-node kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.38.131]
[certs] Generating apiserver-kubelet-client certificate and key
[certs] Generating front-proxy-ca certificate and key
[certs] Generating front-proxy-client certificate and key
[certs] Generating etcd/ca certificate and key
[certs] Generating etcd/server certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-node localhost] and IPs [192.168.38.131 127.0.0.1 ::1]
[certs] Generating etcd/peer certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-node localhost] and IPs [192.168.38.131 127.0.0.1 ::1]
[certs] Generating etcd/healthcheck-client certificate and key
[certs] Generating apiserver-etcd-client certificate and key
[certs] Generating sa key and public key
[kubeconfig] Using kubeconfig folder /etc/kubernetes
[kubeconfig] Writing admin.conf kubeconfig file
[kubeconfig] Writing super-admin.conf kubeconfig file
[kubeconfig] Writing kubelet.conf kubeconfig file
[kubeconfig] Writing controller-manager.conf kubeconfig file
[kubeconfig] Writing scheduler.conf kubeconfig file
[etcd] Creating static Pod manifest for local etcd in /etc/kubernetes/manifests
[control-plane] Using manifest folder /etc/kubernetes/manifests
[control-plane] Creating static Pod manifest for kube-apiserver
[control-plane] Creating static Pod manifest for kube-controller-manager
[control-plane] Creating static Pod manifest for kube-scheduler
[kubelet-start] Writing kubelet environment file with flags to file /var/lib/kubelet/kubeadm-flags.env
[kubelet-start] Writing kubelet configuration to file /var/lib/kubelet/config.yaml
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory /etc/kubernetes/manifests
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.862065ms
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is healthy after 45.000997586s
[upload-config] Storing the configuration used in ConfigMap kubeadm-config in the kube-system Namespace
[kubelet] Creating a ConfigMap kubelet-config in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-node as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-node as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: kwy8f7.w2psm0sfq25uv1y6
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the cluster-info ConfigMap in the kube-public namespace
[kubelet-finalize] Updating /etc/kubernetes/kubelet.conf to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.38.131:6443 --token kwy8f7.w2psm0sfq25uv1y6 \
	--discovery-token-ca-cert-hash sha256:aab53eda3ba7a646e6a938ebb8a9741c63adbc0aeba41649eed68b044bf4f7aa

3.4. [Master node] Configure kubectl access
Regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Root user: append the following to /root/.bashrc:
export KUBECONFIG=/etc/kubernetes/admin.conf
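Reload the shell configuration and confirm kubectl can reach the API server:
source /root/.bashrc
kubectl get nodes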
3.5. [All nodes] Install Flannel
Import the images on all nodes:
ctr -n k8s.io i import fannel_v0.26.4.tar
ctr -n k8s.io i import fannel-cni-plugin_v1.6.2-flannel1.tar
Deploy Flannel from the master node:
# Install using the manifest from the offline package, here named kube-fannel.yml
kubectl apply -f kube-fannel.yml
# Remove the resources defined in the manifest
kubectl delete -f kube-fannel.yml
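An optional sanity check after applying the manifest, assuming it creates the kube-flannel namespace (consistent with the pod names referenced later in this post):
kubectl get pods -n kube-flannel -o wide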
3.6. [Worker node] Join the cluster
Obtain a join token on the master node:
# Generate a join command with a fresh token
kubeadm token create --print-join-command
Run the resulting join command on the worker node:
# Join the worker node
kubeadm join 192.168.38.128:6443 --token kwy8f7.w2psm0sfq25uv1y6 \
	--discovery-token-ca-cert-hash sha256:aab53eda3ba7a646e6a938ebb8a9741c63adbc0aeba41649eed68b044bf4f7aa
3.7. [Master node] Check cluster status
# Check node status
kubectl get node
# Check pod status
kubectl get pod -A
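To watch pods converge to Running, add the -w flag (Ctrl-C to stop):
kubectl get pod -A -w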
3.8. [All nodes] Install the Dashboard
Import the images on all nodes:
ctr -n k8s.io i import kubernetesui_dashboard_v2.7.0.tar
ctr -n k8s.io i import kubernetesui_metrics-scraper_v1.0.8.tar
Two things need checking in the manifest: the exposed port, and the image pull policy, which defaults to Always and must be changed to IfNotPresent for offline images (a sketch follows below).
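A hypothetical one-liner for the pull-policy change, assuming the manifest spells it as "imagePullPolicy: Always":
sed -i 's/imagePullPolicy: Always/imagePullPolicy: IfNotPresent/g' k8s-dashboard-2.7.0.yaml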
Deploy the Dashboard from the master node:
# Install using the manifest from the offline package; here I renamed the file to k8s-dashboard-2.7.0.yaml
kubectl apply -f k8s-dashboard-2.7.0.yaml
# Common commands:
# Remove the resources defined in the manifest (similar to docker-compose in this respect)
kubectl delete -f k8s-dashboard-2.7.0.yaml
For login and verification, see the companion post on logging in to the Kubernetes Dashboard.
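Logging in typically requires a bearer token. Assuming an admin-user ServiceAccount exists in the kubernetes-dashboard namespace (as in the upstream Dashboard guide; creating it is not covered in this post), a token can be issued like this:
kubectl -n kubernetes-dashboard create token admin-user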
4. Appendix
4.1. Initializing Kubernetes against a Chinese mirror
Many readers want Kubernetes to pull its images from a Chinese mirror automatically; the steps are briefly described here.
Reset the kubeadm state:
kubeadm reset
Export the default configuration:
kubeadm config print init-defaults > init.default.yaml
Change the image repository to k8s.mirror.nju.edu.cn; I renamed the file init.cn.yaml. The full configuration:
apiVersion: kubeadm.k8s.io/v1beta4
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.38.128
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  imagePullSerial: true
  name: node
  taints: null
timeouts:
  controlPlaneComponentHealthCheck: 4m0s
  discovery: 5m0s
  etcdAPICall: 2m0s
  kubeletHealthCheck: 4m0s
  kubernetesAPICall: 1m0s
  tlsBootstrap: 5m0s
  upgradeManifests: 5m0s
---
apiServer: {}
apiVersion: kubeadm.k8s.io/v1beta4
caCertificateValidityPeriod: 87600h0m0s
certificateValidityPeriod: 8760h0m0s
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
encryptionAlgorithm: RSA-2048
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.mirror.nju.edu.cn
kind: ClusterConfiguration
kubernetesVersion: 1.32.1
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
proxy: {}
scheduler: {}
Re-initialize:
# Initialization command
kubeadm init --config=init.cn.yaml
One small fix: as mentioned earlier, with the Chinese mirror the coredns/coredns image reference collapses to coredns, so this image must be pulled and re-tagged manually:
# Pull the image
ctr -n k8s.io i pull k8s.mirror.nju.edu.cn/coredns/coredns:v1.11.3
# Re-tag it
ctr -n k8s.io i tag k8s.mirror.nju.edu.cn/coredns/coredns:v1.11.3 k8s.mirror.nju.edu.cn/coredns:v1.11.3
4.2. kubelet node NotReady with error "cni plugin not initialized"
Install net-tools:
apt-get install net-tools
Remove the CNI configuration:
ifconfig cni0 down
ip link delete cni0
rm -rf /var/lib/cni/
rm -f /etc/cni/net.d/*
Create the CNI configuration by hand:
cat <<EOL > /etc/cni/net.d/10-flannel.conflist
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
EOL
Restart kubelet:
systemctl restart kubelet
Verify:
kubectl get nodes -A
4.3. The kube-flannel-ds-mcmxd pod reports CrashLoopBackOff
1. Check the current pod status:
kubectl get pod -A
2. Inspect the Flannel pod:
kubectl describe pod kube-flannel-ds-mcmxd -n kube-flannel
The output shows, under Containers -> kube-flannel -> Last State, that the most recent state was Error.
3. Check the container logs:
kubectl logs kube-flannel-ds-mcmxd -n kube-flannel --all-containers
This pinpoints the problem; the log tells us:
Error registering network: failed to acquire lease: node "master" pod cidr not assigned
Solution (my cluster was already initialized, so step 2 alone fixed the problem):
1) Before the cluster is installed: pass the --pod-network-cidr flag to kubeadm init. Via the command line:
kubeadm init --pod-network-cidr=10.244.0.0/16
When initializing from a config file instead, add the networking.podSubnet entry by hand.
2) On a running cluster: modify the control-plane node configuration. Edit /etc/kubernetes/manifests/kube-controller-manager.yaml and add --allocate-node-cidrs=true and --cluster-cidr=10.244.0.0/16 to the command list. The file after the change:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --allocate-node-cidrs=true
    - --cluster-cidr=10.244.0.0/16
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=127.0.0.1
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --cluster-name=kubernetes
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
    - --use-service-account-credentials=true
    image: registry.k8s.io/kube-controller-manager:v1.32.1
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10257
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-controller-manager
    resources:
      requests:
        cpu: 200m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10257
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/ca-certificates
      name: etc-ca-certificates
      readOnly: true
    - mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      name: flexvolume-dir
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/kubernetes/controller-manager.conf
      name: kubeconfig
      readOnly: true
    - mountPath: /usr/local/share/ca-certificates
      name: usr-local-share-ca-certificates
      readOnly: true
    - mountPath: /usr/share/ca-certificates
      name: usr-share-ca-certificates
      readOnly: true
  hostNetwork: true
  priority: 2000001000
  priorityClassName: system-node-critical
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/ca-certificates
      type: DirectoryOrCreate
    name: etc-ca-certificates
  - hostPath:
      path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      type: DirectoryOrCreate
    name: flexvolume-dir
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/kubernetes/controller-manager.conf
      type: FileOrCreate
    name: kubeconfig
  - hostPath:
      path: /usr/local/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-local-share-ca-certificates
  - hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-share-ca-certificates
status: {}
In a Kubernetes cluster, Flannel is a commonly used network plugin that provides connectivity between containers. Flannel isolates and interconnects container networks by assigning a subnet to each node and configuring network address translation (NAT).
When modifying the kube-controller-manager configuration on a control-plane node, making sure --allocate-node-cidrs=true and --cluster-cidr=10.244.0.0/16 are enabled is critical for Flannel to run successfully. The reasons in detail:
--allocate-node-cidrs=true tells the control plane (specifically kube-controller-manager) to automatically allocate a CIDR block (subnet) to each node. Once enabled, Kubernetes assigns every node a dedicated IP subnet during cluster initialization, and that subnet serves all Pods running on the node. Flannel, as the network plugin, draws container IP addresses from these subnets and ensures each Pod gets a unique address that does not conflict with other Pods or nodes.
--cluster-cidr=10.244.0.0/16 sets the Pod network address range for the whole cluster; here 10.244.0.0/16 is the pool from which all Pod addresses are allocated. Flannel needs to know this range so it can carve each node's subnet out of it without conflicts. If --cluster-cidr is set incorrectly, Flannel cannot assign Pod IP addresses properly and Pods cannot communicate.
4.4. Worker nodes have no permission to view pods
After the cluster is set up, users on worker nodes cannot list pods by default; running the command fails with "pods is forbidden":
root@k8s-node:/opt/software/kubernetes/config# kubectl get po -A
Error from server (Forbidden): pods is forbidden: User "system:node:k8s-node" cannot list resource "pods" in API group "" at the cluster scope: can only list/watch pods with spec.nodeName field selector
On the master node, create a cluster-role definition. Grant only the permissions you need; in production, always follow the principle of least privilege. Here the file is named ClusterRole.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-manage-pods
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
Apply the role:
kubectl apply -f ClusterRole.yaml
On the master node, create the role binding; here the file is named NodeRoleBinding.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-manage-pods-binding
subjects:
- kind: User
  name: system:node:k8s-node
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: node-manage-pods
  apiGroup: rbac.authorization.k8s.io
Bind the user to the role:
kubectl apply -f NodeRoleBinding.yaml
Then log in to the worker node and verify with kubectl get po -A.
4.5. Containers report CrashLoopBackOff after starting: they run for a while but keep restarting
This is usually caused by containerd on the node not having SystemdCgroup enabled; normally both master and worker nodes need this setting. Fix: edit the /etc/containerd/config.toml configuration file, find the SystemdCgroup option and change it to true, then restart containerd and kubelet:
systemctl restart containerd
systemctl restart kubelet
Check that the services stabilize. Occasional restarts after this may be caused by other factors such as resource limits, which call for further investigation.
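A quick health re-check after the restarts, using the same commands as section 3.7:
kubectl get node
kubectl get pod -A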