
Setting Up a Kubernetes Cluster with kubeadm (Kubernetes v1.28)

I. Preparation

1. Disable SELinux

```bash
# Disable temporarily
setenforce 0
# Disable permanently
sed -i 's/enforcing/disabled/' /etc/selinux/config
# Check whether SELinux is disabled
sestatus
```

2. Disable the swap partition

```bash
# Disable temporarily from the command line
swapoff -a
# Disable permanently: comment out the line containing "swap", then reboot
vim /etc/fstab
```

3. Allow iptables forwarding and enable the br_netfilter module

```bash
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

echo 1 > /proc/sys/net/ipv4/ip_forward

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
bridge
br_netfilter
EOF

sysctl --system

# Stop the firewall
systemctl stop firewalld
systemctl disable firewalld
```

4. Give every server a unique hostname

```bash
hostnamectl set-hostname server-xxxxx

# Map the new hostname to the server's IP
vim /etc/hosts
# 127.0.0.1 server-xxxxx
# or: <LAN IP> server-xxxxx
```
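Before installing anything, it is worth confirming the steps above actually took effect. A minimal check script, reusing only commands already shown in this section:

```bash
#!/bin/bash
# SELinux should report disabled (permissive is also acceptable for k8s)
sestatus
# swapon prints nothing when swap is fully off
swapon --show
# br_netfilter must be loaded for the bridge sysctls to exist
lsmod | grep br_netfilter
# Both values below should be 1
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
```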
II. Installation

1. Install containerd

CentOS:

```bash
yum install -y yum-utils device-mapper-persistent-data lvm2
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -o /etc/yum.repos.d/docker-ce.repo
yum makecache
yum -y install containerd.io
```

Ubuntu:

```bash
apt install -y apt-transport-https ca-certificates
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
apt update
apt install -y containerd.io
```

Debian:

```bash
apt install -y apt-transport-https ca-certificates
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/debian/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/debian $(lsb_release -cs) stable"
apt update
apt install -y containerd.io
```

Adjust the containerd configuration:

```bash
containerd config default > /etc/containerd/config.toml
sed -i 's/registry.k8s.io\/pause:[0-9].[0-9]/registry.aliyuncs.com\/google_containers\/pause:3.9/g' /etc/containerd/config.toml
systemctl restart containerd
```

Configure containerd registry mirrors (`vim /etc/containerd/config.toml`):

```toml
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
    endpoint = ["https://atomhub.openatom.cn"]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io/library"]
    endpoint = ["https://atomhub.openatom.cn/library"]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.k8s.io"]
    endpoint = ["https://registry.aliyuncs.com/google_containers"]
```

```bash
systemctl restart containerd
```
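To confirm containerd picked up the mirror configuration, you can dump the live config. A small sketch (containerd's own CLI; the crictl line only works later, once the Kubernetes tooling from sections 7/8 below is installed):

```bash
# Show the registry mirror section of the running configuration
containerd config dump | grep -A 2 'registry.mirrors'
# Once crictl is available, a test pull exercises the CRI path and the mirrors
crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull docker.io/library/busybox:latest
```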
2. Install containerd offline

Download:

```bash
wget https://github.com/containerd/containerd/releases/download/v1.7.21/containerd-1.7.21-linux-amd64.tar.gz
tar zxvf containerd-1.7.21-linux-amd64.tar.gz
chmod 755 bin/*
cp -n bin/* /usr/bin/
```

Create and start the service:

```bash
cat > /usr/lib/systemd/system/containerd.service <<EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/bin/containerd

Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF

systemctl start containerd
systemctl enable containerd
```

3. Install Docker

CentOS:

```bash
yum install -y yum-utils device-mapper-persistent-data lvm2
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -o /etc/yum.repos.d/docker-ce.repo
yum makecache
yum -y install docker-ce
```

Ubuntu:

```bash
apt install -y apt-transport-https ca-certificates
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
apt update
apt install -y docker-ce
```

Debian:

```bash
apt install -y apt-transport-https ca-certificates
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/debian/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/debian $(lsb_release -cs) stable"
apt update
apt install -y docker-ce
```

Adjust the Docker configuration:

```bash
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": [
    "http://mirrors.ustc.edu.cn/",
    "http://docker.jx42.com",
    "https://0c105db5188026850f80c001def654a0.mirror.swr.myhuaweicloud.com",
    "https://5tqw56kt.mirror.aliyuncs.com",
    "https://docker.1panel.live",
    "http://mirror.azure.cn/",
    "https://hub.rat.dev/",
    "https://docker.ckyl.me/",
    "https://docker.chenby.cn",
    "https://docker.hpcloud.cloud"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

systemctl enable docker
systemctl start docker
```
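A quick way to verify that dockerd loaded daemon.json; both checks below use the standard docker CLI:

```bash
# The cgroup driver should report "systemd", matching exec-opts above
docker info --format '{{.CgroupDriver}}'
# The mirror list should include the entries from daemon.json
docker info | grep -A 11 'Registry Mirrors'
```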
4. Install Docker offline

Install the dependencies first.

CentOS:

```bash
yum install -y yum-utils device-mapper-persistent-data lvm2
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -o /etc/yum.repos.d/docker-ce.repo
yum makecache
yum -y install conntrack cri-tools ebtables ethtool kubernetes-cni socat
```

Ubuntu:

```bash
apt install -y apt-transport-https ca-certificates
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
apt update
apt install -y conntrack cri-tools ebtables ethtool kubernetes-cni socat
```

Debian:

```bash
apt install -y apt-transport-https ca-certificates
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/debian/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/debian $(lsb_release -cs) stable"
apt update
apt install -y conntrack cri-tools ebtables ethtool kubernetes-cni socat
```

Unpack the binary release:

```bash
wget https://download.docker.com/linux/static/stable/x86_64/docker-27.2.0.tgz
tar zxvf docker-27.2.0.tgz -C ./
cp -n ./docker/* /usr/bin/
```

Add the service units (`vim /usr/lib/systemd/system/docker.service`):

```ini
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target docker.socket firewalld.service containerd.service time-set.target
Wants=network-online.target containerd.service
Requires=docker.socket

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutStartSec=0
RestartSec=2
Restart=always

# Note that StartLimit* options were moved from Service to Unit in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3

# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity

# Older systemd versions default to a LimitNOFILE of 1024:1024, which is insufficient for many
# applications including dockerd itself and will be inherited. Raise the hard limit, while
# preserving the soft limit for select(2).
LimitNOFILE=1024:524288

# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500

[Install]
WantedBy=multi-user.target
```

`vim /usr/lib/systemd/system/docker.socket`:

```ini
[Unit]
Description=Docker Socket for the API
PartOf=docker.service

[Socket]
# If /var/run is not implemented as a symlink to /run, you may need to
# specify ListenStream=/var/run/docker.sock instead.
ListenStream=/run/docker.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target
```

Start it:

```bash
groupadd docker
systemctl daemon-reload
systemctl enable docker
systemctl start docker
```

5. Install cri-dockerd

```bash
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.15/cri-dockerd-0.3.15.amd64.tgz
tar -xf cri-dockerd-0.3.15.amd64.tgz
cp cri-dockerd/cri-dockerd /usr/bin/cri-dockerd

curl https://github.com/Mirantis/cri-dockerd/raw/master/packaging/systemd/cri-docker.service -L -o /usr/lib/systemd/system/cri-docker.service
curl https://raw.githubusercontent.com/Mirantis/cri-dockerd/master/packaging/systemd/cri-docker.socket -L -o /usr/lib/systemd/system/cri-docker.socket
```

Edit the cri-docker configuration (`vim /usr/lib/systemd/system/cri-docker.service`) and add the pod-infra-container-image parameter to ExecStart:

```ini
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9
```

```bash
systemctl daemon-reload
systemctl start cri-docker

# Inspect cri-docker (the crictl command only becomes available after Kubernetes is installed)
crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock info
```

6. Install ipvs

```bash
# CentOS
yum -y install ipvsadm ipset
# Ubuntu / Debian
apt -y install ipvsadm ipset

# If /etc/sysconfig/modules/ipvs.modules does not exist:
mkdir -p /etc/sysconfig/modules
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
#modprobe -- nf_conntrack_ipv4   # kernels 4.x and later no longer have the ipv4 variant
modprobe -- nf_conntrack
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules
sh /etc/sysconfig/modules/ipvs.modules

# Check that the modules are loaded
lsmod | grep ip_vs
```

7. Install Kubernetes

CentOS:

```bash
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum makecache
# List all installable kubelet versions
yum list --showduplicates kubelet
yum install -y kubelet-1.28.2-0 kubeadm-1.28.2-0 kubectl-1.28.2-0
```

Ubuntu:

```bash
apt update
apt install -y apt-transport-https ca-certificates gnupg

curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF

apt update
# List all installable kubelet versions
apt-cache madison kubelet
apt install -y kubelet=1.28.2-00 kubeadm=1.28.2-00 kubectl=1.28.2-00
```

Enable all components at boot:

```bash
systemctl enable containerd
systemctl enable docker
systemctl enable cri-docker
systemctl enable kubelet
```

8. Install Kubernetes offline

Install crictl (required by kubeadm/kubelet for the Container Runtime Interface, CRI):

```bash
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.28.0/crictl-v1.28.0-linux-amd64.tar.gz
tar zxvf crictl-v1.28.0-linux-amd64.tar.gz -C /usr/local/bin/
```

Install kubeadm, kubelet and kubectl, and add the kubelet system service:

```bash
wget https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubeadm
wget https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubelet
wget https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubectl
chmod 755 kubeadm kubelet kubectl
cp kubeadm kubelet kubectl /usr/bin/

curl -sSL https://raw.githubusercontent.com/kubernetes/release/v0.16.2/cmd/krel/templates/latest/kubelet/kubelet.service -o /etc/systemd/system/kubelet.service
sudo mkdir -p /etc/systemd/system/kubelet.service.d
curl -sSL https://raw.githubusercontent.com/kubernetes/release/v0.16.2/cmd/krel/templates/latest/kubeadm/10-kubeadm.conf -o /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
```

III. Cluster Setup

1. Initialize the master node

```bash
# If the kubelet service is already running, stop it first
systemctl stop kubelet

# Pull the images first to rule out pull problems early
kubeadm config images pull \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.28.2 \
  --cri-socket=unix:///var/run/cri-dockerd.sock
```

If pulling is slow or something seems wrong, check the service logs, e.g. for cri-docker:

```bash
journalctl -xefu cri-docker
```

If the node was initialized before, or was initialized with the wrong parameters, reset the cluster to its uninitialized state:

```bash
kubeadm reset -f --cri-socket=unix:///var/run/cri-dockerd.sock
```

Start the initialization:

```bash
kubeadm init \
  --apiserver-advertise-address=<server LAN IP> \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.28.2 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16 \
  --cri-socket=unix:///var/run/cri-dockerd.sock
```

To skip cri-docker and let Kubernetes talk to containerd directly, change one parameter:

```bash
--cri-socket=unix:///run/containerd/containerd.sock
```

Kubernetes' default network proxy mode is iptables; switching to ipvs gives better performance:

```bash
kubectl edit -n kube-system cm kube-proxy
# change to: mode: "ipvs"

# Delete the kube-proxy pods; Kubernetes recreates them automatically
kubectl get pod -n kube-system | grep kube-proxy | awk '{print $1}' | xargs kubectl -n kube-system delete pod

# The logs should print "Using ipvs Proxier" on success
kubectl get pod -n kube-system | grep kube-proxy
kubectl logs -n kube-system kube-proxy-xxxxx

# Try listing the forwarding rules
ipvsadm -Ln
```

Configure the environment variables:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown root:root $HOME/.kube/config
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /etc/profile

systemctl daemon-reload
systemctl restart kubelet
```

Check the master node with `kubectl get nodes`: you will see one control-plane node, but in NotReady state; a network plugin must be installed first:

```bash
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```

Wait a while after installing, and `kubectl get nodes` will show the node as Ready.

Installing the other network plugin, Calico — Flannel vs Calico: the choice mainly depends on your needs. If your cluster is small and does not need complex network features, Flannel is a suitable choice. If you need a powerful network plugin that supports large clusters and complex network policies, Calico may fit better. Best practice: for clusters that may later scale out or need more devices and policies, prefer Calico for its extensibility and richer feature set; for small clusters or test environments, Flannel is simpler to use.

```bash
kubectl apply -f https://docs.tigera.io/archive/v3.25/manifests/calico.yaml

kubectl get pods --namespace=kube-system | grep calico-node
```

If the output shows the calico-node pods in Running state, Calico was installed successfully.

2. Join the other nodes to the cluster

On each worker node, check for the following files and copy them over from the master if missing:

```bash
# network plugin configuration
scp /etc/cni/net.d/* <worker IP>:/etc/cni/net.d/

# cluster admin configuration
scp /etc/kubernetes/admin.conf <worker IP>:/etc/kubernetes/

# kubelet startup parameters
scp /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <worker IP>:/etc/systemd/system/kubelet.service.d/
```

About /etc/cni/net.d/: if the master already has a network plugin installed and it is Flannel, copy /etc/cni/net.d/10-flannel.conflist:

```json
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
```

If the network plugin is Calico, copy /etc/cni/net.d/10-calico.conflist:

```json
{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "calico",
      "log_level": "info",
      "log_file_path": "/var/log/calico/cni/cni.log",
      "datastore_type": "kubernetes",
      "nodename": "server-180",
      "mtu": 0,
      "ipam": { "type": "calico-ipam" },
      "policy": { "type": "k8s" },
      "kubernetes": { "kubeconfig": "/etc/cni/net.d/calico-kubeconfig" }
    },
    { "type": "portmap", "snat": true, "capabilities": { "portMappings": true } },
    { "type": "bandwidth", "capabilities": { "bandwidth": true } }
  ]
}
```

If /etc/systemd/system/kubelet.service.d/10-kubeadm.conf does not exist on the master either, save it with this content:

```ini
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
```

Add a worker node:

```bash
# Configure the environment variables on the worker too, so kubectl can be used there
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /etc/profile
systemctl daemon-reload
systemctl restart kubelet

# On the master, run:
kubeadm token create --print-join-command
```

This generates a kubeadm join command. Copy it and run it on the new worker node; the node joins the cluster as a worker.

Note: you must append the --cri-socket parameter to the generated kubeadm join command, e.g.:

```bash
kubeadm join 10.1.3.178:6443 --token z994lz.s0ogba045j84195c --discovery-token-ca-cert-hash sha256:89d69bc4b7c03bc8328713794c7aa4af798b0e65a64021a329bb9bf1d7afd23e --cri-socket=unix:///var/run/cri-dockerd.sock
```

Add additional master nodes: the procedure is the same as for a worker, except the join command takes one extra parameter, --control-plane, e.g.:

```bash
kubeadm join 10.1.3.178:6443 --token z994lz.s0ogba045j84195c --discovery-token-ca-cert-hash sha256:89d69bc4b7c03bc8328713794c7aa4af798b0e65a64021a329bb9bf1d7afd23e --cri-socket=unix:///var/run/cri-dockerd.sock --control-plane
```

Note: for a multi-master cluster, the control-plane certificates must also be created and shared to every master node (left as a todo in the original; a sketch follows below).
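A minimal sketch of that certificate-sharing step, using kubeadm's built-in upload-certs mechanism. This is standard kubeadm behavior rather than something covered by the original text, so verify it against your kubeadm version:

```bash
# On the first master: re-upload the control-plane certificates as an
# encrypted Secret and print the decryption key (the key expires after two hours)
kubeadm init phase upload-certs --upload-certs

# On each additional master: join with --control-plane plus the printed key
kubeadm join 10.1.3.178:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --cri-socket=unix:///var/run/cri-dockerd.sock \
  --control-plane --certificate-key <key printed above>
```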
Check all nodes that have joined the cluster:

```bash
kubectl get nodes
```

3. CNI plugins

CNI plugins are the layer underneath the network plugins used above. If kubeadm init fails, or kubectl keeps hanging after a successful init, check the individual service logs with journalctl. You may see errors about some binary missing under /opt/cni/bin/ (for example portmap or flannel). Check whether the executable exists in /opt/cni/bin/ and download it if not:

```bash
# base plugins
mkdir -p /opt/cni/bin/
wget https://github.com/containernetworking/plugins/releases/download/v1.5.1/cni-plugins-linux-amd64-v1.5.1.tgz
tar zxvf cni-plugins-linux-amd64-v1.5.1.tgz -C /opt/cni/bin/

# flannel plugin
wget https://github.com/flannel-io/cni-plugin/releases/download/v1.5.1-flannel2/cni-plugin-flannel-linux-amd64-v1.5.1-flannel2.tgz
tar zxvf cni-plugin-flannel-linux-amd64-v1.5.1-flannel2.tgz -C /opt/cni/bin/
mv /opt/cni/bin/flannel-amd64 /opt/cni/bin/flannel

systemctl daemon-reload
systemctl restart containerd
systemctl restart docker
systemctl restart cri-docker
systemctl restart kubelet
```

IV. Core Components

The Kubernetes core components all run as pods; `kubectl get pods -n kube-system` shows them all.

Reference: k8s 多master高可用集群环境搭建 (CSDN blog).

Master (control-plane) node:

etcd — the configuration store. etcd is the default storage system Kubernetes ships with and holds all cluster data; plan backups for the etcd data when running it.

kube-apiserver — the brain of the cluster. It exposes the Kubernetes API; every resource request and operation goes through the interfaces kube-apiserver provides. It offers the cluster-management REST API (including authentication, validation, and cluster state changes), acts as the communication hub for data exchange between the other modules, is the entry point for resource quota control, and provides the cluster's security machinery.

kube-controller-manager — the controller manager. It runs the controllers: the background threads that handle the cluster's routine tasks. Logically, each controller is a separate process, but to reduce complexity they are all compiled into a single binary and run in a single process. It watches the state of the whole cluster through the apiserver and keeps the cluster in the desired working state. Controllers include:
- Node Controller
- Deployment Controller
- Service Controller
- Volume Controller
- Endpoint Controller
- Garbage Controller
- Namespace Controller
- Job Controller
- Resource Quota Controller

Scheduler — watches node resource usage. Its main job is assigning pods to suitable worker nodes, using:
- predicate (filtering) policies
- priority (scoring) policies

Worker node:

kubelet — the container daemon. It creates and tears down containers and owns the pod lifecycle on its node. Put simply, kubelet periodically fetches the desired state of the pods on its node (which containers to run, how many replicas, how networking and storage should be configured, and so on) and calls the container runtime's interface to reach that state. It also reports the node's current status to the apiserver for use in scheduling, and cleans up images and containers so that images do not fill the disk and exited containers do not hold on to resources.

kube-proxy — network proxy and load balancer. It runs on every node; isolation was originally done with iptables, while ipvs is now popular and more convenient. kube-proxy is the per-node network proxy and the carrier of Service resources:
- It maps the pod network to the cluster network (clusterIP → podIP).
- It supports three traffic scheduling modes: userspace (deprecated), iptables (the long-time default, now being superseded), and ipvs (recommended).
- It creates, deletes, and updates scheduling rules, notifies the apiserver of its own updates, and picks up other kube-proxy instances' rule changes from the apiserver. The Endpoint Controller maintains the mapping between Services and Pods.
- kube-proxy implements Service access: pod → Service inside the cluster, and nodePort → Service from outside.

Note: the pod network itself is provided by kubelet (via CNI), not directly by kube-proxy.

How the components work together: the user issues a kubectl command → the apiserver handles it → the Scheduler schedules → the Controller Manager creates the various resources → the state is written to etcd → a cluster node with free resources is found and, via the Scheduler, the pod is created on that node.
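To watch this flow end to end, here is a small illustrative exercise (the nginx Deployment is hypothetical, not part of the cluster setup itself):

```bash
# Create a Deployment: the apiserver validates it, etcd stores it,
# the Deployment controller creates a ReplicaSet and its Pods, the
# Scheduler assigns each pod to a node, and that node's kubelet starts it
kubectl create deployment web --image=nginx:alpine --replicas=2

# Watch the pods get scheduled and started
kubectl get pods -o wide -w

# Expose it as a NodePort Service; kube-proxy programs the ipvs/iptables rules
kubectl expose deployment web --port=80 --type=NodePort
kubectl get svc web
```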
V. Follow-up

1. Install Helm

https://github.com/helm/helm — see also the official "Helm | Installing Helm" docs page. Helm is the package manager for Kubernetes, and more and more components are being deployed with Helm.

```bash
wget https://get.helm.sh/helm-v3.15.4-linux-amd64.tar.gz
tar zxvf helm-v3.15.4-linux-amd64.tar.gz
cp linux-amd64/helm /usr/local/bin/
chmod +x /usr/local/bin/helm
```
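A quick smoke test of the freshly installed binary; the repository used here is the dashboard repo that the next section installs from, so nothing new is assumed:

```bash
# Verify the binary works
helm version

# Add a repository, refresh the index, and search it
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm repo update
helm search repo kubernetes-dashboard
```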
2. Install the dashboard UI

https://github.com/kubernetes/dashboard

Helm installation:

```bash
# Add the kubernetes-dashboard repository
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
# Deploy a kubernetes-dashboard release
helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard
```

Non-Helm installation:

```bash
# Fetch the dashboard manifest
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml -O kubernetes-dashboard.yaml
```

Edit the yaml to expose a NodePort:

```yaml
spec:
  type: NodePort        # added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30100   # added
  selector:
    k8s-app: kubernetes-dashboard
```

```bash
# Apply it
kubectl apply -f kubernetes-dashboard.yaml

# Create the dashboard-admin user
kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
# Bind the clusterrolebinding
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin
# Create a login token
kubectl create token dashboard-admin -n kubernetes-dashboard
```

Visit https://<server IP>:30100 (the NodePort configured above; the original text mistakenly said 30001) and paste the token created above into the token input box to log in.

3. Completely remove Kubernetes

```bash
# Reset the cluster state
kubeadm reset -f --cri-socket=unix:///var/run/cri-dockerd.sock
# If ipvs was used
ipvsadm --clear

# Stop Kubernetes
systemctl stop kubelet
systemctl stop cri-docker.socket cri-docker
systemctl stop docker.socket docker

# Remove the Kubernetes-related packages
yum -y remove kubelet kubeadm kubectl docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin docker-ce-rootless-extras
# If Docker was installed offline:
yum -y remove kubelet kubeadm kubectl containerd.io
rm -rf /usr/bin/docker* /usr/lib/systemd/system/docker.service /usr/lib/systemd/system/docker.socket

# Manually delete all images, containers and volumes
rm -rf /var/lib/docker
rm -rf /var/lib/containerd

# Delete the remaining files
rm -rf $HOME/.kube ~/.kube/ /etc/kubernetes/ /etc/systemd/system/kubelet.service.d /usr/lib/systemd/system/kubelet.service /usr/lib/systemd/system/cri-docker.service /usr/bin/kube* /etc/cni /opt/cni /var/lib/etcd /etc/docker/daemon.json /etc/containerd/config.toml /usr/lib/systemd/system/containerd.service
```

VI. Tips

1. Some useful commands

```bash
# List images with the containerd CLI
ctr image list

# List the images Kubernetes pulled under containerd
ctr -n k8s.io image list

# List images with the cri-docker CLI
crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock image

# Force-delete a pod
kubectl delete pod <pod> -n <namespace> --grace-period=0 --force

# Show the iptables forwarding rules
iptables -L

# Show the ipvs forwarding rules
ipvsadm -Ln
```

2. Image pulls are too slow

Reference: K8S Containerd导入Docker image镜像 (CSDN blog).

When Kubernetes creates containers, some images inevitably fail to pull (network reasons and the like). Back when Docker Engine was still the runtime, we could conveniently pull third-party mirrored images, tag them with the name and version Kubernetes expects, and let Kubernetes find the image locally. Because Docker donated its container format and runtime runC to the OCI (Open Container Initiative), which standardized a large number of interfaces between container tools and the underlying implementation, the same trick works with containerd. Using the Calico network plugin as an example:

```bash
# Pull the Docker images
docker pull calico/cni:v3.25.0
docker pull calico/node:v3.25.0
# Tag the images the way Kubernetes expects them
docker tag calico/cni:v3.25.0 docker.io/calico/cni:v3.25.0
docker tag calico/node:v3.25.0 docker.io/calico/node:v3.25.0
# Save the images to tar files
docker save -o ./calico-cni.tar calico/cni:v3.25.0 docker.io/calico/cni:v3.25.0
docker save -o ./calico-node.tar calico/node:v3.25.0 docker.io/calico/node:v3.25.0
```

Then import the images. Note that they must be imported into the containerd namespace Kubernetes uses, which is k8s.io by default; otherwise it will not find them:

```bash
# Import; -n specifies the namespace
ctr -n k8s.io image import calico-cni.tar
ctr -n k8s.io image import calico-node.tar
# Confirm the import
ctr -n k8s.io image list | grep calico
# crictl is the CRI interface tool defined by the Kubernetes community; confirm there too
crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock image | grep calico

# Apply the manifest
wget https://docs.tigera.io/archive/v3.25/manifests/calico.yaml
kubectl apply -f calico.yaml
```

At this point Kubernetes can find the images locally; remember to confirm that imagePullPolicy is set to IfNotPresent or Never (see the manifest sketch at the end of this article).

3. Some errors you may run into

Failed to start docker.service: Unit docker.service is masked

```bash
systemctl unmask docker.socket
systemctl unmask docker.service
```

[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist

```bash
modprobe bridge
modprobe br_netfilter
sysctl --system
```
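As referenced in tip 2 above: a minimal, hypothetical manifest fragment showing where imagePullPolicy lives. The pod name is only illustrative; the image is the locally imported Calico one:

```yaml
# Hypothetical pod spec fragment: use the locally imported image,
# falling back to the registry only if it is missing
apiVersion: v1
kind: Pod
metadata:
  name: calico-pull-test
spec:
  containers:
    - name: cni
      image: docker.io/calico/cni:v3.25.0
      imagePullPolicy: IfNotPresent   # or Never, to forbid remote pulls entirely
```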