2024 Edition: Quickly Deploying a Kubernetes (K8s) Cluster with Kubeadm

Kubernetes Cluster Deployment

Table of Contents

  • Kubernetes Cluster Deployment
    • Resource List
    • Base Environment
    • 1. Environment Preparation (run on all three hosts)
      • 1.1 Bind hosts entries
      • 1.2 Install common utilities
      • 1.3 Disable swap
      • 1.4 Time synchronization
    • 2. Docker Environment Deployment (run on all three hosts)
      • 2.1 Install dependency packages
      • 2.2 Add the YUM repository
      • 2.3 Refresh the YUM cache and install Docker
      • 2.4 Start Docker
      • 2.5 Configure the Aliyun registry mirror
      • 2.6 Kernel tuning
    • 3. Deploying the Kubernetes Cluster
      • 3.1 Configure the Kubernetes YUM repository on all three hosts
      • 3.2 Install kubelet, kubeadm, and kubectl on all three hosts
      • 3.3 Enable kubelet at boot on all three hosts
      • 3.4 Generate the initialization configuration file on the master node
      • 3.5 Modify the initialization configuration file on the master node
      • 3.6 Pull the required images on the master node
      • 3.7 Initialize k8s-master on the master node
        • 3.7.1 Method 1
        • 3.7.2 Method 2
      • 3.8 Copy the k8s credentials to the user's home directory on the master node
      • 3.9 Join the nodes to the cluster
      • 3.10 Check node status on k8s-master
    • 4. Installing the Flannel Network Plugin
      • 4.1 Pull the Flannel images
      • 4.2 Install the Flannel network plugin
      • 4.3 Check node status

Resource List

| OS         | Specs | Hostname   | IP             | Required Software |
|------------|-------|------------|----------------|-------------------|
| CentOS 7.9 | 2C4G  | k8s-master | 192.168.93.101 | Docker CE, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, Etcd, kube-proxy |
| CentOS 7.9 | 2C4G  | k8s-node01 | 192.168.93.102 | Docker CE, kubectl, kube-proxy, Flannel |
| CentOS 7.9 | 2C4G  | k8s-node02 | 192.168.93.103 | Docker CE, kubectl, kube-proxy, Flannel |

Base Environment

  • Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
  • Disable SELinux (the kernel security mechanism)
setenforce 0
sed -i "s/^SELINUX=.*/SELINUX=disabled/g" /etc/selinux/config
  • Set the hostnames (run the matching command on each host)
hostnamectl set-hostname k8s-master
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02

1. Environment Preparation (run on all three hosts)

  • Before formally deploying the kubernetes cluster, complete the following preparation. These base-environment configuration steps must be performed on all three hosts (k8s-master, k8s-node01, and k8s-node02); the k8s-master host is used below to demonstrate the operations

1.1 Bind hosts entries

[root@k8s-master ~]# cat >> /etc/hosts << EOF
192.168.93.101 k8s-master
192.168.93.102 k8s-node01
192.168.93.103 k8s-node02
EOF

1.2 Install common utilities

[root@k8s-master ~]# yum -y install vim lrzsz unzip wget net-tools tree bash-completion telnet

1.3 Disable swap

  • kubeadm does not support swap
# Disable temporarily
[root@k8s-master ~]# swapoff -a
# Disable permanently (comment out the swap line in /etc/fstab)
[root@k8s-master ~]# sed -i '/swap/s/^/#/' /etc/fstab

1.4 Time synchronization

[root@k8s-master ~]# yum -y install ntpdate
[root@k8s-master ~]# ntpdate ntp.aliyun.com
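
ntpdate performs only a one-shot sync; to keep the clocks aligned over time, a periodic resync can be scheduled, for example via the system crontab. This is a minimal sketch and assumes crond is running, which it is by default on CentOS 7:

# Resync against the Aliyun NTP server every hour (/etc/crontab requires the user field)
[root@k8s-master ~]# echo "0 * * * * root /usr/sbin/ntpdate ntp.aliyun.com > /dev/null 2>&1" >> /etc/crontab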

2. Docker Environment Deployment (run on all three hosts)

  • With the base environment ready, deploy the Docker environment on each of the three hosts, since Kubernetes relies on Docker to orchestrate containers. The k8s-master host is used for demonstration: first install some Docker dependency packages, then point the Docker YUM repository at a domestic mirror, and finally install and start Docker via YUM

2.1 Install dependency packages

  • Before formally installing Docker, install the dependency packages Docker needs to run
[root@k8s-master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2

2.2 Add the YUM repository

  • When installing Docker via YUM, the Aliyun YUM repository is recommended, because the overseas Docker YUM repository is no longer accessible
[root@k8s-master ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

2.3 Refresh the YUM cache and install Docker

  • Check online which Kubernetes and Docker versions are compatible with each other; compatible versions have already been selected for this walkthrough, so just follow along
# List all available Docker versions
[root@k8s-master ~]# yum list docker-ce.x86_64 --showduplicates | sort -r
# Quickly build the yum cache
[root@k8s-master ~]# yum makecache fast
# Install the specified Docker version
[root@k8s-master ~]# yum -y install docker-ce-19.03.15 docker-ce-cli-19.03.15

2.4 Start Docker

[root@k8s-master ~]# systemctl start docker
[root@k8s-master ~]# systemctl enable docker

2.5 Configure the Aliyun registry mirror

  • Write the registry mirror address directly into /etc/docker/daemon.json; if the file does not exist, simply create and save it. As the extension suggests, the contents of daemon.json must be valid JSON, so write it carefully
# The following sets the cgroup driver and the Aliyun registry mirror address
[root@k8s-master ~]# vim /etc/docker/daemon.json
{  
  "exec-opts": ["native.cgroupdriver=systemd"],  
  "registry-mirrors": ["https://u9noolvn.mirror.aliyuncs.com"]  
}
[root@k8s-master ~]# systemctl daemon-reload 
[root@k8s-master ~]# systemctl restart docker
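
To verify that both the cgroup driver and the mirror took effect (an optional check):

[root@k8s-master ~]# docker info | grep -A1 -E "Cgroup Driver|Registry Mirrors"
# Expect "Cgroup Driver: systemd" and the Aliyun mirror URL in the output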

2.6 Kernel tuning

# During normal Docker usage you may occasionally see the following warnings
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled


# These warnings can be eliminated by configuring kernel parameters
[root@k8s-master ~]# cat >> /etc/sysctl.conf << EOF
# Enable IPv6 bridge forwarding
net.bridge.bridge-nf-call-ip6tables = 1
# Enable IPv4 bridge forwarding
net.bridge.bridge-nf-call-iptables = 1
# Enable IP routing/forwarding
net.ipv4.ip_forward = 1
# Avoid swap usage
vm.swappiness = 0
EOF
# Load the br_netfilter module into the kernel
[root@k8s-master ~]# modprobe br_netfilter
# Apply the settings from /etc/sysctl.conf
[root@k8s-master ~]# sysctl -p
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
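
Note that modprobe does not persist across reboots; to load br_netfilter automatically at boot, a modules-load entry can be added (an optional step using systemd's standard mechanism on CentOS 7):

[root@k8s-master ~]# cat > /etc/modules-load.d/br_netfilter.conf << EOF
br_netfilter
EOF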

3. Deploying the Kubernetes Cluster

  • With the base environment and the Docker environment ready, the kubernetes cluster can now be deployed with kubeadm

3.1 Configure the Kubernetes YUM repository on all three hosts

  • The Aliyun mirror is likewise recommended for the Kubernetes repository used here
[root@k8s-master ~]# vim /etc/yum.repos.d/k8s.repo
[k8s]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
# Quickly rebuild the yum cache
[root@k8s-master ~]# yum makecache fast

3.2 Install kubelet, kubeadm, and kubectl on all three hosts

  • yum list kubectl --showduplicates | sort -r lists the available k8s versions
  • kubectl: command-line management tool; kubeadm: tool for installing a K8s cluster; kubelet: the agent that manages containers on each node
# Install the specified versions
[root@k8s-master ~]# yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
# Check the k8s version
[root@k8s-master ~]# kubelet --version
Kubernetes v1.18.0

# If the GPG check fails during installation, install with: yum install -y --nogpgcheck kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0

3.3 Enable kubelet at boot on all three hosts

  • Applies to: k8s-master, k8s-node01, k8s-node02
  • Right after installation, kubelet cannot be started via systemctl start kubelet; it only starts successfully once the node has joined a cluster or been initialized as the master
[root@k8s-master ~]# systemctl enable kubelet.service

3.4 Generate the initialization configuration file on the master node

  • kubeadm offers many configuration options. In a Kubernetes cluster the kubeadm configuration is stored in a ConfigMap, but the options can also be written to a configuration file, which makes complex settings easier to manage. The configuration content is written out with the kubeadm config command
# Generate the initialization configuration file
[root@k8s-master ~]# kubeadm config print init-defaults > init-config.yaml
W0615 08:50:40.154637   10202 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]


# Besides writing configuration to a file, kubeadm config offers several other useful subcommands
kubeadm config view: view the configuration values in use by the current cluster
kubeadm config print join-defaults: print the default parameter file for kubeadm join
kubeadm config images list: list the required images
kubeadm config images pull: pull the images to the local machine
kubeadm config upload from-flags: generate a ConfigMap from configuration flags

3.5 Modify the initialization configuration file on the master node

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.93.101   # IP address of the master node
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master    # If a domain name is used, make sure it resolves; otherwise use the IP address directly
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd    # Local directory mounted into the etcd container
imageRepository: registry.aliyuncs.com/google_containers  # The default repository is unreachable from mainland China, so it is changed to a domestic mirror
kind: ClusterConfiguration
kubernetesVersion: v1.18.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12   # Network segment for Service resources (the cluster-internal network)
  podSubnet: 10.244.0.0/16   # Newly added Pod network segment; must match the address range used by the pod network plugin below
scheduler: {}

3.6 Pull the required images on the master node

# View the images required for initialization
[root@k8s-master ~]# kubeadm config images list --config=init-config.yaml
W0615 09:00:11.145158   10221 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.0
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.0
registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.0
registry.aliyuncs.com/google_containers/kube-proxy:v1.18.0
registry.aliyuncs.com/google_containers/pause:3.2
registry.aliyuncs.com/google_containers/etcd:3.4.3-0
registry.aliyuncs.com/google_containers/coredns:1.6.7


# Pull the images
[root@k8s-master ~]# kubeadm config images pull --config=init-config.yaml
W0615 09:00:38.350044   10227 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.18.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.2
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.4.3-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:1.6.7


# View the pulled images
[root@k8s-master ~]# docker images
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
registry.aliyuncs.com/google_containers/kube-proxy                v1.18.0             43940c34f24f        4 years ago         117MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.18.0             74060cea7f70        4 years ago         173MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.18.0             d3e55153f52f        4 years ago         162MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.18.0             a31f78c7c8ce        4 years ago         95.3MB
registry.aliyuncs.com/google_containers/pause                     3.2                 80d28bedfe5d        4 years ago         683kB
registry.aliyuncs.com/google_containers/coredns                   1.6.7               67da37a9a360        4 years ago         43.8MB
registry.aliyuncs.com/google_containers/etcd                      3.4.3-0             303ce5db0e90        4 years ago         288MB

3.7 Initialize k8s-master on the master node

  • An installation performed via kubeadm init does not include a network plugin, so immediately after initialization the cluster lacks network functionality: for example, nodes show as "NotReady" on the k8s-master node and the CoreDNS Pods cannot serve requests
  • If initialization fails, reset and clean up before retrying (see the sketch below)
  • This walkthrough provides two methods for the initialization; Method 1 is the one actually used to initialize the k8s-master node
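
A minimal cleanup sequence for a failed initialization, combining the reset steps listed above (the -f flag skips kubeadm's confirmation prompt):

[root@k8s-master ~]# kubeadm reset -f
[root@k8s-master ~]# rm -rf $HOME/.kube /etc/kubernetes/ /var/lib/etcd/
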
3.7.1 Method 1
[root@k8s-master ~]# kubeadm init --config=init-config.yaml
W0615 09:09:02.736872   10425 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.93.101]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.93.101 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.93.101 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0615 09:09:04.854021   10425 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0615 09:09:04.854832   10425 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 15.502401 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

###################################################################
Your Kubernetes control-plane has initialized successfully!
###################################################################

To start using your cluster, you need to run the following as a regular user:

###################################################################
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
###################################################################

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

###################################################################
kubeadm join 192.168.93.101:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:6e241cac434bff98306479bddda1fc912eda3d3f56f73a23373977fef40d5082
###################################################################
3.7.2 Method 2
# Change the IP address below to the master node's IP, and set the k8s version according to the version actually installed
[root@k8s-master ~]# kubeadm init --apiserver-advertise-address=192.168.93.101 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.18.0 --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16
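
Whichever method is used, if the join command is lost or the token expires (the default TTL is 24h), a fresh join command can be printed on the master at any time:

[root@k8s-master ~]# kubeadm token create --print-join-command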

3.8 Copy the k8s credentials to the user's home directory on the master node

[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
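
Optionally, since bash-completion was installed back in step 1.2, kubectl tab completion can be enabled; this is purely a convenience and not required for the deployment:

[root@k8s-master ~]# kubectl completion bash > /etc/bash_completion.d/kubectl
[root@k8s-master ~]# source /etc/bash_completion.d/kubectl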

3.9 Join the nodes to the cluster

  • Simply copy the kubeadm join command (with its token) echoed at the end of the k8s-master initialization, paste it on each node, and press Enter; no further configuration is required
[root@k8s-node01 ~]# kubeadm join 192.168.93.101:6443 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:6e241cac434bff98306479bddda1fc912eda3d3f56f73a23373977fef40d5082
W0615 09:17:57.223102    9376 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.


[root@k8s-node02 ~]# kubeadm join 192.168.93.101:6443 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:6e241cac434bff98306479bddda1fc912eda3d3f56f73a23373977fef40d5082
W0615 09:18:01.227038   19243 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

3.10 Check node status on k8s-master

  • As mentioned earlier, no network configuration was included when initializing k8s-master, so the master cannot yet communicate with the nodes and every node reports "NotReady". The nodes added via kubeadm join are nevertheless already visible on k8s-master
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES    AGE     VERSION
k8s-master   NotReady   master   11m     v1.18.0
k8s-node01   NotReady   <none>   2m22s   v1.18.0
k8s-node02   NotReady   <none>   2m18s   v1.18.0
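
An optional way to confirm that the missing network plugin is the cause: the CoreDNS Pods typically remain Pending until a CNI plugin is deployed:

[root@k8s-master ~]# kubectl get pods -n kube-system -o wide
# The coredns-* Pods stay Pending until a network plugin is installed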

4. Installing the Flannel Network Plugin

  • Flannel is a lightweight network plugin based on virtual networking. It supports several backends, such as VXLAN (overlay-based) and host-gateway, and creates a cluster-wide network that lets Pods communicate across nodes

4.1 Pull the Flannel images

  • The flannel images need to be pulled in advance through a proxy, because the domestic Aliyun registry does not host them
  • If you cannot download them, or you are missing the network plugin manifest, leave a comment or send a private message and they will be provided for free
# Note: every node in the cluster needs these two images; otherwise, even with the flannel network plugin installed, nodes will still show "NotReady"
[root@k8s-master ~]# docker pull docker.io/flannel/flannel-cni-plugin:v1.1.2
[root@k8s-master ~]# docker pull docker.io/flannel/flannel:v0.21.5
[root@k8s-master ~]# docker images | grep fl
flannel/flannel                                                   v0.21.5             a6c0cb5dbd21        13 months ago       68.9MB
flannel/flannel-cni-plugin                                        v1.1.2              7a2dcab94698        18 months ago       7.97MB
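
One way to get the two images onto the worker nodes without needing a proxy on every machine: export them on the master and load them on each node. This is a sketch that assumes password-less root SSH from the master to the nodes:

# Export both images to a tarball, copy it to each node, and load it there
[root@k8s-master ~]# docker save flannel/flannel:v0.21.5 flannel/flannel-cni-plugin:v1.1.2 -o flannel.tar
[root@k8s-master ~]# for node in k8s-node01 k8s-node02; do scp flannel.tar $node:/root/ && ssh $node "docker load -i /root/flannel.tar"; done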

4.2 Install the Flannel network plugin

  • Many network plugins are available; flannel is used for this deployment
[root@k8s-master ~]# kubectl apply -f kube-flannel.yaml 
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
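
To wait for the flannel DaemonSet to finish rolling out on all three nodes (the resource name comes from the apply output above):

[root@k8s-master ~]# kubectl -n kube-flannel rollout status daemonset/kube-flannel-ds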

4.3 Check node status

# Check node status
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   23m   v1.18.0
k8s-node01   Ready    <none>   14m   v1.18.0
k8s-node02   Ready    <none>   14m   v1.18.0

# Check Pod status in the kube-system namespace
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-25bzd             1/1     Running   0          23m
coredns-7ff77c879f-wp885             1/1     Running   0          23m
etcd-k8s-master                      1/1     Running   0          24m
kube-apiserver-k8s-master            1/1     Running   0          24m
kube-controller-manager-k8s-master   1/1     Running   0          24m
kube-proxy-2tphl                     1/1     Running   0          15m
kube-proxy-hqppj                     1/1     Running   0          15m
kube-proxy-rfxw2                     1/1     Running   0          23m
kube-scheduler-k8s-master            1/1     Running   0          24m

# Check the status of all Pods
[root@k8s-master ~]# kubectl get pods -A
NAMESPACE      NAME                                 READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-h727x                1/1     Running   0          77s
kube-flannel   kube-flannel-ds-kbztr                1/1     Running   0          77s
kube-flannel   kube-flannel-ds-nw9pr                1/1     Running   0          77s
kube-system    coredns-7ff77c879f-25bzd             1/1     Running   0          24m
kube-system    coredns-7ff77c879f-wp885             1/1     Running   0          24m
kube-system    etcd-k8s-master                      1/1     Running   0          24m
kube-system    kube-apiserver-k8s-master            1/1     Running   0          24m
kube-system    kube-controller-manager-k8s-master   1/1     Running   0          24m
kube-system    kube-proxy-2tphl                     1/1     Running   0          15m
kube-system    kube-proxy-hqppj                     1/1     Running   0          15m
kube-system    kube-proxy-rfxw2                     1/1     Running   0          24m
kube-system    kube-scheduler-k8s-master            1/1     Running   0          24m

# At this point, the quick installation of a Kubernetes cluster via kubeadm is complete
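
As an optional final sanity check (not part of the original walkthrough), a throwaway Deployment can confirm that scheduling and networking work end to end; the name nginx-test is hypothetical:

[root@k8s-master ~]# kubectl create deployment nginx-test --image=nginx
[root@k8s-master ~]# kubectl expose deployment nginx-test --port=80 --type=NodePort
# Wait for the Pod to become Running, then curl any node IP at the assigned NodePort
[root@k8s-master ~]# kubectl get pods,svc -o wide
# Clean up afterwards
[root@k8s-master ~]# kubectl delete deployment,svc nginx-test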
