1.1 Kubernetes Core Components
A Kubernetes cluster has two main types of nodes: master nodes and minion (worker) nodes. Minion nodes run the Docker containers; they interact with the local Docker daemon and provide the proxy layer. The master node exposes the API for managing the cluster and drives the minion nodes to carry out cluster operations. The main components are:
apiserver: the entry point for users interacting with the Kubernetes cluster. It wraps create/read/update/delete operations on the core objects, exposes a RESTful API, and persists objects in etcd to keep their state consistent.
scheduler: responsible for scheduling cluster resources. For example, when a pod exits abnormally and must be placed on a new machine, the scheduler uses its scheduling algorithm to pick the most suitable node.
controller-manager: mainly ensures that the number of running pods matches the replica count defined by a Replication Controller, and keeps the service-to-pod mappings up to date.
kubelet: runs on each minion node and interacts with the local Docker daemon, for example starting and stopping containers and monitoring their state.
proxy: runs on each minion node and provides the proxy function for pods. It periodically reads service information from etcd and programs iptables rules to forward traffic to the node hosting the target pod (early versions forwarded traffic in userspace, which was less efficient).
etcd: the key-value store that holds all Kubernetes state.
flannel: an overlay-network tool designed by the CoreOS team for Kubernetes. Flannel re-plans IP address allocation for all nodes in the cluster so that containers on different nodes receive non-overlapping addresses from the same private network and can communicate with each other directly over those internal IPs.
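Once the cluster built in the following sections is up, the health of these components can be checked from the master. A minimal sketch, assuming kubectl is configured against the local apiserver on the master:
# Check controller-manager, scheduler, and etcd health as seen by the apiserver
kubectl get componentstatuses
# List the registered minion nodes and their readiness
kubectl get nodes
# Show client and server versions
kubectl version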
1.2 Kubernetes Environment Deployment
1.2.1 Base Environment
Plan the following for every node before installation: hostname, IP address, OS and kernel version, and software versions.
1.2.2 Environment Initialization (run on all nodes)
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
systemctl stop firewalld
yum -y install ntp
systemctl start ntpd
systemctl enable ntpd
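To confirm the initialization took effect on each node, a quick check along these lines can be used (a sketch; disabling firewalld permanently is an optional extra step, since the commands above only stop it):
getenforce                      # expect Permissive or Disabled
systemctl is-active firewalld   # expect inactive
systemctl is-active ntpd        # expect active
systemctl disable firewalld     # optional: keep the firewall off across reboots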
1.3 Kubernetes Master Installation and Configuration
Install etcd and Kubernetes on the Kubernetes master node:
yum install kubernetes-master etcd -y
Master /etc/etcd/etcd.conf configuration file:
######## etcd.conf ########
ETCD_NAME=etcd165
ETCD_DATA_DIR="/chj/etcd/data"
ETCD_LISTEN_PEER_URLS=""
ETCD_LISTEN_CLIENT_URLS="0.0.0.0:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="192.168.47.165:2380"
ETCD_INITIAL_CLUSTER="etcd165="
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd"
ETCD_ADVERTISE_CLIENT_URLS=""
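Several of the URL values above are shown without content. For reference, a complete master etcd.conf might look as follows; this is a sketch under assumptions: the http:// scheme, the 0.0.0.0 listen addresses, and the single-member initial cluster are filled in from etcd conventions, not taken from this article. Additional members (etcd166, etcd167, ...) would be appended to ETCD_INITIAL_CLUSTER if a multi-member cluster is intended.
# /etc/etcd/etcd.conf on the master (assumed values marked)
ETCD_NAME=etcd165
ETCD_DATA_DIR="/chj/etcd/data"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"                      # assumed
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"                    # assumed scheme
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.47.165:2380"    # assumed scheme
ETCD_INITIAL_CLUSTER="etcd165=http://192.168.47.165:2380"        # add etcd166/etcd167/... here for a multi-member cluster
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.47.165:2379"          # assumed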
######## Run etcd from the command line (alternative to the etcd.conf file) ########
touch etcd.start.sh
cat etcd.start.sh
/usr/bin/etcd \
--name etcd165 \
--debug \
--initial-advertise-peer-urls \
--listen-peer-urls \
--listen-client-urls \
--advertise-client-urls \
--initial-cluster-token k8s-etcd \
--data-dir /chj/etcd/data \
--initial-cluster etcd166= \
--initial-cluster-state new
Start the script with: nohup sh etcd.start.sh &
Or start the etcd service with: systemctl restart etcd.service
Master /etc/kubernetes/config configuration file:
[root@k8s-master etcd]# cat /etc/kubernetes/config |grep -v "^#" |grep -v "^$"
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master="
KUBE_MASTER tells the Kubernetes controller-manager, scheduler, and proxy processes the address of the Kubernetes apiserver.
Master /etc/kubernetes/apiserver configuration file:
[root@k8s-master etcd]# cat /etc/kubernetes/apiserver |grep -v "^#" |grep -v "^$"
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBE_ETCD_SERVERS="--etcd-servers="
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.10.88.0/16"
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"
KUBE_API_ARGS="--allow_privileged=false"
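The --master, --etcd-servers, and --api-servers settings above and in the node configuration files below are shown without values. A sketch of plausible values, assuming the master at 192.168.47.165 with the insecure apiserver port 8080 and etcd on port 2379; these addresses are inferred from the IPs used elsewhere in this article, not confirmed by it:
# /etc/kubernetes/config on the master and all nodes (assumed address)
KUBE_MASTER="--master=http://192.168.47.165:8080"
# /etc/kubernetes/apiserver on the master (assumed etcd endpoint)
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.47.165:2379"
# /etc/kubernetes/kubelet on the nodes (assumed apiserver address)
KUBELET_API_SERVER="--api-servers=http://192.168.47.165:8080"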
Start the apiserver, controller-manager, and scheduler services on the master (etcd was started above) and check their status:
for I in kube-apiserver kube-controller-manager kube-scheduler; do
systemctl restart $I
systemctl enable $I
systemctl status $I
done
1.4 Kubernetes Node1 Installation and Configuration
Install flannel, etcd, and Kubernetes on the Node1 node:
yum install kubernetes-node etcd flannel -y
Edit node1 /etc/etcd/etcd.conf as follows:
ETCD_NAME=etcd166
ETCD_DATA_DIR="/chj/etcd/data"
ETCD_LISTEN_PEER_URLS=""
ETCD_LISTEN_CLIENT_URLS="0.0.0.0:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="192.168.47.166:2380"
ETCD_INITIAL_CLUSTER="etcd165="
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd"
ETCD_ADVERTISE_CLIENT_URLS=""
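Once etcd has been configured and started on the master and on each node (the remaining nodes are configured the same way below), membership and health can be checked from any machine with etcdctl. A minimal sketch, assuming the default local client endpoint:
# List the members known to this etcd instance
etcdctl member list
# Check overall cluster health
etcdctl cluster-health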
######## Run etcd from the command line on node1 ########
touch etcd.start.sh
cat etcd.start.sh
/usr/bin/etcd \
--name etcd166 \
--debug \
--initial-advertise-peer-urls \
--listen-peer-urls \
--listen-client-urls \
--advertise-client-urls \
--initial-cluster-token k8s-etcd \
--data-dir /chj/etcd/data \
--initial-cluster etcd166= \
--initial-cluster-state new
The flanneld configuration (see section 1.9) tells the flannel process where the etcd service is and under which etcd key the network configuration is stored. Next, configure Kubernetes on the node by editing /etc/kubernetes/config:
[root@k8s-node01 kubernetes]# grep -v "^#" config |grep -v "^$"
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master="
Configuration file /etc/kubernetes/kubelet:
[root@k8s-node01 kubernetes]# grep -v "^#" kubelet |grep -v "^$"
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_API_SERVER="--api-servers="
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=192.168.47.250:5000/pause"
KUBELET_ARGS=""
Start the kube-proxy and kubelet services on the node and check their status (docker and flanneld are configured and started in the flannel section below):
for I in kube-proxy kubelet; do
systemctl restart $I
systemctl enable $I
systemctl status $I
done
1.5 Kubernetes Node2 Installation and Configuration
Install flannel, Docker, and Kubernetes on the Node2 node:
yum -y install flannel docker kubernetes-node
Edit node2 /etc/etcd/etcd.conf as follows:
ETCD_NAME=etcd167
ETCD_DATA_DIR="/chj/etcd/data"
ETCD_LISTEN_PEER_URLS=""
ETCD_LISTEN_CLIENT_URLS="0.0.0.0:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="192.168.47.167:2380"
ETCD_INITIAL_CLUSTER="etcd165="
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd"
ETCD_ADVERTISE_CLIENT_URLS=""
######## Run etcd from the command line on node2 ########
touch etcd.start.sh
cat etcd.start.sh
/usr/bin/etcd \
--name etcd167 \
--debug \
--initial-advertise-peer-urls \
--listen-peer-urls \
--listen-client-urls \
--advertise-client-urls \
--initial-cluster-token k8s-etcd \
--data-dir /chj/etcd/data \
--initial-cluster etcd166= \
--initial-cluster-state new
The flanneld configuration (see section 1.9) tells the flannel process where the etcd service is and under which etcd key the network configuration is stored. Next, configure Kubernetes on the node by editing /etc/kubernetes/config:
[root@k8s-node02]# grep -v "^#" /etc/kubernetes/config |grep -v "^$"
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master="
Configuration file /etc/kubernetes/kubelet:
[root@k8s-node02]# grep -v "^#" /etc/kubernetes/kubelet |grep -v "^$"
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_API_SERVER="--api-servers="
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=192.168.47.250:5000/pause"
KUBELET_ARGS=""
Start the kube-proxy and kubelet services on the node and check their status:
for I in kube-proxy kubelet; do
systemctl restart $I
systemctl enable $I
systemctl status $I
done
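The kubelet configuration on every node points the pod-infrastructure image at the local registry (192.168.47.250:5000/pause). A sketch of seeding that registry from a machine that can reach Docker Hub; the source image name docker.io/kubernetes/pause is an assumption, and any reachable pause image would do:
# Pull a pause image, retag it for the local registry, and push it
docker pull docker.io/kubernetes/pause
docker tag docker.io/kubernetes/pause 192.168.47.250:5000/pause
docker push 192.168.47.250:5000/pause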
1.6 Kubernetes Node3 Installation and Configuration
Install flannel, Docker, and Kubernetes on the Node3 node:
yum -y install flannel docker kubernetes-node
Edit node3 /etc/etcd/etcd.conf as follows:
######## etcd.conf ########
ETCD_NAME=etcd29
ETCD_DATA_DIR="/chj/etcd/data"
ETCD_LISTEN_PEER_URLS=""
ETCD_LISTEN_CLIENT_URLS="0.0.0.0:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="192.168.46.29:2380"
ETCD_INITIAL_CLUSTER="etcd165="
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd"
ETCD_ADVERTISE_CLIENT_URLS=""
######## Run etcd from the command line on node3 ########
touch etcd.start.sh
cat etcd.start.sh
/usr/bin/etcd \
--name etcd29 \
--debug \
--initial-advertise-peer-urls \
--listen-peer-urls \
--listen-client-urls \
--advertise-client-urls \
--initial-cluster-token k8s-etcd \
--data-dir /chj/etcd/data \
--initial-cluster etcd166= \
--initial-cluster-state new
Start the script with: nohup sh etcd.start.sh &
The flanneld configuration (see section 1.9) tells the flannel process where the etcd service is and under which etcd key the network configuration is stored. Next, configure Kubernetes on the node by editing /etc/kubernetes/config:
[root@k8s-node03]# grep -v "^#" /etc/kubernetes/config |grep -v "^$"
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master="
Configuration file /etc/kubernetes/kubelet:
[root@k8s-node03]# grep -v "^#" /etc/kubernetes/kubelet |grep -v "^$"
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_API_SERVER="--api-servers="
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=192.168.47.250:5000/pause"
KUBELET_ARGS=""
Start the kube-proxy and kubelet services on the node and check their status:
for I in kube-proxy kubelet; do
systemctl restart $I
systemctl enable $I
systemctl status $I
done
1.7 Kubernetes Node4 Installation and Configuration
Install flannel, Docker, and Kubernetes on the Node4 node:
yum -y install flannel docker kubernetes-node
Edit node4 /etc/etcd/etcd.conf as follows:
######## etcd.conf ########
ETCD_NAME=etcd32
ETCD_DATA_DIR="/chj/etcd/data"
ETCD_LISTEN_PEER_URLS=""
ETCD_LISTEN_CLIENT_URLS="0.0.0.0:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="192.168.46.32:2380"
ETCD_INITIAL_CLUSTER="etcd165="
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd"
ETCD_ADVERTISE_CLIENT_URLS=""
######## Run etcd from the command line on node4 ########
touch etcd.start.sh
cat etcd.start.sh
/usr/bin/etcd \
--name etcd32 \
--debug \
--initial-advertise-peer-urls \
--listen-peer-urls \
--listen-client-urls \
--advertise-client-urls \
--initial-cluster-token k8s-etcd \
--data-dir /chj/etcd/data \
--initial-cluster etcd166= \
--initial-cluster-state new
Start the script with: nohup sh etcd.start.sh &
The flanneld configuration (see section 1.9) tells the flannel process where the etcd service is and under which etcd key the network configuration is stored. Next, configure Kubernetes on the node by editing /etc/kubernetes/config:
[root@k8s-node03]# grep -v "^#" /etc/kubernetes/config |grep -v "^$"
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master="
Configuration file /etc/kubernetes/kubelet:
[root@k8s-node03]# grep -v "^#" /etc/kubernetes/kubelet |grep -v "^$"
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_API_SERVER="--api-servers="
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=192.168.47.250:5000/pause"
KUBELET_ARGS=""
Start the kube-proxy and kubelet services on the node and check their status:
for I in kube-proxy kubelet; do
systemctl restart $I
systemctl enable $I
systemctl status $I
done
1.8 Kubernetes Node5 Installation and Configuration
Install flannel, Docker, and Kubernetes on the Node5 node:
yum -y install flannel docker kubernetes-node
The flanneld configuration (see section 1.9) tells the flannel process where the etcd service is and under which etcd key the network configuration is stored. Next, configure Kubernetes on the node by editing /etc/kubernetes/config:
[root@k8s-node03]# grep -v "^#" /etc/kubernetes/config |grep -v "^$"
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master="
Configuration file /etc/kubernetes/kubelet:
[root@k8s-node03]# grep -v "^#" /etc/kubernetes/kubelet |grep -v "^$"
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_API_SERVER="--api-servers="
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=192.168.47.250:5000/pause"
KUBELET_ARGS=""
Start the kube-proxy and kubelet services on the node and check their status:
for I in kube-proxy kubelet; do
systemctl restart $I
systemctl enable $I
systemctl status $I
done
At this point the Kubernetes cluster environment is in place.
1.9 Kubernetes flanneld Network Configuration
Edit /etc/sysconfig/flanneld on all master and node machines:
[root@k8s-master etcd]# grep -v "^#" /etc/sysconfig/flanneld |grep -v "^$"
FLANNEL_ETCD_ENDPOINTS=""
FLANNEL_ETCD_PREFIX="/flannel/network"
Create the flannel network configuration in etcd (this step can be run on either the master or a node):
etcdctl rm /flannel/network/ --recursive
######## Create a vxlan network model ########
etcdctl set /flannel/network/config "{ \"Network\": \"10.10.0.0/16\", \"SubnetLen\": 24, \"Backend\": { \"Type\": \"vxlan\" } }"
######## Or create a udp network model (pick one backend; setting the key again overwrites the previous value) ########
etcdctl set /flannel/network/config "{ \"Network\": \"10.10.0.0/16\", \"SubnetLen\": 24, \"Backend\": { \"Type\": \"udp\" } }"
When the flannel network is set up on the Kubernetes nodes, the /flannel/network/config key in etcd is used by flanneld on each node to carve out the Docker IP address subnet for that node. To list the subnets recorded in etcd:
etcdctl ls /flannel/network/subnets
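To confirm what flanneld will read, the stored network configuration can also be fetched directly; a small sketch assuming etcdctl talks to the local etcd endpoint:
# Show the flannel network configuration stored in etcd
etcdctl get /flannel/network/config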
Start the flannel service on the master and all nodes:
systemctl start flanneld.service
On every machine where flannel is installed, starting the service generates the following configuration files.
Note that starting the flannel service generates the Docker options file, for example:
[root@k8s-node01 kubernetes]# cat /run/flannel/docker
DOCKER_OPT_BIP="--bip=10.10.7.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=10.10.7.1/24 --ip-masq=true --mtu=1450 --insecure-registry 192.168.47.250:5000"
The --insecure-registry option at the end points Docker at the private registry address.
[root@k8s-node01]# cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.10.0.0/16
FLANNEL_SUBNET=10.10.7.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false
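A quick way to confirm that Docker has picked up the flannel-assigned subnet and that the overlay works is to compare interfaces and ping across nodes. A sketch; the interface name and the 10.10.x.1 addresses depend on the backend and the subnet each node was assigned:
# The docker0 bridge should sit inside this node's FLANNEL_SUBNET
ip addr show docker0
# The flannel interface (flannel.1 for vxlan, flannel0 for udp) carries the overlay traffic
ip addr show flannel.1
# From this node, ping the docker0 address of another node, e.g. 10.10.21.1
ping -c 3 10.10.21.1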
Useful ip link commands for managing network interfaces:
1. Change the length of the device transmit queue. Parameter: txqueuelen NUMBER (or txqlen NUMBER).
2. Change the MTU (maximum transmission unit) of a network device:
ip link set dev eth0 mtu 1500
3. Change the MAC address of a network device. Parameter: address LLADDRESS.
4. Bring an interface down:
ip link set eth1 down
5. Bring an interface up:
ip link set dev eth0 up
6. Delete an interface:
ip link delete docker0
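For the entries above that only list the parameter, hedged example invocations look like the following; the device name eth0, the queue length, and the MAC address are placeholders, not values from this environment:
# Set the transmit queue length
ip link set dev eth0 txqueuelen 1200
# Set the MAC address (the interface usually has to be down first)
ip link set dev eth0 down
ip link set dev eth0 address 00:0c:29:11:22:33
ip link set dev eth0 up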
1.10 Kubernetes Pods Configuration
With the cluster built, the next step is to create Pods. When a pod fails to start, kubectl describe and kubectl logs help locate the cause; in this environment the cause was that images could not be pulled from gcr.io (Google Container Registry). The fix is to pull images from another reachable registry address, or to set up a local Docker private registry instead.
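A short debugging sketch for a pod stuck in ContainerCreating or ImagePullBackOff; the pod name nginx-web1 is only an illustration:
# Inspect the pod's events for image pull errors
kubectl describe pod nginx-web1
# Check the container logs once the container has started at least once
kubectl logs nginx-web1
# On the node, try the pull manually to see the registry error directly
docker pull 192.168.47.250:5000/pause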
1.11 Kubernetes Web UI Configuration
Create kube-namespace.yaml as follows:
{
"kind": "Namespace",
"apiVersion": "v1",
"metadata": {
"name": "kube-system"
}
}
Create it with kubectl create -f kube-namespace.yaml and verify with kubectl get namespace.
Create kubernetes-dashboard.yaml as follows:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    app: kubernetes-dashboard
    version: latest
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-dashboard
  template:
    metadata:
      labels:
        app: kubernetes-dashboard
    spec:
      containers:
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
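The container and port sections of the manifest above are truncated. A complete manifest along these lines might look as follows; the image name, the container port 9090, the --apiserver-host address, and the Service port mapping are assumptions based on kubernetes-dashboard releases of that era, not values taken from this article:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    app: kubernetes-dashboard
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-dashboard
  template:
    metadata:
      labels:
        app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: 192.168.47.250:5000/kubernetes-dashboard-amd64   # assumed image name in the local registry
        ports:
        - containerPort: 9090                                   # assumed dashboard HTTP port
        args:
        - --apiserver-host=http://192.168.47.165:8080           # assumed apiserver address
---
kind: Service
apiVersion: v1
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    app: kubernetes-dashboard
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9090        # assumed to match the container port
  selector:
    app: kubernetes-dashboard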
Nginx WEB pod configuration (the running pod object as returned by the apiserver):
kind: Pod
apiVersion: v1
metadata:
  name: nginx-web1-1187963248-3z862
  generateName: nginx-web1-1187963248-
  namespace: kube-system
  selfLink: /api/v1/namespaces/kube-system/pods/nginx-web1-1187963248-3z862
  uid: c2ad1b54-5253-11e7-91ba-000c291c9220
  resourceVersion: 156010
  creationTimestamp: 2017-06-16T05:22:15Z
  labels:
    app: nginx-web1
    pod-template-hash: 1187963248
  annotations:
    kubernetes.io/created-by: '{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"kube-system","name":"nginx-web1-1187963248","uid":"647c4dd7-5253-11e7-91ba-000c291c9220","apiVersion":"extensions","resourceVersion":"154018"}}'
  ownerReferences:
  - apiVersion: extensions/v1beta1
    kind: ReplicaSet
    name: nginx-web1-1187963248
    uid: 647c4dd7-5253-11e7-91ba-000c291c9220
    controller: true
spec:
  containers:
  - name: nginx-web1
    image: nginx
    resources: {}
    terminationMessagePath: /dev/termination-log
    imagePullPolicy: Always
    securityContext:
      privileged: false
  restartPolicy: Always
  terminationGracePeriodSeconds: 30
  dnsPolicy: ClusterFirst
  nodeName: 192.168.1.122
  securityContext: {}
status:
  phase: Running
  conditions:
  - type: Initialized
    status: True
    lastProbeTime: null
    lastTransitionTime: 2017-06-16T05:22:16Z
  - type: Ready
    status: True
    lastProbeTime: null
    lastTransitionTime: 2017-06-16T05:41:14Z
  - type: PodScheduled
    status: True
    lastProbeTime: null
    lastTransitionTime: 2017-06-16T05:22:16Z
  hostIP: 192.168.1.122
  podIP: 10.1.24.5
  startTime: 2017-06-16T05:22:16Z
  containerStatuses:
  - name: nginx-web1
    state:
      running:
        startedAt: 2017-06-16T05:41:12Z
    lastState: {}
    ready: true
    restartCount: 0
    image: nginx
    imageID: docker-pullable://docker.io/nginx@sha256:41ad9967ea448d7c2b203c699b429abe1ed5af331cd92533900c6d77490e0268
    containerID: docker://42d347c4aadf38757ce6fc071c72650df4b503a9315d306063c03b7b11d91b4a
1.12 Kubernetes Local Private Registry in Practice
Docker registries store Docker images. Registries can be public or private, and a local private registry can be built on the registry image. The advantages of a private registry are:
it saves network bandwidth, since images do not have to be downloaded from the official Docker registry for every node;
Docker images are pulled from the local private registry instead;
an in-house private registry is convenient for every department and makes server management more uniform;
registry images can be updated from GIT or SVN and Jenkins.
The official Docker Registry is used to build the local private registry. The current major version is v2 (recent Docker releases no longer support v1). Registry v2 is written in Go, with many performance and security optimizations and a redesigned image storage format. The following steps build a Docker local private registry on the server 192.168.47.250:
(1) Pull the Docker registry image:
docker pull registry
(2) Start the private registry container:
mkdir -p /data/registry/
docker run -itd -p 5000:5000 -v /data/registry:/tmp/registry docker.io/registry
The local registry then runs as a background container. By default the registry stores images under /tmp/registry inside the container, so if the container is removed those images are lost; that is why a local directory is normally mounted onto /tmp/registry, as in the command above.
(3) Push an image to the local private registry. Using busybox as an example, tag it and push it to the registry server:
docker pull busybox
docker tag busybox 192.168.47.250:5000/busybox
docker push 192.168.47.250:5000/busybox
(4) Verify the local private registry by querying its HTTP API with curl.
(5) Use the local private registry from a client. Add the following to the client's Docker configuration (typically /etc/sysconfig/docker) and restart the docker service; pulls will then go through the local private registry:
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --insecure-registry 192.168.47.250:5000'
ADD_REGISTRY='--add-registry 192.168.47.250:5000'
At this point the Docker local private registry is ready; images can be added to it or updated as needed.
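The curl checks in step (4) can be run against the registry v2 HTTP API; a sketch assuming the registry at 192.168.47.250:5000 and the busybox image pushed above:
# List the repositories stored in the registry
curl -XGET http://192.168.47.250:5000/v2/_catalog
# List the tags of the busybox repository
curl -XGET http://192.168.47.250:5000/v2/busybox/tags/list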
1.13 Kubernetes with the Local Registry
Create kube-namespace.yaml as follows (skip this if the namespace already exists):
{
"kind": "Namespace",
"apiVersion": "v1",
"metadata": {
"name": "kube-system"
}
}
Create kubernetes-dashboard.yaml, this time pulling the image from the local private registry:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    app: kubernetes-dashboard
    version: latest
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 2
  selector:
    matchLabels:
      app: kubernetes-dashboard
  template:
    metadata:
      labels:
        app: kubernetes-dashboard
    spec:
      containers:
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
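Once the manifest is saved it can be applied and checked with the usual commands; a short sketch, assuming the file is named kubernetes-dashboard.yaml:
# Create the dashboard Deployment and Service in the kube-system namespace
kubectl create -f kubernetes-dashboard.yaml
# Watch the dashboard pods come up and find the NodePort assigned to the service
kubectl get pods --namespace=kube-system
kubectl get service kubernetes-dashboard --namespace=kube-system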
1.14 Kubernetes Command Reference
kubectl get componentstatuses                        # check the status of the cluster components
kubectl get events --all-namespaces                  # view the event log across all namespaces
kubectl get pod nginx                                # get the nginx pod
kubectl describe pods nginx                          # show detailed information about the nginx pod
kubectl create -f nginx.yaml                         # create the nginx pod from a manifest
kubectl run my-web --image=192.168.47.250:5000/nginx --replicas=2 --port=80   # run nginx from the command line
kubectl delete pod --all                             # delete all pods
kubectl get deployment                               # list all deployments
kubectl delete deployment my-web                     # delete the my-web deployment
kubectl get service                                  # list all services
kubectl describe svc wweb-nginx-service              # show details of the wweb-nginx service
kubectl get po --show-labels                         # show pods with their labels
kubectl label pod web--kcz name=web                  # add a label to a pod
kubectl get po -o wide                               # show pod network information (IP, node)
kubectl get po -o yaml                               # show pods in YAML form
kubectl get po -l name=web                           # pods whose label name equals web
kubectl get po -l name!=web                          # pods whose label name is not web
kubectl exec -it web--vjx /bin/bash                  # open a shell inside a pod's container
kubectl edit deployment web                          # edit the web deployment
kubectl scale --replicas=0 deployment web            # scale a deployment down or up
kubectl get endpoints                                # list endpoints
kubectl cluster-info                                 # show cluster information
kubectl describe node                                # show node details
kubectl scale deployment --replicas=0 config-ms-v6   # scale the config-ms-v6 deployment down
kubectl delete deployment --all                      # delete all deployments
kubectl describe service                             # show service details
kubectl set image deployment/config-ms-v24 config-ms-v24=192.168.47.250:5000/config-ms:v23 --all   # update a deployment's image
1.15 Kubernetes Troubleshooting Notes
1. After the flannel network starts, Docker does not pick up the IP range assigned via etcd.
Fix: add the following parameters to the docker systemd unit.
cat /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
# Note: add the settings below to the existing unit; do not delete the parameters that are already configured.
[Service]
EnvironmentFile=-/run/flannel/subnet.env
Type=notify
ExecStart=/usr/bin/dockerd --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
After editing the unit file, run systemctl daemon-reload and systemctl restart docker so that Docker starts with the flannel-assigned --bip and --mtu values.
Reposted from: https://blog.51cto.com/breaklinux/2056344