Kubernetes + Etcd v1.5.2 Distributed Cluster Deployment
Published: 2019-06-22


1.1 Essential Kubernetes components

A Kubernetes cluster contains two kinds of nodes: master and minion. Minion nodes are the nodes that run Docker containers; they interact with the local Docker daemon and provide proxying. The Master node exposes the APIs for managing the cluster and drives the cluster by talking to the Minion nodes.
apiserver: the entry point through which users interact with the Kubernetes cluster. It wraps create/read/update/delete operations on the core objects, exposes a RESTful API, and uses etcd for persistence and object consistency.
scheduler: responsible for scheduling cluster resources; for example, when a pod exits abnormally and has to be re-placed, the scheduler uses its scheduling algorithm to find the most suitable node.
controller-manager: mainly ensures that the number of pods actually running matches the replica count defined by a Replication Controller, and keeps the service-to-pod mapping up to date.
kubelet: runs on minion nodes and talks to the local Docker daemon, for example starting and stopping containers and monitoring their state.
proxy (kube-proxy): runs on minion nodes and provides proxying for pods. It periodically fetches service information from etcd and rewrites iptables rules accordingly, forwarding traffic to the node hosting the target pod (the earliest versions forwarded traffic in the proxy process itself, which was less efficient); see the iptables example after this list.
etcd: a key-value store that holds all Kubernetes cluster state.
flannel: an overlay-network tool designed by the CoreOS team for Kubernetes. Flannel re-plans how IP addresses are used across all nodes in the cluster, so that containers on different nodes receive non-overlapping addresses in the same internal network and can reach each other directly over those internal IPs.
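
A quick way to observe kube-proxy's iptables-based forwarding, assuming kube-proxy is running in its default iptables mode on the node (these commands are standard iptables usage, not from the original post):
######## Inspect kube-proxy iptables rules (run on a Node) ########
iptables -t nat -nL KUBE-SERVICES | head -n 20   # one entry per Service cluster IP
iptables -t nat -nL POSTROUTING | grep KUBE      # masquerade rules maintained by kube-proxy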


1.2 Kubernetes environment deployment

1.2.1 Base environment

Host planning: hostname, IP address, OS / kernel version, and software versions (the detailed planning table was provided as an image in the original post).

1.2.2 Environment initialization (run on every node)
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
systemctl stop firewalld
yum -y install ntp
systemctl start ntpd
systemctl enable ntpd
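
An optional quick check that the initialization took effect on each node (standard commands, not from the original post):
getenforce                      # should print Permissive (Disabled after a reboot)
systemctl is-active firewalld   # should print inactive
ntpq -p                         # lists the NTP peers in use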

1.3 Kubernetes Master installation and configuration

Install etcd and Kubernetes on the Kubernetes Master node:
yum install kubernetes-master etcd -y
Master /etc/etcd/etcd.conf configuration file:
########################etcd.conf############################
ETCD_NAME=etcd165
ETCD_DATA_DIR="/chj/etcd/data"
ETCD_LISTEN_PEER_URLS=""
ETCD_LISTEN_CLIENT_URLS="0.0.0.0:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="192.168.47.165:2380"
ETCD_INITIAL_CLUSTER="etcd165="
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd"
ETCD_ADVERTISE_CLIENT_URLS=""
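
Several URL values above are blank in this copy. For reference, a complete etcd.conf for this master node would look roughly like the sketch below; the http:// endpoints and the member list are assumptions based on the IPs used elsewhere in this article, not the author's original values:
ETCD_NAME=etcd165
ETCD_DATA_DIR="/chj/etcd/data"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.47.165:2380"
ETCD_INITIAL_CLUSTER="etcd165=http://192.168.47.165:2380,etcd166=http://192.168.47.166:2380,etcd167=http://192.168.47.167:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.47.165:2379"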

###################### Run from the command line ###############################

touch etcd.start.sh
cat etcd.start.sh
/usr/bin/etcd \
--name etcd165 \
--debug \
--initial-advertise-peer-urls \
--listen-peer-urls \
--listen-client-urls \
--advertise-client-urls \
--initial-cluster-token k8s-etcd \
--data-dir /chj/etcd/data \
--initial-cluster etcd166=\
--initial-cluster-state new
Start command: nohup sh etcd.start.sh &
Alternatively, start the etcd service with: systemctl restart etcd.service
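
To confirm etcd is healthy, the v2-API etcdctl client shipped with this etcd version can be used; the endpoint below is an assumption based on the master IP in this article:
etcdctl --endpoints http://192.168.47.165:2379 cluster-health
etcdctl --endpoints http://192.168.47.165:2379 member list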

Master /etc/kubernetes/config configuration file:

[root@k8s-master etcd]# cat /etc/kubernetes/config |grep -v "^#" |grep -v "^$"
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master="

This tells the Kubernetes controller-manager, scheduler and proxy processes where to find the apiserver.
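
The --master value above is blank in this copy; a complete version would typically point at the apiserver's insecure port, for example (the URL is an assumption based on the master IP used in this article):
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.47.165:8080"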

Master /etc/kubernetes/apiserver configuration file:
[root@k8s-master etcd]# cat /etc/kubernetes/apiserver |grep -v "^#" |grep -v "^$"
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBE_ETCD_SERVERS="--etcd-servers="
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.10.88.0/16"
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"
KUBE_API_ARGS="--allow_privileged=false"
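
The --etcd-servers value is also blank in this copy; a complete apiserver file would look roughly like the sketch below (the etcd endpoints are assumptions based on the etcd members configured in this article):
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.47.165:2379,http://192.168.47.166:2379,http://192.168.47.167:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.10.88.0/16"
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"
KUBE_API_ARGS="--allow_privileged=false"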

Start the apiserver, controller-manager and scheduler services on the Kubernetes Master node (etcd was started above) and check their status:

for I in kube-apiserver kube-controller-manager kube-scheduler; do
systemctl restart $I
systemctl enable $I
systemctl status $I
done
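
Once these services are up, the control plane can be checked from the master with standard commands (not from the original post):
kubectl get componentstatuses        # scheduler, controller-manager and etcd should report Healthy
systemctl is-active kube-apiserver kube-controller-manager kube-scheduler
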
1.4 Kubernetes Node1 installation and configuration
Install flannel, docker and Kubernetes on the Kubernetes Node1 node:
yum install kubernetes-node etcd flannel -y
Edit Node1's /etc/etcd/etcd.conf as follows:
ETCD_NAME=etcd166
ETCD_DATA_DIR="/chj/etcd/data"
ETCD_LISTEN_PEER_URLS=""
ETCD_LISTEN_CLIENT_URLS="0.0.0.0:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="192.168.47.166:2380"
ETCD_INITIAL_CLUSTER="etcd165="
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd"
ETCD_ADVERTISE_CLIENT_URLS=""

###################### Run from the command line ###############################

touch etcd.start.sh
cat etcd.start.sh
/usr/bin/etcd \
--name etcd166 \
--debug \
--initial-advertise-peer-urls \
--listen-peer-urls \
--listen-client-urls \
--advertise-client-urls \
--initial-cluster-token k8s-etcd \
--data-dir /chj/etcd/data \
--initial-cluster etcd166=\
--initial-cluster-state new
These settings tell the flannel process where the etcd service is and where the network configuration is stored in etcd. Next, configure Kubernetes on the Node by editing /etc/kubernetes/config:
[root@k8s-node01 kubernetes]# grep -v "^#" config |grep -v "^$"
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master="
Configuration file /etc/kubernetes/kubelet:
[root@k8s-node01 kubernetes]# grep -v "^#" kubelet |grep -v "^$"
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_API_SERVER="--api-servers="
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=192.168.47.250:5000/pause"
KUBELET_ARGS=""
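
As on the master, the URL value of --api-servers is blank in this copy. A plausible complete kubelet file for Node1 is sketched below; the apiserver URL and the hostname override are assumptions based on the IPs used in this article:
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=192.168.47.166"
KUBELET_API_SERVER="--api-servers=http://192.168.47.165:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=192.168.47.250:5000/pause"
KUBELET_ARGS=""
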
Start the kube-proxy and kubelet services on the Node, enable them, and check their status:
for I in kube-proxy kubelet
do
systemctl restart $I
systemctl enable $I
systemctl status $I
done
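
After kubelet and kube-proxy start, the node should register itself; this can be verified from the master (standard command, not from the original post):
kubectl get nodes
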
1.5 Kubernetes Node2 installation and configuration
Install flannel, docker and Kubernetes on the Kubernetes Node2 node:
yum -y install flannel docker kubernetes-node
Edit Node2's /etc/etcd/etcd.conf as follows:
ETCD_NAME=etcd167
ETCD_DATA_DIR="/chj/etcd/data"
ETCD_LISTEN_PEER_URLS=""
ETCD_LISTEN_CLIENT_URLS="0.0.0.0:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="192.168.47.167:2380"
ETCD_INITIAL_CLUSTER="etcd165="
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd"
ETCD_ADVERTISE_CLIENT_URLS=""

###################### Run from the command line ###############################

touch etcd.start.sh
cat etcd.start.sh
/usr/bin/etcd \
--name etcd167 \
--debug \
--initial-advertise-peer-urls \
--listen-peer-urls \
--listen-client-urls \
--advertise-client-urls \
--initial-cluster-token k8s-etcd \
--data-dir /chj/etcd/data \
--initial-cluster etcd166=\
--initial-cluster-state new
These settings tell the flannel process where the etcd service is and where the network configuration is stored in etcd. Next, configure Kubernetes on the Node by editing /etc/kubernetes/config:
[root@k8s-node02]# grep -v "^#" /etc/kubernetes/config |grep -v "^$"
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master="
Configuration file /etc/kubernetes/kubelet:
[root@k8s-node02]# grep -v "^#" /etc/kubernetes/kubelet |grep -v "^$"
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_API_SERVER="--api-servers="
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=192.168.47.250:5000/pause"
KUBELET_ARGS=""
Start the kube-proxy and kubelet services on the Node, enable them, and check their status:
for I in kube-proxy kubelet
do
systemctl restart $I
systemctl enable $I
systemctl status $I
done

1.6 Kubernetes Node3 installation and configuration

Install flannel, docker and Kubernetes on the Kubernetes Node3 node:
yum -y install flannel docker kubernetes-node
Edit Node3's /etc/etcd/etcd.conf as follows:

########################etcd.conf############################

ETCD_NAME=etcd29
ETCD_DATA_DIR="/chj/etcd/data"
ETCD_LISTEN_PEER_URLS=""
ETCD_LISTEN_CLIENT_URLS="0.0.0.0:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="192.168.46.29:2380"
ETCD_INITIAL_CLUSTER="etcd165="
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd"
ETCD_ADVERTISE_CLIENT_URLS=""

###################### Run from the command line ###############################

touch etcd.start.sh
cat etcd.start.sh
/usr/bin/etcd \
--name etcd29 \
--debug \
--initial-advertise-peer-urls \
--listen-peer-urls \
--listen-client-urls \
--advertise-client-urls \
--initial-cluster-token k8s-etcd \
--data-dir /chj/etcd/data \
--initial-cluster etcd166=\
--initial-cluster-state new
Start command: nohup sh etcd.start.sh &

These settings tell the flannel process where the etcd service is and where the network configuration is stored in etcd. Next, configure Kubernetes on the Node by editing /etc/kubernetes/config:

[root@k8s-node03]# grep -v "^#" /etc/kubernetes/config |grep -v "^$"
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master="
Configuration file /etc/kubernetes/kubelet:
[root@k8s-node03]# grep -v "^#" /etc/kubernetes/kubelet |grep -v "^$"
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_API_SERVER="--api-servers="
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=192.168.47.250:5000/pause"
KUBELET_ARGS=""
Start the kube-proxy and kubelet services on the Node, enable them, and check their status:
for I in kube-proxy kubelet
do
systemctl restart $I
systemctl enable $I
systemctl status $I
done

1.7 Kubernetes Node4 installation and configuration

Install flannel, docker and Kubernetes on the Kubernetes Node4 node:
yum -y install flannel docker kubernetes-node
Edit Node4's /etc/etcd/etcd.conf as follows:

########################etcd.conf############################

ETCD_NAME=etcd32
ETCD_DATA_DIR="/chj/etcd/data"
ETCD_LISTEN_PEER_URLS=""
ETCD_LISTEN_CLIENT_URLS="0.0.0.0:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="192.168.46.32:2380"
ETCD_INITIAL_CLUSTER="etcd165="
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd"
ETCD_ADVERTISE_CLIENT_URLS=""

###################### Run from the command line ###############################

touch etcd.start.sh
cat etcd.start.sh
/usr/bin/etcd \
--name etcd32 \
--debug \
--initial-advertise-peer-urls \
--listen-peer-urls \
--listen-client-urls \
--advertise-client-urls \
--initial-cluster-token k8s-etcd \
--data-dir /chj/etcd/data \
--initial-cluster etcd166=\
--initial-cluster-state new
Start command: nohup sh etcd.start.sh &

These settings tell the flannel process where the etcd service is and where the network configuration is stored in etcd. Next, configure Kubernetes on the Node by editing /etc/kubernetes/config:

[root@k8s-node03]# grep -v "^#" /etc/kubernetes/config |grep -v "^$"
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master="
Configuration file /etc/kubernetes/kubelet:
[root@k8s-node03]# grep -v "^#" /etc/kubernetes/kubelet |grep -v "^$"
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_API_SERVER="--api-servers="
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=192.168.47.250:5000/pause"
KUBELET_ARGS=""
Start the kube-proxy and kubelet services on the Node, enable them, and check their status:
for I in kube-proxy kubelet
do
systemctl restart $I
systemctl enable $I
systemctl status $I
done

1.8 Kubernetes Node5 installation and configuration

Install flannel, docker and Kubernetes on the Kubernetes Node5 node:
yum -y install flannel docker kubernetes-node
These settings tell the flannel process where the etcd service is and where the network configuration is stored in etcd. Configure Kubernetes on the Node by editing /etc/kubernetes/config:
[root@k8s-node03]# grep -v "^#" /etc/kubernetes/config |grep -v "^$"
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master="
Configuration file /etc/kubernetes/kubelet:
[root@k8s-node03]# grep -v "^#" /etc/kubernetes/kubelet |grep -v "^$"
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_API_SERVER="--api-servers="
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=192.168.47.250:5000/pause"
KUBELET_ARGS=""
Start the kube-proxy and kubelet services on the Node, enable them, and check their status:
for I in kube-proxy kubelet
do
systemctl restart $I
systemctl enable $I
systemctl status $I
done

At this point the Kubernetes cluster environment is in place.

1.9 Kubernetes flanneld network configuration
Edit /etc/sysconfig/flanneld as follows on all Master and Node nodes:

[root@k8s-master etcd]# grep -v "^#" /etc/sysconfig/flanneld |grep -v "^$"

FLANNEL_ETCD_ENDPOINTS=""
FLANNEL_ETCD_PREFIX="/flannel/network"
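
FLANNEL_ETCD_ENDPOINTS is blank in this copy; a complete file would typically look like the sketch below (the endpoint list is an assumption based on the etcd members configured above):
FLANNEL_ETCD_ENDPOINTS="http://192.168.47.165:2379,http://192.168.47.166:2379,http://192.168.47.167:2379"
FLANNEL_ETCD_PREFIX="/flannel/network"
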
Create the flannel network configuration in etcd (this step can be run on the master or any node):
etcdctl rm /flannel/network/ --recursive
############ Create a vxlan network model ###########
etcdctl set /flannel/network/config "{ \"Network\": \"10.10.0.0/16\", \"SubnetLen\": 24, \"Backend\": { \"Type\": \"vxlan\" } }"
############ Or create a udp network model ###########
etcdctl set /flannel/network/config "{ \"Network\": \"10.10.0.0/16\", \"SubnetLen\": 24, \"Backend\": { \"Type\": \"udp\" } }"
When flannel is set up on the Kubernetes Node nodes, each node's flanneld reads /flannel/network/config from etcd and uses it to allocate that node's Docker IP subnet.
List the subnets allocated in etcd:
etcdctl ls /flannel/network/subnets
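
To double-check the network definition that was written, the key can be read back with the standard etcdctl v2 command:
etcdctl get /flannel/network/config      # shows the JSON network definition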

Start the flannel service on the Master and on all Nodes:

systemctl start flanneld.service

Wherever the flannel plugin is installed and started, it generates the following configuration.

Note that starting the flannel service generates a configuration file for docker, for example:
[root@k8s-node01 kubernetes]# cat /run/flannel/docker
DOCKER_OPT_BIP="--bip=10.10.7.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=10.10.7.1/24 --ip-masq=true --mtu=1450 --insecure-registry 192.168.47.250:5000"
The --insecure-registry 192.168.47.250:5000 part is the private docker registry address configured for docker.
[root@k8s-node01]# cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.10.0.0/16
FLANNEL_SUBNET=10.10.7.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false
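
After flanneld is up, docker must be restarted so it picks up the bip/mtu options from /run/flannel/docker. A quick way to apply and verify this on each node (standard commands, not from the original post; flannel.1 is the interface name used by the vxlan backend):
systemctl restart docker
ip addr show flannel.1   # flannel VXLAN interface, inside the node's /24 subnet
ip addr show docker0     # should now carry the --bip address, e.g. 10.10.7.1/24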

NIC management commands:

1. Change the length of a device's transmit queue (parameter: txqueuelen NUMBER, or txqlen NUMBER):

ip link set dev eth0 txqueuelen 100

2. Change a network device's MTU (maximum transmission unit):

ip link set dev eth0 mtu 1500
3. Change a network device's MAC address (parameter: address LLADDRESS):

ip link set dev eth0 address 00:01:4f:00:15:f1

4. Bring a NIC down:

ip link set eth1 down
5. Bring a NIC up:
ip link set dev eth0 up
6. Delete a NIC (for example the docker bridge):
ip link delete docker0

1.10 Kubernetes Pods configuration

With the cluster built, the next step is to create Pods. If a Pod fails to start, use kubectl describe and kubectl logs to locate the cause; a common one is that the node cannot pull images from gcr.io (Google Container Registry). Either switch to an image source that is reachable, or build a private Docker registry.
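
For example, the pause/pod-infrastructure image that kubelet needs can be mirrored into the private registry used throughout this article (192.168.47.250:5000). The source image below is the Red Hat pod-infrastructure image mentioned later in section 1.13, and the target tag matches the kubelet configuration above; adapt both as needed:
docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest
docker tag  registry.access.redhat.com/rhel7/pod-infrastructure:latest 192.168.47.250:5000/pause
docker push 192.168.47.250:5000/pause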

1.11 Kubernetes Web UI configuration

Create kube-namespace.yaml with the following content:
{
"kind": "Namespace",
"apiVersion": "v1",
"metadata": {
"name": "kube-system"
}
}
Use kubectl get namespace to check the namespaces.
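
A minimal usage sketch for the file above (file name as given in the text):
kubectl create -f kube-namespace.yaml
kubectl get namespace        # kube-system should be listed
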
Create kubernetes-dashboard.yaml with the following content:

# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Configuration to deploy release version of the Dashboard UI.
#
# Example usage: kubectl create -f <this_file>

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    app: kubernetes-dashboard
    version: latest
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-dashboard
  template:
    metadata:
      labels:
        app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: 192.168.47.250:5000/k8s-dashaboard
        imagePullPolicy: Always
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          - --apiserver-host=192.168.47.165:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9090
  selector:
    app: kubernetes-dashboard
Create the dashboard pod from this file:
kubectl create -f kubernetes-dashboard.yaml
After it has been created, check the details of the Pods and the Service:
kubectl get pods --all-namespaces
kubectl describe service/kubernetes-dashboard --namespace="kube-system"
kubectl describe pod/kubernetes-dashboard-530803917-816df --namespace="kube-system"
The Kubernetes Web UI can then be reached through a browser on the exposed port:
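
The Service above is of type NodePort, so the assigned port can be read from the Service and the UI opened against any node's IP (standard kubectl usage, not from the original post):
kubectl get service kubernetes-dashboard --namespace=kube-system
# then browse to http://<any-node-ip>:<NodePort>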

Nginx web Pod object as stored by the API server (shown in YAML form):

kind: Pod
apiVersion: v1
metadata:
  name: nginx-web1-1187963248-3z862
  generateName: nginx-web1-1187963248-
  namespace: kube-system
  selfLink: /api/v1/namespaces/kube-system/pods/nginx-web1-1187963248-3z862
  uid: c2ad1b54-5253-11e7-91ba-000c291c9220
  resourceVersion: "156010"
  creationTimestamp: 2017-06-16T05:22:15Z
  labels:
    app: nginx-web1
    pod-template-hash: "1187963248"
  annotations:
    kubernetes.io/created-by: |
      {"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"kube-system","name":"nginx-web1-1187963248","uid":"647c4dd7-5253-11e7-91ba-000c291c9220","apiVersion":"extensions","resourceVersion":"154018"}}
  ownerReferences:
  - apiVersion: extensions/v1beta1
    kind: ReplicaSet
    name: nginx-web1-1187963248
    uid: 647c4dd7-5253-11e7-91ba-000c291c9220
    controller: true
spec:
  containers:
  - name: nginx-web1
    image: nginx
    resources: {}
    terminationMessagePath: /dev/termination-log
    imagePullPolicy: Always
    securityContext:
      privileged: false
  restartPolicy: Always
  terminationGracePeriodSeconds: 30
  dnsPolicy: ClusterFirst
  nodeName: 192.168.1.122
  securityContext: {}
status:
  phase: Running
  conditions:
  - type: Initialized
    status: "True"
    lastProbeTime: null
    lastTransitionTime: 2017-06-16T05:22:16Z
  - type: Ready
    status: "True"
    lastProbeTime: null
    lastTransitionTime: 2017-06-16T05:41:14Z
  - type: PodScheduled
    status: "True"
    lastProbeTime: null
    lastTransitionTime: 2017-06-16T05:22:16Z
  hostIP: 192.168.1.122
  podIP: 10.1.24.5
  startTime: 2017-06-16T05:22:16Z
  containerStatuses:
  - name: nginx-web1
    state:
      running:
        startedAt: 2017-06-16T05:41:12Z
    lastState: {}
    ready: true
    restartCount: 0
    image: nginx
    imageID: docker-pullable://docker.io/nginx@sha256:41ad9967ea448d7c2b203c699b429abe1ed5af331cd92533900c6d77490e0268
    containerID: docker://42d347c4aadf38757ce6fc071c72650df4b503a9315d306063c03b7b11d91b4a
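
A dump like the one above can be regenerated at any time with the standard command (pod name as shown above):
kubectl get pod nginx-web1-1187963248-3z862 --namespace=kube-system -o yaml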

1.12 Hands-on: a local private Docker registry for Kubernetes

Docker registries store Docker images. They come in two flavours, public and private, and a local private registry can be built on top of the registry image. The advantages of a private registry are:
- It saves network bandwidth, since images do not have to be downloaded from the official Docker registry every time;
- Docker images are pulled from the local private registry instead;
- An in-house private registry is convenient for every department and makes server management more uniform;
- The images in the local private registry can be updated from Git, SVN or Jenkins.
Docker provides Docker Registry for building local private registries. The current major version is v2 (recent docker releases no longer support v1). Registry v2 is written in Go, includes many performance and security optimizations, and uses a redesigned image storage format. The steps below build a local private Docker registry on the 192.168.47.250 server:
(1) Pull the Docker registry image:
docker pull registry
(2) Start the private registry container:
mkdir -p /data/registry/
docker run -itd -p 5000:5000 -v /data/registry:/tmp/registry docker.io/registry
The local registry is now running as a background container (see Figure 24-2 in the original post).

By default the registry stores images under /tmp/registry inside the container, so the images are lost if the container is deleted. This is why a local directory is usually mounted onto /tmp/registry in the container.

(3) Push an image to the local private registry:
Push an image from a client to the local private registry. Taking busybox as an example, push it to the private registry server:
docker pull busybox
docker tag busybox 192.168.47.250:5000/busybox
docker push 192.168.47.250:5000/busybox
(4) Check the local private registry (see the example after this list):
curl -XGET
curl -XGET
(5) Use the local private registry from a client:
Add the following to the docker configuration file on the client and restart the docker service; the local private registry can then be used (see Figure 24-3 in the original post):
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --insecure-registry 192.168.47.250:5000'
ADD_REGISTRY='--add-registry 192.168.47.250:5000'
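
The curl commands in step (4) lost their URLs in this copy. Against a registry:2 instance they would typically use the standard v2 API endpoints, for example (adjust if the container runs the old v1 registry):
curl -XGET http://192.168.47.250:5000/v2/_catalog          # list repositories
curl -XGET http://192.168.47.250:5000/v2/busybox/tags/list # list tags of busybox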

The local private Docker registry is now deployed; Docker images can be added to it or updated.

1.13 Using the local registry from Kubernetes
Create kube-namespace.yaml as follows (skip this if it has already been created):
{
"kind": "Namespace",
"apiVersion": "v1",
"metadata": {
"name": "kube-system"
}
}
Create kubernetes-dashboard.yaml, pulling from the local private registry, with the following content:

# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Configuration to deploy release version of the Dashboard UI.
#
# Example usage: kubectl create -f <this_file>

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    app: kubernetes-dashboard
    version: latest
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 2
  selector:
    matchLabels:
      app: kubernetes-dashboard
  template:
    metadata:
      labels:
        app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: 192.168.47.250:5000/kubernetes-dashboard-amd64
        imagePullPolicy: Always
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          - --apiserver-host=192.168.1.120:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9090
  selector:
    app: kubernetes-dashboard
Create the dashboard with kubectl create -f kubernetes-dashboard.yaml. If the POD cannot run and the logs show errors such as:
The push refers to a repository [192.168.47.250:5000/registry]
Get : http: server gave HTTP response to HTTPS client
then the local registry address must be added on each Docker host, as follows:
(1) In /etc/docker/daemon.json:
{ "insecure-registries":["192.168.47.250:5000"] }
(2) In /etc/kubernetes/kubelet, comment out the existing KUBELET_POD_INFRA_CONTAINER option and add a new one pointing at the private registry (the pod-infrastructure image must first be pushed to the private registry):
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=192.168.47.250:5000/pod-infrastructure:latest"
Restart the kubelet service on every node: systemctl restart kubelet.service
(3) In the Docker configuration file /etc/sysconfig/docker on the Nodes, add the following line and restart the Docker service:
ADD_REGISTRY='--add-registry 192.168.47.250:5000'
Images created through the Web UI will then be pulled from the local registry.
Check the kubernetes-dashboard creation details (figure in the original post).

1.14 Kubernetes command reference

Commonly used k8s commands, with notes:
kubectl get componentstatuses ######## check component status ########
kubectl get events --all-namespaces ######## get the event log ########
kubectl get pod nginx ######## get the nginx pod ########
kubectl describe pods nginx ######## show detailed information about the nginx pod ########
kubectl create -f nginx.yaml ######## create the nginx pod ########
kubectl run my-web --image=192.168.47.250:5000/nginx --replicas=2 --port=80 ######## run an nginx pod from the command line ########
kubectl delete pod --all ######## delete all pods ########
kubectl get deployment ######## get all deployments ########
kubectl delete deployment my-web ######## delete the my-web deployment ########
kubectl get service ######## get all services ########
kubectl describe svc wweb-nginx-service ######## show details of the wweb-nginx service ########
kubectl get po --show-labels ######## show pod labels ########
kubectl label pod web--kcz name=web ######## label a pod ########
kubectl get po -o wide ######## show pod network info (IP) ########
kubectl get po -o yaml ######## show a pod's YAML ########
kubectl get po -l name=web ######## list pods whose label name equals web ########
kubectl get po -l name!=web ######## list pods whose label name is not web ########
kubectl exec -it web--vjx /bin/bash ######## enter a pod's container ########
kubectl edit deployment web ######## edit the web deployment ########
kubectl scale --replicas=0 deployment web ######## scale a deployment down or up ########
kubectl get endpoints ######## get endpoints ########
kubectl cluster-info ######## show k8s cluster info ########
kubectl describe node ######## show node machine info ########
kubectl scale deployment --replicas=0 config-ms-v6 ######## scale down, e.g. before an upgrade ########
kubectl delete deployment --all ######## delete all deployments ########
kubectl get po --show-labels ######## show pod labels ########
kubectl describe service ######## show service details ########
kubectl set image deployment/config-ms-v24 config-ms-v24=192.168.47.250:5000/config-ms:v23 --all ######## update the image ########

1.15 Kubernetes troubleshooting notes

1. After the flannel network is up, docker does not pick up the IP range assigned through etcd.

Fix: add the following parameters to the docker unit file:

cat /usr/lib/systemd/system/docker.service

[Unit]

Description=Docker Application Container Engine
Documentation=
After=network-online.target firewalld.service
Wants=network-online.target
########## Note: these are additional parameters; do not delete the parameters already configured ############
[Service]
EnvironmentFile=-/run/flannel/subnet.env
Type=notify
ExecStart=/usr/bin/dockerd --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
#####################################################
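
After editing the unit file, reload systemd and restart docker so the flannel-provided bip/mtu take effect (standard systemd commands, not from the original post):
systemctl daemon-reload
systemctl restart docker
ip addr show docker0    # should now be inside the FLANNEL_SUBNET range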

Reposted from: https://blog.51cto.com/breaklinux/2056344
