Kubernetes: deploying a binary k8s cluster with a single master

osc_ndt6833m 2020-11-11 14:12:52
Kubernetes


# Environment
| Host   | IP              | Components                                                     | Spec    |
| ------ | --------------- | -------------------------------------------------------------- | ------- |
| master | 192.168.100.170 | kube-apiserver, kube-scheduler, kube-controller-manager, etcd  | 2G+4CPU |
| node1  | 192.168.100.180 | kubelet, kube-proxy, docker, flannel, etcd                     | 2G+4CPU |
| node2  | 192.168.100.190 | kubelet, kube-proxy, docker, flannel, etcd                     | 2G+4CPU |

Certificates used by each component:
| Component      | Certificates                                |
| -------------- | ------------------------------------------- |
| etcd           | ca.pem, server.pem, server-key.pem          |
| flannel        | ca.pem, server.pem, server-key.pem          |
| kube-apiserver | ca.pem, server.pem, server-key.pem          |
| kubelet        | ca.pem, ca-key.pem                          |
| kube-proxy     | ca.pem, kube-proxy.pem, kube-proxy-key.pem  |
| kubectl        | ca.pem, admin.pem, admin-key.pem            |
Part 1: etcd cluster deployment---------------------------------------------------------
# Set the hostnames (run each command on its own host)
hostnamectl set-hostname master
hostnamectl set-hostname node1
hostnamectl set-hostname node2
# On all three hosts: flush the firewall rules and put SELinux in permissive mode
iptables -F
setenforce 0
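These two commands do not survive a reboot. An optional addition (not part of the original steps) to make the prep persistent:
systemctl stop firewalld && systemctl disable firewalld   # keep the firewall from re-adding rules
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config   # keep SELinux off after reboot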
// Deploy on master ------------------------------
1. On master, create a k8s working directory, upload the etcd scripts, and download the official cfssl certificate tools
mkdir k8s && cd k8s
// Upload the scripts etcd-cert.sh and etcd.sh
ls
etcd-cert.sh etcd.sh
2. Download the certificate tools
[root@master k8s]# vim cfssl.sh
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
bash cfssl.sh
ls /usr/local/bin/
cfssl cfssl-certinfo cfssljson
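A quick check that the downloads are good (the R1.2 build should identify itself):
cfssl version   '// should report Version: 1.2.0'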
3. Generate the certificates
# cfssl generates certificates; cfssljson turns cfssl's JSON output into certificate files; cfssl-certinfo displays certificate details
# Define the CA config
[root@master k8s]# cd etcd-cert
[root@master etcd-cert]# cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
# Define the CA certificate signing request
[root@master etcd-cert]# cat > ca-csr.json <<EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF
# Generate the CA certificate; produces ca-key.pem and ca.pem
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
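Optionally, sanity-check the new CA before continuing; either tool below (both installed earlier) prints the subject and validity window:
cfssl-certinfo -cert ca.pem                      # dump subject, issuer, and validity as JSON
openssl x509 -in ca.pem -noout -subject -dates   # or the openssl equivalent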
4. Define the server certificate covering communication among the three etcd nodes; be sure to change the IPs to yours
# The hosts below: .170 = master, .180 = node1, .190 = node2
cat > server-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "192.168.100.170",
    "192.168.100.180",
    "192.168.100.190"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
EOF
# Generate the etcd server certificate; produces server-key.pem and server.pem
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
# Check the generated certificates
[root@master etcd-cert]# ls
ca-config.json ca-csr.json ca.pem server.csr server-key.pem
ca.csr ca-key.pem etcd-cert.sh server-csr.json server.pem
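It is worth confirming that all three node IPs actually landed in the server certificate's SANs; a missing IP here is the classic cause of TLS errors once the cluster starts:
openssl x509 -in server.pem -noout -text | grep -A 1 'Subject Alternative Name'
# expected: IP Address:192.168.100.170, IP Address:192.168.100.180, IP Address:192.168.100.190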
5. Deploy the etcd service
# Official downloads: https://github.com/etcd-io/etcd/releases
# Here the packages are uploaded locally instead: etcd-v3.3.10-linux-amd64.tar.gz, kubernetes-server-linux-amd64.tar.gz, flannel-v0.10.0-linux-amd64.tar.gz
[root@master k8s]# ls
etcd-cert etcd-v3.3.10-linux-amd64.tar.gz
etcd.sh kubernetes-server-linux-amd64.tar.gz
etcd-v3.3.10-linux-amd64 flannel-v0.10.0-linux-amd64.tar.gz
[root@master k8s]# tar zxvf etcd-v3.3.10-linux-amd64.tar.gz
[root@master k8s]# ls etcd-v3.3.10-linux-amd64
Documentation etcd etcdctl README-etcdctl.md README.md READMEv2-etcdctl.md
[root@master k8s]# mkdir /opt/etcd/{cfg,bin,ssl} -p '// create the config, binary, and certificate directories'
[root@master k8s]# mv etcd-v3.3.10-linux-amd64/etcd etcd-v3.3.10-linux-amd64/etcdctl /opt/etcd/bin/ '// move the binaries into the bin directory just created'
# Copy the certificates
[root@master k8s]# cp etcd-cert/*.pem /opt/etcd/ssl/ '// copy the certificate files into the ssl directory just created'
[root@master k8s]# bash etcd.sh etcd01 192.168.100.170 etcd02=https://192.168.100.180:2380,etcd03=https://192.168.100.190:2380 '// this blocks while waiting for the other members to join; watch from a second terminal'
[root@master ~]# ps -ef | grep etcd
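While the first member waits for its peers, the second terminal can also inspect what etcd.sh generated (paths follow this guide's layout):
[root@master ~]# cat /opt/etcd/cfg/etcd   # the etcd01 member settings the script wrote
[root@master ~]# systemctl cat etcd       # the generated systemd unit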
6. Copy the certificates and the service startup script to the other nodes
[root@master k8s]# scp -r /opt/etcd/ root@192.168.100.180:/opt/
[root@master k8s]# scp -r /opt/etcd/ root@192.168.100.190:/opt
# Copy the service unit file
[root@master k8s]# scp /usr/lib/systemd/system/etcd.service root@192.168.100.180:/usr/lib/systemd/system/
[root@master k8s]# scp /usr/lib/systemd/system/etcd.service root@192.168.100.190:/usr/lib/systemd/system/
// Deploy on the nodes
7. node1
# Edit the config file
[root@node1 ~]# vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd02" "change this to etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.100.180:2380" "change to node1's address"
ETCD_LISTEN_CLIENT_URLS="https://192.168.100.180:2379" "change to node1's address"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.100.180:2380" "change to node1's address"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.100.180:2379" "change to node1's address"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.100.170:2380,etcd02=https://192.168.100.180:2380,etcd03=https://192.168.100.190:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
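As an alternative to hand-editing, a sed sketch for this node (it anchors on the closing quote so the shared ETCD_INITIAL_CLUSTER line stays untouched; adjust the member name and IP for each node):
sed -i -e 's/etcd01"/etcd02"/' \
       -e 's#//192.168.100.170:2380"#//192.168.100.180:2380"#' \
       -e 's#//192.168.100.170:2379"#//192.168.100.180:2379"#' /opt/etcd/cfg/etcd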
# Start etcd
[root@node1 ssl]# systemctl start etcd
[root@node1 ssl]# systemctl status etcd
[root@node1 ssl]# systemctl enable etcd
8. node2
# Edit the config file
[root@node2 ~]# vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03" "change this to etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.100.190:2380" "change to node2's address"
ETCD_LISTEN_CLIENT_URLS="https://192.168.100.190:2379" "change to node2's address"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.100.190:2380" "change to node2's address"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.100.190:2379" "change to node2's address"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.100.170:2380,etcd02=https://192.168.100.180:2380,etcd03=https://192.168.100.190:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
# Start etcd
[root@node2 ssl]# systemctl start etcd
[root@node2 ssl]# systemctl status etcd
[root@node2 ssl]# systemctl enable etcd
9. Check the etcd cluster health
[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.100.170:2379,https://192.168.100.180:2379,https://192.168.100.190:2379" cluster-health
member 257ab5cb19142f4b is healthy: got healthy result from https://192.168.100.180:2379
member 777f7eb10e389e47 is healthy: got healthy result from https://192.168.100.190:2379
member eac869b8bd29e072 is healthy: got healthy result from https://192.168.100.170:2379
cluster is healthy
'Check the cluster health; note the relative certificate paths (run from the etcd-cert directory)'
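The same flags drive other etcdctl subcommands too; for example, member list also shows which member is currently the leader:
[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.100.170:2379" member list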
# Part 2: Deploying the Docker engine on the nodes and configuring the flannel network------------------------------------
// Deploy the Docker engine on every node; see the docker install script for details
// Allocate the flannel network from the master
1. On master, write the Pod network range into etcd for flannel to use
[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.100.170:2379,https://192.168.100.180:2379,https://192.168.100.190:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
2. View the stored value
[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.100.170:2379,https://192.168.100.180:2379,https://192.168.100.190:2379" get /coreos.com/network/config
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
3. Copy the flannel package to every node (it only needs to be deployed on the nodes)
[root@master k8s]# scp flannel-v0.10.0-linux-amd64.tar.gz root@192.168.100.180:/root
[root@master k8s]# scp flannel-v0.10.0-linux-amd64.tar.gz root@192.168.100.190:/root
'// any host that will run pods needs the flannel network'
// Extract on every node
##### node01
[root@node1 ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
flanneld
mk-docker-opts.sh
README.md
1. Create the k8s working directory
[root@node1 ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@node1 ~]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/
2. Write the flanneld service script
[root@node1 ~]# vim flannel.sh
#!/bin/bash
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}
cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF
cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
3. Enable the flannel network
[root@node1 ~]# bash flannel.sh https://192.168.100.170:2379,https://192.168.100.180:2379,https://192.168.100.190:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
4. Configure docker to use flannel
[root@node1 ~]# vim /usr/lib/systemd/system/docker.service
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/run/flannel/subnet.env "add this line"
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock "add $DOCKER_NETWORK_OPTIONS"
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
[root@node1 ~]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.84.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.84.1/24 --ip-masq=false --mtu=1450"
// Note: --bip sets docker0's address at startup; each node gets its own /24 lease from flannel (node1 here: 172.17.84.0/24)
5. Restart the docker service
[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl restart docker
6. Check the flannel network
[root@node1 ~]# ifconfig
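flannel.1 and docker0 should now share the node's leased 172.17.x.0/24; a quick comparison using iproute2 (same information as the ifconfig output):
ip -4 addr show flannel.1 | awk '/inet /{print "flannel.1:", $2}'
ip -4 addr show docker0   | awk '/inet /{print "docker0:  ", $2}'
cat /run/flannel/subnet.env    # the values mk-docker-opts.sh handed to docker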
##### node2
[root@node2 ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
flanneld
mk-docker-opts.sh
README.md
1. Create the k8s working directory
[root@node2 ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@node2 ~]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/
2. Write the flanneld service script (identical to node1's)
[root@node2 ~]# vim flannel.sh
#!/bin/bash
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}
cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF
cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
3. Enable the flannel network
[root@node2 ~]# bash flannel.sh https://192.168.100.170:2379,https://192.168.100.180:2379,https://192.168.100.190:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
4. Configure docker to use flannel
[root@node2 ~]# vim /usr/lib/systemd/system/docker.service
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/run/flannel/subnet.env "add this line"
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock "add $DOCKER_NETWORK_OPTIONS"
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
[root@node2 ~]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.36.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.36.1/24 --ip-masq=false --mtu=1450"
// Note: node2 leased a different /24 than node1 (node2 here: 172.17.36.0/24)
5. Restart the docker service
[root@node2 ~]# systemctl daemon-reload
[root@node2 ~]# systemctl restart docker
6. Check the flannel network
[root@node2 ~]# ifconfig
##### Test: ping the container on the other node to prove flannel routes across hosts
[root@node1 ~]# docker run -it centos:7 /bin/bash
[root@container /]# yum install net-tools -y
[root@container /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 172.17.84.2 netmask 255.255.255.0 broadcast 172.17.84.255
ether 02:42:ac:11:54:02 txqueuelen 0 (Ethernet)
RX packets 18192 bytes 13930229 (13.2 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 6179 bytes 337037 (329.1 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@node2 ~]# docker run -it centos:7 /bin/bash
[root@container /]# yum install net-tools -y
[root@container /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 172.17.36.2 netmask 255.255.255.0 broadcast 172.17.36.255
ether 02:42:ac:11:54:02 txqueuelen 0 (Ethernet)
RX packets 18192 bytes 13930229 (13.2 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 6179 bytes 337037 (329.1 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
# Test
[root@container /]# ping 172.17.84.2 "from node2's container to node1's container"
[root@container /]# ping 172.17.36.2 "from node1's container to node2's container"
"If the containers can ping each other, containers can reach one another across hosts"
Part 3: Deploying the master components---------------------------------------------------------
// Operate on master: generate the api-server certificates
1. On the master node, generate the api-server certificates
[root@master k8s]# unzip master.zip
[root@master k8s]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p "create the config, script, and certificate directories"
[root@master k8s]# mkdir k8s-cert
[root@master k8s]# cd k8s-cert/
[root@master k8s-cert]# ls "upload k8s-cert.sh here first"
k8s-cert.sh
[root@master k8s-cert]# cat k8s-cert.sh
# CA config
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
# CA signing request
cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca - "creates the CA; produces ca.pem and ca-key.pem"
#-----------------------
# apiserver server certificate request. The hosts list pre-authorizes every address
# that may serve the API, so a later multi-master expansion needs no new certificate:
#   10.0.0.1        - cluster VIP of the kubernetes service (leave as is)
#   127.0.0.1       - localhost
#   192.168.100.170 - master1
#   192.168.100.160 - master2 (reserved)
#   192.168.100.100 - VIP
#   192.168.100.150 - load balancer (master)
#   192.168.100.140 - load balancer (backup)
# The L and ST name fields below are free-form and can be customized.
cat > server-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.100.170",
    "192.168.100.160",
    "192.168.100.100",
    "192.168.100.150",
    "192.168.100.140",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
# Generate the server certificate; produces server-key.pem and server.pem
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
#-----------------------
# Admin (kubectl) client certificate request
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
# Generate the admin certificate; produces admin.pem and admin-key.pem
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
#-----------------------
# kube-proxy client certificate request
cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
# Generate the kube-proxy certificate; produces kube-proxy-key.pem and kube-proxy.pem
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2. Generate the certificates
[root@master k8s-cert]# bash k8s-cert.sh "generate all certificates"
[root@master k8s-cert]# ls *.pem
admin-key.pem ca-key.pem kube-proxy-key.pem server-key.pem
admin.pem ca.pem kube-proxy.pem server.pem
[root@master k8s-cert]# cp ca*pem server*pem /opt/kubernetes/ssl/
[root@master k8s-cert]# cd ..
[root@master k8s]# ls
apiserver.sh etcd-v3.3.10-linux-amd64 master.zip
controller-manager.sh etcd-v3.3.10-linux-amd64.tar.gz scheduler.sh
etcd-cert k8s-cert
etcd.sh kubernetes-server-linux-amd64.tar.gz
3. Extract the k8s server tarball
[root@master k8s]# tar zxvf kubernetes-server-linux-amd64.tar.gz
4. Copy the key server binaries into the k8s working directory
[root@master k8s]# cd /root/k8s/kubernetes/server/bin
[root@master bin]# cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/
5. Create the bootstrap token for the kubelet-bootstrap role
[root@master k8s]# cd /root/k8s
[root@master k8s]# head -c 16 /dev/urandom | od -An -t x | tr -d ' ' '// generate a random token'
0d8e1e148121fc25d8623239ae6cf7e0
[root@master k8s]# vim /opt/kubernetes/cfg/token.csv
0d8e1e148121fc25d8623239ae6cf7e0,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
#'// fields: token, user name, uid, group; the master uses this bootstrap user to admit node kubelets'
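The token generation and file creation can also be scripted in one step; a minimal sketch using the same layout (the token value will differ on every run):
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /opt/kubernetes/cfg/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF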
6. Start the apiserver (data stored in the etcd cluster) and check its status
[root@master k8s]# bash apiserver.sh 192.168.100.170 https://192.168.100.170:2379,https://192.168.100.180:2379,https://192.168.100.190:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@master k8s]# ps aux | grep kube "check that the process started"
[root@master ~]# cat /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.100.170:2379,https://192.168.100.180:2379,https://192.168.100.190:2379 \
--bind-address=192.168.100.170 \
--secure-port=6443 \
--advertise-address=192.168.100.170 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
[root@master k8s]# netstat -ntap | grep 6443
tcp 0 0 192.168.100.170:6443 0.0.0.0:* LISTEN 69865/kube-apiserve
tcp 0 0 192.168.100.170:6443 192.168.100.170:53210 ESTABLISHED 69865/kube-apiserve
tcp 0 0 192.168.100.170:53210 192.168.100.170:6443 ESTABLISHED 69865/kube-apiserve
[root@master k8s]# netstat -ntap | grep 8080
tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN 69865/kube-apiserve
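Port 8080 is the local insecure port and answers without authentication, which makes for two quick health probes:
curl http://127.0.0.1:8080/healthz   # should print: ok
curl http://127.0.0.1:8080/version   # JSON with the build version of the running apiserver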
7. Start the scheduler service
[root@master k8s]# ./scheduler.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@master k8s]# ps aux | grep ku
postfix 68074 0.0 0.1 91732 4080 ? S 10:07 0:00 pickup -l -t unix -u
root 69865 14.4 8.0 401580 311244 ? Ssl 11:43 0:09
[root@master k8s]# chmod +x controller-manager.sh
8. Start the controller-manager
[root@master k8s]# ./controller-manager.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
9. Check the master component status
[root@master k8s]# /opt/kubernetes/bin/kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
// Deploy on the nodes
1. On master, copy kubelet and kube-proxy to the nodes
[root@master bin]# scp kubelet kube-proxy root@192.168.100.180:/opt/kubernetes/bin/
root@192.168.100.180's password:
kubelet 100% 168MB 74.8MB/s 00:02
kube-proxy 100% 48MB 97.6MB/s 00:00
[root@master bin]# scp kubelet kube-proxy root@192.168.100.190:/opt/kubernetes/bin/
root@192.168.100.190's password:
kubelet 100% 168MB 101.4MB/s 00:01
kube-proxy 100% 48MB 102.3MB/s 00:00
2. On node01 (copy node.zip to /root, then extract)
[root@node1 ~]# ls
anaconda-ks.cfg flannel-v0.10.0-linux-amd64.tar.gz node.zip 公共 视频 文档 音乐
flannel.sh initial-setup-ks.cfg README.md 模板 图片 下载 桌面
// Extract node.zip to get kubelet.sh and proxy.sh
[root@node1 ~]# unzip node.zip
3. On master, create the kubeconfig directory
[root@master k8s]# mkdir kubeconfig
[root@master k8s]# cd kubeconfig/
// Copy in kubeconfig.sh and rename it
[root@master kubeconfig]# mv kubeconfig.sh kubeconfig
[root@master kubeconfig]# cat /opt/kubernetes/cfg/token.csv
0d8e1e148121fc25d8623239ae6cf7e0,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
[root@master kubeconfig]# vim kubeconfig
APISERVER=$1
SSL_DIR=$2
# Create the kubelet bootstrapping kubeconfig
export KUBE_APISERVER="https://$APISERVER:6443"
# Set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=$SSL_DIR/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
# Set client credentials ('// the token below must match the one in /opt/kubernetes/cfg/token.csv')
kubectl config set-credentials kubelet-bootstrap \
--token=0d8e1e148121fc25d8623239ae6cf7e0 \
--kubeconfig=bootstrap.kubeconfig
# Set context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
# Use the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
#----------------------
# Create the kube-proxy kubeconfig file
kubectl config set-cluster kubernetes \
--certificate-authority=$SSL_DIR/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
--client-certificate=$SSL_DIR/kube-proxy.pem \
--client-key=$SSL_DIR/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
[root@master kubeconfig]# export PATH=$PATH:/opt/kubernetes/bin/ '// add to PATH (persist it in /etc/profile if desired)'
[root@master kubeconfig]# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
4. Generate the kubeconfig files and copy them to the nodes
[root@master kubeconfig]# bash kubeconfig 192.168.100.170 /root/k8s/k8s-cert/
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
User "kubelet-bootstrap" set.
Switched to context "default".
[root@master kubeconfig]# ls
bootstrap.kubeconfig kubeconfig kube-proxy.kubeconfig
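Before shipping the files to the nodes, you can double-check what was embedded (kubectl redacts the certificate data by default):
[root@master kubeconfig]# kubectl config view --kubeconfig=bootstrap.kubeconfig
[root@master kubeconfig]# kubectl config view --kubeconfig=kube-proxy.kubeconfig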
# Copy the kubeconfig files to the nodes
[root@master kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.100.180:/opt/kubernetes/cfg/
root@192.168.100.180's password:
bootstrap.kubeconfig 100% 2169 1.4MB/s 00:00
kube-proxy.kubeconfig 100% 6275 5.8MB/s 00:00
[root@master kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.100.190:/opt/kubernetes/cfg/
root@192.168.100.190's password:
bootstrap.kubeconfig 100% 2169 352.8KB/s 00:00
kube-proxy.kubeconfig 100% 6275 3.3MB/s 00:00
5. Create the bootstrap role binding that lets kubelets request signed certificates from the apiserver (critical)
[root@master kubeconfig]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
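The binding can be verified on the spot; the subjects should show the kubelet-bootstrap user attached to the system:node-bootstrapper role:
[root@master kubeconfig]# kubectl describe clusterrolebinding kubelet-bootstrap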
// Operate on the nodes
6. On node01, generate the kubelet and kubelet.config configuration
#------------------------------------ operate on node1
# Create the kubelet config file and service script
[root@node1 ~]# bash kubelet.sh 192.168.100.180
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
# Check that the kubelet service started
[root@node1 ~]# ps aux | grep kube
root 10206 0.0 0.6 391444 18372 ? Ssl 07:55 0:11 /opt/kubernetes/bin/flanneld --ip-masq --etcd-endpoints=https://192.168.100.170:2379,https://192.168.100.180:2379,https://192.168.100.190:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem
root 32918 3.2 1.5 405340 45420 ? Ssl 11:57 0:00 /opt/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=192.168.100.180 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
root 32952 0.0 0.0 112724 988 pts/0 S+ 11:57 0:00 grep --color=auto kube
7. On master, check the request from node01 and view the certificate status
#------------------------------ operate on master
# The pending request from node01
[root@master kubeconfig]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-lk45yzxFkiUhV8b36fmhmFsZdqtD8JUWV1Vkiq9w7Nw 30s kubelet-bootstrap Pending "(waiting for the cluster to issue this node's certificate)"
8. Approve the certificate, then check the CSR status again
[root@master kubeconfig]# kubectl certificate approve node-csr-lk45yzxFkiUhV8b36fmhmFsZdqtD8JUWV1Vkiq9w7Nw
certificatesigningrequest.certificates.k8s.io/node-csr-lk45yzxFkiUhV8b36fmhmFsZdqtD8JUWV1Vkiq9w7Nw approved "master authorizes the node to join the cluster"
# Check the certificate status again
[root@master kubeconfig]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-lk45yzxFkiUhV8b36fmhmFsZdqtD8JUWV1Vkiq9w7Nw 6m19s kubelet-bootstrap Approved,Issued "(the node has been admitted to the cluster)"
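When several nodes bootstrap at once, approving CSRs one by one gets tedious; a lab-only sketch that approves everything still Pending:
kubectl get csr --no-headers | awk '$NF=="Pending"{print $1}' | xargs -r kubectl certificate approve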
9. Check the cluster nodes and start the proxy service
[root@master kubeconfig]# kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.100.180 Ready <none> 31s v1.12.3
#'// If one node is NotReady, check its kubelet; if many nodes are NotReady, check the apiserver, and if that is fine check the VIP address and keepalived'
#--------------------------- on node1: start the proxy service
[root@node1 ~]# ls
anaconda-ks.cfg flannel-v0.10.0-linux-amd64.tar.gz node.zip
docker-install.sh initial-setup-ks.cfg proxy.sh
flannel.sh kubelet.sh README.md
[root@node1 ~]# bash proxy.sh 192.168.100.180
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@node1 ~]# systemctl status kube-proxy.service
● kube-proxy.service - Kubernetes Proxy
Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2020-09-29 12:04:50 CST; 9s ago
Main PID: 34171 (kube-proxy)
Tasks: 0
Memory: 8.2M
CGroup: /system.slice/kube-proxy.service
‣ 34171 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 -...
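Since proxy.sh starts kube-proxy with --proxy-mode=ipvs (visible later in node2's kube-proxy config), the service rules land in IPVS; with the ipvsadm tool installed they can be inspected directly:
[root@node1 ~]# yum install -y ipvsadm
[root@node1 ~]# ipvsadm -Ln   # e.g. the 10.0.0.1:443 virtual server forwarding to 192.168.100.170:6443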
# Deploy node2
#---------------------------- operate on node01
# Copy the prepared /opt/kubernetes directory to node2, then adjust it there
[root@node1 ~]# scp -r /opt/kubernetes/ root@192.168.100.190:/opt/
# Copy the kubelet and kube-proxy service unit files to node2
[root@node1 ~]# scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.100.190:/usr/lib/systemd/system/
root@192.168.100.190's password:
kubelet.service 100% 264 159.9KB/s 00:00
kube-proxy.service 100% 231 302.4KB/s 00:00
[root@node1 ~]# systemctl enable kubelet.service
#------------------------------ operate on node2
1. Modify the IP addresses in the three config files
# First delete the certificates copied from node01; node02 will request its own
[root@node2 ~]# cd /opt/kubernetes/ssl/
[root@node2 ssl]# ls
kubelet-client-2020-09-29-12-03-29.pem kubelet.crt
kubelet-client-current.pem kubelet.key
[root@node2 ssl]# rm -rf *
[root@node2 ssl]# ls
[root@node2 ssl]# cd ../cfg/
2. Edit the configs, then start the services and check their status
# Modify kubelet, kubelet.config, and kube-proxy (three config files)
[root@node2 cfg]# vim kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.100.190 \ "change to node2's address"
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
[root@node2 cfg]# vim kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.100.190 "node2's address"
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local.
failSwapOn: false
authentication:
anonymous:
enabled: true
[root@node2 cfg]# vim kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.100.190 \ "node2's address"
--cluster-cidr=10.0.0.0/24 \
--proxy-mode=ipvs \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
# Start the services
[root@node2 cfg]# systemctl start kubelet.service
[root@node2 cfg]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@node2 cfg]# systemctl start kube-proxy.service
[root@node2 cfg]# systemctl enable kube-proxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
3. On master: review and approve node02's certificate request
// On master, the new request shows as Pending
[root@master kubeconfig]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-Q22FXrUtwbkKu5b0LQcMbbyXYMuCMkGKUyH0ME1x2ow 47s kubelet-bootstrap Pending
node-csr-lk45yzxFkiUhV8b36fmhmFsZdqtD8JUWV1Vkiq9w7Nw 12m kubelet-bootstrap Approved,Issued
[root@master kubeconfig]# kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.100.180 Ready <none> 6m26s v1.12.3
[root@master kubeconfig]# kubectl certificate approve node-csr-Q22FXrUtwbkKu5b0LQcMbbyXYMuCMkGKUyH0ME1x2ow "authorize the request to join the cluster"
certificatesigningrequest.certificates.k8s.io/node-csr-Q22FXrUtwbkKu5b0LQcMbbyXYMuCMkGKUyH0ME1x2ow approved
"View the cluster's nodes from master"
[root@master kubeconfig]# kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.100.180 Ready <none> 8m52s v1.12.3
192.168.100.190 Ready <none> 43s v1.12.3
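A final smoke test (in kubectl v1.12, run creates a Deployment; the nginx image is pulled from Docker Hub): schedule a workload and confirm the pods land on both nodes with flannel-assigned IPs:
[root@master kubeconfig]# kubectl run nginx --image=nginx --replicas=2
[root@master kubeconfig]# kubectl get pods -o wide   # expect 172.17.x.x pod IPs across 192.168.100.180/190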
Copyright notice
This article was written by [osc_ndt6833m]; please include the original link when reposting. Thanks.
https://my.oschina.net/u/4314113/blog/4712846
