Kubernetes single-node deployment: building a k8s cluster from binaries

0k and6833m 2020-11-11 14:12:52
# Environment

| Host | IP address | Components | Resources |
| ---- | ---------- | ---------- | --------- |
| master | 192.168.100.170 | kube-apiserver, kube-scheduler, kube-controller-manager, etcd | 2G RAM + 4 CPU |
| node1 | 192.168.100.180 | kubelet, kube-proxy, docker, flannel, etcd | 2G RAM + 4 CPU |
| node2 | 192.168.100.190 | kubelet, kube-proxy, docker, flannel, etcd | 2G RAM + 4 CPU |
Certificates used by each component:

| Component | Certificates |
| --------- | ------------ |
| etcd | ca.pem, server.pem, server-key.pem |
| flannel | ca.pem, server.pem, server-key.pem |
| kube-apiserver | ca.pem, server.pem, server-key.pem |
| kubelet | ca.pem, ca-key.pem |
| kube-proxy | ca.pem, kube-proxy.pem, kube-proxy-key.pem |
| kubectl | ca.pem, admin.pem, admin-key.pem |
One: etcd cluster deployment ---------------------------------------------------------
# Set the hostname on each machine, flush iptables and disable SELinux enforcement
hostnamectl set-hostname master
hostnamectl set-hostname node1
hostnamectl set-hostname node2
iptables -F
setenforce 0
// Deploy on the master ------------------------------
1. On the master host, create a k8s directory, upload the etcd scripts, and download the official cfssl certificate tools
mkdir k8s && cd k8s
// Upload script etcd-cert.sh etcd.sh
ls
etcd-cert.sh etcd.sh
2. Download the certificate-generation tools
k8s]# vim cfssl.sh
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
bash cfssl.sh
ls /usr/local/bin/
cfssl cfssl-certinfo cfssljson
3. Generate the certificates
# cfssl generates the certificates, cfssljson turns cfssl's JSON output into certificate files, and cfssl-certinfo displays certificate information
# Define the CA certificate
k8s]# mkdir etcd-cert && mv etcd-cert.sh etcd-cert/ && cd etcd-cert
[root@master etcd-cert]# cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"www": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
# CA certificate signing request (used to self-sign the CA)
[root@master etcd-cert]# cat > ca-csr.json <<EOF
{
"CN": "etcd CA",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing"
}
]
}
EOF
# Generate the CA certificate; this produces ca-key.pem and ca.pem
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
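Optionally, the cfssl-certinfo tool installed in step 2 can be used to inspect the new CA and confirm its subject and the 10-year (87600h) validity:
[root@master etcd-cert]# cfssl-certinfo -cert ca.pem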
4. Specify the server certificate used for communication between the three etcd nodes; be sure to change the IPs here to match your environment
cat > server-csr.json <<EOF
{
"CN": "etcd",
"hosts": [
"192.168.100.170", "master Address "
"192.168.100.180", "node1 Address "
"192.168.100.190" "node2 Address "
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing"
}
]
}
EOF
# Generate the etcd server certificate; this produces server-key.pem and server.pem
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
# Check the generated certificate 
[root@master etcd-cert]# ls
ca-config.json ca-csr.json ca.pem server.csr server-key.pem
ca.csr ca-key.pem etcd-cert.sh server-csr.json server.pem
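Before moving on it is worth confirming that all three node IPs made it into the server certificate; openssl (installed by default on CentOS) can print the SAN list:
[root@master etcd-cert]# openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"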
5. Deploy ETCD service
# Official download page: https://github.com/etcd-io/etcd/releases
# Here the packages were uploaded locally instead: etcd-v3.3.10-linux-amd64.tar.gz, kubernetes-server-linux-amd64.tar.gz, flannel-v0.10.0-linux-amd64.tar.gz
k8s]# ls
etcd-cert etcd-v3.3.10-linux-amd64.tar.gz
etcd.sh kubernetes-server-linux-amd64.tar.gz
etcd-v3.3.10-linux-amd64 flannel-v0.10.0-linux-amd64.tar.gz
k8s]# tar zxvf etcd-v3.3.10-linux-amd64.tar.gz
k8s]# ls etcd-v3.3.10-linux-amd64
Documentation etcd etcdctl README-etcdctl.md README.md READMEv2-etcdctl.md
k8s]# mkdir /opt/etcd/{cfg,bin,ssl} -p '// create the config, binary and certificate directories'
k8s]# mv etcd-v3.3.10-linux-amd64/etcd etcd-v3.3.10-linux-amd64/etcdctl /opt/etcd/bin/ '// move the commands into the bin directory just created'
# Copy the certificates
[root@master k8s]# cp etcd-cert/*.pem /opt/etcd/ssl/ '// copy the certificate files into the ssl directory just created'
[root@master k8s]# bash etcd.sh etcd01 192.168.100.170 etcd02=https://192.168.100.180:2380,etcd03=https://192.168.100.190:2380 '// the command blocks while waiting for the other nodes to join; check progress from another terminal'
[root@master ~]# ps -ef | grep etcd
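For reference, etcd.sh writes the environment file /opt/etcd/cfg/etcd (shown in step 7 below) plus a systemd unit; the unit it generates looks roughly like the following sketch (the exact contents depend on your copy of the script):
[Unit]
Description=Etcd Server
After=network.target network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd
ExecStart=/opt/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem --peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target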
6. Copy the certificates and the etcd service script to the other nodes
[root@master k8s]# scp -r /opt/etcd/ root@192.168.100.180:/opt/
[root@master k8s]# scp -r /opt/etcd/ root@192.168.100.190:/opt
# Copy service scripts 
[root@master k8s]# scp /usr/lib/systemd/system/etcd.service root@192.168.100.180:/usr/lib/systemd/system/
[root@master k8s]# scp /usr/lib/systemd/system/etcd.service root@192.168.100.190:/usr/lib/systemd/system/
// Deploy etcd on the nodes
7.node1
# Modify the configuration file 
[root@node01 ~]# vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd02" " Change here to etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.100.180:2380" " It is amended as follows nodde2 Address "
ETCD_LISTEN_CLIENT_URLS="https://192.168.100.180:2379" " It is amended as follows nodde2 Address "
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.100.180:2380" " It is amended as follows nodde2 Address "
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.100.180:2379" " It is amended as follows nodde2 Address "
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.100.170:2380,etcd02=https://192.168.100.180:2380,etcd03=https://192.168.100.190:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
# Start etcd
[root@localhost ssl]# systemctl start etcd
[root@localhost ssl]# systemctl status etcd
[root@localhost ssl]# systemctl enable etcd
8.node2
# Modify the configuration file 
[root@node01 ~]# vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03" " Change here to etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.100.190:2380" " It is amended as follows nodde3 Address "
ETCD_LISTEN_CLIENT_URLS="https://192.168.100.190:2379" " It is amended as follows nodde3 Address "
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.100.190:2380" " It is amended as follows nodde3 Address "
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.100.190:2379" " It is amended as follows nodde3 Address "
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.100.170:2380,etcd02=https://192.168.100.180:2380,etcd03=https://192.168.100.190:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
# Start etcd
[root@localhost ssl]# systemctl start etcd
[root@localhost ssl]# systemctl status etcd
[root@localhost ssl]# systemctl enable etcd
9. Check the etcd cluster state
[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.100.170:2379,https://192.168.100.180:2379,https://192.168.100.190:2379" cluster-health
member 257ab5cb19142f4b is healthy: got healthy result from https://192.168.100.180:2379
member 777f7eb10e389e47 is healthy: got healthy result from https://192.168.100.190:2379
member eac869b8bd29e072 is healthy: got healthy result from https://192.168.100.170:2379
cluster is healthy
'// check the cluster health; note the certificate paths are relative, so run this from the etcd-cert directory'
# Two: deploy the docker engine on the nodes and configure the flannel network ------------------------------------
// Deploy the docker engine on all nodes (see the docker installation script)
// The master allocates the flannel network range in etcd
1. On the master, write the subnet range to be allocated into etcd for flannel to use
[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.100.170:2379,https://192.168.100.180:2379,https://192.168.100.190:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
2. View the information that was written
[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.100.170:2379,https://192.168.100.180:2379,https://192.168.100.190:2379" get /coreos.com/network/config
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
3. Copy the flannel package to all nodes (flannel only needs to be deployed on the nodes)
[root@master k8s]# scp flannel-v0.10.0-linux-amd64.tar.gz root@192.168.100.180:/root
[root@master k8s]# scp flannel-v0.10.0-linux-amd64.tar.gz root@192.168.100.190:/root
'// whichever host needs to run pods needs the flannel network installed'
// Unpack on every node
##### node1
[root@node1 ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz 
flanneld
mk-docker-opts.sh
README.md
1. Create the k8s working directory
[root@node1 ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@node1 ~]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/
2. Write the flannel configuration and service script
[root@node1 ~]# vim flannel.sh '// script contents below'
#!/bin/bash
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}
cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF
cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
3. Enable the flannel network
[root@node1 ~]# bash flannel.sh https://192.168.100.170:2379,https://192.168.100.180:2379,https://192.168.100.190:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
4. Check that flanneld is running
[root@node1 ~]# systemctl status flanneld
5. Configure docker to use flannel
[root@node1 ~]# vim /usr/lib/systemd/system/docker.service
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/run/flannel/subnet.env "add this line"
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock "add $DOCKER_NETWORK_OPTIONS"
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
[root@localhost ~]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.42.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
// Explanation: bip specifies the subnet docker0 uses at startup
DOCKER_NETWORK_OPTIONS=" --bip=172.17.42.1/24 --ip-masq=false --mtu=1450"
6. restart docker service
[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl restart docker
7. Check the flannel network
[root@node1 ~]# ifconfig
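After the restart, flannel.1 and docker0 should both hold addresses from the /24 reported in /run/flannel/subnet.env, and once the other node's flanneld is up the host gets routes to the other subnets via flannel.1. A quick check:
[root@node1 ~]# ip -4 addr show flannel.1 '// address from this node's leased /24'
[root@node1 ~]# ip -4 addr show docker0 '// should match the --bip value above'
[root@node1 ~]# ip route | grep flannel.1 '// routes to the other node subnets in 172.17.0.0/16'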
##### node2
[root@node2 ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
flanneld
mk-docker-opts.sh
README.md
1. Create the k8s working directory
[root@node2 ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@node2 ~]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/
2. Write the flannel configuration and service script (the same flannel.sh as written on node1 above)
[root@node2 ~]# vim flannel.sh
3. Enable the flannel network
[root@node2 ~]# bash flannel.sh https://192.168.100.170:2379,https://192.168.100.180:2379,https://192.168.100.190:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
4. Check that flanneld is running
[root@node2 ~]# systemctl status flanneld
5. Configure docker to use flannel
[root@node2 ~]# vim /usr/lib/systemd/system/docker.service
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/run/flannel/subnet.env "add this line"
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock "add $DOCKER_NETWORK_OPTIONS"
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
[root@localhost ~]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.42.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
// Explanation: bip specifies the subnet docker0 uses at startup
DOCKER_NETWORK_OPTIONS=" --bip=172.17.42.1/24 --ip-masq=false --mtu=1450"
6. Restart the docker service
[root@node2 ~]# systemctl daemon-reload
[root@node2 ~]# systemctl restart docker
7. Check the flannel network
[root@node2 ~]# ifconfig
##### Test: containers on the two nodes ping each other across their docker0 networks to prove that flannel routes traffic between hosts
[root@node1 ~]# docker run -it centos:7 /bin/bash
[root@5f9a65565b53 /]# yum install net-tools -y
[root@5f9a65565b53 /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 172.17.84.2 netmask 255.255.255.0 broadcast 172.17.84.255
ether 02:42:ac:11:54:02 txqueuelen 0 (Ethernet)
RX packets 18192 bytes 13930229 (13.2 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 6179 bytes 337037 (329.1 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@node2 ~]# docker run -it centos:7 /bin/bash
[root@abbc159a6378 /]# yum install net-tools -y
[root@abbc159a6378 /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 172.17.36.2 netmask 255.255.255.0 broadcast 172.17.36.255
ether 02:42:ac:11:54:02 txqueuelen 0 (Ethernet)
RX packets 18192 bytes 13930229 (13.2 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 6179 bytes 337037 (329.1 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
# test 
[root@abbc159a6378 /]# ping 172.17.84.2
[root@5f9a65565b53 /]# ping 172.17.36.2
" Containers can each other ping This means that containers can access each other across hosts "
Three: Deploy the master components
// Operate on the master: generate the api-server certificates
1. On the master node, generate the api-server certificates
[root@localhost k8s]# unzip master.zip
[root@localhost k8s]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p "create the config, binary and certificate directories"
[root@localhost k8s]# mkdir k8s-cert
[root@localhost k8s]# cd k8s-cert/
[root@localhost k8s-cert]# ls "upload k8s-cert.sh here"
k8s-cert.sh
[root@master k8s-cert]# cat k8s-cert.sh
# the CA config
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
# the CA signing request
cat > ca-csr.json <<EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca - "create the CA certificate; running this produces ca.pem and ca-key.pem"
#-----------------------
cat > server-csr.json <<EOF
{
"CN": "kubernetes",
"hosts": [
"10.0.0.1", "Cloud vip Address , There is no need to modify "
"127.0.0.1", " Local address "
"192.168.100.170", "master1 Address , Here, the certificate is generated , Plan your address authorization certificate , It is convenient for subsequent multi node deployment "
"192.168.100.160", "master2 Address "
"192.168.100.100", "vip"
"192.168.100.150", "loadbalance(master)"
"192.168.100.140", "loadbalance(backup)"
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing", " name , You can customize "
"ST": "BeiJing", " name , You can customize "
"O": "k8s",
"OU": "System"
}
]
}
EOF
# Generate the server certificate; this command produces server-key.pem and server.pem
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
#-----------------------
# the admin signing request
cat > admin-csr.json <<EOF
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "system:masters",
"OU": "System"
}
]
}
EOF
# Generate the admin certificate; the following command produces admin.pem and admin-key.pem
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
#-----------------------
# the kube-proxy signing request
cat > kube-proxy-csr.json <<EOF
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
# Generate the kube-proxy certificate; this produces kube-proxy-key.pem and kube-proxy.pem
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2. Generate the certificates
[root@master k8s-cert]# bash k8s-cert.sh "generate the certificates"
[root@master k8s-cert]# ls *.pem
admin-key.pem ca-key.pem kube-proxy-key.pem server-key.pem
admin.pem ca.pem kube-proxy.pem server.pem
[root@master k8s-cert]# cp ca*pem server*pem /opt/kubernetes/ssl/
[root@master k8s-cert]# cd ..
[root@master k8s]# ls
apiserver.sh etcd-v3.3.10-linux-amd64 master.zip
controller-manager.sh etcd-v3.3.10-linux-amd64.tar.gz scheduler.sh
etcd-cert k8s-cert
etcd.sh kubernetes-server-linux-amd64.tar.gz
3. Unpack the k8s server package
[root@master k8s]# tar zxvf kubernetes-server-linux-amd64.tar.gz
4. Copy the key server-side binaries into the k8s working directory
[root@master k8s]# cd /root/k8s/kubernetes/server/bin
[root@master bin]# cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/
5. Create the bootstrap token file for the kubelet-bootstrap user
[root@master k8s]# cd /root/k8s
[root@master k8s]# head -c 16 /dev/urandom | od -An -t x | tr -d ' ' '// generate a random token'
0d8e1e148121fc25d8623239ae6cf7e0
[root@master k8s]# vim /opt/kubernetes/cfg/token.csv
0d8e1e148121fc25d8623239ae6cf7e0,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
#'// fields: token, user name, UID, group; the master uses this user to bootstrap the node kubelets'
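The same file can also be written non-interactively; a minimal sketch that generates a token and the csv in one go (the token value will differ from the one above):
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /opt/kubernetes/cfg/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF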
6. Start the apiserver (its data is stored in the etcd cluster) and check that it is running
[root@master k8s]# bash apiserver.sh 192.168.100.170 https://192.168.100.170:2379,https://192.168.100.180:2379,https://192.168.100.190:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@localhost k8s]# ps aux | grep kube "check that the process started successfully"
[root@master ~]# cat /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.100.170:2379,https://192.168.100.180:2379,https://192.168.100.190:2379 \
--bind-address=192.168.100.170 \
--secure-port=6443 \
--advertise-address=192.168.100.170 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
[root@master k8s]# netstat -ntap | grep 6443
tcp 0 0 192.168.100.170:6443 0.0.0.0:* LISTEN 69865/kube-apiserve
tcp 0 0 192.168.100.170:6443 192.168.100.170:53210 ESTABLISHED 69865/kube-apiserve
tcp 0 0 192.168.100.170:53210 192.168.100.170:6443 ESTABLISHED 69865/kube-apiserve
[root@master k8s]# netstat -ntap | grep 8080
tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN 69865/kube-apiserve
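Port 8080 is the apiserver's local insecure port, so it can be queried from the master without any certificates as a quick liveness check:
[root@master k8s]# curl http://127.0.0.1:8080/version '// should return the Kubernetes version as JSON'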
7. Start the scheduler service
[root@master k8s]# ./scheduler.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@master k8s]# ps aux | grep ku
postfix 68074 0.0 0.1 91732 4080 ? S 10:07 0:00 pickup -l -t unix -u
root 69865 14.4 8.0 401580 311244 ? Ssl 11:43 0:09
[root@master k8s]# chmod +x controller-manager.sh
8. Start the controller-manager
[root@master k8s]# ./controller-manager.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
9. Check the status of the master components
[root@master k8s]# /opt/kubernetes/bin/kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
// Node deployment
1. From the master, copy kubelet and kube-proxy to the nodes
[root@master bin]# scp kubelet kube-proxy root@192.168.100.180:/opt/kubernetes/bin/
root@192.168.100.180's password:
kubelet 100% 168MB 74.8MB/s 00:02
kube-proxy 100% 48MB 97.6MB/s 00:00
[root@master bin]# scp kubelet kube-proxy root@192.168.100.190:/opt/kubernetes/bin/
root@192.168.100.190's password:
kubelet 100% 168MB 101.4MB/s 00:01
kube-proxy 100% 48MB 102.3MB/s 00:00
2. On node1 (copy node.zip to /root and unzip it)
[root@localhost ~]# ls
anaconda-ks.cfg  flannel-v0.10.0-linux-amd64.tar.gz  node.zip  Public  Videos  Documents  Music
flannel.sh  initial-setup-ks.cfg  README.md  Templates  Pictures  Downloads  Desktop
// Unzip node.zip to get kubelet.sh and proxy.sh
[root@localhost ~]# unzip node.zip 
3. On the master, create the kubeconfig directory
[root@localhost k8s]# mkdir kubeconfig
[root@localhost k8s]# cd kubeconfig/
// Copy in kubeconfig.sh and rename it
[root@localhost kubeconfig]# mv kubeconfig.sh kubeconfig
[root@master kubeconfig]# cat /opt/kubernetes/cfg/token.csv
0d8e1e148121fc25d8623239ae6cf7e0,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
[root@master kubeconfig]# vim kubeconfig 
APISERVER=$1
SSL_DIR=$2
# Create the kubelet bootstrapping kubeconfig
export KUBE_APISERVER="https://$APISERVER:6443"
# Set cluster parameters 
kubectl config set-cluster kubernetes \
--certificate-authority=$SSL_DIR/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
# Set the client authentication parameters 
kubectl config set-credentials kubelet-bootstrap \
--token=0d8e1e148121fc25d8623239ae6cf7e0 \
--kubeconfig=bootstrap.kubeconfig
# the token above is the one written to /opt/kubernetes/cfg/token.csv earlier
# Setting context parameters 
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
# Setting the default context 
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
#----------------------
# Create the kube-proxy kubeconfig file
kubectl config set-cluster kubernetes \
--certificate-authority=$SSL_DIR/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
--client-certificate=$SSL_DIR/kube-proxy.pem \
--client-key=$SSL_DIR/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
[root@master kubeconfig]# export PATH=$PATH:/opt/kubernetes/bin/ '// set the environment variable (can be made permanent in /etc/profile)'
[root@master kubeconfig]# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
4. Generate the kubeconfig files and copy them to the nodes
[root@master kubeconfig]# bash kubeconfig 192.168.100.170 /root/k8s/k8s-cert/
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
User "kubelet-bootstrap" set.
Switched to context "default".
[root@master kubeconfig]# ls
bootstrap.kubeconfig kubeconfig kube-proxy.kubeconfig
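Both generated files can be inspected before copying them out; kubectl prints them with the embedded certificate data redacted:
[root@master kubeconfig]# kubectl config view --kubeconfig=bootstrap.kubeconfig
[root@master kubeconfig]# kubectl config view --kubeconfig=kube-proxy.kubeconfig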
# Copy the configuration files to the nodes
[root@master kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.100.180:/opt/kubernetes/cfg/
root@192.168.100.180's password:
bootstrap.kubeconfig 100% 2169 1.4MB/s 00:00
kube-proxy.kubeconfig 100% 6275 5.8MB/s 00:00
[root@master kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.100.190:/opt/kubernetes/cfg/
root@192.168.100.190's password:
bootstrap.kubeconfig 100% 2169 352.8KB/s 00:00
kube-proxy.kubeconfig 100% 6275 3.3MB/s 00:00
5. Create the bootstrap role binding so that kubelet-bootstrap can ask the apiserver to sign certificates (key step)
[root@master kubeconfig]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
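To confirm the binding, describe it and check that the kubelet-bootstrap user is listed as a subject of the system:node-bootstrapper role:
[root@master kubeconfig]# kubectl describe clusterrolebinding kubelet-bootstrap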
// Operate on the node
6. On node1, generate the kubelet and kubelet.config configuration files
#------------------------------------ operate on node1
# Create kubelet's configuration file and service script
[root@node1 ~]# bash kubelet.sh 192.168.100.180
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
# Check that the kubelet service started
[root@node1 ~]# ps aux | grep kube
root 10206 0.0 0.6 391444 18372 ? Ssl 07:55 0:11 /opt/kubernetes/bin/flanneld --ip-masq --etcd-endpoints=https://192.168.100.170:2379,https://192.168.100.180:2379,https://192.168.100.190:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem
root 32918 3.2 1.5 405340 45420 ? Ssl 11:57 0:00 /opt/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=192.168.100.180 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
root 32952 0.0 0.0 112724 988 pts/0 S+ 11:57 0:00 grep --color=auto kube
7. On the master, check node1's request and view the certificate status
#------------------------------ operate on the master
# node1's request has arrived
[root@master kubeconfig]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-lk45yzxFkiUhV8b36fmhmFsZdqtD8JUWV1Vkiq9w7Nw 30s kubelet-bootstrap Pending "(waiting for the cluster to issue the node a certificate)"
8. Issue the certificate, then check the certificate status again
[root@master kubeconfig]# kubectl certificate approve node-csr-lk45yzxFkiUhV8b36fmhmFsZdqtD8JUWV1Vkiq9w7Nw
certificatesigningrequest.certificates.k8s.io/node-csr-lk45yzxFkiUhV8b36fmhmFsZdqtD8JUWV1Vkiq9w7Nw approved "the master authorizes the node to join the cluster"
# Continue to view certificate status 
[root@master kubeconfig]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-lk45yzxFkiUhV8b36fmhmFsZdqtD8JUWV1Vkiq9w7Nw 6m19s kubelet-bootstrap Approved,Issued "(allowed to join the cluster)"
9. View the cluster nodes and start the kube-proxy service
[root@master kubeconfig]# kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.100.180 Ready <none> 31s v1.12.3
#'// if a single node is NotReady, check its kubelet; if many nodes are NotReady, check the apiserver, and then the VIP address and keepalived (in a multi-master setup)'
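If a node stays NotReady, the kubelet log on that node is the first place to look, for example:
[root@node1 ~]# systemctl status kubelet.service '// service state and last error'
[root@node1 ~]# journalctl -u kubelet -f '// follow the kubelet log'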
#--------------------------- on node1, start the kube-proxy service
[root@node1 ~]# ls
anaconda-ks.cfg flannel-v0.10.0-linux-amd64.tar.gz node.zip
docker-install.sh initial-setup-ks.cfg proxy.sh
flannel.sh kubelet.sh README.md
[root@node1 ~]# bash proxy.sh 192.168.100.180
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@node1 ~]# systemctl status kube-proxy.service
● kube-proxy.service - Kubernetes Proxy
Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2020-09-29 12:04:50 CST; 9s ago
Main PID: 34171 (kube-proxy)
Tasks: 0
Memory: 8.2M
CGroup: /system.slice/kube-proxy.service
‣ 34171 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 -...
# Deploy node2
#---------------------------- operate on node1
# Copy the prepared /opt/kubernetes directory to the other node, then modify it there
[root@node1 ~]# scp -r /opt/kubernetes/ root@192.168.100.190:/opt/
# Copy the kubelet and kube-proxy service files to node2
[root@node1 ~]# scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.100.190:/usr/lib/systemd/system/
root@192.168.100.190's password:
kubelet.service 100% 264 159.9KB/s 00:00
kube-proxy.service 100% 231 302.4KB/s 00:00
[root@node1 ~]# systemctl enable kubelet.service
#------------------------------node2 operation 
1. Modify the IP addresses in the three configuration files
# First delete the copied certificates; node2 will request its own
[root@node2 ~]# cd kubeconfig/
[root@node2 kubeconfig]# cd /opt/kubernetes/ssl/
[root@node2 ssl]# ls
kubelet-client-2020-09-29-12-03-29.pem kubelet.crt
kubelet-client-current.pem kubelet.key
[root@node2 ssl]# rm -rf *
[root@node2 ssl]# ls
[root@node2 ssl]# cd ../cfg/
2. Modify the configuration files, then start the services and check their status
# Modify the configuration files: kubelet, kubelet.config and kube-proxy (three files)
[root@node2 cfg]# vim kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.100.190 \ "change to node2's address"
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
[root@node2 cfg]# vim kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.100.190 "node2's address"
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local.
failSwapOn: false
authentication:
anonymous:
enabled: true
[root@node2 cfg]# vim kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.100.190 \ "node2's address"
--cluster-cidr=10.0.0.0/24 \
--proxy-mode=ipvs \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
# Start the service 
[root@node2 cfg]# systemctl start kubelet.service
[root@node2 cfg]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@node2 cfg]# systemctl start kube-proxy.service
[root@node2 cfg]# systemctl enable kube-proxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
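Since kube-proxy runs with --proxy-mode=ipvs, its rules can be listed with ipvsadm (assumed here to be installed via yum; it is not present by default). If the ip_vs kernel modules are missing, kube-proxy falls back to iptables and the list stays empty:
[root@node2 cfg]# yum install -y ipvsadm ipset
[root@node2 cfg]# ipvsadm -Ln '// should show virtual servers in the 10.0.0.0/24 service range'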
3. On the master, check the request and approve node2's certificate
// On the master, the new request shows as Pending
[root@master kubeconfig]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-Q22FXrUtwbkKu5b0LQcMbbyXYMuCMkGKUyH0ME1x2ow 47s kubelet-bootstrap Pending
node-csr-lk45yzxFkiUhV8b36fmhmFsZdqtD8JUWV1Vkiq9w7Nw 12m kubelet-bootstrap Approved,Issued
[root@master kubeconfig]# kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.100.180 Ready <none> 6m26s v1.12.3
[root@master kubeconfig]# kubectl certificate approve node-csr-Q22FXrUtwbkKu5b0LQcMbbyXYMuCMkGKUyH0ME1x2ow "authorize the request to join the cluster"
certificatesigningrequest.certificates.k8s.io/node-csr-Q22FXrUtwbkKu5b0LQcMbbyXYMuCMkGKUyH0ME1x2ow approved
"master View nodes in the cluster "
[root@master kubeconfig]# kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.100.180 Ready <none> 8m52s v1.12.3
192.168.100.190 Ready <none> 43s v1.12.3
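As a final smoke test (assuming the nodes can pull images from Docker Hub), schedule a couple of pods and expose them:
[root@master kubeconfig]# kubectl run nginx --image=nginx --replicas=2 '// on v1.12 this creates a deployment'
[root@master kubeconfig]# kubectl get pods -o wide '// pods should be Running on 192.168.100.180/190'
[root@master kubeconfig]# kubectl expose deployment nginx --port=80 --type=NodePort
[root@master kubeconfig]# kubectl get svc nginx '// note the NodePort (30000-50000) and curl it from any node'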