5. Deploying a k8s Cluster

cuiyongchao007 2021-01-20 23:33:51


5. Deploy a k8s Cluster (Part 1)

We're going to deploy a three-node Kubernetes cluster.

master is the Master; node1 and node2 are Nodes. All nodes run Ubuntu 18.04, although other Linux distributions work as well. The official installation documentation is at https://kubernetes.io/docs/setup/independent/install-kubeadm/

Note: almost all of the Kubernetes components and Docker images are hosted on Google's servers, which can be a significant obstacle for users in mainland China. The suggestion: find a way around the network restrictions, or you won't even be able to get started with Kubernetes.

(I) Install Docker

Docker must be installed on all nodes. Reference: https://docs.docker.com/engine/install/ubuntu/

# Step 1: install prerequisite tools so that apt can fetch the Docker repository over HTTPS
sudo apt-get update
sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common
# Step 2: install the repository's GPG key
curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
# Step 3: add the Docker repository to /etc/apt/sources.list
sudo add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
# Step 4: update the package index and install Docker CE
sudo apt-get -y update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io

(II) Install kubelet, kubeadm and kubectl

Install kubelet, kubeadm and kubectl on all nodes.

kubelet runs on every node of the cluster and is responsible for starting Pods and containers.

kubeadm is used to initialize the cluster.

kubectl is the Kubernetes command-line tool. With kubectl you can deploy and manage applications, inspect resources, and create, delete, and update the various components.

apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl

(III) Create the Cluster with kubeadm

The complete official documentation is at https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/

(1) Initialize the Master

Control-plane nodes are the machines that run the control-plane components, including etcd (the cluster database) and the API server (which the kubectl command-line tool communicates with).

1. (Recommended) If you plan to upgrade this single-control-plane kubeadm cluster to high availability later, specify --control-plane-endpoint to set a shared endpoint for all control-plane nodes. The endpoint can be the DNS name or IP address of a load balancer.
2. Choose a Pod network plugin and check whether it requires any arguments to be passed to kubeadm init. Depending on the third-party plugin you choose, you may need to set --pod-network-cidr. See Installing a Pod network add-on.
3. (Optional) Since version 1.14, kubeadm tries to detect the container runtime on Linux by probing a list of well-known domain socket paths. To use a different container runtime, or if more than one runtime is installed on the node, pass --cri-socket to kubeadm init. See Installing a runtime.
4. (Optional) Unless otherwise specified, kubeadm uses the network interface associated with the default gateway as the advertise address of this control-plane node's API server. To use a different network interface, pass --apiserver-advertise-address=<ip-address> to kubeadm init. To deploy an IPv6 Kubernetes cluster, you must specify an IPv6 address, e.g. --apiserver-advertise-address=fd00::101.
5. (Optional) Run kubeadm config images pull before kubeadm init to verify connectivity to the gcr.io container image registry.

Notes on apiserver-advertise-address and ControlPlaneEndpoint:

--apiserver-advertise-address sets the advertise address of this API server, while --control-plane-endpoint sets a shared endpoint for all control-plane nodes.

--control-plane-endpoint accepts both IP addresses and DNS names that resolve to an IP address. Contact your network administrator to evaluate possible solutions for such a mapping.

Here is an example mapping:

192.168.0.102 cluster-endpoint

Here 192.168.0.102 is the node's IP address, and cluster-endpoint is a custom DNS name that maps to that IP. This lets you pass --control-plane-endpoint=cluster-endpoint to kubeadm init and pass the same DNS name to kubeadm join. Later, you can repoint cluster-endpoint to the address of a load balancer in a high-availability scenario.

kubeadm does not support turning a single-control-plane cluster created without --control-plane-endpoint into a high-availability cluster.
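Concretely, the example mapping would typically live in /etc/hosts on every node (192.168.0.102 and cluster-endpoint are the example values from the text above, not addresses to copy verbatim):

```
# /etc/hosts (excerpt)
192.168.0.102   cluster-endpoint
```

With this entry in place, --control-plane-endpoint=cluster-endpoint can be passed to kubeadm init and the same name to kubeadm join; repointing the name later only requires editing this file or the corresponding DNS record.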

Disable SELinux (these commands apply to RHEL/CentOS hosts; Ubuntu does not enable SELinux by default):

# Disable selinux temporarily (takes effect immediately, until reboot)
setenforce 0
# Disable it permanently by modifying /etc/sysconfig/selinux
sed -i 's/SELINUX=permissive/SELINUX=disabled/' /etc/sysconfig/selinux

Execute the following on the Master.

Disable swap:
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
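The sed expression comments out every fstab line that mentions swap (& re-inserts the matched line after the #). A safe way to preview the effect is to run the same expression on a throwaway copy; the sample fstab content below is made up for illustration:

```shell
# demo on a temp file instead of the real /etc/fstab (sample lines are made up)
tmp=$(mktemp)
printf '%s\n' \
  'UUID=abcd-1234 / ext4 errors=remount-ro 0 1' \
  '/swapfile none swap sw 0 0' > "$tmp"
# same expression the guide applies to /etc/fstab: comment out any line containing "swap"
sed -ri 's/.*swap.*/#&/' "$tmp"
cat "$tmp"
# → UUID=abcd-1234 / ext4 errors=remount-ro 0 1
# → #/swapfile none swap sw 0 0
```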
On all nodes, resolve the "cgroupfs" warning reported during kubeadm initialization by switching Docker to the systemd cgroup driver:
vi /lib/systemd/system/docker.service
# append --exec-opt native.cgroupdriver=systemd to the ExecStart line, so it reads:
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd
systemctl daemon-reload
systemctl restart docker
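Editing the unit file works, but Docker also documents setting the cgroup driver through /etc/docker/daemon.json, which survives package upgrades; a minimal sketch (followed by systemctl restart docker):

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```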
Initialize the Master:
kubeadm init --image-repository registry.aliyuncs.com/google_containers --apiserver-advertise-address 10.0.0.41 --pod-network-cidr=10.244.0.0/16

--apiserver-advertise-address specifies which of the Master's interfaces is used to communicate with the other nodes of the cluster. If the Master has multiple interfaces, it is best to specify one explicitly; if you don't, kubeadm picks an interface automatically.

--pod-network-cidr specifies the address range of the Pod network. Kubernetes supports several network solutions, and each has its own requirements for --pod-network-cidr. We set it to 10.244.0.0/16 because we will use the flannel network solution, which requires this CIDR. In later practice we will switch to other network solutions, such as Canal.
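To sanity-check that an address does or does not fall inside the Pod CIDR, a small bash helper can be used (ip_to_int and in_cidr are hypothetical helper names for illustration, not kubeadm functionality):

```shell
# convert a dotted-quad IPv4 address to a 32-bit integer
ip_to_int() {
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# succeed if the IP in $1 falls inside the CIDR in $2
in_cidr() {
  local ip=$1 cidr=$2
  local net=${cidr%/*} bits=${cidr#*/}
  local mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( $(ip_to_int "$ip") & mask )) -eq $(( $(ip_to_int "$net") & mask )) ]
}

in_cidr 10.244.1.17 10.244.0.0/16 && echo "10.244.1.17 is inside"
# → 10.244.1.17 is inside
in_cidr 10.0.0.41 10.244.0.0/16 || echo "10.0.0.41 is outside"
# → 10.0.0.41 is outside
```

The second check shows why the node network (10.0.0.0/24 here) must not overlap the Pod CIDR: node addresses like 10.0.0.41 stay outside 10.244.0.0/16.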

root@cuiyongchao:~# kubeadm init --image-repository registry.aliyuncs.com/google_containers --apiserver-advertise-address 10.0.0.41 --pod-network-cidr=10.244.0.0/16
W1101 09:18:28.676350 26460 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.3
[preflight] Running pre-flight checks ---①
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key ---②
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [cuiyongchao kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.41]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [cuiyongchao localhost] and IPs [10.0.0.41 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [cuiyongchao localhost] and IPs [10.0.0.41 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file ---③
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 33.004269 seconds ----④
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node cuiyongchao as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node cuiyongchao as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: krsig9.fnxqz4724vkrlevz
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy ---⑤
Your Kubernetes control-plane has initialized successfully! ---⑥
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config ---⑦
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: ---⑧
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.0.0.41:6443 --token krsig9.fnxqz4724vkrlevz \ ---⑨
--discovery-token-ca-cert-hash sha256:cf41916f790097ac0619a837626caefb0ff5d926ea8e5cdedf5dbc1c80292fd1
root@cuiyongchao:~#

① kubeadm performs pre-initialization checks.

② Generates tokens and certificates.

③ Generates kubeconfig files, which kubelet needs in order to communicate with the Master.

④ Installs the Master components, pulling their Docker images from Google's registry. This step may take a while, depending on the quality of the network.

⑤ Installs the add-ons CoreDNS and kube-proxy.

⑥ The Kubernetes Master initialized successfully.

⑦ A hint on how to configure kubectl; we will practice this later.

⑧ A hint on how to install the Pod network; we will practice this later.

⑨ A hint on how to join other nodes to the cluster; we will practice this later.
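If the join command above is lost, the --discovery-token-ca-cert-hash value can be recomputed from the control plane's /etc/kubernetes/pki/ca.crt using a well-known openssl pipeline. The sketch below runs the same pipeline against a throwaway self-signed certificate so it can be tried on any machine:

```shell
# generate a throwaway CA certificate in a temp dir
# (assumption: on a real master you would read /etc/kubernetes/pki/ca.crt instead)
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
  -keyout "$dir/ca.key" -out "$dir/ca.crt" -days 1 2>/dev/null
# sha256 of the DER-encoded public key, the format kubeadm join expects
hash=$(openssl x509 -pubkey -in "$dir/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | awk '{print $NF}')
echo "sha256:$hash"
```

The resulting sha256:<64 hex chars> string is what follows --discovery-token-ca-cert-hash in the kubeadm join command.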

(2) Configure kubectl

kubectl is the command-line tool for managing a Kubernetes cluster; we installed it on all nodes earlier. Some configuration work is needed after Master initialization before kubectl can be used. Following hint ⑦ in the kubeadm init output, it is recommended to run kubectl as a regular Linux user (running it as root causes some problems).

Configure kubectl for the ubuntu user:

su - ubuntu
rm -rf $HOME/.kube
sudo mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# For convenience, enable auto-completion for the kubectl command:
echo "source <(kubectl completion bash)" >> ~/.bashrc

The ubuntu user can now use kubectl.


Copyright notice
This article was written by [cuiyongchao007]. When reposting, please include a link to the original. Thank you.
https://javamana.com/2021/01/20210120233327026V.html
