K8S follows a master-worker architecture: the Master node is responsible for scheduling, management, and maintenance of the cluster, while the Worker nodes run the user's workloads.
1. Master Node Components
API Server: the entry point for all requests to K8S. The API Server receives every request (from the UI or the CLI tool) and then, according to the user's specific request, notifies the other components to do their work.
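To make the "single gateway" role concrete, here is a minimal Python sketch (not the real K8S API, all names hypothetical): every component reads and writes cluster state only through the API Server object, which alone holds the store.

```python
# Toy API Server: the single gateway in front of the cluster store.
# Components never touch the store directly; everything funnels
# through handle_request().

class ApiServer:
    def __init__(self):
        self._store = {}  # stands in for etcd

    def handle_request(self, component, verb, key, value=None):
        """All reads and writes from any component go through here."""
        if verb == "put":
            self._store[key] = value
            return value
        if verb == "get":
            return self._store.get(key)
        raise ValueError(f"unknown verb: {verb}")

api = ApiServer()
api.handle_request("kubectl", "put", "deployments/nginx", {"replicas": 2})
print(api.handle_request("scheduler", "get", "deployments/nginx"))
```

The point of the design is that access control, validation, and auditing only need to exist in one place.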
Scheduler: the scheduler for all Worker Nodes in K8S. When the user wants to deploy a service, the Scheduler chooses the most suitable Worker Node (server) to deploy it on.
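The idea of "choosing the most suitable node" can be sketched as a filter-then-score function (a toy illustration with made-up data, not the real kube-scheduler algorithm):

```python
# Toy scheduler: filter out nodes without enough free resources,
# then score the survivors by picking the one with the most free CPU.

def pick_node(nodes, cpu_needed):
    candidates = [n for n in nodes if n["free_cpu"] >= cpu_needed]  # filter
    if not candidates:
        return None  # nothing schedulable
    return max(candidates, key=lambda n: n["free_cpu"])["name"]     # score

nodes = [
    {"name": "node1", "free_cpu": 1.0},
    {"name": "node2", "free_cpu": 3.5},
    {"name": "node3", "free_cpu": 0.2},
]
print(pick_node(nodes, cpu_needed=0.5))  # node2 has the most free CPU
```

The real scheduler considers many more dimensions (memory, affinity rules, taints), but the filter/score structure is the core idea.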
Controller Manager: the monitor of all Worker Nodes in K8S. The Controller Manager contains many specific controllers, such as the Node Controller, Service Controller, and Volume Controller. Each controller is responsible for monitoring and adjusting the state of the services deployed on the Worker Nodes. For example, if the user requests two replicas of Service A and one of them crashes, the controller notices immediately and has the Scheduler choose another Worker Node to redeploy the missing replica.
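The replica example above is an instance of a reconcile loop: compare the desired state with the observed state and compute what must change. A minimal sketch (hypothetical function, not the real controller code):

```python
# Toy reconcile step: given the desired replica count and the list of
# replicas actually running, return how many new replicas must be
# created (the controller would then ask for them to be scheduled).

def reconcile(desired, running):
    return max(0, desired - len(running))

# Service A should have two replicas, but one has crashed:
running = ["service-a-replica-0"]
missing = reconcile(desired=2, running=running)
print(missing)  # one replacement replica is needed
```

Running this comparison continuously, rather than reacting to single events, is what makes the system self-healing.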
etcd: the storage service of K8S. etcd stores the key configuration of K8S and the user's configuration. In K8S, only the API Server has read and write permission on etcd; all other components must read and write data through the API Server's interface.
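Besides plain get/put, etcd lets clients watch keys and get notified on changes, which is how components learn about new state. A toy stand-in (illustrative only, in real K8S only the API Server talks to the store):

```python
# Toy key-value store standing in for etcd, with a watch mechanism:
# registered callbacks fire whenever a watched key changes.

class TinyStore:
    def __init__(self):
        self._data = {}
        self._watchers = {}  # key -> list of callbacks

    def put(self, key, value):
        self._data[key] = value
        for cb in self._watchers.get(key, []):
            cb(key, value)   # like an etcd watch event

    def get(self, key):
        return self._data.get(key)

    def watch(self, key, callback):
        self._watchers.setdefault(key, []).append(callback)

store = TinyStore()
events = []
store.watch("deployments/nginx", lambda k, v: events.append((k, v)))
store.put("deployments/nginx", {"replicas": 2})
print(events)
```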
2. Worker Node Components
Kubelet: the monitor of the Worker Node and its communicator with the Master Node. The Kubelet is the Master Node's agent stationed on each Worker Node: it regularly reports the status of the services running on its node to the Master Node, and accepts instructions from the Master Node to take corrective action. It is also responsible for starting and stopping all containers on the node to keep the node working normally.
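The kubelet's job can be sketched as a sync step (hypothetical data structures, not the real kubelet): compare the containers assigned to this node with those actually running, start and stop accordingly, and build a status report for the master.

```python
# Toy kubelet sync step: reconcile assigned vs. running containers
# on one node, then report the resulting state back to the master.

def sync_node(assigned, running):
    to_start = [c for c in assigned if c not in running]
    to_stop = [c for c in running if c not in assigned]
    new_running = [c for c in running if c in assigned] + to_start
    status_report = {"node": "node2", "running": sorted(new_running)}
    return to_start, to_stop, status_report

to_start, to_stop, report = sync_node(
    assigned=["nginx", "redis"], running=["redis", "old-app"])
print(to_start, to_stop, report)
```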
Kube-Proxy: the network proxy of K8S. Kube-Proxy is responsible for the Node's network communication within K8S and for load-balancing external network traffic.
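In spirit, kube-proxy maps a service to several pod endpoints and spreads traffic across them. A toy round-robin version (the real kube-proxy uses iptables or IPVS rules, not Python; the addresses are made up):

```python
# Toy load balancer in the spirit of kube-proxy: each Service name
# maps to a set of pod endpoints, and traffic is forwarded round-robin.

import itertools

class TinyProxy:
    def __init__(self, endpoints):
        # Service name -> cycling iterator over its pod endpoints
        self._backends = {svc: itertools.cycle(eps)
                          for svc, eps in endpoints.items()}

    def forward(self, service):
        return next(self._backends[service])

proxy = TinyProxy({"nginx": ["10.0.0.5:80", "10.0.0.6:80"]})
print([proxy.forward("nginx") for _ in range(3)])
```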
Container Runtime: the runtime environment of the Worker Node. It is the installed software environment required for containerization, ensuring that containerized programs can run, for example the Docker Engine.
3. K8S Workflow
To see how the internal components of K8S cooperate, walk through deploying Nginx with K8S. We execute a command on the master node asking it to deploy an nginx application: kubectl create deployment nginx --image=nginx
This command first reaches the API Server, the gateway of the master node and the only entrance to the master.
The API Server passes the request to the Controller Manager.
The Controller Manager analyzes the deployment request.
The Controller Manager generates the deployment information and stores it in etcd through the API Server.
The Scheduler fetches the application to be deployed from etcd through the API Server and starts scheduling, determining which node has resources suitable for the deployment.
The Scheduler writes the computed scheduling decision back to etcd through the API Server.
The Kubelet, the monitoring component on each node, stays in contact with the master at all times (continuously sending requests to the API Server for the latest data), and so obtains the deployment information that the master stored in etcd.
Suppose the kubelet on node2 fetches the deployment information and sees that its own node is supposed to deploy the application.
That kubelet then runs the application on its machine and continuously reports the application's status back to the master.
The node and the master are likewise connected through the master's API Server component.
The kube-proxy on each machine knows the entire network of the cluster. Whenever the node accesses others, or others access the node, the kube-proxy on that node automatically computes the route and forwards the traffic.
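The steps above can be sketched end to end as one toy control flow (all names and structures are hypothetical; in real K8S the components are separate processes talking to the API Server's REST API, with etcd behind it):

```python
# Toy end-to-end walk-through of the nginx deployment flow.
# One dict stands in for the API Server + etcd; components never
# talk to each other directly, only through this shared state.

api_server = {}

# 1. kubectl create deployment nginx --image=nginx
api_server["deployments/nginx"] = {"image": "nginx", "replicas": 1}

# 2-3. the controller manager analyzes the request and records a pod
spec = api_server["deployments/nginx"]
api_server["pods/nginx-0"] = {"image": spec["image"], "node": None}

# 4-5. the scheduler picks the node with the most free CPU and writes
#      the decision back through the API Server
free_cpu = {"node1": 0.2, "node2": 3.0}
chosen = max(free_cpu, key=free_cpu.get)
api_server["pods/nginx-0"]["node"] = chosen

# 6-8. the kubelet on the chosen node sees the pod assigned to it,
#      starts the container, and reports the status back
for name, pod in api_server.items():
    if name.startswith("pods/") and pod["node"] == chosen:
        pod["status"] = "Running"

print(api_server["pods/nginx-0"])
```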
Reminder: that is all for today. This article only briefly introduces the core architecture principles of K8S.