In a previous post we covered Kubernetes node taints and pod tolerations (see https://www.cnblogs.com/qiuhom-1874/p/14255486.html). Today we'll talk about extending Kubernetes with custom resources.

The process of creating resource objects in Kubernetes

As we know, Kubernetes defines many resource types, and different types have different fields and are defined in different ways. When a user creates a resource, they are really instantiating one of these abstract types: the resource manifest assigns values to the type's fields, and the created object is an instance of that type. Creating a resource takes two steps. First, the request goes to the apiserver; after authentication, authorization, and admission control, the apiserver stores the resource definition in etcd. Second, the controller for that resource type, which watches the apiserver for resource changes, is triggered by the corresponding change event and creates the resource. The controller's internal reconciliation loop then keeps checking whether the resource's actual state matches the user-defined desired state; whenever they diverge, the loop is triggered, the controller asks the apiserver for the resource definition, and it rebuilds the resource so that its state always matches the user's expectation. As for etcd, it is just a key/value database that can store arbitrary key/value data. On top of it, however, the apiserver abstracts different kinds of definitions into distinct resource types, so anything a user creates must satisfy the specification of the corresponding type before its definition is persisted to etcd. In short, the apiserver adds a layer of abstraction over the data in etcd: users cannot store arbitrary data there, only data that conforms to the apiserver's interface specification, much as rows written to a MySQL database must follow the schema of the corresponding table.
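The "typed gate in front of a plain key/value store" idea above can be sketched as a toy model. Nothing here is real Kubernetes code; the `/registry/` prefix mirrors the default key layout the apiserver uses in etcd, but the validation is deliberately minimal and the names are illustrative:

```python
# Toy model of the apiserver's role: etcd will accept any key/value pair,
# but the apiserver only persists objects that satisfy the type's schema.
REQUIRED_FIELDS = {"apiVersion", "kind", "metadata", "spec"}

etcd = {}  # stands in for etcd: a plain key/value store

def apply(obj):
    """Validate a typed object, then persist it under a /registry/... key."""
    missing = REQUIRED_FIELDS - obj.keys()
    if missing:
        raise ValueError(f"invalid object, missing fields: {sorted(missing)}")
    meta = obj["metadata"]
    key = f"/registry/{obj['kind'].lower()}s/{meta['namespace']}/{meta['name']}"
    etcd[key] = obj  # only schema-conforming data ever reaches the store
    return key
```

A raw etcd client could write any bytes under any key; going through `apply` is what guarantees that everything in the store is a well-formed object of a known type.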

Creating a custom resource type in Kubernetes

Kubernetes ships with many resource types, such as Pod, Service, PersistentVolume, and PersistentVolumeClaim; these are the basic building blocks. To create one of these resources, we simply instantiate an object of the corresponding type. But suppose we want to create an entire cluster on Kubernetes: can we instantiate a "cluster object" directly from some resource type? In principle yes, but only if a resource type of that kind exists on the cluster, so that the definition of the resource can be stored in etcd, and only if there is also a controller capable of creating the corresponding resources. Different clusters and applications are organized differently, so the resource types and controllers they need also differ. A user who wants to instantiate such higher-level resources must define the resource type manually, instantiate it as an object, and, when necessary, implement a controller for it. In short, building higher-level resource types means extending Kubernetes' existing resource types and controllers.

There are three ways to extend the resource types on Kubernetes. The first is CRD: crd (CustomResourceDefinition) is a built-in resource type whose purpose is to define new, user-defined resource types; by creating a crd resource, a user-defined type becomes a resource type the cluster recognizes. The second is a custom apiserver; this is more involved than the first, because the user has to develop a program that implements an apiserver, and custom-type resources are then served by that custom apiserver. The third is to modify the code of the existing Kubernetes apiserver so that it supports the desired user-defined resource type.

Custom controllers

A custom resource type can be implemented with a crd resource, with a custom apiserver, or by modifying the original apiserver's code. But the type alone cannot turn a custom resource into a running object: with only the type defined, creating a resource of that type merely writes its definition into etcd; nothing actually runs. For the resource to really run, we also need a custom controller that watches for changes to resources of that type and instantiates them as concrete objects on Kubernetes. Of course, not every custom resource type needs a custom controller: if the custom resource is managed by one of the built-in controllers underneath, no custom controller is required. As we know, controllers are a key Kubernetes component. A controller registers with the apiserver to watch changes to its resource type; whenever a resource's state does not match the user's desired state, its internal reconciliation loop asks the apiserver for the resource definition and rebuilds the resource accordingly, keeping the actual state consistent with the desired state at all times. A custom controller works by the same logic: it exists so that custom-type resources are watched, created on the cluster when they change, and continuously kept in line with the user's expectations. A custom controller and a custom resource type can be implemented separately or together; for example, a controller program may create its crd resource automatically, so that its custom resources are recognized and created by Kubernetes. Whether they are implemented separately or combined is up to whoever develops the controller.
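The reconciliation logic described above can be sketched as a pure function that diffs desired state against actual state. This is a simplification of what a real controller does, and the names are illustrative:

```python
def reconcile(desired, actual):
    """Compute the operations needed to drive actual state toward desired state."""
    ops = []
    for name, spec in desired.items():
        if name not in actual:
            ops.append(("create", name))   # resource missing: rebuild it
        elif actual[name] != spec:
            ops.append(("update", name))   # state drifted from the manifest
    for name in actual:
        if name not in desired:
            ops.append(("delete", name))   # resource no longer wanted
    return ops
```

A real controller runs this diff in a loop, triggered both by watch events from the apiserver and by periodic resyncs, and then issues the resulting create/update/delete requests back to the apiserver.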

Help for the crd resource definition

[root@master01 ~]# kubectl explain crd
KIND:     CustomResourceDefinition
VERSION:  apiextensions.k8s.io/v1

DESCRIPTION:
     CustomResourceDefinition represents a resource that should be exposed on
     the API server. Its name MUST be in the format <.spec.name>.<.spec.group>.

FIELDS:
   apiVersion   <string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

   kind <string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

   metadata     <Object>

   spec <Object> -required-
     spec describes how the user wants the resources to appear

   status       <Object>
     status indicates the actual state of the CustomResourceDefinition

[root@master01 ~]#

Tips: crd is one of the standard Kubernetes resources; its definition consists mainly of apiVersion, kind, metadata, spec, and status. The kind is CustomResourceDefinition and the apiVersion is apiextensions.k8s.io/v1; both are fixed. The spec field defines the properties of the custom resource type being declared.

The crd.spec field

[root@master01 ~]# kubectl explain crd.spec
KIND:     CustomResourceDefinition
VERSION:  apiextensions.k8s.io/v1

RESOURCE: spec <Object>

DESCRIPTION:
     spec describes how the user wants the resources to appear

     CustomResourceDefinitionSpec describes how a user wants their resource to
     appear

FIELDS:
   conversion   <Object>
     conversion defines conversion settings for the CRD.

   group        <string> -required-
     group is the API group of the defined custom resource. The custom resources
     are served under `/apis/<group>/...`. Must match the name of the
     CustomResourceDefinition (in the form `<names.plural>.<group>`).

   names        <Object> -required-
     names specify the resource and kind names for the custom resource.

   preserveUnknownFields        <boolean>
     preserveUnknownFields indicates that object fields which are not specified
     in the OpenAPI schema should be preserved when persisting to storage.
     apiVersion, kind, metadata and known fields inside metadata are always
     preserved. This field is deprecated in favor of setting
     `x-preserve-unknown-fields` to true in
     `spec.versions[*].schema.openAPIV3Schema`. See
     https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/#pruning-versus-preserving-unknown-fields
     for details.

   scope        <string> -required-
     scope indicates whether the defined custom resource is cluster- or
     namespace-scoped. Allowed values are `Cluster` and `Namespaced`.

   versions     <[]Object> -required-
     versions is the list of all API versions of the defined custom resource.
     Version names are used to compute the order in which served versions are
     listed in API discovery. If the version string is "kube-like", it will sort
     above non "kube-like" version strings, which are ordered lexicographically.
     "Kube-like" versions start with a "v", then are followed by a number (the
     major version), then optionally the string "alpha" or "beta" and another
     number (the minor version). These are sorted first by GA > beta > alpha
     (where GA is a version with no suffix such as beta or alpha), and then by
     comparing major version, then minor version. An example sorted list of
     versions: v10, v2, v1, v11beta2, v10beta3, v3beta1, v12alpha1, v11alpha2,
     foo1, foo10.

[root@master01 ~]#

Tips: in crd.spec, the group field (a string) names the API group of the custom resource type. The names field (an object) declares the resource and kind names of the custom type. The scope field defines whether the custom resource is cluster- or namespace-scoped; it may only be Cluster or Namespaced. The versions field (a list of objects) specifies the versions of the custom resource along with the schema of its fields.
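The version-ordering rule quoted in the help text above ("kube-like" versions first, GA > beta > alpha, then by major and minor) can be reproduced with a small sort key. This is a sketch of the documented rule, not code from Kubernetes itself:

```python
import re

KUBE_LIKE = re.compile(r"^v(\d+)(?:(alpha|beta)(\d+))?$")

def version_priority(v):
    """Sort key implementing the 'kube-like' ordering from the crd docs."""
    m = KUBE_LIKE.match(v)
    if not m:
        return (1, v)  # non kube-like versions sort last, lexicographically
    major = int(m.group(1))
    stability = {None: 2, "beta": 1, "alpha": 0}[m.group(2)]  # GA > beta > alpha
    minor = int(m.group(3) or 0)
    return (0, -stability, -major, -minor)

versions = ["foo10", "v2", "v1", "v3beta1", "foo1", "v11alpha2",
            "v10beta3", "v12alpha1", "v11beta2", "v10"]
print(sorted(versions, key=version_priority))
```

Sorting the scrambled list yields exactly the example order given in the help output: v10, v2, v1, v11beta2, v10beta3, v3beta1, v12alpha1, v11alpha2, foo1, foo10.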

Example: defining a custom resource type

[root@master01 ~]# cat crontab-crd.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # name must match the spec fields below, in the form <plural>.<group>
  name: crontabs.stable.example.com
spec:
  # group name to use for the REST API: /apis/<group>/<version>
  group: stable.example.com
  # list of versions supported by this CustomResourceDefinition
  versions:
    - name: v1
      # each version can be enabled/disabled independently via the served flag
      served: true
      # one and only one version must be marked as the storage version
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                cronSpec:
                  type: string
                image:
                  type: string
                replicas:
                  type: integer
  # either Namespaced or Cluster
  scope: Namespaced
  names:
    # plural name, used in the URL: /apis/<group>/<version>/<plural>
    plural: crontabs
    # singular name, used as an alias on the CLI and for display
    singular: crontab
    # kind is normally the CamelCased singular form; resource manifests use this
    kind: CronTab
    # shortNames allow a shorter string to match the resource on the CLI
    shortNames:
      - ct
[root@master01 ~]#

Before applying the manifest, run kubectl get crontab

[root@master01 ~]# kubectl get crontab
error: the server doesn't have a resource type "crontab"
[root@master01 ~]#

Tips: before the manifest is applied, kubectl get crontab reports that the cluster has no resource type named crontab.

Apply the manifest

[root@master01 ~]# kubectl apply -f crontab-crd.yaml
customresourcedefinition.apiextensions.k8s.io/crontabs.stable.example.com created
[root@master01 ~]# kubectl get crontab
No resources found in default namespace.
[root@master01 ~]#

Tips: after the manifest is applied, running kubectl get crontab again no longer produces an error; it simply reports that the default namespace contains no resources of that type.
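Once the CRD is established, the apiserver serves the new type under a predictable REST path built from the group, version, and plural name declared in the manifest. A small sketch of how that path is assembled (the helper function is illustrative, not part of any library):

```python
def custom_resource_path(group, version, plural, namespace=None, name=None):
    """Build the REST path the apiserver serves a custom resource under."""
    parts = ["/apis", group, version]
    if namespace:  # namespace-scoped resources (scope: Namespaced)
        parts += ["namespaces", namespace]
    parts.append(plural)
    if name:       # a single object rather than the collection
        parts.append(name)
    return "/".join(parts)

print(custom_resource_path("stable.example.com", "v1", "crontabs",
                           namespace="default"))
```

This is the same path kubectl uses under the hood when you run kubectl get crontab against the default namespace.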

List the crd resources

[root@master01 ~]# kubectl get crd
NAME CREATED AT
bgpconfigurations.crd.projectcalico.org 2021-01-03T15:49:21Z
bgppeers.crd.projectcalico.org 2021-01-03T15:49:21Z
blockaffinities.crd.projectcalico.org 2021-01-03T15:49:21Z
clusterinformations.crd.projectcalico.org 2021-01-03T15:49:21Z
crontabs.stable.example.com 2021-01-12T12:39:00Z
felixconfigurations.crd.projectcalico.org 2021-01-03T15:49:21Z
globalnetworkpolicies.crd.projectcalico.org 2021-01-03T15:49:21Z
globalnetworksets.crd.projectcalico.org 2021-01-03T15:49:21Z
hostendpoints.crd.projectcalico.org 2021-01-03T15:49:21Z
ipamblocks.crd.projectcalico.org 2021-01-03T15:49:21Z
ipamconfigs.crd.projectcalico.org 2021-01-03T15:49:21Z
ipamhandles.crd.projectcalico.org 2021-01-03T15:49:21Z
ippools.crd.projectcalico.org 2021-01-03T15:49:21Z
kubecontrollersconfigurations.crd.projectcalico.org 2021-01-03T15:49:21Z
networkpolicies.crd.projectcalico.org 2021-01-03T15:49:21Z
networksets.crd.projectcalico.org 2021-01-03T15:49:22Z
[root@master01 ~]# kubectl get crd/crontabs.stable.example.com
NAME CREATED AT
crontabs.stable.example.com 2021-01-12T12:39:00Z
[root@master01 ~]#

View the details

[root@master01 ~]# kubectl get crd/crontabs.stable.example.com
NAME CREATED AT
crontabs.stable.example.com 2021-01-12T12:39:00Z
[root@master01 ~]# kubectl describe crd/crontabs.stable.example.com
Name:         crontabs.stable.example.com
Namespace:
Labels:       <none>
Annotations:  <none>
API Version:  apiextensions.k8s.io/v1
Kind:         CustomResourceDefinition
Metadata:
  Creation Timestamp:  2021-01-12T12:39:00Z
  Generation:          1
  Managed Fields:
    API Version:  apiextensions.k8s.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        f:acceptedNames:
          f:kind:
          f:listKind:
          f:plural:
          f:shortNames:
          f:singular:
        f:conditions:
    Manager:      kube-apiserver
    Operation:    Update
    Time:         2021-01-12T12:39:00Z
    API Version:  apiextensions.k8s.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        f:conversion:
          .:
          f:strategy:
        f:group:
        f:names:
          f:kind:
          f:listKind:
          f:plural:
          f:shortNames:
          f:singular:
        f:scope:
        f:versions:
      f:status:
        f:storedVersions:
    Manager:         kubectl-client-side-apply
    Operation:       Update
    Time:            2021-01-12T12:39:00Z
  Resource Version:  805506
  UID:               b92a90f4-c953-4876-a496-030c9ba023fd
Spec:
  Conversion:
    Strategy:  None
  Group:       stable.example.com
  Names:
    Kind:       CronTab
    List Kind:  CronTabList
    Plural:     crontabs
    Short Names:
      ct
    Singular:  crontab
  Scope:       Namespaced
  Versions:
    Name:  v1
    Schema:
      openAPIV3Schema:
        Properties:
          Spec:
            Properties:
              Cron Spec:
                Type:  string
              Image:
                Type:  string
              Replicas:
                Type:  integer
            Type:      object
        Type:          object
    Served:   true
    Storage:  true
Status:
  Accepted Names:
    Kind:       CronTab
    List Kind:  CronTabList
    Plural:     crontabs
    Short Names:
      ct
    Singular:  crontab
  Conditions:
    Last Transition Time:  2021-01-12T12:39:00Z
    Message:               no conflicts found
    Reason:                NoConflicts
    Status:                True
    Type:                  NamesAccepted
    Last Transition Time:  2021-01-12T12:39:00Z
    Message:               the initial names have been accepted
    Reason:                InitialNamesAccepted
    Status:                True
    Type:                  Established
  Stored Versions:
    v1
Events:  <none>
[root@master01 ~]#

Create a resource using the custom type CronTab

[root@master01 ~]# cat my-crontab.yaml
apiVersion: "stable.example.com/v1"
kind: CronTab
metadata:
  name: my-new-cron-object
spec:
  cronSpec: "* * * * */5"
  image: my-awesome-cron-image
[root@master01 ~]#

Tips: the manifest above creates a resource of kind CronTab in group/version stable.example.com/v1.

Apply the manifest

[root@master01 ~]# kubectl apply -f my-crontab.yaml
crontab.stable.example.com/my-new-cron-object created
[root@master01 ~]# kubectl get ct
NAME AGE
my-new-cron-object 5s
[root@master01 ~]# kubectl describe ct/my-new-cron-object
Name:         my-new-cron-object
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  stable.example.com/v1
Kind:         CronTab
Metadata:
  Creation Timestamp:  2021-01-12T12:45:29Z
  Generation:          1
  Managed Fields:
    API Version:  stable.example.com/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:cronSpec:
        f:image:
    Manager:         kubectl-client-side-apply
    Operation:       Update
    Time:            2021-01-12T12:45:29Z
  Resource Version:  806182
  UID:               31a88a3d-fa99-42b8-80f6-3e4559efdc40
Spec:
  Cron Spec:  * * * * */5
  Image:      my-awesome-cron-image
Events:       <none>
[root@master01 ~]#

Tips: the resource of the custom type was created successfully. The above is a minimal example of using crd; it has no real effect, since no controller acts on the resource.

Deploying a custom controller

Example: deploying the mongodb operator

1. Clone the project

[root@master01 ~]# git clone https://github.com/mongodb/mongodb-kubernetes-operator.git
Cloning into 'mongodb-kubernetes-operator'...
remote: Enumerating objects: 95, done.
remote: Counting objects: 100% (95/95), done.
remote: Compressing objects: 100% (74/74), done.
remote: Total 4506 (delta 30), reused 60 (delta 15), pack-reused 4411
Receiving objects: 100% (4506/4506), 18.04 MiB | 183.00 KiB/s, done.
Resolving deltas: 100% (2621/2621), done.
[root@master01 ~]#

2. Create the mongodb namespace, then change into the mongodb-kubernetes-operator directory and apply the crd manifest to create the custom resource type

[root@master01 mongodb-kubernetes-operator]# kubectl create ns mongodb
namespace/mongodb created
[root@master01 mongodb-kubernetes-operator]# kubectl get ns
NAME STATUS AGE
default Active 35d
ingress-nginx Active 22d
kube-node-lease Active 35d
kube-public Active 35d
kube-system Active 35d
kubernetes-dashboard Active 11d
mongodb Active 4s
[root@master01 mongodb-kubernetes-operator]# ls
agent build deploy docs go.sum pkg release.json scripts testdata version
APACHE2 cmd dev_notes go.mod LICENSE.md README.md requirements.txt test tools.go
[root@master01 mongodb-kubernetes-operator]# kubectl apply -f deploy/crds/mongodb.com_mongodb_crd.yaml -n mongodb
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/mongodb.mongodb.com created
[root@master01 mongodb-kubernetes-operator]#

Verify: was the mongodb resource type created successfully?

[root@master01 mongodb-kubernetes-operator]# kubectl get crd
NAME CREATED AT
bgpconfigurations.crd.projectcalico.org 2021-01-03T15:49:21Z
bgppeers.crd.projectcalico.org 2021-01-03T15:49:21Z
blockaffinities.crd.projectcalico.org 2021-01-03T15:49:21Z
clusterinformations.crd.projectcalico.org 2021-01-03T15:49:21Z
crontabs.stable.example.com 2021-01-12T12:39:00Z
felixconfigurations.crd.projectcalico.org 2021-01-03T15:49:21Z
globalnetworkpolicies.crd.projectcalico.org 2021-01-03T15:49:21Z
globalnetworksets.crd.projectcalico.org 2021-01-03T15:49:21Z
hostendpoints.crd.projectcalico.org 2021-01-03T15:49:21Z
ipamblocks.crd.projectcalico.org 2021-01-03T15:49:21Z
ipamconfigs.crd.projectcalico.org 2021-01-03T15:49:21Z
ipamhandles.crd.projectcalico.org 2021-01-03T15:49:21Z
ippools.crd.projectcalico.org 2021-01-03T15:49:21Z
kubecontrollersconfigurations.crd.projectcalico.org 2021-01-03T15:49:21Z
mongodb.mongodb.com 2021-01-13T06:38:22Z
networkpolicies.crd.projectcalico.org 2021-01-03T15:49:21Z
networksets.crd.projectcalico.org 2021-01-03T15:49:22Z
[root@master01 mongodb-kubernetes-operator]# kubectl get crd/mongodb.mongodb.com
NAME CREATED AT
mongodb.mongodb.com 2021-01-13T06:38:22Z
[root@master01 mongodb-kubernetes-operator]#

3. Install the operator

[root@master01 mongodb-kubernetes-operator]# kubectl apply -f deploy/operator/ -n mongodb
deployment.apps/mongodb-kubernetes-operator created
role.rbac.authorization.k8s.io/mongodb-kubernetes-operator created
rolebinding.rbac.authorization.k8s.io/mongodb-kubernetes-operator created
serviceaccount/mongodb-kubernetes-operator created
[root@master01 mongodb-kubernetes-operator]#

Tips: the mongodb-kubernetes-operator project implements the custom controller and the custom resource type separately. The operator itself is only responsible for watching resources of the corresponding type and, when they change, instantiating them as concrete resource objects and keeping their state consistent with the user's desired state. Of the four manifests applied above, one creates a ServiceAccount, and the Role and RoleBinding grant that account the permissions it needs.

Contents of operator.yaml

[root@master01 mongodb-kubernetes-operator]# cat deploy/operator/operator.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-kubernetes-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      name: mongodb-kubernetes-operator
  template:
    metadata:
      labels:
        name: mongodb-kubernetes-operator
    spec:
      serviceAccountName: mongodb-kubernetes-operator
      containers:
        - name: mongodb-kubernetes-operator
          image: quay.io/mongodb/mongodb-kubernetes-operator:0.3.0
          command:
            - mongodb-kubernetes-operator
          imagePullPolicy: Always
          env:
            - name: WATCH_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: OPERATOR_NAME
              value: "mongodb-kubernetes-operator"
            - name: AGENT_IMAGE # The MongoDB Agent the operator will deploy to manage MongoDB deployments
              value: quay.io/mongodb/mongodb-agent:10.19.0.6562-1
            - name: VERSION_UPGRADE_HOOK_IMAGE
              value: quay.io/mongodb/mongodb-kubernetes-operator-version-upgrade-post-start-hook:1.0.2
            - name: MONGODB_IMAGE
              value: "library/mongo"
            - name: MONGODB_REPO_URL
              value: "registry.hub.docker.com"
[root@master01 mongodb-kubernetes-operator]#

Tips: the manifest above is a Deployment that runs the custom controller as a pod.

Verify: is the operator running?

[root@master01 mongodb-kubernetes-operator]# kubectl get pods -n mongodb
NAME READY STATUS RESTARTS AGE
mongodb-kubernetes-operator-7d557bcc95-th8js 1/1 Running 0 26s
[root@master01 mongodb-kubernetes-operator]#

Tips: the operator pod is running normally, which means the operator was installed successfully.

Create a mongodb replica set cluster using the custom resource type

[root@master01 mongodb-kubernetes-operator]# cat deploy/crds/mongodb.com_v1_mongodb_cr.yaml
---
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: example-mongodb
spec:
  members: 3
  type: ReplicaSet
  version: "4.2.6"
  security:
    authentication:
      modes: ["SCRAM"]
  users:
    - name: my-user
      db: admin
      passwordSecretRef: # a reference to the secret that will be used to generate the user's password
        name: my-user-password
      roles:
        - name: clusterAdmin
          db: admin
        - name: userAdminAnyDatabase
          db: admin
      scramCredentialsSecretName: my-scram # the user credentials will be generated from this secret
      # once the credentials are generated, this secret is no longer required
---
apiVersion: v1
kind: Secret
metadata:
  name: my-user-password
type: Opaque
stringData:
  password: 58LObjiMpxcjP1sMDW
[root@master01 mongodb-kubernetes-operator]# kubectl apply -f deploy/crds/mongodb.com_v1_mongodb_cr.yaml
mongodb.mongodb.com/example-mongodb created
secret/my-user-password created
[root@master01 mongodb-kubernetes-operator]#

Apply the manifest

[root@master01 mongodb-kubernetes-operator]# kubectl apply -f deploy/crds/mongodb.com_v1_mongodb_cr.yaml -n mongodb
mongodb.mongodb.com/example-mongodb created
secret/my-user-password created
[root@master01 mongodb-kubernetes-operator]# kubectl get pods -n mongodb
NAME READY STATUS RESTARTS AGE
example-mongodb-0 0/2 Pending 0 9s
mongodb-kubernetes-operator-7d557bcc95-th8js 1/1 Running 0 88s
[root@master01 mongodb-kubernetes-operator]#

Tips: the corresponding pod is stuck in Pending state.

View the pod details

[root@master01 mongodb-kubernetes-operator]# kubectl describe pod/example-mongodb-0 -n mongodb|grep -A 10 "Events"
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 66s (x2 over 66s) default-scheduler 0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.
[root@master01 mongodb-kubernetes-operator]#

Tips: the event shows the pod cannot be scheduled because its PersistentVolumeClaim is unbound; there is no PV available for it to bind to.

Delete the pending pvc in the mongodb namespace

[root@master01 mongodb-kubernetes-operator]# kubectl get pvc -n mongodb
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-volume-example-mongodb-0 Pending 92s
[root@master01 mongodb-kubernetes-operator]# kubectl delete pvc --all -n mongodb
persistentvolumeclaim "data-volume-example-mongodb-0" deleted
[root@master01 mongodb-kubernetes-operator]# kubectl get pvc -n mongodb
No resources found in mongodb namespace.
[root@master01 mongodb-kubernetes-operator]#

Create PVs and PVCs

[root@master01 mongodb-kubernetes-operator]# cd
[root@master01 ~]# cat pv-demo.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-v1
  labels:
    app: example-mongodb-svc
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes: ["ReadWriteOnce","ReadWriteMany","ReadOnlyMany"]
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /data/v1
    server: 192.168.0.99
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-v2
  labels:
    app: example-mongodb-svc
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes: ["ReadWriteOnce","ReadWriteMany","ReadOnlyMany"]
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /data/v2
    server: 192.168.0.99
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-v3
  labels:
    app: example-mongodb-svc
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes: ["ReadWriteOnce","ReadWriteMany","ReadOnlyMany"]
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /data/v3
    server: 192.168.0.99
[root@master01 ~]#

Apply the manifest to create the PVs

[root@master01 ~]# kubectl apply -f pv-demo.yaml
persistentvolume/nfs-pv-v1 created
persistentvolume/nfs-pv-v2 created
persistentvolume/nfs-pv-v3 created
[root@master01 ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nfs-pv-v1 1Gi RWO,ROX,RWX Retain Available 3s
nfs-pv-v2 1Gi RWO,ROX,RWX Retain Available 3s
nfs-pv-v3 1Gi RWO,ROX,RWX Retain Available 3s
[root@master01 ~]#

Create the PVC manifest

[root@master01 ~]# cat pvc-demo.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-volume-example-mongodb-0
  namespace: mongodb
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 500Mi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-volume-example-mongodb-1
  namespace: mongodb
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 500Mi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-volume-example-mongodb-2
  namespace: mongodb
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 500Mi
[root@master01 ~]#
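Note where the claim names above come from: the operator manages the mongodb pods through a StatefulSet (as the pod names example-mongodb-0/1/2 and the earlier pending claim data-volume-example-mongodb-0 suggest), and a StatefulSet derives each pod's claim name as <template>-<statefulset>-<ordinal>. A small sketch of that naming convention (the helper function is illustrative):

```python
def statefulset_pvc_names(template, statefulset, replicas):
    """PVC names a StatefulSet expects: <template>-<statefulset>-<ordinal>."""
    return [f"{template}-{statefulset}-{i}" for i in range(replicas)]

print(statefulset_pvc_names("data-volume", "example-mongodb", 3))
```

Pre-creating PVCs with exactly these names, as the manifest above does, lets the StatefulSet's pods bind to our manually provisioned NFS PVs instead of waiting on a StorageClass.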

Apply the manifest to create the PVCs

[root@master01 ~]# kubectl get pvc -n mongodb
No resources found in mongodb namespace.
[root@master01 ~]# kubectl apply -f pvc-demo.yaml
persistentvolumeclaim/data-volume-example-mongodb-0 created
persistentvolumeclaim/data-volume-example-mongodb-1 created
persistentvolumeclaim/data-volume-example-mongodb-2 created
[root@master01 ~]# kubectl get pvc -n mongodb
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-volume-example-mongodb-0 Bound nfs-pv-v1 1Gi RWO,ROX,RWX 6s
data-volume-example-mongodb-1 Bound nfs-pv-v2 1Gi RWO,ROX,RWX 6s
data-volume-example-mongodb-2 Bound nfs-pv-v3 1Gi RWO,ROX,RWX 6s
[root@master01 ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nfs-pv-v1 1Gi RWO,ROX,RWX Retain Bound mongodb/data-volume-example-mongodb-0 102s
nfs-pv-v2 1Gi RWO,ROX,RWX Retain Bound mongodb/data-volume-example-mongodb-1 102s
nfs-pv-v3 1Gi RWO,ROX,RWX Retain Bound mongodb/data-volume-example-mongodb-2 102s
[root@master01 ~]#

Tips: the PVCs are now bound to the PVs.

Verify: is the mongodb replica set cluster running?

[root@master01 ~]# kubectl get pods -n mongodb
NAME READY STATUS RESTARTS AGE
example-mongodb-0 2/2 Running 0 6m19s
example-mongodb-1 0/2 PodInitializing 0 10s
mongodb-kubernetes-operator-7d557bcc95-th8js 1/1 Running 0 7m38s
[root@master01 ~]# kubectl get pods -n mongodb -w
NAME READY STATUS RESTARTS AGE
example-mongodb-0 2/2 Running 0 6m35s
example-mongodb-1 1/2 Running 0 26s
mongodb-kubernetes-operator-7d557bcc95-th8js 1/1 Running 0 7m54s
example-mongodb-1 2/2 Running 0 43s
example-mongodb-2 0/2 Pending 0 0s
example-mongodb-2 0/2 Pending 0 0s
example-mongodb-2 0/2 Init:0/1 0 0s
example-mongodb-2 0/2 Init:0/1 0 1s
example-mongodb-2 0/2 Terminating 0 4s
example-mongodb-2 0/2 Terminating 0 6s
example-mongodb-2 0/2 Terminating 0 20s
example-mongodb-2 0/2 Terminating 0 20s
example-mongodb-2 0/2 Pending 0 0s
example-mongodb-2 0/2 Pending 0 0s
example-mongodb-2 0/2 Init:0/1 0 0s
example-mongodb-2 0/2 Init:0/1 0 1s
example-mongodb-2 0/2 PodInitializing 0 7s
example-mongodb-2 1/2 Running 0 14s
example-mongodb-2 2/2 Running 0 36s
^C[root@master01 ~]# kubectl get pods -n mongodb
NAME READY STATUS RESTARTS AGE
example-mongodb-0 2/2 Running 0 8m
example-mongodb-1 2/2 Running 0 111s
example-mongodb-2 2/2 Running 0 48s
mongodb-kubernetes-operator-7d557bcc95-th8js 1/1 Running 0 9m19s
[root@master01 ~]#

Tips: all the pods are now running normally.

Verify: connect to a mongodb pod with the mongo client and check whether the replica set works properly

[root@master01 ~]# kubectl get pods -n mongodb -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
example-mongodb-0 2/2 Running 0 9m12s 10.244.4.101 node04.k8s.org <none> <none>
example-mongodb-1 2/2 Running 0 3m3s 10.244.2.130 node02.k8s.org <none> <none>
example-mongodb-2 2/2 Running 0 2m 10.244.1.130 node01.k8s.org <none> <none>
mongodb-kubernetes-operator-7d557bcc95-th8js 1/1 Running 0 10m 10.244.3.116 node03.k8s.org <none> <none>
[root@master01 ~]# mongo 10.244.4.101
MongoDB shell version v4.4.3
connecting to: mongodb://10.244.4.101:27017/test?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("b9d16fe9-6a74-4638-96e6-70aaf3c83bfa") }
MongoDB server version: 4.2.6
WARNING: shell and server versions do not match
example-mongodb:PRIMARY> show dbs
example-mongodb:PRIMARY> db.auth('my-user','58LObjiMpxcjP1sMDW')
Error: Authentication failed.
0
example-mongodb:PRIMARY> use admin
switched to db admin
example-mongodb:PRIMARY> db.auth('my-user','58LObjiMpxcjP1sMDW')
1
example-mongodb:PRIMARY> show dbs
admin 0.000GB
config 0.000GB
local 0.000GB
example-mongodb:PRIMARY> db.isMaster()
{
"hosts" : [
"example-mongodb-0.example-mongodb-svc.mongodb.svc.cluster.local:27017",
"example-mongodb-1.example-mongodb-svc.mongodb.svc.cluster.local:27017",
"example-mongodb-2.example-mongodb-svc.mongodb.svc.cluster.local:27017"
],
"setName" : "example-mongodb",
"setVersion" : 1,
"ismaster" : true,
"secondary" : false,
"primary" : "example-mongodb-0.example-mongodb-svc.mongodb.svc.cluster.local:27017",
"me" : "example-mongodb-0.example-mongodb-svc.mongodb.svc.cluster.local:27017",
"electionId" : ObjectId("7fffffff0000000000000003"),
"lastWrite" : {
"opTime" : {
"ts" : Timestamp(1610520741, 1),
"t" : NumberLong(3)
},
"lastWriteDate" : ISODate("2021-01-13T06:52:21Z"),
"majorityOpTime" : {
"ts" : Timestamp(1610520741, 1),
"t" : NumberLong(3)
},
"majorityWriteDate" : ISODate("2021-01-13T06:52:21Z")
},
"maxBsonObjectSize" : 16777216,
"maxMessageSizeBytes" : 48000000,
"maxWriteBatchSize" : 100000,
"localTime" : ISODate("2021-01-13T06:52:27.873Z"),
"logicalSessionTimeoutMinutes" : 30,
"connectionId" : 153,
"minWireVersion" : 0,
"maxWireVersion" : 8,
"readOnly" : false,
"ok" : 1,
"$clusterTime" : {
"clusterTime" : Timestamp(1610520741, 1),
"signature" : {
"hash" : BinData(0,"EcWzL7O9Ue9kmm6cQ4FumkcIP6g="),
"keyId" : NumberLong("6917119940596072451")
}
},
"operationTime" : Timestamp(1610520741, 1)
}
example-mongodb:PRIMARY>

Tips : You can see that the three mongodb pods form a replica set; example-mongodb-0 is the primary node, and the other two are secondaries.
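For a client application inside the cluster, the member host names returned by db.isMaster() above can be assembled into a standard replica-set connection string. The sketch below uses plain Python string handling (no driver required); the host list and set name are copied from the db.isMaster() output, and the URI format follows the standard mongodb:// seed-list convention:

```python
# Build a MongoDB replica-set connection URI from the member host
# names reported by db.isMaster() in the session above.
hosts = [
    "example-mongodb-0.example-mongodb-svc.mongodb.svc.cluster.local:27017",
    "example-mongodb-1.example-mongodb-svc.mongodb.svc.cluster.local:27017",
    "example-mongodb-2.example-mongodb-svc.mongodb.svc.cluster.local:27017",
]
set_name = "example-mongodb"  # the "setName" field from db.isMaster()

# With a seed-list URI the driver can contact any listed member,
# discover the rest of the set, and route writes to the current primary.
uri = "mongodb://{}/?replicaSet={}".format(",".join(hosts), set_name)
print(uri)
```

A driver given this URI will keep working across primary elections, since it re-discovers the topology from whichever members are reachable.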

A final note: in the experiment above, although the mongodb operator itself works, I could not write data when connecting to the primary node with the mongo client tool; it reported a permissions error, even though the corresponding user has read and write permission on the corresponding database. Creating a user in the admin database reports success, but querying the user information a few seconds later shows that the user no longer exists. I don't know why; if any reader knows, please let me know (blogger's email: linux-1874@qq.com) and I will be grateful.
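One detail in the session above may be related to the permission problem: db.auth('my-user', …) returned 0 (failure) in the default test database but returned 1 after `use admin`. MongoDB evaluates credentials against the database in which the user was defined, and the successful auth after `use admin` suggests the operator created this user in admin. When connecting with a driver or the mongo shell, that authentication database can be made explicit with the authSource URI option. A minimal sketch (plain Python, URI construction only; the credentials and pod IP are the ones from the session above):

```python
from urllib.parse import quote_plus

user = "my-user"
password = "58LObjiMpxcjP1sMDW"  # credentials from the session above
host = "10.244.4.101:27017"      # pod IP of example-mongodb-0 above

# authSource=admin tells the server to look the user up in the admin
# database, matching the successful `use admin; db.auth(...)` above.
# quote_plus escapes any characters not allowed in a URI.
uri = "mongodb://{}:{}@{}/?authSource=admin".format(
    quote_plus(user), quote_plus(password), host
)
print(uri)
```

This only addresses where authentication happens, not what roles the user holds, so it may or may not explain the write failure; checking the user's roles with db.getUser('my-user') in admin would be the next step.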
