CRD Resources in the Container Orchestration System k8s

Linux-1874 2021-01-14 19:48:17


In the previous post we covered node taints and pod tolerations on k8s; see https://www.cnblogs.com/qiuhom-1874/p/14255486.html. Today let's talk about extending k8s with custom resources.

The process of creating resource objects in k8s

We know that k8s has many resource types, each defined with its own set of fields. Creating a resource really means instantiating one of these abstract types: the fields of the type are assigned values through a resource manifest, and the created object is the result of that instantiation. When a user creates a resource, the request first goes to the apiserver; after it passes the apiserver's authentication, authorization, and admission control, the definition of the resource is stored in etcd. Controllers monitor resource changes on the apiserver through the watch mechanism; a change event for a given resource type triggers the corresponding controller to create the resource. The controller's internal reconciliation loop then keeps checking whether the actual state of the resource matches the desired state defined by the user; if they differ, the reconciliation loop fires and the controller sends a request to the apiserver to rebuild the resource, so that its state always matches the user's expectation. From this process we can see that creating a resource takes two steps: first, the request is sent to the apiserver, which stores the resource definition in etcd; second, the controller for that resource type reads the definition from etcd via the apiserver and creates it. As for etcd, it is just a kv database that could store any kind of kv data; but in k8s the apiserver abstracts the different kinds of resource definitions into distinct resource types, so that a user creating a resource must satisfy the schema of that resource type before the definition is stored in etcd. In short, the apiserver puts a layer of abstraction between users and the data in etcd: users cannot store arbitrary data in etcd, and whatever lands there must satisfy the schema defined by the corresponding apiserver interface, much like writes to a mysql database must follow the table definitions of the corresponding schema.

Creating a custom resource type in k8s

k8s ships with many resource types, such as pod, service, PersistentVolume, PersistentVolumeClaim, and so on; these are the basic building blocks. To create a resource of one of these kinds, we simply instantiate an object of the corresponding type. But suppose we want to create a cluster on k8s: can we directly use some resource type and instantiate a cluster object from it? In theory yes, provided k8s has a resource type for it. With such a type in place, the user can store the definition of the resource in etcd; beyond the type itself, we also need a corresponding controller that actually creates the resource. Different clusters and applications are organized with different logic, so the resource types and controllers they need differ as well. If users want to instantiate such higher-level resources, they have to define the resource type themselves, instantiate that type into objects, and, where necessary, implement a controller for it. In short, to support higher-level resource types, users have to extend the existing k8s resource types and controllers.

There are three ways to extend resource types in k8s. The first is crd: crd is a built-in k8s resource type used to create resources of user-defined resource types; in other words, through a crd resource, a user-defined type becomes a recognized resource type in k8s. The second is a custom apiserver; this is more involved than the first, since users must develop a program that implements the required apiserver functionality themselves, and custom-type resources are then created through that custom apiserver. The third way is to modify the source of the existing k8s apiserver so that it supports the desired user-defined resource type.

Custom controller

A custom resource type can be implemented with a crd resource, with a custom apiserver, or by modifying the original apiserver code. But the resource type alone cannot turn a user-defined resource into a running object: with only the type, creating an object of that type merely writes its definition into etcd; nothing actually runs. To make it really run, we also need a custom controller that watches for changes to resources of that type and instantiates them as concrete resource objects on k8s. Of course, not every custom type needs a custom controller: if the custom type delegates to the underlying built-in controllers to manage its resources, then no custom controller is required. We know the controller is an important k8s component; its working logic is to register a watch on the apiserver for changes to its resource type, and whenever the actual state does not match the user's desired state, its internal reconciliation loop requests the resource definition from the apiserver and rebuilds the resource accordingly, keeping the state consistent with the user's expectation at all times. The same logic applies to custom controllers: the point of a custom controller is to have the corresponding custom-type resources watched, so that whenever they change, the controller can create them on k8s and always keep them in line with what the user expects. A custom controller and a custom resource type can be implemented separately or together; that is, the custom controller program can automatically create the crd resource, so that its custom-type resources are recognized and created by k8s. Whether they are implemented separately or in combination is up to the developer of the custom controller.

crd resource definition help

[root@master01 ~]# kubectl explain crd
KIND:     CustomResourceDefinition
VERSION:  apiextensions.k8s.io/v1

DESCRIPTION:
     CustomResourceDefinition represents a resource that should be exposed on
     the API server. Its name MUST be in the format <.spec.name>.<.spec.group>.

FIELDS:
   apiVersion   <string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

   kind         <string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

   metadata     <Object>

   spec         <Object> -required-
     spec describes how the user wants the resources to appear

   status       <Object>
     status indicates the actual state of the CustomResourceDefinition

[root@master01 ~]#

Tips: crd is one of the standard resource types in k8s. Its definition consists mainly of apiVersion, kind, metadata, spec, and status; the kind is CustomResourceDefinition and the apiVersion is apiextensions.k8s.io/v1, and both of these are fixed; the spec field defines the properties of the resources belonging to the specified custom resource type.

crd.spec field description

[root@master01 ~]# kubectl explain crd.spec
KIND:     CustomResourceDefinition
VERSION:  apiextensions.k8s.io/v1

RESOURCE: spec <Object>

DESCRIPTION:
     spec describes how the user wants the resources to appear

     CustomResourceDefinitionSpec describes how a user wants their resource to
     appear

FIELDS:
   conversion   <Object>
     conversion defines conversion settings for the CRD.

   group        <string> -required-
     group is the API group of the defined custom resource. The custom
     resources are served under `/apis/<group>/...`. Must match the name of the
     CustomResourceDefinition (in the form `<names.plural>.<group>`).

   names        <Object> -required-
     names specify the resource and kind names for the custom resource.

   preserveUnknownFields        <boolean>
     preserveUnknownFields indicates that object fields which are not specified
     in the OpenAPI schema should be preserved when persisting to storage.
     apiVersion, kind, metadata and known fields inside metadata are always
     preserved. This field is deprecated in favor of setting
     `x-preserve-unknown-fields` to true in
     `spec.versions[*].schema.openAPIV3Schema`. See
     https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/#pruning-versus-preserving-unknown-fields
     for details.

   scope        <string> -required-
     scope indicates whether the defined custom resource is cluster- or
     namespace-scoped. Allowed values are `Cluster` and `Namespaced`.

   versions     <[]Object> -required-
     versions is the list of all API versions of the defined custom resource.
     Version names are used to compute the order in which served versions are
     listed in API discovery. If the version string is "kube-like", it will sort
     above non "kube-like" version strings, which are ordered lexicographically.
     "Kube-like" versions start with a "v", then are followed by a number (the
     major version), then optionally the string "alpha" or "beta" and another
     number (the minor version). These are sorted first by GA > beta > alpha
     (where GA is a version with no suffix such as beta or alpha), and then by
     comparing major version, then minor version. An example sorted list of
     versions: v10, v2, v1, v11beta2, v10beta3, v3beta1, v12alpha1, v11alpha2,
     foo1, foo10.

[root@master01 ~]#

Tips: in crd.spec, the group field describes the API group name of the custom resource type, and its value is a string; the names field describes the kind, singular and plural names, and so on of the custom type, and its value is an object; the scope field defines the scope of the custom resource and may only be Cluster or Namespaced; the versions field specifies the API versions of the custom resource together with the schema of its fields, and it is a list of objects.

Example: define a custom resource type

[root@master01 ~]# cat crontab-crd.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # the name must match the spec fields below, in the form '<plural>.<group>'
  name: crontabs.stable.example.com
spec:
  # group name, used in the REST API: /apis/<group>/<version>
  group: stable.example.com
  # list of versions supported by this CustomResourceDefinition
  versions:
    - name: v1
      # each version can be independently enabled or disabled via the served flag
      served: true
      # one and only one version must be marked as the storage version
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                cronSpec:
                  type: string
                image:
                  type: string
                replicas:
                  type: integer
  # can be either Namespaced or Cluster
  scope: Namespaced
  names:
    # plural name, used in the URL: /apis/<group>/<version>/<plural>
    plural: crontabs
    # singular name, used as an alias on the command line and for display
    singular: crontab
    # kind is normally the CamelCased singular form; resource manifests use this form
    kind: CronTab
    # shortNames allow shorter strings to match the resource on the command line
    shortNames:
      - ct
[root@master01 ~]#

Run kubectl get crontab before applying the manifest

[root@master01 ~]# kubectl get crontab
error: the server doesn't have a resource type "crontab"
[root@master01 ~]#

Tips: running kubectl get crontab before applying the manifest reports that the server has no resource type named crontab.

Apply the manifest

[root@master01 ~]# kubectl apply -f crontab-crd.yaml
customresourcedefinition.apiextensions.k8s.io/crontabs.stable.example.com created
[root@master01 ~]# kubectl get crontab
No resources found in default namespace.
[root@master01 ~]#

Tips: after the manifest is applied, kubectl get crontab no longer reports an error; it just says that the default namespace contains no resources of that type.
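Incidentally, the openAPIV3Schema embedded in the CRD is enforced by the apiserver at admission time: a CronTab manifest whose fields do not match the declared types should be rejected instead of being stored in etcd. A hedged illustration (the object name here is made up for this example) that should fail validation, because replicas is declared as an integer:

```yaml
apiVersion: "stable.example.com/v1"
kind: CronTab
metadata:
  name: bad-cron-object          # hypothetical name, for illustration only
spec:
  cronSpec: "* * * * */5"
  replicas: "three"              # should be rejected: the schema declares replicas as type integer
```

Applying a manifest like this should yield a validation error from the apiserver rather than a created object.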

List crd resources

[root@master01 ~]# kubectl get crd
NAME CREATED AT
bgpconfigurations.crd.projectcalico.org 2021-01-03T15:49:21Z
bgppeers.crd.projectcalico.org 2021-01-03T15:49:21Z
blockaffinities.crd.projectcalico.org 2021-01-03T15:49:21Z
clusterinformations.crd.projectcalico.org 2021-01-03T15:49:21Z
crontabs.stable.example.com 2021-01-12T12:39:00Z
felixconfigurations.crd.projectcalico.org 2021-01-03T15:49:21Z
globalnetworkpolicies.crd.projectcalico.org 2021-01-03T15:49:21Z
globalnetworksets.crd.projectcalico.org 2021-01-03T15:49:21Z
hostendpoints.crd.projectcalico.org 2021-01-03T15:49:21Z
ipamblocks.crd.projectcalico.org 2021-01-03T15:49:21Z
ipamconfigs.crd.projectcalico.org 2021-01-03T15:49:21Z
ipamhandles.crd.projectcalico.org 2021-01-03T15:49:21Z
ippools.crd.projectcalico.org 2021-01-03T15:49:21Z
kubecontrollersconfigurations.crd.projectcalico.org 2021-01-03T15:49:21Z
networkpolicies.crd.projectcalico.org 2021-01-03T15:49:21Z
networksets.crd.projectcalico.org 2021-01-03T15:49:22Z
[root@master01 ~]# kubectl get crd/crontabs.stable.example.com
NAME CREATED AT
crontabs.stable.example.com 2021-01-12T12:39:00Z
[root@master01 ~]#

Check the details

[root@master01 ~]# kubectl get crd/crontabs.stable.example.com
NAME CREATED AT
crontabs.stable.example.com 2021-01-12T12:39:00Z
[root@master01 ~]# kubectl describe crd/crontabs.stable.example.com
Name:         crontabs.stable.example.com
Namespace:
Labels:       <none>
Annotations:  <none>
API Version:  apiextensions.k8s.io/v1
Kind:         CustomResourceDefinition
Metadata:
  Creation Timestamp:  2021-01-12T12:39:00Z
  Generation:          1
  Managed Fields:
    API Version:  apiextensions.k8s.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        f:acceptedNames:
          f:kind:
          f:listKind:
          f:plural:
          f:shortNames:
          f:singular:
        f:conditions:
    Manager:      kube-apiserver
    Operation:    Update
    Time:         2021-01-12T12:39:00Z
    API Version:  apiextensions.k8s.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        f:conversion:
          .:
          f:strategy:
        f:group:
        f:names:
          f:kind:
          f:listKind:
          f:plural:
          f:shortNames:
          f:singular:
        f:scope:
        f:versions:
      f:status:
        f:storedVersions:
    Manager:         kubectl-client-side-apply
    Operation:       Update
    Time:            2021-01-12T12:39:00Z
  Resource Version:  805506
  UID:               b92a90f4-c953-4876-a496-030c9ba023fd
Spec:
  Conversion:
    Strategy:  None
  Group:       stable.example.com
  Names:
    Kind:       CronTab
    List Kind:  CronTabList
    Plural:     crontabs
    Short Names:
      ct
    Singular:  crontab
  Scope:       Namespaced
  Versions:
    Name:  v1
    Schema:
      openAPIV3Schema:
        Properties:
          Spec:
            Properties:
              Cron Spec:
                Type:  string
              Image:
                Type:  string
              Replicas:
                Type:  integer
            Type:      object
        Type:          object
    Served:   true
    Storage:  true
Status:
  Accepted Names:
    Kind:       CronTab
    List Kind:  CronTabList
    Plural:     crontabs
    Short Names:
      ct
    Singular:  crontab
  Conditions:
    Last Transition Time:  2021-01-12T12:39:00Z
    Message:               no conflicts found
    Reason:                NoConflicts
    Status:                True
    Type:                  NamesAccepted
    Last Transition Time:  2021-01-12T12:39:00Z
    Message:               the initial names have been accepted
    Reason:                InitialNamesAccepted
    Status:                True
    Type:                  Established
  Stored Versions:
    v1
Events:  <none>
[root@master01 ~]#

Create a resource using the custom resource type CronTab

[root@master01 ~]# cat my-crontab.yaml
apiVersion: "stable.example.com/v1"
kind: CronTab
metadata:
  name: my-new-cron-object
spec:
  cronSpec: "* * * * */5"
  image: my-awesome-cron-image
[root@master01 ~]#

Tips: the manifest above creates a resource of the custom type CronTab; the group/version of this resource is stable.example.com/v1.

Apply the manifest

[root@master01 ~]# kubectl apply -f my-crontab.yaml
crontab.stable.example.com/my-new-cron-object created
[root@master01 ~]# kubectl get ct
NAME AGE
my-new-cron-object 5s
[root@master01 ~]# kubectl describe ct/my-new-cron-object
Name:         my-new-cron-object
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  stable.example.com/v1
Kind:         CronTab
Metadata:
  Creation Timestamp:  2021-01-12T12:45:29Z
  Generation:          1
  Managed Fields:
    API Version:  stable.example.com/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:cronSpec:
        f:image:
    Manager:         kubectl-client-side-apply
    Operation:       Update
    Time:            2021-01-12T12:45:29Z
  Resource Version:  806182
  UID:               31a88a3d-fa99-42b8-80f6-3e4559efdc40
Spec:
  Cron Spec:  * * * * */5
  Image:      my-awesome-cron-image
Events:       <none>
[root@master01 ~]#

Tips: you can see that the resource of the custom type was created successfully. The example above is just a minimal crd demo; it does not actually do anything useful.
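One refinement worth mentioning (not used in the example above, so treat this as a hedged sketch): a v1 CRD version entry may also carry additionalPrinterColumns, so that kubectl get crontab shows fields of the object instead of only NAME and AGE. The version entry of the earlier manifest could be extended roughly like this:

```yaml
versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec:
                type: string
              image:
                type: string
              replicas:
                type: integer
    # extra columns shown by `kubectl get crontab` / `kubectl get ct`
    additionalPrinterColumns:
      - name: Spec
        type: string
        jsonPath: .spec.cronSpec
      - name: Replicas
        type: integer
        jsonPath: .spec.replicas
```

With such columns in place, kubectl get ct would list each object's cronSpec and replicas alongside its name.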

Deploy custom controllers

Example: deploy the mongodb-operator

1. Clone the project

[root@master01 ~]# git clone https://github.com/mongodb/mongodb-kubernetes-operator.git
Cloning into 'mongodb-kubernetes-operator'...
remote: Enumerating objects: 95, done.
remote: Counting objects: 100% (95/95), done.
remote: Compressing objects: 100% (74/74), done.
remote: Total 4506 (delta 30), reused 60 (delta 15), pack-reused 4411
Receiving objects: 100% (4506/4506), 18.04 MiB | 183.00 KiB/s, done.
Resolving deltas: 100% (2621/2621), done.
[root@master01 ~]#

2. Create the namespace mongodb, then change into the mongodb-kubernetes-operator directory and apply the crd manifest to create the custom resource type

[root@master01 mongodb-kubernetes-operator]# kubectl create ns mongodb
namespace/mongodb created
[root@master01 mongodb-kubernetes-operator]# kubectl get ns
NAME STATUS AGE
default Active 35d
ingress-nginx Active 22d
kube-node-lease Active 35d
kube-public Active 35d
kube-system Active 35d
kubernetes-dashboard Active 11d
mongodb Active 4s
[root@master01 mongodb-kubernetes-operator]# ls
agent build deploy docs go.sum pkg release.json scripts testdata version
APACHE2 cmd dev_notes go.mod LICENSE.md README.md requirements.txt test tools.go
[root@master01 mongodb-kubernetes-operator]# kubectl apply -f deploy/crds/mongodb.com_mongodb_crd.yaml -n mongodb
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/mongodb.mongodb.com created
[root@master01 mongodb-kubernetes-operator]#

Verification: has the mongodb resource type been created successfully?

[root@master01 mongodb-kubernetes-operator]# kubectl get crd
NAME CREATED AT
bgpconfigurations.crd.projectcalico.org 2021-01-03T15:49:21Z
bgppeers.crd.projectcalico.org 2021-01-03T15:49:21Z
blockaffinities.crd.projectcalico.org 2021-01-03T15:49:21Z
clusterinformations.crd.projectcalico.org 2021-01-03T15:49:21Z
crontabs.stable.example.com 2021-01-12T12:39:00Z
felixconfigurations.crd.projectcalico.org 2021-01-03T15:49:21Z
globalnetworkpolicies.crd.projectcalico.org 2021-01-03T15:49:21Z
globalnetworksets.crd.projectcalico.org 2021-01-03T15:49:21Z
hostendpoints.crd.projectcalico.org 2021-01-03T15:49:21Z
ipamblocks.crd.projectcalico.org 2021-01-03T15:49:21Z
ipamconfigs.crd.projectcalico.org 2021-01-03T15:49:21Z
ipamhandles.crd.projectcalico.org 2021-01-03T15:49:21Z
ippools.crd.projectcalico.org 2021-01-03T15:49:21Z
kubecontrollersconfigurations.crd.projectcalico.org 2021-01-03T15:49:21Z
mongodb.mongodb.com 2021-01-13T06:38:22Z
networkpolicies.crd.projectcalico.org 2021-01-03T15:49:21Z
networksets.crd.projectcalico.org 2021-01-03T15:49:22Z
[root@master01 mongodb-kubernetes-operator]# kubectl get crd/mongodb.mongodb.com
NAME CREATED AT
mongodb.mongodb.com 2021-01-13T06:38:22Z
[root@master01 mongodb-kubernetes-operator]# 

3. Install the operator

[root@master01 mongodb-kubernetes-operator]# kubectl apply -f deploy/operator/ -n mongodb
deployment.apps/mongodb-kubernetes-operator created
role.rbac.authorization.k8s.io/mongodb-kubernetes-operator created
rolebinding.rbac.authorization.k8s.io/mongodb-kubernetes-operator created
serviceaccount/mongodb-kubernetes-operator created
[root@master01 mongodb-kubernetes-operator]#

Tips: the mongodb-kubernetes-operator project implements its custom controller and its custom resource type separately. The operator is only responsible for watching resources of the corresponding type: when such a resource changes, it instantiates it into the corresponding resource objects and keeps their actual state consistent with the user's desired state. Of the four manifests applied above, one creates a sa (ServiceAccount) account, and the role and rolebinding manifests grant that sa the permissions it needs.
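To give a feel for what that authorization looks like, here is a hedged sketch of the kind of Role such an operator needs; the name, resource lists, and verbs below are assumptions for illustration, not the project's actual role.yaml:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: mongodb-operator-example   # hypothetical name, for illustration only
rules:
  # watch and update the custom MongoDB resources themselves
  - apiGroups: ["mongodb.com"]
    resources: ["mongodb", "mongodb/status"]
    verbs: ["get", "list", "watch", "update", "patch"]
  # manage the workloads the operator creates on behalf of each MongoDB object
  - apiGroups: ["apps"]
    resources: ["statefulsets"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
  # and the supporting core resources (service for the replica set, secrets for credentials, etc.)
  - apiGroups: [""]
    resources: ["pods", "services", "configmaps", "secrets"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
```

A RoleBinding then ties a Role like this to the operator's ServiceAccount, which is what the deployment's serviceAccountName field refers to.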

Contents of operator.yaml

[root@master01 mongodb-kubernetes-operator]# cat deploy/operator/operator.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-kubernetes-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      name: mongodb-kubernetes-operator
  template:
    metadata:
      labels:
        name: mongodb-kubernetes-operator
    spec:
      serviceAccountName: mongodb-kubernetes-operator
      containers:
        - name: mongodb-kubernetes-operator
          image: quay.io/mongodb/mongodb-kubernetes-operator:0.3.0
          command:
            - mongodb-kubernetes-operator
          imagePullPolicy: Always
          env:
            - name: WATCH_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: OPERATOR_NAME
              value: "mongodb-kubernetes-operator"
            - name: AGENT_IMAGE # The MongoDB Agent the operator will deploy to manage MongoDB deployments
              value: quay.io/mongodb/mongodb-agent:10.19.0.6562-1
            - name: VERSION_UPGRADE_HOOK_IMAGE
              value: quay.io/mongodb/mongodb-kubernetes-operator-version-upgrade-post-start-hook:1.0.2
            - name: MONGODB_IMAGE
              value: "library/mongo"
            - name: MONGODB_REPO_URL
              value: "registry.hub.docker.com"
[root@master01 mongodb-kubernetes-operator]#

Tips: the manifest above is mainly a Deployment that runs the custom controller as a pod.

Verification: is the operator running?

[root@master01 mongodb-kubernetes-operator]# kubectl get pods -n mongodb
NAME READY STATUS RESTARTS AGE
mongodb-kubernetes-operator-7d557bcc95-th8js 1/1 Running 0 26s
[root@master01 mongodb-kubernetes-operator]#

Tips: the operator pod is running normally, which means the operator was installed successfully.

Verification: use the custom resource type to create a mongodb replica-set cluster

[root@master01 mongodb-kubernetes-operator]# cat deploy/crds/mongodb.com_v1_mongodb_cr.yaml
---
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: example-mongodb
spec:
  members: 3
  type: ReplicaSet
  version: "4.2.6"
  security:
    authentication:
      modes: ["SCRAM"]
  users:
    - name: my-user
      db: admin
      passwordSecretRef: # a reference to the secret that will be used to generate the user's password
        name: my-user-password
      roles:
        - name: clusterAdmin
          db: admin
        - name: userAdminAnyDatabase
          db: admin
      scramCredentialsSecretName: my-scram
  # the user credentials will be generated from this secret
  # once the credentials are generated, this secret is no longer required
---
apiVersion: v1
kind: Secret
metadata:
  name: my-user-password
type: Opaque
stringData:
  password: 58LObjiMpxcjP1sMDW
[root@master01 mongodb-kubernetes-operator]# kubectl apply -f deploy/crds/mongodb.com_v1_mongodb_cr.yaml
mongodb.mongodb.com/example-mongodb created
secret/my-user-password created
[root@master01 mongodb-kubernetes-operator]#

Apply the manifest in the mongodb namespace

[root@master01 mongodb-kubernetes-operator]# kubectl apply -f deploy/crds/mongodb.com_v1_mongodb_cr.yaml -n mongodb
mongodb.mongodb.com/example-mongodb created
secret/my-user-password created
[root@master01 mongodb-kubernetes-operator]# kubectl get pods -n mongodb
NAME READY STATUS RESTARTS AGE
example-mongodb-0 0/2 Pending 0 9s
mongodb-kubernetes-operator-7d557bcc95-th8js 1/1 Running 0 88s
[root@master01 mongodb-kubernetes-operator]#

Tips: here you can see that the corresponding pod is in the Pending state.

View the pod details

[root@master01 mongodb-kubernetes-operator]# kubectl describe pod/example-mongodb-0 -n mongodb|grep -A 10 "Events"
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 66s (x2 over 66s) default-scheduler 0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.
[root@master01 mongodb-kubernetes-operator]#

Tips: the event says the pod cannot be scheduled because it has an unbound PersistentVolumeClaim; in other words, there is no usable pv for its pvc.

Delete the pending pvc in the mongodb namespace

[root@master01 mongodb-kubernetes-operator]# kubectl get pvc -n mongodb
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-volume-example-mongodb-0 Pending 92s
[root@master01 mongodb-kubernetes-operator]# kubectl delete pvc --all -n mongodb
persistentvolumeclaim "data-volume-example-mongodb-0" deleted
[root@master01 mongodb-kubernetes-operator]# kubectl get pvc -n mongodb
No resources found in mongodb namespace.
[root@master01 mongodb-kubernetes-operator]#

Create pv and pvc

[root@master01 mongodb-kubernetes-operator]# cd
[root@master01 ~]# cat pv-demo.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-v1
  labels:
    app: example-mongodb-svc
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes: ["ReadWriteOnce","ReadWriteMany","ReadOnlyMany"]
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /data/v1
    server: 192.168.0.99
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-v2
  labels:
    app: example-mongodb-svc
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes: ["ReadWriteOnce","ReadWriteMany","ReadOnlyMany"]
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /data/v2
    server: 192.168.0.99
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-v3
  labels:
    app: example-mongodb-svc
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes: ["ReadWriteOnce","ReadWriteMany","ReadOnlyMany"]
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /data/v3
    server: 192.168.0.99
[root@master01 ~]#

Apply the manifest to create the pv

[root@master01 ~]# kubectl apply -f pv-demo.yaml
persistentvolume/nfs-pv-v1 created
persistentvolume/nfs-pv-v2 created
persistentvolume/nfs-pv-v3 created
[root@master01 ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nfs-pv-v1 1Gi RWO,ROX,RWX Retain Available 3s
nfs-pv-v2 1Gi RWO,ROX,RWX Retain Available 3s
nfs-pv-v3 1Gi RWO,ROX,RWX Retain Available 3s
[root@master01 ~]#

Create the pvc manifest

[root@master01 ~]# cat pvc-demo.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-volume-example-mongodb-0
  namespace: mongodb
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 500Mi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-volume-example-mongodb-1
  namespace: mongodb
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 500Mi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-volume-example-mongodb-2
  namespace: mongodb
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 500Mi
[root@master01 ~]#

Apply the manifest to create the pvc

[root@master01 ~]# kubectl get pvc -n mongodb
No resources found in mongodb namespace.
[root@master01 ~]# kubectl apply -f pvc-demo.yaml
persistentvolumeclaim/data-volume-example-mongodb-0 created
persistentvolumeclaim/data-volume-example-mongodb-1 created
persistentvolumeclaim/data-volume-example-mongodb-2 created
[root@master01 ~]# kubectl get pvc -n mongodb
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-volume-example-mongodb-0 Bound nfs-pv-v1 1Gi RWO,ROX,RWX 6s
data-volume-example-mongodb-1 Bound nfs-pv-v2 1Gi RWO,ROX,RWX 6s
data-volume-example-mongodb-2 Bound nfs-pv-v3 1Gi RWO,ROX,RWX 6s
[root@master01 ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nfs-pv-v1 1Gi RWO,ROX,RWX Retain Bound mongodb/data-volume-example-mongodb-0 102s
nfs-pv-v2 1Gi RWO,ROX,RWX Retain Bound mongodb/data-volume-example-mongodb-1 102s
nfs-pv-v3 1Gi RWO,ROX,RWX Retain Bound mongodb/data-volume-example-mongodb-2 102s
[root@master01 ~]#

Tips: you can see that the corresponding pvc and pv are now bound.

Verification: is the mongodb replica-set cluster running?

[root@master01 ~]# kubectl get pods -n mongodb
NAME READY STATUS RESTARTS AGE
example-mongodb-0 2/2 Running 0 6m19s
example-mongodb-1 0/2 PodInitializing 0 10s
mongodb-kubernetes-operator-7d557bcc95-th8js 1/1 Running 0 7m38s
[root@master01 ~]# kubectl get pods -n mongodb -w
NAME READY STATUS RESTARTS AGE
example-mongodb-0 2/2 Running 0 6m35s
example-mongodb-1 1/2 Running 0 26s
mongodb-kubernetes-operator-7d557bcc95-th8js 1/1 Running 0 7m54s
example-mongodb-1 2/2 Running 0 43s
example-mongodb-2 0/2 Pending 0 0s
example-mongodb-2 0/2 Pending 0 0s
example-mongodb-2 0/2 Init:0/1 0 0s
example-mongodb-2 0/2 Init:0/1 0 1s
example-mongodb-2 0/2 Terminating 0 4s
example-mongodb-2 0/2 Terminating 0 6s
example-mongodb-2 0/2 Terminating 0 20s
example-mongodb-2 0/2 Terminating 0 20s
example-mongodb-2 0/2 Pending 0 0s
example-mongodb-2 0/2 Pending 0 0s
example-mongodb-2 0/2 Init:0/1 0 0s
example-mongodb-2 0/2 Init:0/1 0 1s
example-mongodb-2 0/2 PodInitializing 0 7s
example-mongodb-2 1/2 Running 0 14s
example-mongodb-2 2/2 Running 0 36s
^C[root@master01 ~]# kubectl get pods -n mongodb
NAME READY STATUS RESTARTS AGE
example-mongodb-0 2/2 Running 0 8m
example-mongodb-1 2/2 Running 0 111s
example-mongodb-2 2/2 Running 0 48s
mongodb-kubernetes-operator-7d557bcc95-th8js 1/1 Running 0 9m19s
[root@master01 ~]#

Tips: you can see that the corresponding pods are all running normally.

Verification: connect to a mongodb pod with mongo and check whether the replica-set cluster works properly

[root@master01 ~]# kubectl get pods -n mongodb -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
example-mongodb-0 2/2 Running 0 9m12s 10.244.4.101 node04.k8s.org <none> <none>
example-mongodb-1 2/2 Running 0 3m3s 10.244.2.130 node02.k8s.org <none> <none>
example-mongodb-2 2/2 Running 0 2m 10.244.1.130 node01.k8s.org <none> <none>
mongodb-kubernetes-operator-7d557bcc95-th8js 1/1 Running 0 10m 10.244.3.116 node03.k8s.org <none> <none>
[root@master01 ~]# mongo 10.244.4.101
MongoDB shell version v4.4.3
connecting to: mongodb://10.244.4.101:27017/test?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("b9d16fe9-6a74-4638-96e6-70aaf3c83bfa") }
MongoDB server version: 4.2.6
WARNING: shell and server versions do not match
example-mongodb:PRIMARY> show dbs
example-mongodb:PRIMARY> db.auth('my-user','58LObjiMpxcjP1sMDW')
Error: Authentication failed.
0
example-mongodb:PRIMARY> use admin
switched to db admin
example-mongodb:PRIMARY> db.auth('my-user','58LObjiMpxcjP1sMDW')
1
example-mongodb:PRIMARY> show dbs
admin 0.000GB
config 0.000GB
local 0.000GB
example-mongodb:PRIMARY> db.isMaster()
{
    "hosts" : [
        "example-mongodb-0.example-mongodb-svc.mongodb.svc.cluster.local:27017",
        "example-mongodb-1.example-mongodb-svc.mongodb.svc.cluster.local:27017",
        "example-mongodb-2.example-mongodb-svc.mongodb.svc.cluster.local:27017"
    ],
    "setName" : "example-mongodb",
    "setVersion" : 1,
    "ismaster" : true,
    "secondary" : false,
    "primary" : "example-mongodb-0.example-mongodb-svc.mongodb.svc.cluster.local:27017",
    "me" : "example-mongodb-0.example-mongodb-svc.mongodb.svc.cluster.local:27017",
    "electionId" : ObjectId("7fffffff0000000000000003"),
    "lastWrite" : {
        "opTime" : {
            "ts" : Timestamp(1610520741, 1),
            "t" : NumberLong(3)
        },
        "lastWriteDate" : ISODate("2021-01-13T06:52:21Z"),
        "majorityOpTime" : {
            "ts" : Timestamp(1610520741, 1),
            "t" : NumberLong(3)
        },
        "majorityWriteDate" : ISODate("2021-01-13T06:52:21Z")
    },
    "maxBsonObjectSize" : 16777216,
    "maxMessageSizeBytes" : 48000000,
    "maxWriteBatchSize" : 100000,
    "localTime" : ISODate("2021-01-13T06:52:27.873Z"),
    "logicalSessionTimeoutMinutes" : 30,
    "connectionId" : 153,
    "minWireVersion" : 0,
    "maxWireVersion" : 8,
    "readOnly" : false,
    "ok" : 1,
    "$clusterTime" : {
        "clusterTime" : Timestamp(1610520741, 1),
        "signature" : {
            "hash" : BinData(0,"EcWzL7O9Ue9kmm6cQ4FumkcIP6g="),
            "keyId" : NumberLong("6917119940596072451")
        }
    },
    "operationTime" : Timestamp(1610520741, 1)
}
example-mongodb:PRIMARY>

Tip: you can see that the three mongodb pods form a replica set; example-mongodb-0 is the primary and the other two are secondaries. Also note that db.auth() failed when run against the default test database but succeeded after switching to admin: the user was created in the admin database, so that is where it must authenticate.
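Rather than connecting to a pod IP (which changes whenever a pod is rescheduled), clients can use the stable per-member DNS names shown in the db.isMaster() output above. A minimal sketch that builds the replica-set connection URI from those names; the statefulset, headless service, and namespace names below match this deployment, so adjust them for your own cluster:

```shell
# Build the replica-set connection URI from the stable per-pod DNS names.
# These names match the deployment in this article:
SET=example-mongodb       # MongoDBCommunity resource / replica set name
SVC=example-mongodb-svc   # headless service created by the operator
NS=mongodb                # namespace

HOSTS=
for i in 0 1 2; do
  H="${SET}-${i}.${SVC}.${NS}.svc.cluster.local:27017"
  HOSTS="${HOSTS:+${HOSTS},}${H}"
done

URI="mongodb://${HOSTS}/admin?replicaSet=${SET}"
echo "${URI}"

# Then connect with, e.g.:
#   mongo "${URI}" -u my-user -p '58LObjiMpxcjP1sMDW' --authenticationDatabase admin
```

Connecting with `?replicaSet=example-mongodb` lets the client discover the current primary automatically, and `--authenticationDatabase admin` avoids the authentication failure seen above when db.auth() was first run against the test database.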

One last note: while doing the experiment above, although the mongodb operator itself ran fine, I could not write data when connecting to the primary with the mongo client; it reported insufficient permissions, even though the corresponding user has read/write permission on the corresponding database. Likewise, creating a user in the admin database reported success, but querying for that user a few seconds later showed it did not exist. I have not found the cause; if any reader knows why, please let me know (blogger email: linux-1874@qq.com). I would be grateful.
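Regarding that permission problem, one thing worth checking is the credentials the operator actually generated: the community operator stores the user's password in a Kubernetes secret, and the exact secret name varies by operator version, so list the secrets first (the secret name in the comments below is a placeholder, not taken from this deployment). Secret values are base64-encoded and must be decoded before being pasted into mongo:

```shell
# List the secrets in the namespace, then read the password field from the
# one the operator created for your user (name below is hypothetical):
#   kubectl get secrets -n mongodb
#   kubectl get secret <secret-name> -n mongodb -o jsonpath='{.data.password}'
#
# Secret data is base64-encoded; decode it before use. Sample decode,
# using the base64 form of the password from the transcript above:
ENCODED="NThMT2JqaU1weGNqUDFzTURX"
PASSWORD="$(printf '%s' "${ENCODED}" | base64 -d)"
echo "${PASSWORD}"
```

If the decoded password differs from the one you typed by hand, that alone would explain the authentication failures.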

Copyright notice
This article was written by [Linux-1874]. Please include the original link when reposting. Thank you.
https://javamana.com/2021/01/20210114194749391g.html
