k8s Storage: PV, PVC, and StatefulSet

 

1. Concepts

1.1 PV

PersistentVolume (PV)

A PV is a piece of storage provisioned by an administrator, and it is part of the cluster. Just as a node is a cluster resource, a PV is also a cluster resource. PVs are volume plugins like Volumes, but they have a lifecycle independent of any individual Pod that uses them. This API object captures the details of the storage implementation, such as NFS, iSCSI, or a cloud-provider-specific storage system.

1.2 PVC

PersistentVolumeClaim (PVC)

A PVC is a request for storage made by a user. It is similar to a Pod: Pods consume node resources, while PVCs consume PV resources. Pods can request specific levels of resources (CPU and memory); claims can request a specific size and specific access modes (for example, they can be mounted ReadWriteOnce or ReadOnlyMany).
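A rough sketch of such a claim (the claim name, requested size, and storageClassName below are only illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc          # illustrative name
spec:
  accessModes:
    - ReadWriteOnce          # requested access mode
  storageClassName: nfs      # the class of PV this claim wants to bind to
  resources:
    requests:
      storage: 1Gi           # requested size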

 

  • Protection: the purpose of PVC protection is to ensure that a PVC still in use by a Pod is not removed from the system, because removing it could cause data loss. When the PVC protection alpha feature is enabled, if a user deletes a PVC that is still in use by a Pod, the PVC is not removed immediately; its deletion is postponed until the PVC is no longer used by any Pod.

1.3 PV and PVC binding

A control loop on the master watches for new PVCs, finds a matching PV (if one exists), and binds them together. If a PV is dynamically provisioned for a new PVC, the loop always binds that PV to the PVC. Otherwise, the user always gets at least the storage they asked for, although the bound volume may be larger than what was requested. Once a PV and a PVC are bound, the binding is exclusive, regardless of how it was created: a PVC-to-PV binding is a one-to-one mapping.

1.4 PV access modes

A PersistentVolume can be mounted on a host in any way supported by the resource provider. As the list below shows, providers have different capabilities, and each PV's access modes are set to the specific modes supported by that volume. For example, NFS can support multiple read/write clients, but a specific NFS PV may be exported read-only on the server. Each PV gets its own set of access modes describing that particular PV's capabilities.

 

ReadWriteOnce: the volume can be mounted read/write by a single node

ReadOnlyMany: the volume can be mounted read-only by many nodes

ReadWriteMany: the volume can be mounted read/write by many nodes

On the command line, the access modes are abbreviated as:

RWO - ReadWriteOnce

ROX - ReadOnlyMany

RWX - ReadWriteMany

1.5 PV reclaim policies

Retain: manual reclamation

Recycle: basic scrub (rm -rf /thevolume/*)

Delete: the associated storage asset (for example an AWS EBS, GCE PD, Azure Disk, or OpenStack Cinder volume) is deleted

Currently, only NFS and HostPath support the Recycle policy. AWS EBS, GCE PD, Azure Disk, and Cinder volumes support the Delete policy.
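The reclaim policy of an existing PV can also be changed afterwards; a minimal sketch using kubectl patch (the PV name nfspv1 refers to the example created later in this post):

kubectl patch pv nfspv1 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'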

1.6 PV states

Available: the volume is a free resource not yet bound to any claim

Bound: the volume is bound to a claim

Released: the claim has been deleted, but the resource has not yet been reclaimed by the cluster

Failed: automatic reclamation of the volume failed
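The current phase shows up in the STATUS column when listing volumes; for a single PV it can also be read directly (the PV name here is only illustrative):

kubectl get pv
kubectl get pv nfspv1 -o jsonpath='{.status.phase}'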

1.7 Creating a PV and PVC

  • Install NFS

Note: your worker (node) machines also need nfs-utils installed so that they can mount the NFS shares.

 

# Install NFS server packages and prepare the export directories
yum install -y nfs-utils rpcbind
mkdir /nfs{1..3}
chmod 666 /nfs{1..3}
chown nfsnobody /nfs{1..3}
# Export the three directories
cat /etc/exports
/nfs1 *(rw,no_root_squash,no_all_squash,sync)
/nfs2 *(rw,no_root_squash,no_all_squash,sync)
/nfs3 *(rw,no_root_squash,no_all_squash,sync)
# Start the NFS services
systemctl start rpcbind
systemctl start nfs

 

  • Create the PVs

 

cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfspv1 # PV name
spec:
  capacity:
    storage: 10Gi
  accessModes: # access modes
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle # reclaim policy
  storageClassName: nfs # storage class name; a claim must request the same class to bind
  nfs:
    path: /nfs1 # exported directory on the NFS server
    server: 192.168.1.210 # NFS server IP
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfspv2
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: hu
  nfs:
    path: /nfs2
    server: 192.168.1.210
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfspv3
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfs3
    server: 192.168.1.210

 

Check the PVs:
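The original screenshot is not reproduced here; a quick check from the command line shows each PV's capacity, access modes, reclaim policy, status, and storage class:

kubectl get pv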

 

  • Create the PVCs (here they are created through a StatefulSet's volumeClaimTemplates)

 

cat pod.yaml
# Create a headless Service
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: wangyanglinux/myapp:v1
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec: # both conditions below must match one of the PVs above for the Pod's claim to bind
      accessModes: [ "ReadWriteOnce" ] # requested access mode
      storageClassName: "nfs" # must match the PV's storageClassName (nfs)
      resources:
        requests:
          storage: 1Gi

 

Then check the binding results. Only one Pod starts and binds successfully, because the other two PVs do not satisfy the claim template (wrong access mode or storage class).
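A quick way to inspect this (output omitted):

kubectl get statefulset web
kubectl get pod -l app=nginx
kubectl get pvc
kubectl get pv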

 

 

Then test again: change the other two PVs so that they have the same configuration as nfspv1 (accessModes ReadWriteOnce and storageClassName nfs), and you will find that all three Pods start.
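A minimal sketch of what nfspv2 looks like after the change (nfspv3 is adjusted the same way; only the access mode and storage class differ from the original manifest):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfspv2
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce              # changed to match the claim template
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs          # changed to match the claim template
  nfs:
    path: /nfs2
    server: 192.168.1.210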

 

Now all three PVs are bound.

Then check the PVCs; www-web-0, www-web-1, and www-web-2 are all Bound as well.

 

Then access the three Pods. On the NFS host you first need to add an index.html file to each of the three exported directories (/nfs1, /nfs2, /nfs3).
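A sketch of that step, run on the NFS server (the file contents are only illustrative):

echo "nfs1" > /nfs1/index.html
echo "nfs2" > /nfs2/index.html
echo "nfs3" > /nfs3/index.html

Then curl each Pod's IP (kubectl get pod -o wide shows them) from a machine that can reach the Pod network; each Pod serves the file from its own NFS directory.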

Even if you delete a Pod, the recreated Pod gets a different IP address, but when you access it again you will find the data is still there; nothing is lost.
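A sketch of that check (web-0 is one of the Pods from the example above):

kubectl delete pod web-0           # the StatefulSet recreates it with the same name and PVC
kubectl get pod web-0 -o wide      # the new Pod usually has a different IP
curl <new-pod-ip>                  # still serves the same index.html from /nfs1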

 

 

2. StatefulSet

2.1 StatefulSet characteristics

  • Pod names (network identities) follow the pattern $(statefulset name)-$(ordinal); for the example above: web-0, web-1, web-2

 

  • The StatefulSet creates a DNS domain name for each Pod replica, in the format $(podname).$(headless service name). This means services communicate with each other through Pod domain names rather than Pod IPs: when the Node a Pod is running on fails, the Pod is moved to another Node and its IP changes, but the Pod's domain name does not.

Verification: log in to one of the Pods and ping $(podname).$(service name).
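A sketch of that check, assuming the container image ships a ping binary:

kubectl exec -it web-1 -- ping -c 3 web-0.nginx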

  • The StatefulSet uses the headless Service to control the Pods' domain. The FQDN of that domain is $(service name).$(namespace).svc.cluster.local, where "cluster.local" is the cluster domain. Based on volumeClaimTemplates, a PVC is created for each Pod, named $(volumeClaimTemplates.name)-$(pod name). In the example above, volumeMounts.name=www and the Pod names are web-[0-2], so the PVCs created are www-web-0, www-web-1, and www-web-2.

 

Verification: query the Service name through the cluster DNS; here 10.244.2.9 is the cluster's internal DNS Pod, nginx is the Service name, and default is the namespace.
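A sketch of that lookup using dig (dig must be available wherever you run it; with a headless Service the query returns one A record per Pod):

dig -t A nginx.default.svc.cluster.local @10.244.2.9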

  • Deleting a Pod does not delete its PVC; manually deleting the PVC releases the PV according to its reclaim policy.

 

2.2 Viewing the StatefulSet
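The original screenshot is not reproduced here; the StatefulSet can be inspected with:

kubectl get statefulset web
kubectl describe statefulset web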

2.3 StatefulSet start and stop ordering

  • Ordered deployment: when a StatefulSet with multiple Pod replicas is deployed, the Pods are created sequentially (from 0 to N-1), and the next Pod is only started once all Pods before it are Running and Ready.

 

  • Ordered deletion: when the Pods are deleted, they are terminated in order from N-1 down to 0.

  • Ordered scaling: when a scaling operation is performed on the Pods, just as with deployment, all Pods ahead of a given Pod must already be Running and Ready.
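A sketch of watching this ordering during a scale operation (the replica count is only illustrative):

kubectl scale statefulset web --replicas=5
kubectl get pod -l app=nginx -w    # web-3 and web-4 appear one at a time, in order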

 

2.4 StatefulSet Use scenarios

  • Stable persistent storage: a Pod can still access the same persistent data after being rescheduled, implemented with PVCs.

  • Stable network identity: a Pod's PodName and HostName remain unchanged after rescheduling.

  • Ordered deployment and ordered scaling, implemented with init containers.

  • Ordered scale-down.

 

3. How to completely delete a PV

1. First delete the corresponding Pod and Service, then delete the PVCs:

 

kubectl delete -f pod.yaml

kubectl delete pvc --all

If the PV still cannot be released, edit it directly and delete the lines that reference the old claim (the spec.claimRef block; these are the lines marked with a red box in the original screenshot):

 

kubectl edit pv nfspv3
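An equivalent non-interactive way to clear the stale claim reference, assuming the lines in question are the spec.claimRef block:

kubectl patch pv nfspv3 -p '{"spec":{"claimRef":null}}'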

 

After that, you will see that the PVs have all released their resources and become Available again.

 

Finally, the relationship between PV, PVC, and Pod: a Pod mounts a volume through a PVC, and the PVC binds to a PV that provides the actual backing storage.