[Kubernetes enhancement] Don't let Docker Volume trigger Terminating Pods

Netease Shufan 2021-01-21 09:59:02


A Pod stuck in Terminating is a typical problem after a business is containerized, and its triggers vary. This article records how the Netease Shufan Qingzhou (Lightboat) Kubernetes enhancement team investigated, step by step, a Terminating Pod problem caused by a Docker Volume containing too many directories, and presents the solution. We hope this write-up helps readers troubleshoot and avoid similar problems.

The problem background

Recently, a Pod in a user's cluster had been stuck in the Terminating state for a long time. At first we thought it was caused by several classic Docker and Containerd bugs in version 18.06.3, but after logging in to the problem node we found the environment was as follows:

Component Version
OS Debian GNU/Linux 10 (buster)
Kernel 4.19.87-netease
Docker 18.09.9
Containerd 1.2.10
Kubernetes v1.13.12-netease

The metadata of the Terminating Pod was as follows:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2020-09-17T06:59:28Z"
  deletionGracePeriodSeconds: 60
  deletionTimestamp: "2020-10-09T07:33:11Z"
  name: face-compare-for-music-4-achieve-848f89dfdf-4v7p6
  namespace: ai-cv-team
  # ......
spec:
  # ......
status:
  # ......
  containerStatuses:
  - containerID: docker://de6d3812bfc8b6bef915d663c3d0e0cf5b3f95a41c2a98c867a1376dbccd30d3
    lastState: {}
    name: docker-image
    ready: false
    restartCount: 0
    state:
      running:
        startedAt: "2020-09-17T06:59:30Z"
  hostIP: 10.194.172.224
  phase: Running
  podIP: 10.178.132.136
  qosClass: Guaranteed
  startTime: "2020-09-17T06:59:28Z"
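As an aside, Pods stuck like this can be spotted programmatically: a Pod is considered Terminating once its deletionTimestamp is set, and the ones worth investigating are those that have outlived their grace period. The following is only a minimal sketch using a recent client-go; the kubeconfig location and the choice to scan all namespaces are our assumptions, not part of the original investigation.

package main

import (
    "context"
    "fmt"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Assumes a local kubeconfig in the default location; in-cluster config works too.
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    for _, p := range pods.Items {
        if p.DeletionTimestamp == nil {
            continue // not being deleted
        }
        grace := time.Duration(0)
        if p.DeletionGracePeriodSeconds != nil {
            grace = time.Duration(*p.DeletionGracePeriodSeconds) * time.Second
        }
        // Report Pods whose deletion has outlived the grace period.
        if time.Since(p.DeletionTimestamp.Time) > grace {
            fmt.Printf("%s/%s stuck terminating since %s on node %s\n",
                p.Namespace, p.Name, p.DeletionTimestamp.Time.Format(time.RFC3339), p.Spec.NodeName)
        }
    }
}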

On the node, the docker command showed that the business container de6d3812bfc8 of the Terminating Pod still had not been deleted:

$ docker ps -a | grep de6d3812bfc8
de6d3812bfc8 91f062eaa3a0 "java -Xms4096m -Xmx…" 3 weeks ago Up 3 weeks k8s_docker-image_face-compare-for-music-4-achieve-......

Checking again with the ctr command showed that the container's metadata still existed in Containerd:

$ ctr --address /var/run/containerd/containerd.sock --namespace moby container list | grep de6d3812bfc8
de6d3812bfc8b6bef915d663c3d0e0cf5b3f95a41c2a98c867a1376dbccd30d3 - io.containerd.runtime.v1.linux

We suspected a problem with the reclamation of the container's Shim process, so we tried to locate the Shim process of container de6d3812bfc8 with the ps command in order to capture its stack for analysis, but both the container's Shim process and its business processes had already been cleaned up. The logs showed that Docker and Containerd had already handled the container's exit:

Oct 9 15:46:36 ai-data-k8s-3 dockerd[10017]: time="2020-10-09T15:46:36.824919188+08:00" level=debug msg=event module=libcontainerd namespace=moby topic=/tasks/exit
Oct 9 15:46:36 ai-data-k8s-3 containerd[1965]: time="2020-10-09T15:46:36.863630606+08:00" level=info msg="shim reaped" id=de6d3812bfc8b6bef915d663c3d0e0cf5b3f95a41c2a98c867a1376dbccd30d3
Oct 9 15:46:36 ai-data-k8s-3 dockerd[10017]: time="2020-10-09T15:46:36.873487822+08:00" level=debug msg=event module=libcontainerd namespace=moby topic=/tasks/delete
Oct 9 15:46:36 ai-data-k8s-3 dockerd[10017]: time="2020-10-09T15:46:36.873531302+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"

Meanwhile, many new business Pods were being scheduled onto this node, and the containers of these newly scheduled Pods were stuck in the Created state. This phenomenon differs from the several known Docker and Containerd issues:

$ docker ps -a | grep Created
03fed51454c2 c2e6abc00a12 "java -Xms8092m -Xmx…" 3 minutes ago Created k8s_docker-image_dynamic-bg-service-cpu-28-achieve-......
......

To sum up, we observed the following phenomena:

  • Kubelet's logic for deleting the Pod had been triggered.
  • Docker had received and processed Kubelet's request to delete the container.
  • The container's Shim process and business processes had been cleaned up.
  • For some reason, the container's metadata in Docker and Containerd could not be deleted.
  • Containers created afterwards stayed in the Created state.

Cause analysis

Monitoring showed that when the problem occurred, the node's disk utilization was very high and its CPU load was abnormal:

Disk utilization is very high

Abnormal CPU load

We initially guessed that the problem was related to the node's abnormal disk utilization.

Why the containers of newly scheduled Pods are stuck in the Created state

Containers of newly scheduled Pods being stuck in the Created state was a new phenomenon for us in an environment running Docker 18.09.9. Investigating it, we found multiple Goroutines in the Docker stack dump blocked in the github.com/docker/docker/daemon.(*Daemon).ContainerCreate function, with semacquire as the reason for the blocking. The contents of one of these Goroutines are as follows:

goroutine 19962397 [semacquire, 8 minutes]:
sync.runtime_SemacquireMutex(0xc000aee824, 0xc0026e4600)
/usr/local/go/src/runtime/sema.go:71 +0x3f
sync.(*Mutex).Lock(0xc000aee820)
/usr/local/go/src/sync/mutex.go:134 +0x101
github.com/docker/docker/volume/local.(*Root).Get(0xc000aee820, 0xc003d1e3c0, 0x40, 0x1, 0x0, 0x0, 0x0)
/go/src/github.com/docker/docker/volume/local/local.go:237 +0x33
github.com/docker/docker/volume/service.(*VolumeStore).getVolume(0xc000aeec80, 0x5597cbf18f40, 0xc00013e038, 0xc003d1e3c0, 0x40, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/go/src/github.com/docker/docker/volume/service/store.go:717 +0x611
github.com/docker/docker/volume/service.(*VolumeStore).create(0xc000aeec80, 0x5597cbf18f40, 0xc00013e038, 0xc003d1e3c0, 0x40, 0x0, 0x0, 0x0, 0x0, 0x203000, ...)
/go/src/github.com/docker/docker/volume/service/store.go:582 +0x950
github.com/docker/docker/volume/service.(*VolumeStore).Create(0xc000aeec80, 0x5597cbf18f40, 0xc00013e038, 0xc003d1e3c0, 0x40, 0x0, 0x0, 0xc002b0f090, 0x1, 0x1, ...)
/go/src/github.com/docker/docker/volume/service/store.go:468 +0x1c2
github.com/docker/docker/volume/service.(*VolumesService).Create(0xc000a96540, 0x5597cbf18f40, 0xc00013e038, 0xc003d1e3c0, 0x40, 0x0, 0x0, 0xc002b0f090, 0x1, 0x1, ...)
/go/src/github.com/docker/docker/volume/service/service.go:61 +0xc6
github.com/docker/docker/daemon.(*Daemon).createContainerOSSpecificSettings(0xc0009741e0, 0xc000c3a6c0, 0xc0024f2b40, 0xc00225bc00, 0x0, 0x0)
/go/src/github.com/docker/docker/daemon/create_unix.go:62 +0x364
github.com/docker/docker/daemon.(*Daemon).create(0xc0009741e0, 0xc000ef8ca3, 0x74, 0xc0024f2b40, 0xc00225bc00, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/go/src/github.com/docker/docker/daemon/create.go:177 +0x44f
github.com/docker/docker/daemon.(*Daemon).containerCreate(0xc0009741e0, 0xc000ef8ca3, 0x74, 0xc0024f2b40, 0xc00225bc00, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/go/src/github.com/docker/docker/daemon/create.go:72 +0x1c8
github.com/docker/docker/daemon.(*Daemon).ContainerCreate(0xc0009741e0, 0xc000ef8ca3, 0x74, 0xc0024f2b40, 0xc00225bc00, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/go/src/github.com/docker/docker/daemon/create.go:31 +0xa6
......
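For readers who want to obtain a dump like the one above: one option, assuming the daemon runs with debug enabled (debug: true in daemon.json) and listens on the default unix socket, is to query its pprof endpoint over the API socket; sending SIGUSR1 to dockerd, which writes goroutine stacks to a file under its run directory, is another. A minimal sketch of the first approach (not the exact method used here):

package main

import (
    "context"
    "fmt"
    "io"
    "net"
    "net/http"
    "os"
)

func main() {
    // Speak HTTP over the Docker API unix socket.
    client := &http.Client{
        Transport: &http.Transport{
            DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
            },
        },
    }

    // debug=2 prints the full stack of every goroutine, like the dump above.
    // The host part of the URL is ignored because we always dial the unix socket.
    resp, err := client.Get("http://docker/debug/pprof/goroutine?debug=2")
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    defer resp.Body.Close()
    if _, err := io.Copy(os.Stdout, resp.Body); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
}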

The stack shows that this Goroutine is blocked on the Mutex at address 0xc000aee820, and that address is identical to the function receiver of github.com/docker/docker/volume/local.(*Root).Get. Let's look at the code to see what kind of data structure Root is:

// Store is an in-memory store for volume drivers
type Store struct {
    extensions   map[string]volume.Driver
    mu           sync.Mutex
    driverLock   *locker.Locker
    pluginGetter getter.PluginGetter
}

// Driver is for creating and removing volumes.
type Driver interface {
    // Name returns the name of the volume driver.
    Name() string
    // Create makes a new volume with the given name.
    Create(name string, opts map[string]string) (Volume, error)
    // Remove deletes the volume.
    Remove(vol Volume) (err error)
    // List lists all the volumes the driver has
    List() ([]Volume, error)
    // Get retrieves the volume with the requested name
    Get(name string) (Volume, error)
    // Scope returns the scope of the driver (e.g. `global` or `local`).
    // Scope determines how the driver is handled at a cluster level
    Scope() string
}

// Root implements the Driver interface for the volume package and
// manages the creation/removal of volumes. It uses only standard vfs
// commands to create/remove dirs within its provided scope.
type Root struct {
    m            sync.Mutex
    scope        string
    path         string
    volumes      map[string]*localVolume
    rootIdentity idtools.Identity
}

Root is the implementation of the local Volume driver and manages the lifecycle of Volumes. It caches all Volumes and protects the cached data with a Mutex. github.com/docker/docker/volume/local.(*Root).Get is blocked at line 237, waiting for that Mutex, which is why newly created containers on this node stay in the Created state:

// Get looks up the volume for the given name and returns it if found
func (r *Root) Get(name string) (volume.Volume, error) {
    r.m.Lock() // Line 237
    v, exists := r.volumes[name]
    r.m.Unlock()
    if !exists {
        return nil, ErrNotFound
    }
    return v, nil
}

It appears that newly created containers being stuck in the Created state is only the consequence. So who is holding the Mutex at address 0xc000aee820?

Which Goroutine holds the Mutex

By searching for the Mutex at address 0xc000aee820, we found the Goroutine that holds it:

goroutine 19822190 [syscall]:
syscall.Syscall(0x107, 0xffffffffffffff9c, 0xc0026a5710, 0x200, 0x0, 0x0, 0x15)
/usr/local/go/src/syscall/asm_linux_amd64.s:18 +0x5
syscall.unlinkat(0xffffffffffffff9c, 0xc0026a55f0, 0x83, 0x200, 0x5597cbefb200, 0xc000c913e8)
/usr/local/go/src/syscall/zsyscall_linux_amd64.go:126 +0x8d
syscall.Rmdir(0xc0026a55f0, 0x83, 0x5597cbefb200, 0xc000c913e8)
/usr/local/go/src/syscall/syscall_linux.go:158 +0x49
os.Remove(0xc0026a55f0, 0x83, 0x24, 0x83)
/usr/local/go/src/os/file_unix.go:310 +0x70
os.RemoveAll(0xc0026a55f0, 0x83, 0x5e, 0x5597cb0bebee)
/usr/local/go/src/os/path.go:68 +0x4f
os.RemoveAll(0xc001b8b320, 0x5e, 0x2, 0xc000843620)
/usr/local/go/src/os/path.go:109 +0x4f7
github.com/docker/docker/volume/local.removePath(0xc001b8b320, 0x5e, 0x5e, 0x1)
/go/src/github.com/docker/docker/volume/local/local.go:226 +0x4f
github.com/docker/docker/volume/local.(*Root).Remove(0xc000aee820, 0x5597cbf258c0, 0xc001508e10, 0x0, 0x0)
/go/src/github.com/docker/docker/volume/local/local.go:217 +0x1f8
github.com/docker/docker/volume/service.(*VolumeStore).Remove(0xc000aeec80, 0x5597cbf18f40, 0xc00013e038, 0x5597cbf25da0, 0xc002587b00, 0x0, 0x0, 0x0, 0x0, 0x0)
/go/src/github.com/docker/docker/volume/service/store.go:796 +0x71f
github.com/docker/docker/volume/service.(*VolumesService).Remove(0xc000a96540, 0x5597cbf18f40, 0xc00013e038, 0xc002550c40, 0x40, 0x0, 0x0, 0x0, 0x0, 0x0)
/go/src/github.com/docker/docker/volume/service/service.go:135 +0x1e8
github.com/docker/docker/daemon.(*Daemon).removeMountPoints(0xc0009741e0, 0xc000bba6c0, 0x1, 0x0, 0x0)
/go/src/github.com/docker/docker/daemon/mounts.go:40 +0x239
github.com/docker/docker/daemon.(*Daemon).cleanupContainer(0xc0009741e0, 0xc000bba6c0, 0x101, 0xc000bba6c0, 0x0)
/go/src/github.com/docker/docker/daemon/delete.go:141 +0x827
github.com/docker/docker/daemon.(*Daemon).ContainerRm(0xc0009741e0, 0xc000bb8089, 0x40, 0xc000998066, 0x0, 0x0)
/go/src/github.com/docker/docker/daemon/delete.go:45 +0x272
......

From this Goroutine's stack we can see that github.com/docker/docker/volume/local.(*Root).Remove holds the Mutex at address 0xc000aee820 and is executing line 217. This function is responsible for calling os.RemoveAll to delete the specified Volume and its underlying data:

// Remove removes the specified volume and all underlying data. If the
// given volume does not belong to this driver and an error is
// returned. The volume is reference counted, if all references are
// not released then the volume is not removed.
func (r *Root) Remove(v volume.Volume) error {
    r.m.Lock()
    defer r.m.Unlock()

    // ......

    if err := removePath(realPath); err != nil { // Line 217
        return err
    }

    delete(r.volumes, lv.name)
    return removePath(filepath.Dir(lv.path))
}
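The interaction between Remove and Get can be reproduced with a minimal, self-contained sketch (the type and timings below are ours, not Docker's): one goroutine takes the lock and holds it for the entire duration of a slow deletion, and any Get issued in the meantime can only return after that deletion finishes. In the real daemon the slow part is os.RemoveAll on a directory with millions of entries, so the wait is measured in hours rather than seconds.

package main

import (
    "fmt"
    "sync"
    "time"
)

// root mimics the relevant part of Docker's local volume driver:
// a map of volumes guarded by a single mutex.
type root struct {
    m       sync.Mutex
    volumes map[string]string
}

// remove simulates Root.Remove: it holds the lock for the whole
// duration of the (slow) filesystem deletion.
func (r *root) remove(name string, deletion time.Duration) {
    r.m.Lock()
    defer r.m.Unlock()
    time.Sleep(deletion) // stands in for os.RemoveAll on millions of entries
    delete(r.volumes, name)
}

// get simulates Root.Get: a fast lookup that nevertheless needs the same lock.
func (r *root) get(name string) (string, bool) {
    r.m.Lock()
    defer r.m.Unlock()
    v, ok := r.volumes[name]
    return v, ok
}

func main() {
    r := &root{volumes: map[string]string{
        "a": "/var/lib/docker/volumes/a/_data",
        "b": "/var/lib/docker/volumes/b/_data",
    }}

    go r.remove("a", 3*time.Second) // the stuck volume deletion
    time.Sleep(100 * time.Millisecond)

    start := time.Now()
    _, _ = r.get("b") // the container-creation path; blocks until remove returns
    fmt.Printf("Get returned after %v\n", time.Since(start).Round(time.Millisecond))
}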

Looking at the Goroutine, we can see that os.RemoveAll appears twice in the stack, and the source code shows that os.RemoveAll is implemented recursively. Line 109 contains the recursive call:

// RemoveAll removes path and any children it contains.
// It removes everything it can but returns the first error
// it encounters. If the path does not exist, RemoveAll
// returns nil (no error).
func RemoveAll(path string) error {
    // Simple case: if Remove works, we're done.
    err := Remove(path) // Line 68
    if err == nil || IsNotExist(err) {
        return nil
    }

    // ......

    err = nil
    for {
        // ......

        names, err1 := fd.Readdirnames(request)

        // Removing files from the directory may have caused
        // the OS to reshuffle it. Simply calling Readdirnames
        // again may skip some entries. The only reliable way
        // to avoid this is to close and re-open the
        // directory. See issue 20841.
        fd.Close()

        for _, name := range names {
            err1 := RemoveAll(path + string(PathSeparator) + name) // Line 109
            if err == nil {
                err = err1
            }
        }

        // ......
    }

    // ......
}

At the top of the Goroutine's stack is the syscall.unlinkat function, i.e., the container's directories are being deleted through the unlinkat system call. We also found that the Volume of one Terminating Pod's container looked abnormal:

$ ls -l /var/lib/docker/volumes/0789a0f8cbfdc59de30726a7ea21a76dd36fea0e4e832c9f806cdf39c29197c5/
total 4
drwxr-xr-x 1 root root 512378124 Aug 26 2020 _data

The directory itself is larger than 500 MB, yet its link count is only 1. The ext4 documentation explains this:

dir_nlink
Normally, ext4 allows an inode to have no more than 65,000
hard links. This applies to regular files as well as
directories, which means that there can be no more than
64,998 subdirectories in a directory (because each of the
'.' and '..' entries, as well as the directory entry for
the directory in its parent directory counts as a hard
link). This feature lifts this limit by causing ext4 to
use a link count of 1 to indicate that the number of hard
links to a directory is not known when the link count
might exceed the maximum count limit.
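To gauge how many entries such a directory really holds, it is better to read it in fixed-size batches than to list it in one go. A minimal sketch (the directory path is passed on the command line; nothing here is taken from the original environment):

package main

import (
    "fmt"
    "io"
    "os"
)

func main() {
    // Usage: countdir /var/lib/docker/volumes/<volume-id>/_data
    if len(os.Args) != 2 {
        fmt.Fprintln(os.Stderr, "usage: countdir <directory>")
        os.Exit(1)
    }
    f, err := os.Open(os.Args[1])
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    defer f.Close()

    total := 0
    for {
        // Read at most 10000 names per call so that millions of entries
        // never have to be held in memory at the same time.
        names, err := f.Readdirnames(10000)
        total += len(names)
        if err == io.EOF {
            break
        }
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
    fmt.Println("entries:", total)
}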

When the number of subdirectories in a directory on an ext4 filesystem exceeds 64,998, the directory's link count is set to 1 to indicate that the hard link count has exceeded the maximum limit. Traversing this directory, we found more than 5 million empty directories, far beyond the 64,998 limit. That is why, once the Pod deletion logic was first triggered, the node's disk utilization stayed high and its CPU load was abnormal, and why deleting the Volume's files was so slow that container deletion became blocked for every Pod of the same workload. The related code confirms that when Kubelet deletes a container, the Volume is reclaimed along with it:

// RemoveContainer removes the container.
func (ds *dockerService) RemoveContainer(_ context.Context, r *runtimeapi.RemoveContainerRequest) (*runtimeapi.RemoveContainerResponse, error) {
    // Ideally, log lifecycle should be independent of container lifecycle.
    // However, docker will remove container log after container is removed,
    // we can't prevent that now, so we also clean up the symlink here.
    err := ds.removeContainerLogSymlink(r.ContainerId)
    if err != nil {
        return nil, err
    }
    err = ds.client.RemoveContainer(r.ContainerId, dockertypes.ContainerRemoveOptions{RemoveVolumes: true, Force: true}) // Line 280
    if err != nil {
        return nil, fmt.Errorf("failed to remove container %q: %v", r.ContainerId, err)
    }
    return &runtimeapi.RemoveContainerResponse{}, nil
}
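The same call can also be issued directly against the daemon with the Docker Go SDK, which is one way to reproduce the slow deletion path without going through Kubelet. This is only a sketch; it assumes an SDK version in which the options type is still types.ContainerRemoveOptions, matching the dockershim code above, and it takes the container ID from the command line.

package main

import (
    "context"
    "fmt"
    "os"

    "github.com/docker/docker/api/types"
    "github.com/docker/docker/client"
)

func main() {
    if len(os.Args) != 2 {
        fmt.Fprintln(os.Stderr, "usage: rmctr <container-id>")
        os.Exit(1)
    }
    id := os.Args[1] // container ID to delete

    cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    defer cli.Close()

    // Same options dockershim uses: force removal and delete anonymous volumes too.
    err = cli.ContainerRemove(context.Background(), id, types.ContainerRemoveOptions{
        RemoveVolumes: true,
        Force:         true,
    })
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
}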

Why the metadata in Containerd cannot be deleted

One question remains: why can the ctr command still find the metadata of a container that should have been deleted? We found yet another kind of Goroutine waiting for the same Mutex:

goroutine 19943568 [semacquire, 95 minutes]:
sync.runtime_SemacquireMutex(0xc000aee824, 0x5597c9ab7300)
/usr/local/go/src/runtime/sema.go:71 +0x3f
sync.(*Mutex).Lock(0xc000aee820)
/usr/local/go/src/sync/mutex.go:134 +0x101
github.com/docker/docker/volume/local.(*Root).Get(0xc000aee820, 0xc002b12180, 0x40, 0x5597cbf22080, 0xc000aee820, 0x0, 0x0)
/go/src/github.com/docker/docker/volume/local/local.go:237 +0x33
github.com/docker/docker/volume/service.lookupVolume(0x5597cbf18f40, 0xc00013e038, 0xc000cc5500, 0xc000c914c8, 0x5, 0xc002b12180, 0x40, 0x0, 0x0, 0x0, ...)
/go/src/github.com/docker/docker/volume/service/store.go:744 +0xc7
github.com/docker/docker/volume/service.(*VolumeStore).getVolume(0xc000aeec80, 0x5597cbf18f40, 0xc00013e038, 0xc002b12180, 0x40, 0x5597cb0c1541, 0x5, 0x5597c9a85bb5, 0x0, 0xc003deb198, ...)
/go/src/github.com/docker/docker/volume/service/store.go:688 +0x299
github.com/docker/docker/volume/service.(*VolumeStore).Get(0xc000aeec80, 0x5597cbf18f40, 0xc00013e038, 0xc002b12180, 0x40, 0xc003deb240, 0x1, 0x1, 0x0, 0x0, ...)
/go/src/github.com/docker/docker/volume/service/store.go:636 +0x173
github.com/docker/docker/volume/service.(*VolumesService).Unmount(0xc000a96540, 0x5597cbf18f40, 0xc00013e038, 0xc0016f1ce0, 0xc00381b040, 0x40, 0x5597c9a5bfeb, 0xc0018ced50)
/go/src/github.com/docker/docker/volume/service/service.go:105 +0xc6
github.com/docker/docker/daemon.(*volumeWrapper).Unmount(0xc003cdede0, 0xc00381b040, 0x40, 0x0, 0x0)
/go/src/github.com/docker/docker/daemon/volumes.go:414 +0x6a
github.com/docker/docker/volume/mounts.(*MountPoint).Cleanup(0xc001d89680, 0xc0013cbc50, 0xc003deb3f8)
/go/src/github.com/docker/docker/volume/mounts/mounts.go:83 +0x7a
github.com/docker/docker/container.(*Container).UnmountVolumes(0xc000c3ad80, 0xc003deb4e0, 0x60, 0x0)
/go/src/github.com/docker/docker/container/container.go:475 +0x102
github.com/docker/docker/daemon.(*Daemon).Cleanup(0xc0009741e0, 0xc000c3ad80)
/go/src/github.com/docker/docker/daemon/start.go:257 +0x4ae
......

This Goroutine's stack contains github.com/docker/docker/daemon.(*Daemon).Cleanup, and it is executing line 257. This function is responsible for releasing the container's network resources and unmounting the container's filesystem:

// Cleanup releases any network resources allocated to the container along with any rules
// around how containers are linked together. It also unmounts the container's root filesystem.
func (daemon *Daemon) Cleanup(container *container.Container) {
    // ......

    if container.BaseFS != nil && container.BaseFS.Path() != "" {
        if err := container.UnmountVolumes(daemon.LogVolumeEvent); err != nil { // Line 257
            logrus.Warnf("%s cleanup: Failed to umount volumes: %v", container.ID, err)
        }
    }

    container.CancelAttachContext()

    if err := daemon.containerd.Delete(context.Background(), container.ID); err != nil {
        logrus.Errorf("%s cleanup: failed to delete container from containerd: %v", container.ID, err)
    }
}

The call that deletes the metadata in Containerd comes after the call to github.com/docker/docker/container.(*Container).UnmountVolumes on line 257, which explains why the ctr command still shows the container's metadata in Containerd.

The truth

So where did containers with Volume directories as large as 500 MB come from? After communicating with the user we got the answer: the user did not understand the meaning and intended use cases of Docker Volume and had used VOLUME in the Dockerfile:

# ......
VOLUME /app/images
VOLUME /app/logs
# ......
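Every VOLUME instruction like these makes Docker create an anonymous local volume for each container started from the image, and it is these anonymous volumes that dockershim removes with RemoveVolumes: true. Which containers on a node carry such volumes can be listed with the Docker Go SDK; the sketch below shares the same version caveat as the previous one.

package main

import (
    "context"
    "fmt"
    "os"

    "github.com/docker/docker/api/types"
    "github.com/docker/docker/client"
)

func main() {
    cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    defer cli.Close()

    containers, err := cli.ContainerList(context.Background(), types.ContainerListOptions{All: true})
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    for _, c := range containers {
        for _, m := range c.Mounts {
            // Volume-type mounts include the anonymous volumes created by
            // VOLUME instructions in the image's Dockerfile.
            if m.Type == "volume" {
                fmt.Printf("%s %s -> %s (volume %s)\n", c.ID[:12], m.Destination, m.Source, m.Name)
            }
        }
    }
}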

The user's application frequently wrote data to these Volumes without any effective garbage collection, and after a while the huge number of leaked empty directories triggered the Terminating Pod problem. At this point the cause is clear; the Terminating Pod problem unfolded as follows:

  • A business Pod frequently wrote data to its Volume, eventually creating so many subdirectories that the directory's hard link count exceeded the maximum limit.
  • The user performed a rolling update, which triggered deletion of the Pod; the deletion time was recorded in .metadata.deletionTimestamp.
  • The logic for deleting the Docker Volume was invoked; because the Volume contained far too many empty directories, the unlinkat system call was executed a huge number of times.
  • os.RemoveAll recursively deleting the Volume directory issued these unlinkat calls in bulk, resulting in very high disk utilization and abnormal CPU load on the node.
  • The Goroutine that first executed the Volume deletion logic held the Mutex protecting Root's cache; because os.RemoveAll had to recursively delete more than 5 million files and could not return in time, all subsequent operations on Volumes blocked waiting for the Mutex.
  • Containers that use Volumes could not be deleted, leaving multiple Pods stuck in Terminating.

Sequence diagram

Summary

In the end, our production environment recovered by taking the node offline, formatting the disk, and bringing the node back online, and we advised the user to abandon Docker Volume as soon as possible and switch to Kubernetes local-disk solutions. Following our suggestions, the user modified the Dockerfile and the orchestration templates and optimized the logic in the business code, and this kind of problem has since been eliminated.

The Qingzhou Kubernetes enhancement team also learned from this process. Solving problems from a purely technical point of view is only one dimension of our work; the gap between users' understanding of cloud-native technology and the guardrails that service providers put in place deserves just as much attention. Although we have solved this user's immediate problem with Kubernetes, there is still a long way to go in helping users resolve the pain points of cloud-native adoption, deepening their understanding of new technologies, reducing their costs, and letting them truly enjoy the technology dividend.

 

About the author

Huang Jiuyuan is a senior development engineer at Netease Shufan focusing on cloud native and distributed systems. He has participated in large-scale container adoption for users such as Netease Cloud Music, Netease Media, Netease Yanxuan, and Kaola Haigou, and in building the Netease Qingzhou container platform. His main areas include cluster monitoring, building intelligent operations systems, and maintenance of Kubernetes and Docker core components. He is currently responsible for the design, development, and productization of the Netease Qingzhou cloud-native automatic fault diagnosis system.

Copyright notice
This article was created by [Netease Shufan]. Please include the original link when reposting.
https://javamana.com/2021/01/20210121095341592O.html
