Kafka Principle Analysis: The Basics



Original article. Please credit the source when reposting: https://www.cnblogs.com/boycelee/p/14728638.html

Contents
1. Kafka
2. Problems solved: asynchronous processing, application decoupling, traffic peak shaving
3. Characteristics: read/write efficiency, network transmission, concurrency, persistence, reliability, horizontal scaling
4. Basic concepts: messages & batches, topics & partitions, logs (retention and compaction), brokers, replicas, producers, consumers, consumer groups, messaging model, architecture overview
5. Core features in detail: consumers (single and multiple consumer groups, heartbeats, rebalancing, offset management and commits), partitions, the replica mechanism (ISR/OSR, HW, LEO, leader election), physical storage (logs, indexes, offset index)
6. References
7. Summary

1. Kafka

Kafka is a distributed messaging system.

2. Problems solved

Message systems are commonly used for asynchronous processing, application decoupling, traffic peak shaving, inter-service messaging, and similar scenarios.

Asynchronous processing


The producer writes messages to the message queue and consumers pull them asynchronously, which improves overall message-processing capacity.

Application decoupling


With Kafka as the communication medium, each subsystem only needs to handle what falls within its own responsibility. In the producer-consumer model, Kafka acts as the message queue.

Traffic peak shaving


Typically, upstream services (quoting, marketing, and the like) handle heavy traffic all year round and can cope calmly with a surge, but downstream applications (such as trading and orders) normally see far less traffic; when a surge arrives they are unprepared, the system is overwhelmed, and the failure cascades into an avalanche.

To deal with this, a message queue can act as a temporary buffer for the data: consumers pull messages at a pace that matches their own capacity, which achieves peak shaving.

3. Characteristics

Read/write efficiency

Kafka can store and look up messages efficiently even with massive data volumes, using software design to work around the performance bottleneck of random disk access.

Network transmission

Messages are read in batches and compressed in batches, which improves network utilization.

Concurrency capability

Kafka supports partitioning of messages: ordering is guaranteed within each partition, and multiple partitions can be operated on concurrently, which raises Kafka's overall concurrency.

Persistence capability

Kafka persists messages to disk; since network transport is unreliable, data must be persisted. Techniques such as zero-copy, sequential reads and writes, and the page cache give Kafka its high-throughput characteristics.

Reliability

Partitions support multiple replicas: the leader replica handles reads and writes, and follower replicas are only responsible for synchronizing data from the leader. This redundant backup of messages improves Kafka's fault tolerance.

Horizontal scaling

Producers, Brokers, and Consumers are all distributed and can each be deployed in numbers. Multiple Consumers can join the same Consumer Group, and each partition is assigned to at most one Consumer within the group. When Kafka scales out by adding partitions, Consumers can be added to the Consumer Group to increase consumption capacity; when a Consumer in the group fails and goes offline, rebalancing (Rebalance) reassigns its partitions.

4. Basic concepts

Messages & batches

Messages

(1) A message is Kafka's basic unit of data;

(2) A message consists of a key and a value, both of which are byte arrays;

(3) The key can be used to route a message to a specific partition according to a partitioning policy.

Batches

(1) For efficiency, messages are written to Kafka in batches; all messages in a batch must belong to the same partition of the same topic;

(2) Sending in batches reduces network overhead and speeds up transmission (see the producer sketch below).
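As a rough illustration of how batching shows up in the producer configuration, the sketch below sets batch.size, linger.ms, and batch compression; the broker address, topic name tp_demo_01, key, and numbers are placeholder values, not recommendations.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class BatchingProducerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder broker address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 32 * 1024);                 // accumulate up to 32 KB per partition batch
        props.put(ProducerConfig.LINGER_MS_CONFIG, 10);                         // wait up to 10 ms to fill a batch
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");               // compress whole batches on the wire

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // records with the same key are routed to the same partition of the topic
            producer.send(new ProducerRecord<>("tp_demo_01", "user-42", "hello"));
        }
    }
}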

Topics & partitions

A topic (Topic) is the logical unit for classifying and storing messages; think of it as a collection of messages. A partition (Partition) is Kafka's basic unit of data storage; think of it as a subset of that collection. Kafka classifies messages by topic, and the different partitions of the same topic are distributed across different Brokers. The partition mechanism is the basis for horizontal scaling: adding and redistributing partitions increases Kafka's ability to process messages in parallel.


Logs

Log basics

(1) A partition logically corresponds to a Log; when a producer writes a message to a partition, it is actually appended to that partition's Log;

(2) A Log corresponds to a folder on disk and is made up of multiple Segments; each Segment corresponds to one log file and one index file;

(3) When a Segment exceeds the size limit, a new Segment is created;

(4) Kafka uses sequential I/O, so data is only appended to the latest Segment;

(5) The index is sparse and is memory-mapped at runtime, which speeds up index lookups.


Log retention and compaction

Log retention

(1) Time limit

Based on retention time: when a message has been stored in Kafka for longer than the configured retention period, it is deleted.

(2) Size limit

Based on topic storage size: when the log for a topic grows beyond a threshold, the oldest messages can start to be deleted. Kafka starts a background thread that periodically checks for messages that can be deleted.

Log compaction

In many scenarios, the value associated with a message key keeps changing, much like rows in a database being updated, and consumers only care about the latest value for each key. If log compaction is enabled, Kafka starts a thread that periodically merges messages with the same key and keeps only the latest value (a hedged topic-creation sketch follows).
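As a hedged sketch of enabling compaction (the topic name, partition count, and replication factor below are made-up values), a compacted topic is simply a topic created with cleanup.policy=compact, for example via the AdminClient:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CompactedTopicDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker address
        try (AdminClient admin = AdminClient.create(props)) {
            NewTopic changelog = new NewTopic("user-profile-changelog", 3, (short) 2)
                    .configs(Collections.singletonMap("cleanup.policy", "compact")); // keep only the latest value per key
            admin.createTopics(Collections.singletonList(changelog)).all().get();    // wait for the topic to be created
        }
    }
}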

Broker

An independent Kafka server is a broker. A broker's main job is to receive messages from producers, assign offsets to them, and save them to disk. Besides receiving messages from producers, a broker also handles requests from consumers and from other brokers, dispatching each request to the appropriate handler and returning a response. Normally one machine runs one broker.

Replicas

A replica is a redundant backup of messages; in a distributed system the same data is stored on different machines. In Kafka, each partition can have multiple replicas that hold the same messages (at any given instant, however, the replicas on different machines are not necessarily exactly identical).

Producers

A producer (Producer) generates messages and publishes them to the appropriate partition of a topic, choosing the partition by, for example: (1) hashing the key; (2) round-robin; (3) a custom strategy (a custom-partitioner sketch follows).
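For strategy (3), a custom policy is supplied by implementing the Partitioner interface and registering it through partitioner.class. The sketch below is only an illustration (the class name and hashing choice are mine, not Kafka's built-in partitioner): keyed messages always land in the same partition, keyless messages are spread round-robin.

import java.util.Arrays;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;

public class KeyHashPartitioner implements Partitioner {
    private final AtomicInteger counter = new AtomicInteger(0);

    @Override
    public void configure(Map<String, ?> configs) { }

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size();
        if (keyBytes == null) {
            // no key: spread messages across partitions round-robin
            return (counter.getAndIncrement() & 0x7fffffff) % numPartitions;
        }
        // with a key: the same key always maps to the same partition
        return (Arrays.hashCode(keyBytes) & 0x7fffffff) % numPartitions;
    }

    @Override
    public void close() { }
}

// registration: props.put(ProducerConfig.PARTITIONER_CLASS_CONFIG, KeyHashPartitioner.class.getName());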

Consumers

A consumer (Consumer) consumes messages by pulling them from the partitions of the topics it subscribes to. Consumers use offsets to keep track of how far they have consumed.

Consumer groups

Multiple consumers (Consumer) make up a consumer group (Consumer Group). Each partition of a topic subscribed to by a consumer group is assigned to, and handled by, exactly one consumer in that group, but a single consumer may consume multiple partitions of the same topic.


Messaging model

Kafka does not push messages; consumers pull them. A consumer can approximate push semantics by continuously polling and pulling.

Kafka Architecture Overview


5. Core features in detail

Consumers

(1) Consumers save the offsets of the messages they have consumed from subscribed topics to a topic named "__consumer_offsets";

(2) Storing consumer offsets in Kafka is recommended; ZooKeeper is not suited to frequent, highly concurrent writes.

Single consumer group

Multiple consumers of the same topic form a consumer group simply by setting the same group.id (see the consumer sketch below).
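A minimal consumer sketch, assuming a local broker and the placeholder topic tp_demo_01; every instance started with the group.id below joins the same consumer group, and the partitions are split between them:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class GroupConsumerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker address
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");              // same group.id => same consumer group
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("tp_demo_01"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                records.forEach(r -> System.out.printf("partition=%d offset=%d value=%s%n",
                        r.partition(), r.offset(), r.value()));
            }
        }
    }
}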

Case 1: The consumer group contains only one consumer.


Case 2: The consumer group contains multiple consumers.


Case 3: The number of partitions equals the number of consumers in the group.


Case 4: The number of consumers in the group exceeds the number of partitions; the surplus consumers sit idle and receive no messages.


Multiple consumer groups

A topic can be consumed by multiple consumer groups, and each group independently consumes all messages on the topic.


Heartbeats

Kafka's heartbeat mechanism ensures the health of the link between Consumers and the Broker: while the group Coordinator on the Broker is healthy, Consumers keep sending it heartbeats.

The rebalancing mechanism

Rebalancing is a protocol that specifies how partitions are reassigned when the consumers in a group, or the topics they subscribe to, change.

Rebalance triggers

(1) The consumers in the group change (for example, a consumer leaves the group because its machine goes down).

(2) The number of partitions of a subscribed topic changes (Kafka only supports adding partitions).

(3) The set of subscribed topics changes (for example, the group subscribes by regular expression and a matching topic is then created).

Case 1: Normal operation; each partition is assigned to exactly one consumer.


Case 2: A consumer's machine goes down; the consumer leaves the group, rebalancing is triggered, and its partitions are redistributed among the remaining consumers in the group.


Case 3: A Broker goes down and partition 3 becomes unavailable. If the partition has replicas, rebalancing is triggered; if it has no replicas, consumer 3 sits idle.


Case 4: Topics are subscribed to by regular expression; when a matching topic is created, its partitions are assigned to the current consumers, which triggers rebalancing.


Avoiding rebalancing

Changes to the set of subscribed topics or to the number of partitions are usually made deliberately by operations staff and normally do not need to be avoided, so the focus is on rebalancing caused by changes in the number of consumers in the group.

After rebalancing, each consumer instance periodically sends heartbeat requests to the Coordinator.

Conditions under which a consumer is judged "dead"
The consumer fails to send heartbeat requests to the Coordinator

(1) session.timeout.ms sets the time threshold for judging a consumer dead. The default is 10 seconds: if the Coordinator receives no heartbeat from a Consumer instance within 10 seconds, the instance is judged "dead" and removed from the group.

(2) heartbeat.interval.ms controls how often heartbeat requests are sent; the smaller the value, the more frequently a Consumer instance sends heartbeats.

The poll method is not called within the prescribed time

(1) max.poll.interval.ms sets the maximum allowed interval between calls to a Consumer instance's poll method. The default is 5 minutes: a Consumer that does not call poll again within 5 minutes is removed from the group.

Preventing a consumer from being judged "dead"
Avoiding a "condition 1" verdict

Set session.timeout.ms >= 3 * heartbeat.interval.ms, which guarantees a Consumer gets at least 3 heartbeat attempts before it can be judged dead.

For example: session.timeout.ms = 6 s and heartbeat.interval.ms = 2 s.

Avoiding a "condition 2" verdict

Set max.poll.interval.ms generously: take the longest processing time observed in the consumer instance as a baseline and add 1 to 1.5 times that as headroom, leaving enough time for business processing so that slow message handling does not trigger rebalancing (see the configuration sketch below).
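Putting the two rules together, a hedged fragment added to the consumer Properties (as in the sketch above) might look like the following; the exact numbers are assumptions and should be derived from your own processing times:

// condition 1: allow at least 3 heartbeat attempts before the session expires
props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 6_000);      // 6 s
props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, 2_000);   // 2 s
// condition 2: leave generous headroom over the longest observed batch-processing time
props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 600_000);  // 10 min, an assumed headroom
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 200);          // smaller batches also shorten each poll cycle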

Offset management

The offsets topic

Kafka consumers consume messages in offset order, and each consumer manages its own offsets. Offsets can be stored in ZooKeeper or in the Kafka topic __consumer_offsets; __consumer_offsets is the offsets topic.

Why it was introduced

(1) Older versions managed offsets in ZooKeeper: offset data was committed, automatically or manually, to ZooKeeper for safekeeping. After a Consumer restarted, it automatically read the offsets back from ZooKeeper and resumed consuming where it had left off. With this design the Kafka Broker did not need to store offset data.

(2) However, ZooKeeper is not suited to frequent writes, so from version 0.8.2.x onward the new Consumer introduced a new offset-management mechanism: consumer offsets are written as ordinary Kafka messages to __consumer_offsets.

(3) This topic needs no manual maintenance and must not be written to arbitrarily, because that can leave Kafka unable to parse it correctly.

The message format

(1) The key contains the group ID, the topic name, and the partition number;

(2) The value contains the offset.

Offset commits

(1) A Consumer needs to record its own offsets in Kafka; this reporting process is called committing offsets (Committing Offsets);

(2) A Consumer commits offsets separately for each partition assigned to it;

(3) Committing offsets is the Consumer's responsibility; Kafka is only responsible for storing them in __consumer_offsets;

(4) Offset commits are either automatic or manual;

(5) Offset commits are either synchronous or asynchronous.

Automatic commits

(1) Set enable.auto.commit to true;

(2) auto.commit.interval.ms sets the interval between automatic commits; the default is 5 seconds;

(3) Kafka guarantees that when poll is called, it first commits the offsets of all messages returned by the previous poll. Because poll commits the previous batch before processing the next batch, no messages are skipped;

(4) Automatic commits can lead to repeated consumption. Example: a Consumer commits offsets every 5 s; if rebalancing happens 3 s after the last commit, every Consumer resumes from the last committed offsets, which are already 3 s old, so the 3 s of data consumed before rebalancing is consumed again. Shrinking the commit interval narrows the window, but repeated consumption cannot be avoided entirely (a hedged configuration sketch follows).
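A hedged fragment for the consumer Properties; the 5-second interval shown is simply the default, spelled out to make the trade-off explicit:

props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);        // commit offsets automatically
props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, 5_000);  // every 5 s; a smaller value narrows, but cannot close, the re-consumption window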

Manual commits

1. Synchronous commit

(1) KafkaConsumer#commitSync() commits the latest offsets returned by KafkaConsumer#poll();

(2) The call is synchronous: it does not return until the offsets have been committed successfully;

while (true) {
    ConsumerRecords<String, String> records =
            consumer.poll(Duration.ofSeconds(1));
    process(records); // process the messages
    try {
        consumer.commitSync(); // blocks until the offsets are committed
    } catch (CommitFailedException e) {
        handle(e); // handle the commit failure
    }
}

(3) A synchronous commit leaves the Consumer blocked until it completes;

(4) A synchronous commit is retried automatically when a transient error occurs.

2. Asynchronous commit

(1) Asynchronous commits avoid blocking the Consumer;

(2) For exceptional cases (GC pauses, network jitter), fall back to a synchronous commit to retry.

try {
    while (true) {
        ConsumerRecords<String, String> records =
                consumer.poll(Duration.ofSeconds(1));
        process(records);       // process the messages
        consumer.commitAsync(); // asynchronous commit to avoid blocking
    }
} catch (Exception e) {
    handle(e); // handle the exception
} finally {
    try {
        consumer.commitSync(); // the final commit is synchronous and blocking
    } finally {
        consumer.close();
    }
}

Partitions

A partition (Partition) is Kafka's basic unit of data. Data for the same topic is stored in multiple partitions, which may be placed on the same machine or on different machines. This enables horizontal scaling and avoids the disk-space and performance limits of a single machine, while replication adds data redundancy and improves disaster recovery. To spread load evenly, the number of partitions is usually an integer multiple of the number of Broker servers.

Replica mechanism

Advantages of the replica mechanism

A partition has multiple replicas, which provide data redundancy and help ensure Kafka's high availability.

Replica definition

(1) Each topic is divided into several partitions, and each partition can be configured with multiple replicas;

(2) A replica is essentially an append-only commit log;

(3) All replicas of the same partition hold the same sequence of messages;

(4) The replicas of a partition are stored on different Brokers, so the partition's data remains available when a Broker goes down.


Replica roles

With multiple replicas of the same partition, how is message consistency across replicas ensured?

The most common solution is a leader-based replication mechanism.


1. Replicas fall into two categories

(1) the leader replica;

(2) follower replicas.

2. Kafka's replication mechanism differs from that of many other distributed systems

(1) In Kafka, follower replicas do not serve client requests; all requests are handled by the leader replica;

(2) A follower replica's only task is to pull messages asynchronously from the leader and append them to its own commit log, thereby staying in sync with the leader.

3. When the Broker hosting the leader goes down

(1) Kafka relies on ZooKeeper's watch mechanism to detect the Broker failure and starts a new leader election;

(2) When the old leader replica comes back, it can only rejoin the cluster as a follower replica.

Why follower replicas do not serve requests

1. It makes "read-your-writes" easy to provide

(1) After a producer successfully writes a message through the producer API, it should be able to read that message back immediately through the consumer API.

(2) If followers served reads, then because follower replication is asynchronous, a follower might not yet have fetched the latest messages from the leader, and a freshly written message could not be read back immediately.

2. It makes monotonic reads (Monotonic Reads) easy to provide

(1) What are monotonic reads? From a consumer's point of view, a message that has been seen does not later appear to vanish and then reappear.

(2) If followers served reads, then because replication is asynchronous, different followers may have pulled different amounts of data from the leader; successive reads hitting different followers could sometimes see a message and sometimes not. With all reads handled by the leader replica, Kafka can easily provide monotonic read consistency.

In-sync replicas (ISR) and out-of-sync replicas (OSR)

Since follower replicas pull from the leader asynchronously, a criterion is needed for deciding whether a follower is in sync with the leader.

Kafka introduces the In-Sync Replicas set, the ISR, which is maintained in ZooKeeper. A replica in the ISR is considered in sync with the leader; otherwise it is an out-of-sync replica (OSR).

The criterion for an in-sync replica

(1) replica.lag.time.max.ms sets the maximum time a follower replica is allowed to lag behind the leader replica; the default is 10 seconds.

(2) If a follower has not continuously lagged behind the leader for longer than replica.lag.time.max.ms, it is judged in sync with the leader; otherwise it is judged out of sync and removed from the ISR (i.e., the follower pulls more slowly than the leader writes, and the lag persists beyond the configured threshold).

(3) The ISR is adjusted dynamically rather than being static; when a follower replica catches up again, it is re-added to the ISR.

HW

The high watermark (HW, short for High Watermark) marks a particular offset; consumers can only pull data before this offset.

LEO

LEO, short for Log End Offset, is the offset that will be assigned to the next message written to the current log file (a brief worked example follows).

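A brief worked example (the numbers are illustrative only): if the leader has written messages at offsets 0 through 9, its LEO is 10; if its two followers in the ISR have replicated up to LEO 9 and LEO 8 respectively, the high watermark is 8, the minimum LEO in the ISR, so consumers can read only offsets 0 through 7.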

Leader election

A minority of replicas are down

(1) When the broker hosting the leader replica goes down, one of the follower replicas is elected as the new leader;

(2) When the failed broker recovers, it resumes pulling data from the leader.

All replicas are down

unclean.leader.election.enable controls whether unclean leader election is allowed.

(1) Unclean election disabled: wait for a replica in the ISR to recover and elect it leader (the wait can be long, reducing availability);

(2) Unclean election enabled: elect the first replica to recover as the new leader, whether or not it was in the ISR (this can lose data);

(3) Enabling it is normally not recommended: sacrificing some availability preserves data consistency and avoids losing messages (a hedged configuration sketch follows).
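The switch can be set per topic (or as a broker default). A hedged sketch using the AdminClient (incrementalAlterConfigs requires Kafka 2.3+, and the topic name is a placeholder):

import java.util.Collection;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class DisableUncleanElection {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker address
        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "tp_demo_01");
            Collection<AlterConfigOp> ops = Collections.singletonList(new AlterConfigOp(
                    new ConfigEntry("unclean.leader.election.enable", "false"),  // prefer consistency over availability
                    AlterConfigOp.OpType.SET));
            admin.incrementalAlterConfigs(Collections.singletonMap(topic, ops)).all().get();
        }
    }
}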

Why not use majority quorum?

If electing a leader required approval from more than half of the in-sync replicas, the algorithm would need more redundant replicas (tolerating a single machine failure would already require three replicas).

Physical storage

Storage overview

Basic concepts

(1) Kafka stores the messages sent by producers in log files;

(2) Every message has an offset value that represents its position in the partition;

(3) The offset is a logical value, not a real physical address. It is analogous to a primary key in a database, which uniquely identifies a row in a table; an offset uniquely identifies a message within a Kafka partition;

(4) Logs correspond one-to-one with partitions; a Log is not a single file but a folder;

(5) The folder is named after the topic and partition (topicName-partitionID), and all of the partition's messages are stored in the log files under it;

(6) Kafka splits a Log into several LogSegments. A LogSegment is a logical concept that corresponds on disk to one log file and its index file in the Log directory;

(7) Log files are named [baseOffset].log, where baseOffset is the offset of the first message in that log file;

(8) Kafka appends to log files sequentially;

(9) Each log file has a corresponding index file; the index is sparse, indexing only some of the messages in the log file.

(10) Log file structure diagram (not reproduced here);

(11) Log example: suppose a topic tp_demo_01 is created with 6 partitions; each partition then has its own message log directory named after the topic and partition (tp_demo_01-0 through tp_demo_01-5).

(12) LogSegment example (figure not reproduced).

File categories

(figure not reproduced)

Log storage

Indexes

To speed up message lookup, Kafka has added a corresponding index file for each log file since version 0.8; the index file and the message (log) file together make up a LogSegment.

The offset index file records the mapping from message offsets to physical positions; the timestamp index file maps timestamps to the corresponding offsets.

Index entry format: each index entry is 8 bytes and has two parts. The first part is the relative offset (4 bytes), i.e., the offset relative to baseOffset (the base offset, after which the log file is named). The second part is the physical position (4 bytes), i.e., the position of the indexed message in the log file. Together the two parts map an offset to a physical position (a short parsing sketch follows).
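A minimal sketch of reading one such 8-byte entry; the entryBuffer and baseOffset variables are assumptions supplied by the caller, not Kafka API names:

// each entry: 4-byte relative offset + 4-byte physical position
int relativeOffset  = entryBuffer.getInt(); // offset relative to baseOffset
int position        = entryBuffer.getInt(); // byte position of that message in the .log file
long absoluteOffset = baseOffset + relativeOffset;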

Message compression

(figure not reproduced)
Offset index

(figure not reproduced)

An example

Suppose we need to find the message at startOffset = 1067. How is offset = 1067 converted to the corresponding physical position?

(1) Convert the absolute offset to a relative offset by subtracting baseOffset: relative offset = 67 (so the segment's baseOffset is 1000);

(2) Look up the relative offset in the index file and obtain the index entry (58, 1632) (a skip-list-like structure first locates the right index file, then a binary search finds the largest index entry not greater than the relative offset);

(3) Starting at position 1632, scan the log file sequentially until the message with absolute offset = 1067 is found;

(4) The scan ultimately lands on the message at offset 1070, because messages are compressed in batches and the message with offset = 1067 is contained inside the compressed wrapper message at offset = 1070. See the lookup sketch below.
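A minimal sketch of the lookup described above, assuming the segment's sparse index entries have already been loaded into a sorted list; the IndexEntry type and findPosition method are illustrative, not Kafka's actual classes:

import java.util.List;

public class OffsetIndexLookup {
    // one sparse-index entry: (offset relative to baseOffset, byte position in the .log file)
    record IndexEntry(int relativeOffset, int position) { }

    // returns the physical position from which to scan the log file sequentially
    static int findPosition(List<IndexEntry> entries, long targetOffset, long baseOffset) {
        int relative = (int) (targetOffset - baseOffset);   // e.g. 1067 - 1000 = 67
        int pos = 0;                                        // fall back to the start of the segment
        int lo = 0, hi = entries.size() - 1;
        while (lo <= hi) {                                  // binary search: largest entry <= relative
            int mid = (lo + hi) >>> 1;
            if (entries.get(mid).relativeOffset() <= relative) {
                pos = entries.get(mid).position();          // e.g. the entry (58, 1632)
                lo = mid + 1;
            } else {
                hi = mid - 1;
            }
        }
        return pos;
    }
}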

6. References

Apache Kafka Source Code Analysis

Geek Time: Kafka Core Technology and Practice

Lagou Java: Kafka

7. Summary

This article has described Kafka in detail from several angles: usage scenarios, characteristics, basic concepts, and core features. Kafka's stability and the concrete source-code implementation will be covered in a follow-up advanced article.

There is much I do not know and far more still to learn; criticism and corrections are welcome!

Original article. Please credit the source when reposting: https://www.cnblogs.com/boycelee/p/14728638.html

