A Detailed Explanation of HBase's Basic Principles

InfoQ 2021-01-14 19:32:19


{"type":"doc","content":[{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"HBase brief introduction ","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"HBase It's a distributed one 、 Column oriented open source database . Based on the HDFS above .Hbase The source of my name is Hadoop database, namely Hadoop database .HBase The computing and storage capacity of depends on Hadoop colony .","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" It is between NoSql and RDBMS Between , Only through the primary key (row key) And the primary key range To retrieve data , Only one line transactions... Are supported ( It can be done by Hive Support to implement multiple tables join And so on ).","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"HBase The characteristics of Chinese table :","attrs":{}}]},{"type":"numberedlist","attrs":{"start":null,"normalizeStart":1},"content":[{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":1,"align":null,"origin":null},"content":[{"type":"text","text":" Big : A table can have billions of rows , Millions of column ","attrs":{}}]}],"attrs":{}},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":2,"align":null,"origin":null},"content":[{"type":"text","text":" For the column : For the column ( family ) Storage and rights control for , Column ( family ) Independent search .","attrs":{}}]}],"attrs":{}},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":3,"align":null,"origin":null},"content":[{"type":"text","text":" sparse :","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":" For empty (null) The column of , It doesn't take up storage space , therefore , Tables can be designed very sparsely ","attrs":{}},{"type":"text","text":".","attrs":{}}]}],"attrs":{}}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"HBase Underlying principle ","attrs":{}}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":" System architecture ","attrs":{}}]},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/64/64b66b06b6790d89a8e0eb9a8e933f68.png","alt":"HBase System architecture ","title":"HBase System architecture ","style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"HBase System architecture ","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" According to this picture , Explain HBase The components of ","attrs":{}}]},{"type":"heading","attrs":{"align":null,"level":4},"content":[{"type":"text","text":"Client","attrs":{}}]},{"type":"numberedlist","attrs":{"start":null,"normalizeStart":1},"content":[{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":1,"align":null,"origin":null},"content":[{"type":"text","text":" Include access hbase The interface of ,","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"Client Some of them are being maintained cache To speed up hbase The interview of ","attrs":{}},{"type":"text","text":", such as 
regione Location information for .","attrs":{}}]}],"attrs":{}}]},{"type":"heading","attrs":{"align":null,"level":4},"content":[{"type":"text","text":"Zookeeper","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"HBase You can use the built-in Zookeeper, You can also use external , In the actual production environment , To maintain unity , Generally use external Zookeeper.","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Zookeeper stay HBase The role of :","attrs":{}}]},{"type":"numberedlist","attrs":{"start":null,"normalizeStart":1},"content":[{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":1,"align":null,"origin":null},"content":[{"type":"text","text":" Guarantee any time , Only one in the cluster master","attrs":{}}]}],"attrs":{}},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":2,"align":null,"origin":null},"content":[{"type":"text","text":" Store all Region Address entry for ","attrs":{}}]}],"attrs":{}},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":3,"align":null,"origin":null},"content":[{"type":"text","text":" Real-time monitoring Region Server The state of , take Region server Real time notification of online and offline information to Master","attrs":{}}]}],"attrs":{}}]},{"type":"heading","attrs":{"align":null,"level":4},"content":[{"type":"text","text":"HMaster","attrs":{}}]},{"type":"numberedlist","attrs":{"start":null,"normalizeStart":1},"content":[{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":1,"align":null,"origin":null},"content":[{"type":"text","text":" by Region server Distribute region","attrs":{}}]}],"attrs":{}},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":2,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":" be responsible for region server Load balancing of ","attrs":{}}]}],"attrs":{}},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":3,"align":null,"origin":null},"content":[{"type":"text","text":" Found inoperative region server And reallocate the region","attrs":{}}]}],"attrs":{}},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":4,"align":null,"origin":null},"content":[{"type":"text","text":"HDFS Garbage collection on ","attrs":{}}]}],"attrs":{}},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":5,"align":null,"origin":null},"content":[{"type":"text","text":" Handle schema Update request ","attrs":{}}]}],"attrs":{}}]},{"type":"heading","attrs":{"align":null,"level":4},"content":[{"type":"text","text":"HRegion Server","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"HRegion server","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":" maintain HMaster Assigned to it region","attrs":{}},{"type":"text","text":", Deal with these region Of IO request ","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"HRegion server Responsible for segmentation that becomes too large during operation region","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" As you can see from the diagram 
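Because addressing goes through Zookeeper, a client only needs the quorum address to bootstrap. A minimal sketch, assuming the HBase 2.x Java client; the quorum hosts and table name are placeholders, not values from this article:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Table;

public class HBaseClientBootstrap {
    public static void main(String[] args) throws Exception {
        // The client only needs the Zookeeper quorum: it looks up region
        // locations through Zookeeper, caches them, and never talks to
        // the HMaster for reads and writes.
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "zk1,zk2,zk3"); // hypothetical hosts
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("student"))) { // hypothetical table
            System.out.println("Connected to " + table.getName());
        }
    }
}
```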
,","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"Client visit HBase There is no need for HMaster Participate in ","attrs":{}},{"type":"text","text":"( Address access Zookeeper and HRegion server, Data read and write access HRegione server)","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"HMaster Just the maintainer table and HRegion Metadata information , Very low load .","attrs":{}}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"HBase Table data model for ","attrs":{}}]},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/a5/a5ea27dd8097742bcdcff5f8bf0cc860.png","alt":"HBase The table structure ","title":"HBase The table structure ","style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"HBase The table structure ","attrs":{}}]},{"type":"heading","attrs":{"align":null,"level":4},"content":[{"type":"text","text":" The line of key Row Key","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" And nosql The database is the same ,row key Is the primary key used to retrieve the record . visit hbase table The lines in the , There are only three ways :","attrs":{}}]},{"type":"numberedlist","attrs":{"start":null,"normalizeStart":1},"content":[{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":1,"align":null,"origin":null},"content":[{"type":"text","text":" Through a single row key visit ","attrs":{}}]}],"attrs":{}},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":2,"align":null,"origin":null},"content":[{"type":"text","text":" adopt row key Of range","attrs":{}}]}],"attrs":{}},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":3,"align":null,"origin":null},"content":[{"type":"text","text":" Full table scan ","attrs":{}}]}],"attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Row Key The line key can be any string (","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":" The maximum length is 64KB","attrs":{}},{"type":"text","text":", In practice, the length is generally 10-100bytes), stay hbase Inside ,row key Save as an array of bytes .","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"Hbase The data in the table will be processed according to rowkey Sort ( Dictionary order )","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" When the storage , Data according to Row key Dictionary sequence (byte order) Sorting storage . 
**A read or write of a single row is an atomic operation (no matter how many columns are read or written at a time).** This design makes it easy for users to reason about program behavior under concurrent updates to the same row.

#### Column family

**Every column in an HBase table belongs to a column family.** Column families are part of the table schema (individual columns are not) and **must be defined before the table is used**.

Column names are prefixed with the family name: for example, courses:history and courses:math both belong to the courses column family.

**Access control and disk and memory usage statistics are managed at the column-family level. The more column families involved in fetching a row, the more files have to be searched and the more IO is incurred; so do not define more column families than necessary.**
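Since column families must exist before data is written, table creation declares them up front. A sketch with the 2.x Admin API; the `student` table name is hypothetical, while the `courses` family comes from the example above:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

class CreateTable {
    static void createCoursesTable(Connection conn) throws IOException {
        try (Admin admin = conn.getAdmin()) {
            // One column family "courses"; columns such as courses:history
            // and courses:math are created implicitly on write.
            admin.createTable(TableDescriptorBuilder
                    .newBuilder(TableName.valueOf("student"))
                    .setColumnFamily(ColumnFamilyDescriptorBuilder.of("courses"))
                    .build());
        }
    }
}
```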
#### Column

A concrete column under a column family, belonging to one ColumnFamily, similar to a specific column created in MySQL.

#### Timestamp

A storage unit determined by row and column in HBase is called a cell. Each cell holds multiple versions of the same data, and versions are indexed by timestamp. The timestamp is a 64-bit integer. **It can be assigned by HBase automatically when data is written** (the current system time, accurate to the millisecond), or assigned explicitly by the client. Applications that must avoid version conflicts have to generate unique timestamps themselves. **Within each cell, the versions of the data are sorted in reverse chronological order**, so the newest data comes first.

To avoid the management burden (storage and indexing) of keeping too many versions, HBase provides two ways of reclaiming old versions:

1. Keep only the last n versions of the data.
2. Keep versions only for a limited time (set a TTL on the data).

Both can be configured per column family.

#### Cell

The unit uniquely identified by {row key, column (= family + qualifier), version}. Data in a cell has no type; everything is stored as bytes.

#### Version number

Each piece of data can have multiple version numbers. The default value is the system timestamp, and the type is Long.
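Both retention policies are column-family settings, and a read can ask for several versions at once. A sketch assuming the HBase 2.x client; the row key, values, and the reuse of the hypothetical `courses` family are illustrative:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

class VersionDemo {
    // Retention is configured per column family: keep at most 3 versions,
    // and expire anything older than one day (TTL is in seconds).
    static final ColumnFamilyDescriptor COURSES =
            ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("courses"))
                    .setMaxVersions(3)
                    .setTimeToLive(86400)
                    .build();

    static void readVersions(Table table) throws IOException {
        byte[] row = Bytes.toBytes("0000000042");
        byte[] cf  = Bytes.toBytes("courses");
        byte[] col = Bytes.toBytes("math");

        // Client-assigned timestamps; omit them and the server assigns
        // the current time in milliseconds.
        table.put(new Put(row).addColumn(cf, col, 1000L, Bytes.toBytes("80")));
        table.put(new Put(row).addColumn(cf, col, 2000L, Bytes.toBytes("95")));

        // Ask for up to 3 versions; they come back newest first.
        Result result = table.get(new Get(row).readVersions(3));
        result.getColumnCells(cf, col).forEach(cell ->
                System.out.println(cell.getTimestamp() + " -> "
                        + Bytes.toString(CellUtil.cloneValue(cell))));
    }
}
```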
## Physical storage

### 1. Overall structure

![HBase overall structure](https://static001.geekbang.org/infoq/c8/c85d9e9b9e94a31ebb780f92d36f7afb.png)

All rows in a Table are ordered by the lexicographic order of the row key.

**A Table is split into multiple HRegions in the row direction.**

HRegions are split by size (default 10 GB). Each table starts with a single HRegion; as data keeps being inserted, the HRegion grows, and when it reaches the threshold it is split into two new HRegions. As the number of rows in a Table grows, so does the number of HRegions.

**HRegion is the smallest unit of distributed storage and load balancing in HBase.** "Smallest unit" means that different HRegions can be placed on different HRegion Servers, but **a single HRegion is never split across multiple servers**.

**Although HRegion is the smallest unit of load balancing, it is not the smallest unit of physical storage.** In fact, an HRegion consists of one or more Stores, and **each Store holds one column family**. Each Store in turn consists of one MemStore and zero or more StoreFiles, as shown in the figure above.
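The Table-to-HRegion split is visible from the client side through the Admin API, which can list a table's regions with their start and end keys. A minimal sketch, assuming an open `Connection` and the hypothetical `student` table:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.util.Bytes;

class ListRegions {
    static void listRegions(Connection conn) throws IOException {
        try (Admin admin = conn.getAdmin()) {
            // One entry per HRegion; adjacent regions share a boundary key.
            for (RegionInfo region : admin.getRegions(TableName.valueOf("student"))) {
                System.out.println(region.getRegionNameAsString()
                        + "  start=" + Bytes.toStringBinary(region.getStartKey())
                        + "  end="   + Bytes.toStringBinary(region.getEndKey()));
            }
        }
    }
}
```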
### 2. StoreFile and HFile structure

StoreFiles are saved on HDFS in the HFile format. The HFile format is:

![HFile format](https://static001.geekbang.org/infoq/02/0253c96557ac8d366a059bb691a42c8a.png)

First of all, an HFile is variable-length; only two parts have a fixed length: the Trailer and FileInfo. As the figure shows, the Trailer holds pointers to the starting offsets of the other data blocks.

File Info records meta information about the file, for example AVG_KEY_LEN, AVG_VALUE_LEN, LAST_KEY, COMPARATOR, MAX_SEQ_ID_KEY, and so on.

The Data Index and Meta Index blocks record the starting offset of each Data Block and Meta Block.

The Data Block is the basic unit of HBase I/O. For efficiency, the HRegionServer keeps an LRU-based Block Cache. The size of each Data Block can be specified as a parameter when a Table is created: large blocks favor sequential scans, small blocks favor random queries. Apart from the Magic value at the beginning, each Data Block is a concatenation of KeyValues; the Magic content is just a random number whose purpose is to detect data corruption.

Each KeyValue in an HFile is a simple byte array, but this byte array contains many fields in a fixed structure:

![KeyValue structure](https://static001.geekbang.org/infoq/be/be088411e92d68acaec980b0f7cf1e52.png)

It starts with two fixed-length values: the Key length and the Value length. The Key follows: a fixed-length number giving the RowKey length, then the RowKey itself, then a fixed-length number giving the Family length, then the Family, then the Qualifier, and finally two fixed-length values for the Timestamp and the Key Type (Put/Delete). The Value part has no such complex structure; it is pure binary data.
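The Java client still exposes this layout through the `KeyValue` class, which can serve as a rough illustration; the sketch below (all values made up) builds one cell and reads back the length fields from the diagram. This assumes the HBase 2.x client, where `KeyValue` remains available even though it is largely an internal type:

```java
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.util.Bytes;

class KeyValueLayout {
    public static void main(String[] args) {
        // One KeyValue, mirroring the layout above: row, family,
        // qualifier, timestamp, key type, and an untyped byte[] value.
        KeyValue kv = new KeyValue(
                Bytes.toBytes("0000000042"), // RowKey
                Bytes.toBytes("courses"),    // Family
                Bytes.toBytes("math"),       // Qualifier
                2000L,                       // Timestamp
                KeyValue.Type.Put,           // Key Type (Put/Delete)
                Bytes.toBytes("95"));        // Value: pure binary data

        // The fixed-length fields from the diagram are directly readable.
        System.out.println("key length   = " + kv.getKeyLength());
        System.out.println("value length = " + kv.getValueLength());
        System.out.println("row length   = " + kv.getRowLength());
    }
}
```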
An HFile is divided into six sections:

1. Data Block section — stores the table's data; can be compressed.
2. Meta Block section (optional) — stores user-defined key-value pairs; can be compressed.
3. File Info section — the HFile's meta information; not compressed; users can also add their own meta information here.
4. Data Block Index section — the index of the Data Blocks. The key of each index entry is the key of the first record in the block it points to.
5. Meta Block Index section (optional) — the index of the Meta Blocks.
6. Trailer — fixed length; stores the offset of every section. When reading an HFile, the Trailer is read first (part of its Magic Number is used as a safety check); the Trailer gives the starting position of each section, and the Data Block Index is then loaded into memory. This way, when looking up a key, there is no need to scan the whole HFile: the block containing the key is located in memory, the whole block is read into memory with a single disk IO, and the key is found inside it. The Data Block Index is evicted with an LRU mechanism.

The Data Blocks and Meta Blocks of an HFile are usually stored compressed. Compression greatly reduces network IO and disk IO; the cost, of course, is the CPU spent compressing and decompressing. Currently HFile supports two compression codecs: Gzip and LZO.

### 3. MemStore and StoreFile

**An HRegion consists of multiple Stores; each Store holds all the data of one column family and comprises an in-memory MemStore and StoreFiles on disk.**

Writes go to the MemStore first. When the amount of data in the MemStore reaches a threshold, the HRegionServer flushes it to a StoreFile; each flush produces a separate StoreFile.

When the StoreFiles exceed a certain size threshold, the HRegion is split in two, and the HMaster assigns the new HRegions to HRegion Servers for load balancing.

**When a client retrieves data, it looks in the MemStore first, and then in the StoreFiles.**

### 4. HLog (WAL)

WAL means write-ahead log, similar to the binlog in MySQL; it is used for disaster recovery. The HLog records every change to the data, so if data is lost it can be recovered from the log.

**Each Region Server maintains one HLog, not one per Region.** Log records from different regions (of different tables) are mixed together in one file. The purpose is that continuously appending to a single file, compared with writing many files at the same time, reduces disk seeks and **therefore improves write performance for tables**. The drawback is that if a region server goes offline, then **in order to restore the regions that were on it, the log on that region server has to be split** and distributed to other region servers for replay.

**The HLog file is an ordinary Hadoop Sequence File:**

1. The key of the HLog Sequence File is an HLogKey object, which records the provenance of the written data: besides the table and region names, it also contains a sequence number and a timestamp. The timestamp is the write time; the sequence number starts at 0, or at the last sequence number persisted to the file system.
2. The value of the HLog Sequence File is HBase's KeyValue object, corresponding to the KeyValue in the HFile described above.

## Read and write flows

### 1. Read request flow

HRegionServers hold the meta table and the table data. To access table data, the client first visits Zookeeper and obtains the location of the meta table, i.e. which HRegionServer it lives on.

Next, the client uses the HRegionServer address it just obtained to visit the HRegionServer hosting the meta table, reads the meta table, and obtains the metadata stored in it.

Using the information in the metadata, the client accesses the corresponding HRegionServer and scans that server's MemStore and StoreFiles to query the data.

Finally, the HRegionServer returns the query results to the client.

The contents of the meta table can be inspected directly, as the sketch below shows.
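hbase:meta is addressable like any normal table (the HBase shell equivalent is `scan 'hbase:meta'`). A minimal sketch with the 2.x Java client, assuming an open `Connection`:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

class MetaDump {
    static void dumpMeta(Connection conn) throws IOException {
        // hbase:meta is a regular table: one row per region, holding its
        // start key and the server currently hosting it.
        try (Table meta = conn.getTable(TableName.META_TABLE_NAME);
             ResultScanner scanner = meta.getScanner(new Scan())) {
            for (Result r : scanner) {
                System.out.println(r);
            }
        }
    }
}
```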
### 2. Write request flow

The client likewise first visits Zookeeper, finds the meta table, and obtains the meta table's metadata.

It determines the HRegion and HRegionServer corresponding to the data to be written.

The client sends a write request to that HRegionServer, which receives the request and responds.

The client first writes the data to the HLog, to prevent data loss.

It then writes the data to the MemStore.

If both the HLog and the MemStore are written successfully, the write has succeeded.

If the MemStore reaches its threshold, the data in the MemStore is flushed to a StoreFile.

As StoreFiles accumulate, a Compact merge operation is triggered, merging the excess StoreFiles into one large StoreFile.

As StoreFiles grow larger and larger, the Region grows too; when it reaches the threshold, a Split operation is triggered, splitting the Region in two.
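The HLog-before-MemStore ordering is the default behavior of every mutation; the client can make it explicit, or trade it away, through the Durability setting. A sketch, reusing the hypothetical student/courses names:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

class WalWrite {
    static void writeWithWal(Table table) throws IOException {
        Put put = new Put(Bytes.toBytes("0000000043"));
        put.addColumn(Bytes.toBytes("courses"), Bytes.toBytes("history"),
                Bytes.toBytes("88"));
        // The default already appends to the HLog before the MemStore;
        // SYNC_WAL states that explicitly. SKIP_WAL trades safety for speed.
        put.setDurability(Durability.SYNC_WAL);
        table.put(put);
    }
}
```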
In more detail:

HBase uses the MemStore and StoreFiles to store updates to a table.

Data is first written to the Log (WAL) and to memory (MemStore). The data in a MemStore is sorted. **When the MemStore accumulates to a certain threshold, a new MemStore is created**, and the old MemStore is added to a flush queue, to be flushed to disk by a separate thread and become a StoreFile. At the same time, the system records a redo point in Zookeeper, indicating that changes before this time have been persisted.

When the system fails, data in memory (the MemStore) may be lost; in that case the Log (WAL) is used to recover the data written after the checkpoint.

**A StoreFile is read-only and can no longer be modified once created. Updates in HBase are therefore effectively continuous append operations.** When the StoreFiles in a Store exceed a certain threshold, they are merged (minor_compact, major_compact): changes to the same key are combined into one large StoreFile. When that StoreFile exceeds a certain threshold, it is split into two StoreFiles.

Because table updates are continuously appended, a compaction has to access all the StoreFiles and the MemStore of a Store and merge them by row key. Since the StoreFiles and the MemStore are both sorted, and StoreFiles carry in-memory indexes, the merge is still relatively fast.

## HRegion management

### HRegion assignment

At any time, **an HRegion can be assigned to only one HRegion Server**. The HMaster records which HRegion Servers are currently available, which HRegions are assigned to which HRegion Servers, and which HRegions are still unassigned. When a new HRegion needs to be assigned and an HRegion Server has free capacity, the HMaster sends a load request to that HRegion Server, assigning the HRegion to it. Once the HRegion Server receives the request, it starts serving the HRegion.

### HRegion Server online

**The HMaster uses Zookeeper to track the state of HRegion Servers.** When an HRegion Server starts, it first creates its own znode under the server directory on Zookeeper. Because the HMaster subscribes to change messages on the server directory, Zookeeper notifies it in real time whenever files are added or removed under that directory. So as soon as an HRegion Server comes online, the HMaster learns of it immediately.

### HRegion Server offline

When an HRegion Server goes offline, its session with Zookeeper is broken, and Zookeeper automatically releases the exclusive lock on the file representing that server. The HMaster can then be sure that one of the following holds:

1. The network between the HRegion Server and Zookeeper is disconnected.
2. The HRegion Server is down.

In either case, the HRegion Server can no longer serve its HRegions, so the HMaster deletes the znode representing this HRegion Server from the server directory and assigns that server's HRegions to other live nodes.

## HMaster working mechanism

### Master online

The master starts with the following steps:

1. **Acquire the lock on Zookeeper that uniquely represents the active master**, to prevent other HMasters from becoming the master.
2. Scan the server parent node on Zookeeper to obtain the list of currently available HRegion Servers.
3. Communicate with each HRegion Server to obtain the current mapping between HRegions and HRegion Servers.
4. Scan the set of .META. regions, compute the currently unassigned HRegions, and add them to the list of HRegions awaiting assignment.

### Master offline

Because **the HMaster only maintains metadata about tables and regions** and does not participate in the IO of table data, an HMaster going offline only freezes all metadata changes (tables cannot be created or deleted, table schemas cannot be modified, HRegion load balancing cannot run, HRegion online/offline transitions cannot be handled, HRegions cannot be merged; the only exception is that HRegion splits proceed normally, because only the HRegion Server participates in them). **Reads and writes of table data continue normally.** Therefore, **a short HMaster outage has no impact on the HBase cluster as a whole.**

As the online process shows, all the information the HMaster holds is redundant (it can be collected or computed from other parts of the system).

For this reason, an HBase cluster generally keeps one HMaster in service, with one or more standby HMasters waiting to take its place.

## Three important HBase mechanisms

### 1. Flush mechanism

1. (**hbase.regionserver.global.memstore.size**) default: 40% of heap size.
   The total MemStore size per RegionServer; exceeding it triggers flushes to disk. The default is 40% of the heap size, and a regionserver-level flush blocks client reads and writes.

2. (**hbase.hregion.memstore.flush.size**) default: 128 MB.
   The MemStore cache size of a single region; beyond it, the whole HRegion is flushed.

3. (**hbase.regionserver.optionalcacheflushinterval**) default: 1 h.
   The maximum time data can live in memory before being flushed automatically.

4. (**hbase.regionserver.global.memstore.size.lower.limit**) default: heap size * 0.4 * 0.95.
   Sometimes the cluster's write load is very high and the write volume consistently exceeds what flushes can drain; in that case we want the MemStore not to exceed a certain safety limit. Write operations are then blocked until the MemStore returns to a "manageable" size, by default heap size * 0.4 * 0.95. In other words, after a regionserver-level flush has been triggered, client writes are blocked until the regionserver-level MemStore size drops back to heap size * 0.4 * 0.95.

5. (**hbase.hregion.preclose.flush.size**) default: 5 MB.
   When a region's MemStore is larger than this value and a close of the region is triggered, a "pre-flush" runs first to drain the MemStore of the region to be closed, and then the region is taken offline. Once a region is offline, it can no longer accept writes. If a MemStore is very large, the flush takes a long time; the "pre-flush" empties the MemStore before the region goes offline, so the final flush during the close operation is fast.

6. (**hbase.hstore.compactionThreshold**) default: more than 3.
   The number of HFiles allowed in a single store before a rewrite is triggered. That is, each time a region's MemStore for a column family is flushed to an HFile, if there are more than 3 HFiles (by default), these files are merged and rewritten into one new file. The larger this number, the less often a merge is triggered, but the longer each merge takes.
### 2. Compact mechanism

Merges the small StoreFiles into large HFiles, cleans up expired data (including deleted data), and trims the saved versions of the data down to one.

### 3. Split mechanism

When an HRegion reaches its threshold, the oversized HRegion is split in two. By default, the split is triggered when an HFile reaches 10 GB.
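Flush, compaction, and split all run automatically, but each can also be requested by hand through the Admin API. A sketch, again using the hypothetical `student` table; note that compaction and split are asynchronous requests that the servers carry out in the background:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;

class Maintenance {
    static void maintain(Connection conn) throws IOException {
        TableName tn = TableName.valueOf("student"); // hypothetical table
        try (Admin admin = conn.getAdmin()) {
            admin.flush(tn);        // flush MemStores to StoreFiles
            admin.majorCompact(tn); // request a major compaction (async)
            admin.split(tn);        // request region splits (async)
        }
    }
}
```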
Copyright notice
This article was created by [InfoQ]. Please keep the original link when reposting. Thanks.
https://javamana.com/2021/01/20210114193151933W.html
