Do you know the ten details that improve ConcurrentHashMap's concurrency performance?

Java back end technology 2021-04-16 18:07:59



Some digressions

Improving system throughput under high concurrency is a goal for every back-end developer. Doug Lea, the author of Java's concurrency libraries, gave some reference answers in the design of Java 7's ConcurrentHashMap. This article summarizes in detail ten points in the ConcurrentHashMap source code that affect concurrency performance. Some are common techniques such as spin locks and the use of CAS; others are less common, such as delayed writes to main memory and deliberately weakened volatile semantics. I hope it helps your development and design.
Because there is a lot to cover in ConcurrentHashMap, and the Java 7 and Java 8 versions differ substantially, the comparative style of the previous article would inevitably miss details in the limited space. I have therefore decided to cover the two versions of ConcurrentHashMap in two separate articles. To help readers build a systematic understanding, the three articles (including the HashMap one) all follow the same structure.

Read the book thin

The author of the Alibaba Java Development Manual considers ConcurrentHashMap a great piece of design. He said: "The ConcurrentHashMap source code is excellent material for learning Java coding standards. I suggest students read it often; there is always something new to discover." You have surely heard plenty of praise for the design of ConcurrentHashMap. Before unfolding all the little secrets hidden inside ConcurrentHashMap, you first need this picture in your head:
(figure: the overall structure of Java 7's ConcurrentHashMap — a Segment array, where each Segment guards its own bucket array of HashEntry linked lists)
For Java 7, this picture can already explain ConcurrentHashMap completely:
  1. ConcurrentHashMap is a thread-safe Map implementation. Its reads need no locking, and by introducing Segment, the lock granularity for writes is kept small.
  2. Because of Segment, ConcurrentHashMap has to hash twice for both reads and writes, but these two hashes buy finer-grained locks, which means it can support higher concurrency.
  3. Within each bucket array, key-value pairs are still stored in the buckets as linked lists, consistent with HashMap.
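To make the Segment idea concrete, here is a minimal lock-striping sketch. It is my own illustration, not JDK code — StripedCounter, STRIPES, and the methods are made-up names. Writers whose keys hash to different stripes never contend with each other, which is exactly why finer lock granularity buys higher concurrency.

```java
import java.util.concurrent.locks.ReentrantLock;

// Minimal lock striping: one lock per stripe instead of one global lock.
public class StripedCounter {
    private static final int STRIPES = 16; // like the default concurrencyLevel
    private final ReentrantLock[] locks = new ReentrantLock[STRIPES];
    private final long[] counts = new long[STRIPES];

    public StripedCounter() {
        for (int i = 0; i < STRIPES; i++) locks[i] = new ReentrantLock();
    }

    // Writers only contend when their keys hash to the same stripe.
    public void increment(Object key) {
        int stripe = (key.hashCode() & 0x7fffffff) % STRIPES;
        locks[stripe].lock();
        try {
            counts[stripe]++;
        } finally {
            locks[stripe].unlock();
        }
    }

    public long total() {
        long sum = 0;
        for (long c : counts) sum += c;
        return sum;
    }

    public static void main(String[] args) {
        StripedCounter c = new StripedCounter();
        for (int i = 0; i < 100; i++) c.increment("key" + i);
        System.out.println(c.total()); // 100
    }
}
```

The Segment array in ConcurrentHashMap is the same idea, except each stripe also owns its data (a bucket array) instead of just a lock.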

Read the book thick

The overall structure of Java 7's ConcurrentHashMap can be summed up in the two or three sentences above, and this picture should settle into your mind quickly. Next, let's use a few questions to tempt you to keep going and read the book thick:
  1. Which operations on ConcurrentHashMap need locking?
  2. How does ConcurrentHashMap implement lock-free reads?
  3. What is the challenge of calling size() in a multi-threaded scenario to get the size of a ConcurrentHashMap, and how does ConcurrentHashMap solve it?
  4. With Segment present, how does capacity expansion work?
In the previous article we summarized the four most important points of HashMap: initialization, data addressing (the hash method), storing data (the put method), and expansion (the resize method). For ConcurrentHashMap these four operations are still the most important, but because it introduces a more complex data structure, calling size() to count the entries of the whole ConcurrentHashMap also becomes a real challenge. We will therefore also focus on Doug Lea's design of the size() method.

initialization

public ConcurrentHashMap(int initialCapacity, float loadFactor, int concurrencyLevel) {
    if (!(loadFactor > 0) || initialCapacity < 0 || concurrencyLevel <= 0)
        throw new IllegalArgumentException();
    if (concurrencyLevel > MAX_SEGMENTS)
        concurrencyLevel = MAX_SEGMENTS;
    // Find power-of-two sizes best matching arguments
    int sshift = 0;
    int ssize = 1;
    // ensure ssize is the smallest power of two that is >= concurrencyLevel
    while (ssize < concurrencyLevel) {
        ++sshift;
        ssize <<= 1;
    }
    // addressing needs two hashes: the high bits of the hash select the segment,
    // the low bits select the slot in the bucket array
    this.segmentShift = 32 - sshift;
    this.segmentMask = ssize - 1;
    if (initialCapacity > MAXIMUM_CAPACITY)
        initialCapacity = MAXIMUM_CAPACITY;
    int c = initialCapacity / ssize;
    if (c * ssize < initialCapacity)
        ++c;
    int cap = MIN_SEGMENT_TABLE_CAPACITY;
    while (cap < c)
        cap <<= 1;
    Segment<K,V> s0 = new Segment<K,V>(loadFactor, (int)(cap * loadFactor),
                                       (HashEntry<K,V>[])new HashEntry[cap]);
    Segment<K,V>[] ss = (Segment<K,V>[])new Segment[ssize];
    UNSAFE.putOrderedObject(ss, SBASE, s0); // ordered write of segments[0]
    this.segments = ss;
}
The initialization method does three important things:
  1. Determine the size ssize of the segments array. ssize is derived from the concurrencyLevel argument: it is the smallest power of two that is greater than or equal to concurrencyLevel.
  2. Determine the shift offset for hash addressing. This offset decides the element's position in the segments array.
  3. Initialize the first element of the segments array. Its type is an array of HashEntry, and the length of this array is initialCapacity / ssize (the initial capacity divided by the number of segments, rounded up to a power of two). The other elements of the segments array are initialized lazily during put, using this first instance as a prototype.
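To see the sizing logic in isolation, here is a standalone sketch (not JDK code; the class and method names are mine) that reproduces the constructor's arithmetic for concurrencyLevel = 10 and initialCapacity = 64:

```java
// Reproduces the constructor's power-of-two sizing math outside the JDK.
public class SegmentSizing {
    // Returns {ssize, segmentShift, segmentMask, cap} for the given arguments.
    static int[] sizing(int concurrencyLevel, int initialCapacity) {
        int sshift = 0, ssize = 1;
        // smallest power of two >= concurrencyLevel
        while (ssize < concurrencyLevel) { ++sshift; ssize <<= 1; }
        int segmentShift = 32 - sshift;   // shift to keep the top sshift bits
        int segmentMask = ssize - 1;
        int c = initialCapacity / ssize;  // per-segment capacity, rounded up
        if (c * ssize < initialCapacity) ++c;
        int cap = 2;                      // MIN_SEGMENT_TABLE_CAPACITY in the JDK
        while (cap < c) cap <<= 1;
        return new int[]{ssize, segmentShift, segmentMask, cap};
    }

    public static void main(String[] args) {
        int[] r = sizing(10, 64);
        System.out.println(r[0]); // 16: smallest power of two >= 10
        System.out.println(r[1]); // 28: the top 4 bits of the hash pick the segment
        System.out.println(r[2]); // 15
        System.out.println(r[3]); // 4: each segment starts with 4 buckets
    }
}
```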
static final class HashEntry<K,V> {
    final int hash;
    final K key;
    volatile V value;
    volatile HashEntry<K,V> next;

    HashEntry(int hash, K key, V value, HashEntry<K,V> next) {
        this.hash = hash;
        this.key = key;
        this.value = value;
        this.next = next;
    }

    final void setNext(HashEntry<K,V> n) {
        UNSAFE.putOrderedObject(this, nextOffset, n);
    }
}
The HashEntry here plays the same role as the entry in HashMap: it is ConcurrentHashMap's data item. There are two details worth noting:
Detail one:
The member variables value and next of HashEntry are modified with the volatile keyword, which means all threads promptly see other threads' changes to these two variables, so the latest values of these two references can be read without locking.
Detail two:
HashEntry's setNext method calls UNSAFE.putOrderedObject. This interface belongs to the JDK-internal sun.misc library and is not part of J2SE. Its effect is the opposite of volatile: setting a value through this API delays the write of the volatile-modified variable to main memory. So when is it written to main memory? There is a rule in the JMM:
Before performing an unlock operation on a variable, the variable must first be synchronized back to main memory (the store and write operations must be performed).
We will look at the usage of setNext in detail when discussing put below.
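putOrderedObject itself is internal to the JDK, but the public AtomicReference.lazySet exposes, to my understanding, the same ordered-write idea: the store is not immediately flushed with a full barrier, and visibility to other threads is only guaranteed by a subsequent release action such as an unlock. A tiny sketch:

```java
import java.util.concurrent.atomic.AtomicReference;

// lazySet is the public analogue of Unsafe.putOrderedObject: an "ordered"
// store without the expensive store-load fence of a full volatile write.
public class LazySetDemo {
    public static void main(String[] args) {
        AtomicReference<String> ref = new AtomicReference<>();
        ref.lazySet("hello"); // ordered write; other threads may see it later
        // Within the writing thread, program order still applies,
        // so a subsequent read observes the value:
        System.out.println(ref.get()); // hello
    }
}
```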

Hash

With Segment introduced, both reads via get and writes via put need to hash twice. Remember the important thing the constructor did during initialization:
  • Determine the shift offset for hash addressing. This offset decides the element's position in the segments array.
Yes, this line of code:
this.segmentShift = 32 - sshift;
The 32 here comes from an int being 32 bits long. Given segmentShift, how does ConcurrentHashMap do the first hash?
public V put(K key, V value) {
    Segment<K,V> s;
    if (value == null)
        throw new NullPointerException();
    int hash = hash(key);
    // the variable j is the index of the data item in the segments array
    int j = (hash >>> segmentShift) & segmentMask;
    // if segments[j] is null, the following method initializes it
    s = ensureSegment(j);
    return s.put(key, hash, value, false);
}
We use the put method as the example. The variable j is the index of the data item in the segments array. As the figure below shows, if the size of the segments array is 2 to the power n, then hash >>> segmentShift keeps exactly the top n bits of the key's hash, and ANDing with the mask segmentMask means the high bits of the key's hash determine the position in the segments array.
(figure: the high bits of the key's hash select the segment in the segments array)
The hash method itself is similar to the one in the non-thread-safe HashMap, so I won't repeat it here.
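A quick sketch (my own, not JDK code) makes the bit manipulation tangible: with ssize = 16 and sshift = 4, hash >>> segmentShift keeps exactly the top 4 bits of the hash, and the mask merely guards the range:

```java
// Demonstrates the first hash: the high bits of the hash pick the segment.
public class SegmentIndexDemo {
    // mirrors: int j = (hash >>> segmentShift) & segmentMask;
    static int segmentIndex(int hash, int segmentShift, int segmentMask) {
        return (hash >>> segmentShift) & segmentMask;
    }

    public static void main(String[] args) {
        int ssize = 16, sshift = 4;     // 16 segments
        int segmentShift = 32 - sshift; // 28
        int segmentMask = ssize - 1;    // 0b1111
        int hash = 0xABCD1234;
        // the top 4 bits of the hash are 0xA = 10, so segment 10 is selected
        System.out.println(segmentIndex(hash, segmentShift, segmentMask)); // 10
    }
}
```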
Detail three:
When lazily initializing an element of the Segment array, the author uses CAS to avoid locking, and CAS guarantees that the initialization is completed by exactly one thread. Before finally resorting to CAS, two extra checks are performed: the first avoids re-initializing the tab array, and the second avoids re-initializing the Segment object. Every line of this code was carefully weighed by the author.
private Segment<K,V> ensureSegment(int k) {
    final Segment<K,V>[] ss = this.segments;
    long u = (k << SSHIFT) + SBASE; // raw offset: the actual byte offset
    Segment<K,V> seg;
    if ((seg = (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u)) == null) {
        Segment<K,V> proto = ss[0]; // use segment 0 as prototype
        int cap = proto.table.length;
        float lf = proto.loadFactor;
        int threshold = (int)(cap * lf);
        HashEntry<K,V>[] tab = (HashEntry<K,V>[])new HashEntry[cap];
        if ((seg = (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u)) == null) { // recheck whether already initialized
            Segment<K,V> s = new Segment<K,V>(lf, threshold, tab);
            while ((seg = (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u)) == null) {
                if (UNSAFE.compareAndSwapObject(ss, u, null, seg = s)) // CAS ensures it is initialized only once
                    break;
            }
        }
    }
    return seg;
}
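The check-then-CAS shape of ensureSegment can be sketched with the public AtomicReferenceArray API instead of Unsafe. EnsureSlotDemo and ensureSlot are illustrative names, and the real method also builds the new segment from the prototype, which is elided here:

```java
import java.util.concurrent.atomic.AtomicReferenceArray;

// Lazy, lock-free slot initialization: volatile read, recheck, then CAS.
public class EnsureSlotDemo {
    static <T> T ensureSlot(AtomicReferenceArray<T> arr, int k, T fresh) {
        T seg = arr.get(k);                 // volatile read: already initialized?
        if (seg == null) {
            while ((seg = arr.get(k)) == null) {
                if (arr.compareAndSet(k, null, seg = fresh)) // only one thread wins
                    break;
            }
        }
        return seg;                         // the winner's value, for everyone
    }

    public static void main(String[] args) {
        AtomicReferenceArray<String> segments = new AtomicReferenceArray<>(4);
        String a = ensureSlot(segments, 1, "S1");
        String b = ensureSlot(segments, 1, "S1-late"); // already initialized
        System.out.println(a);      // S1
        System.out.println(a == b); // true: the second call reuses the first
    }
}
```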

put Method

final V put(K key, int hash, V value, boolean onlyIfAbsent) {
    HashEntry<K,V> node = tryLock() ? null : scanAndLockForPut(key, hash, value);
    V oldValue;
    try {
        HashEntry<K,V>[] tab = table;
        int index = (tab.length - 1) & hash;
        HashEntry<K,V> first = entryAt(tab, index);
        for (HashEntry<K,V> e = first;;) {
            if (e != null) {
                K k; // if a data item with the same key is found, replace it directly
                if ((k = e.key) == key || (e.hash == hash && key.equals(k))) {
                    oldValue = e.value;
                    if (!onlyIfAbsent) {
                        e.value = value;
                        ++modCount;
                    }
                    break;
                }
                e = e.next;
            }
            else {
                if (node != null)
                    // node != null means it was already created while spin-waiting;
                    // note that setNext is called, not a direct write to next
                    node.setNext(first);
                else
                    // otherwise, create a new HashEntry here
                    node = new HashEntry<K,V>(hash, key, value, first);
                int c = count + 1; // increment first
                if (c > threshold && tab.length < MAXIMUM_CAPACITY)
                    rehash(node);
                else
                    // write the new node in; note that the method called here is subtle
                    setEntryAt(tab, index, node);
                ++modCount;
                count = c;
                oldValue = null;
                break;
            }
        }
    } finally {
        unlock();
    }
    return oldValue;
}
This code is one of the most brilliant parts of the whole ConcurrentHashMap design. In these forty-odd lines, Doug Lea pulls off several tricks like a skilled magician, each one revealing a deep understanding of concurrency. Let's slowly unpack the magic Doug Lea uses in this code:
Detail four:
CPU time is scheduled fairly, but a thread that is suspended because it cannot acquire a lock loses efficiency, and the rescheduling and context switch after the suspension are another big expense. If other threads hold the lock only briefly, spinning is the better choice. There is a trade-off, though: if another thread holds the lock for a long time, suspending and blocking is better after all. Let's see how ConcurrentHashMap handles it:
private HashEntry<K,V> scanAndLockForPut(K key, int hash, V value) {
    HashEntry<K,V> first = entryForHash(this, hash);
    HashEntry<K,V> e = first;
    HashEntry<K,V> node = null;
    int retries = -1; // negative while locating node
    while (!tryLock()) { // spin-wait
        HashEntry<K,V> f; // to recheck first below
        if (retries < 0) {
            if (e == null) { // no k-v item has been written into this bucket yet
                if (node == null) // speculatively create node
                    node = new HashEntry<K,V>(hash, key, value, null);
                retries = 0;
            }
            // the keys are equal: stop scanning and keep trying for the lock
            else if (key.equals(e.key))
                retries = 0;
            else // traverse the linked list
                e = e.next;
        }
        else if (++retries > MAX_SCAN_RETRIES) {
            // after too many retries the thread has to be suspended and wait
            lock();
            break;
        }
        else if ((retries & 1) == 0 && (f = entryForHash(this, hash)) != first) {
            // if the head node changed, reset the retry count and keep spinning
            e = first = f;
            retries = -1;
        }
    }
    return node;
}
ConcurrentHashMap's strategy is to spin MAX_SCAN_RETRIES times; if the lock still has not been acquired, it calls lock() to suspend, block, and wait. And of course, if another thread changes the head of the linked list with a head insertion, the spin-wait count is reset.
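The spin-then-block strategy, stripped of the list-scanning part, looks roughly like this sketch (SpinThenBlock and acquire are my names; MAX_SCAN_RETRIES is the JDK's, where it is 64 on multi-core machines):

```java
import java.util.concurrent.locks.ReentrantLock;

// Spin a bounded number of times, then fall back to a blocking lock().
public class SpinThenBlock {
    static final int MAX_SCAN_RETRIES = 64; // the JDK's value on multi-core CPUs

    static int acquire(ReentrantLock lock) {
        int retries = 0;
        while (!lock.tryLock()) {       // spin: cheap attempts, no parking
            if (++retries > MAX_SCAN_RETRIES) {
                lock.lock();            // give up spinning; park until free
                break;
            }
        }
        return retries;                 // how many spins it took
    }

    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock();
        int retries = acquire(lock);    // uncontended: first tryLock succeeds
        System.out.println(retries);                      // 0
        System.out.println(lock.isHeldByCurrentThread()); // true
        lock.unlock();
    }
}
```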
Detail five:
If we want to improve a system's concurrency from a coding perspective, one golden rule is to shrink the concurrent critical section. In the design of scanAndLockForPut there is a small detail that brightened my eyes: a HashEntry is created during the spin, so the thread no longer needs to create it after acquiring the lock. The lock-holding time shrinks accordingly, and performance improves.
Detail six:
At the beginning of the put method there is this small line of code:
HashEntry<K,V>[] tab = table;
It looks like a trivial temporary-variable assignment, but there is more to it. Look at the declaration of table:
transient volatile HashEntry<K,V>[] table;
The table field is modified with the volatile keyword, and the CPU behaves as follows when handling volatile-modified variables:
Sniffing: each processor sniffs the data propagated on the bus to check whether its own cached value has gone stale. When a processor finds that the memory address backing one of its cache lines has been modified, it marks that cache line invalid; the next time it accesses that data it re-reads it from system memory into the processor cache.
Reading and writing such a variable directly therefore costs more than an ordinary variable, and the purpose of assigning table to an ordinary local variable at the start of put is to eliminate that volatile overhead. This raises a question: does it change the semantics of table, so that other threads can no longer read the latest value? Don't worry, read on.
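Here is the pattern of detail six in isolation (the class and field names are illustrative, not from the JDK): pay the volatile read once, then operate on a plain local reference.

```java
// Cache a volatile field in a local variable to avoid repeated volatile reads.
public class VolatileCacheDemo {
    volatile int[] table = new int[8];

    int sumAll() {
        int[] tab = table;          // exactly one volatile read
        int sum = 0;
        for (int v : tab) sum += v; // plain reads through the local reference
        return sum;
    }

    public static void main(String[] args) {
        VolatileCacheDemo d = new VolatileCacheDemo();
        d.table[3] = 5;
        System.out.println(d.sumAll()); // 5
    }
}
```

Note that the elements of the array were never volatile anyway; what the local avoids is re-reading the volatile table reference on every access.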
Detail seven:
Note this method used inside put: entryAt():
static final <K,V> HashEntry<K,V> entryAt(HashEntry<K,V>[] tab, int i) {
    return (tab == null) ? null :
        (HashEntry<K,V>)UNSAFE.getObjectVolatile(tab, ((long)i << TSHIFT) + TBASE);
}
Under the hood this method calls UNSAFE.getObjectVolatile, whose purpose is to read an ordinary variable with the semantics of a volatile read, obtaining the latest value. As analyzed just above, because the variable tab is now an ordinary local variable, directly reading tab[i] might not yield the latest head node. A careful reader may wonder: has Doug Lea gone in a circle back to the starting point? Why not just use the volatile variable directly instead of all this trouble? Read on.
Detail eight:
In the implementation of put, if no data item with an equal key exists, the new data item is inserted at the head of the list and written into the array. The method called is:
static final <K,V> void setEntryAt(HashEntry<K,V>[] tab, int i, HashEntry<K,V> e) {
    UNSAFE.putOrderedObject(tab, ((long)i << TSHIFT) + TBASE, e);
}
Data written through the putOrderedObject interface is not immediately visible to other threads; it becomes visible before put calls unlock at the end, per the JMM rule quoted above:
Before performing an unlock operation on a variable, the variable must first be synchronized back to main memory (the store and write operations must be performed).
This brings two benefits. The first is performance: no main-memory synchronization is needed inside the lock-held critical section, so the lock is held for less time. The second is data consistency: before the finally block of the put operation runs, the newly put data is not exposed to other threads. This is a key reason ConcurrentHashMap can implement lock-free reads.
To briefly sum up the three most important details of the put method so far: first, the volatile variable is copied into an ordinary variable for performance; then, because put needs the latest data, UNSAFE.getObjectVolatile is called to fetch the latest head node; finally, UNSAFE.putOrderedObject delays the write to main memory until the end of put, which both shrinks the critical section for performance and guarantees that other threads only read complete data.
Detail nine:
If put really does need to insert the data item at the head of the list, pay attention: the corresponding line in ConcurrentHashMap is:
node.setNext(first);
Look at the concrete implementation of setNext:
final void setNext(HashEntry<K,V> n) {
    UNSAFE.putOrderedObject(this, nextOffset, n);
}
Because the next field is modified with the volatile keyword, this call to UNSAFE.putOrderedObject deliberately weakens the volatile semantics. There are two considerations. The first is again performance: this implementation performs noticeably better, as analyzed in detail above. The second is semantic consistency: the put method, which calls UNSAFE.getObjectVolatile, can still obtain the latest data; as for the get method, we do not want other threads' get calls to observe incomplete data before the put method finishes. That is quite reasonable.

rehash: capacity expansion

private void rehash(HashEntry<K,V> node) {
    HashEntry<K,V>[] oldTable = table;
    int oldCapacity = oldTable.length;
    int newCapacity = oldCapacity << 1;
    threshold = (int)(newCapacity * loadFactor);
    HashEntry<K,V>[] newTable = (HashEntry<K,V>[])new HashEntry[newCapacity];
    int sizeMask = newCapacity - 1;
    for (int i = 0; i < oldCapacity; i++) {
        HashEntry<K,V> e = oldTable[i];
        if (e != null) {
            HashEntry<K,V> next = e.next;
            int idx = e.hash & sizeMask;
            if (next == null) // single node on list: handle it simply
                newTable[idx] = e;
            else {
                HashEntry<K,V> lastRun = e;
                int lastIdx = idx;
                // find the tail run of nodes that all map to the same new bucket
                for (HashEntry<K,V> last = next;
                        last != null;
                        last = last.next) {
                    int k = last.hash & sizeMask;
                    if (k != lastIdx) {
                        lastIdx = k;
                        lastRun = last;
                    }
                }
                newTable[lastIdx] = lastRun;
                // Clone remaining nodes: copy the non-reusable nodes before the
                // marked run and add them to their matching hash buckets
                for (HashEntry<K,V> p = e; p != lastRun; p = p.next) {
                    V v = p.value;
                    int h = p.hash;
                    int k = h & sizeMask;
                    HashEntry<K,V> n = newTable[k];
                    newTable[k] = new HashEntry<K,V>(h, p.key, v, n);
                }
            }
        }
    }
    int nodeIndex = node.hash & sizeMask; // add the new node: this belongs to the
                                          // put flow, inserting it at the list head
    node.setNext(newTable[nodeIndex]);
    newTable[nodeIndex] = node;
    table = newTable;
}
If you read our previous analysis of HashMap's rehash process, this code will come easier. As we analyzed in the last article, when the length of the bucket array is a power of two, the elements of one bucket before expansion are distributed across only two buckets afterwards: one keeps the old index (call it the old bucket), and the other's index is the old index plus the old capacity (call it the new bucket). The first for loop finds the last run of data items in a linked list that all land in the same new bucket and moves that run over as a whole, so those nodes need not be cloned. The second for loop is simpler: it clones the remaining data items into their corresponding new buckets. When all of this is done, newTable is assigned to table.
There is no locking inside the rehash method. That does not mean calling it needs no lock: the author placed the lock in the outer layer (rehash is called from put while the segment lock is held). Keep this in mind.
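The lastRun trick can be reproduced on a toy singly linked list (LastRunDemo, Node, and findLastRun are my names, not JDK code): find the start of the longest tail run whose nodes all map to one new bucket, so that the whole run can be reused without cloning.

```java
// Finds the reusable tail run of a bucket's linked list during a rehash.
public class LastRunDemo {
    static class Node {
        final int hash; Node next;
        Node(int hash, Node next) { this.hash = hash; this.next = next; }
    }

    // Returns the first node of the tail run that shares one target bucket.
    static Node findLastRun(Node head, int sizeMask) {
        Node lastRun = head;
        int lastIdx = head.hash & sizeMask;
        for (Node p = head.next; p != null; p = p.next) {
            int k = p.hash & sizeMask;
            if (k != lastIdx) { lastIdx = k; lastRun = p; }
        }
        return lastRun;
    }

    public static void main(String[] args) {
        // hashes 1, 2, 6, 6, 6 with sizeMask 7: the tail run 6 -> 6 -> 6 is reusable
        Node n5 = new Node(6, null), n4 = new Node(6, n5), n3 = new Node(6, n4);
        Node n2 = new Node(2, n3), head = new Node(1, n2);
        Node run = findLastRun(head, 7);
        System.out.println(run == n3); // true: only the nodes before n3 need cloning
    }
}
```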

size Method

When analyzing HashMap we did not discuss the size method, because in a single-threaded environment it can be solved with a single global variable. The same approach could be used in a multi-threaded scenario, but reading a global variable safely under multiple threads would drag us into endless "locking", which is exactly what we want to avoid. So how does ConcurrentHashMap solve the problem?
public int size() {
    final Segment<K,V>[] segments = this.segments;
    int size;
    boolean overflow; // true if size overflows 32 bits
    long sum;         // sum of modCounts
    long last = 0L;   // previous sum
    int retries = -1; // first iteration isn't retry
    try {
        for (;;) {
            if (retries++ == RETRIES_BEFORE_LOCK) {
                for (int j = 0; j < segments.length; ++j)
                    ensureSegment(j).lock(); // force creation
            }
            sum = 0L;
            size = 0;
            overflow = false;
            for (int j = 0; j < segments.length; ++j) {
                Segment<K,V> seg = segmentAt(segments, j);
                if (seg != null) {
                    sum += seg.modCount;
                    int c = seg.count;
                    if (c < 0 || (size += c) < 0)
                        overflow = true;
                }
            }
            if (sum == last)
                break;
            last = sum;
        }
    } finally {
        if (retries > RETRIES_BEFORE_LOCK) {
            for (int j = 0; j < segments.length; ++j)
                segmentAt(segments, j).unlock();
        }
    }
    return overflow ? Integer.MAX_VALUE : size;
}
When introducing the put method earlier, we deliberately ignored a small member variable, modCount. It becomes very useful here: its job is to record the number of write operations on each Segment, because write operations affect the size of the whole ConcurrentHashMap.
Because reading the size of the ConcurrentHashMap requires the latest values, the traversal calls UNSAFE.getObjectVolatile. Although that performs worse than reading an ordinary variable, it is still far better than taking a global lock.
static final <K,V> Segment<K,V> segmentAt(Segment<K,V>[] ss, int j) {
    long u = (j << SSHIFT) + SBASE; // compute the actual byte offset
    return ss == null ? null : (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u);
}
Detail ten:
In the design of the size method, ConcurrentHashMap first tries a lock-free approach. If two consecutive traversals of the segments array show that no write occurred anywhere in the ConcurrentHashMap in between, it returns the sum of the per-segment counts; otherwise it traverses again. If writes are frequent, it has to lock. The lock here amounts to a global lock, because every element of the segments array gets locked. How does it judge whether writes to the ConcurrentHashMap are frequent? By the number of lock-free retries: once they exceed the threshold RETRIES_BEFORE_LOCK, it switches to globally locked processing.
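Here is a sketch of this optimistic double-read (OptimisticSize and Seg are illustrative names; the value 2 stands in for RETRIES_BEFORE_LOCK, and the locked fallback is elided as a -1 return): the result is trusted only when two consecutive passes observe the same modCount sum.

```java
// Optimistic, lock-free size: retry until two passes see the same modCount sum.
public class OptimisticSize {
    static class Seg { int count; int modCount; }

    static int size(Seg[] segments) {
        long last = 0L;
        int retries = -1;
        for (;;) {
            if (++retries > 2) return -1; // real code would lock all segments here
            long sum = 0L;
            int size = 0;
            for (Seg s : segments) { sum += s.modCount; size += s.count; }
            if (sum == last) return size; // no writes between the two passes
            last = sum;
        }
    }

    public static void main(String[] args) {
        Seg a = new Seg(); a.count = 3; a.modCount = 3;
        Seg b = new Seg(); b.count = 2; b.modCount = 5;
        System.out.println(size(new Seg[]{a, b})); // 5: stable across two passes
    }
}
```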

Summary

Having read through ConcurrentHashMap, let's try to answer the questions raised at the beginning of the article:
  1. Which operations on ConcurrentHashMap need locking?
    Answer: only write operations need locking; read operations do not.
  2. How does ConcurrentHashMap implement lock-free reads?
    Answer: first, value and next in HashEntry are both volatile-modified; second, write operations use the UNSAFE library to delay synchronization with main memory, which keeps the data other threads can see consistent.
  3. What is the challenge of calling size() in a multi-threaded scenario to get the size of a ConcurrentHashMap, and how does ConcurrentHashMap solve it?
    Answer: size() has global semantics, and reading a globally consistent value without a global lock is a big challenge. ConcurrentHashMap decides whether a lock-free size() result is trustworthy by checking whether any write happened between two lock-free reads; if writes are frequent, it degrades to a globally locked read.
  4. With Segment present, how does capacity expansion work?
    Answer: the size of the segments array is fixed at initialization time; what expands is mainly the HashEntry array. The basic idea is consistent with Hashtable's, but rehash itself is not a thread-safe method and must be called with the lock held.
Copyright notice
This article was created by [Java back end technology]. Please include the original link when reposting. Thanks.
https://javamana.com/2021/04/20210416162222548x.html
