Note: this article is based on JDK 1.8.

Why use ConcurrentHashMap?

Before diving in, let's ask ourselves: if we didn't have ConcurrentHashMap, what other containers could we choose, and what are their shortcomings?

A hash table lets us look up a value by key in O(1) time; the containers that satisfy this need are Hashtable and HashMap.

Hashtable

Hashtable uses the synchronized keyword to guarantee safety in a multithreaded environment, but the locking is exclusive: every access to the Hashtable competes for the same lock, so performance is poor.

public synchronized V put(K key, V value) {
    // ...
}

HashMap

When the JDK 1.8 HashMap reads a hash slot, it reads the object referenced in the thread's working memory, so in a multithreaded environment values modified by other threads may not be seen in time.

This can lead to problems such as the following:

For example, during an insert, HashMap first checks from the key's hash whether the target slot is occupied and, if not, writes the value. Under concurrency, thread A may determine that the slot is empty and lose its time slice just before writing; thread B then makes the same check and writes its value first; when A is rescheduled it finishes its write and overwrites B's value.

Another case occurs within the same hash slot: HashMap keeps keys unique, so on insert an existing key's value is overwritten. Under concurrency, if thread A walks to the end of the chain without finding a duplicate key and performs the insert, while thread B does the same thing between A's check and A's insert, one write again overwrites the other and data is lost.

There are other concurrency issues of this kind; I won't enumerate them all here, and interested readers can look them up. A small demo of the lost-update problem follows.
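The lost-update problem is easy to reproduce. Below is a minimal sketch (the class name, thread count, and key ranges are made up for illustration): several threads put disjoint keys into a plain HashMap, and the final size is usually smaller than the number of puts because some writes overwrite each other.

import java.util.HashMap;
import java.util.Map;

public class HashMapRaceDemo {
    public static void main(String[] args) throws InterruptedException {
        Map<Integer, Integer> map = new HashMap<>();   // not thread safe
        int threads = 8, perThread = 10_000;
        Thread[] workers = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            final int offset = t * perThread;
            workers[t] = new Thread(() -> {
                // each thread inserts a disjoint range of keys
                for (int i = 0; i < perThread; i++) {
                    map.put(offset + i, i);
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) {
            w.join();
        }
        // Expected 80000 entries; a typical run prints a smaller number
        // because concurrent puts overwrite each other's work.
        System.out.println("size = " + map.size());
    }
}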

Structural characteristics of ConcurrentHashMap

Before Java 8

Before Java 8, the underlying implementation used Segment + HashEntry.

It uses the idea of segment locking: the bottom layer is a Segment array, and Segment extends ReentrantLock to provide the locking. Each operation that needs a lock locks only one segment, so every segment is independently thread safe (a conceptual sketch follows below).
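To illustrate the segment-locking idea only (this sketch is not the JDK 1.7 source; the class name and the fixed segment count of 16 are assumptions), each key is hashed to one of several locks, so writers on different segments do not block each other:

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;

// Conceptual sketch of segment (striped) locking, not the real Segment/HashEntry code.
class StripedLockMap<K, V> {
    private static final int SEGMENTS = 16;            // like the default concurrency level
    private final ReentrantLock[] locks = new ReentrantLock[SEGMENTS];
    private final Map<K, V>[] tables;

    @SuppressWarnings("unchecked")
    StripedLockMap() {
        tables = new Map[SEGMENTS];
        for (int i = 0; i < SEGMENTS; i++) {
            locks[i] = new ReentrantLock();
            tables[i] = new HashMap<>();
        }
    }

    private int segmentFor(Object key) {
        return (key.hashCode() & 0x7fffffff) % SEGMENTS;   // pick a segment by hash
    }

    public V put(K key, V value) {
        int s = segmentFor(key);
        locks[s].lock();                                    // only this one segment is locked
        try {
            return tables[s].put(key, value);
        } finally {
            locks[s].unlock();
        }
    }
}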

[Diagram omitted. Image source: "Full analysis of HashMap and ConcurrentHashMap in Java 7/8".]

Java 8 and later

Since JDK 1.8, concurrency safety is ensured with CAS + synchronized.

The structure is a Node array plus linked lists plus red-black trees, similar to HashMap.

[Diagram omitted. Image source: "Full analysis of HashMap and ConcurrentHashMap in Java 7/8".]

Basic constants

Just skim through these; don't worry about the details. Some of the fields exist only for compatibility with pre-Java 8 versions.

    /* ---------------- Constants -------------- */

    // Maximum allowed table capacity
    private static final int MAXIMUM_CAPACITY = 1 << 30;
    // Default initial capacity: 16, must be a power of 2
    private static final int DEFAULT_CAPACITY = 16;
    // Largest array size that the toArray-related methods may need
    static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;
    // Kept only for compatibility with pre-Java 8 segmentation; unused
    private static final int DEFAULT_CONCURRENCY_LEVEL = 16;
    // Load factor
    private static final float LOAD_FACTOR = 0.75f;
    // Treeify threshold: a bin with more than 8 nodes is converted to a red-black tree
    static final int TREEIFY_THRESHOLD = 8;
    // Untreeify threshold: during transfer the lc/hc counters count the TreeNodes going into
    // the low/high bins; a count <= 6 triggers untreeify(lo)
    static final int UNTREEIFY_THRESHOLD = 6;
    // Minimum table capacity for treeification: in treeifyBin, if the table is smaller than 64,
    // the map is resized instead of treeifying the bin
    static final int MIN_TREEIFY_CAPACITY = 64;
    // Minimum number of bins each thread migrates per resize step
    private static final int MIN_TRANSFER_STRIDE = 16;
    // Number of bits in sizeCtl used for the resize stamp
    private static int RESIZE_STAMP_BITS = 16;
    // Maximum number of threads that can help resize: 2^16 - 1
    private static final int MAX_RESIZERS = (1 << (32 - RESIZE_STAMP_BITS)) - 1;
    // Bit shift (32 - 16 = 16) used to record the size stamp in sizeCtl
    private static final int RESIZE_STAMP_SHIFT = 32 - RESIZE_STAMP_BITS;
    // Hash value of forwarding nodes
    static final int MOVED = -1;
    // Hash value of tree-bin root nodes
    static final int TREEBIN = -2;
    // Hash value of ReservationNode
    static final int RESERVED = -3;
    // Usable bits of an ordinary node's hash
    static final int HASH_BITS = 0x7fffffff;
    // Number of available processors
    static final int NCPU = Runtime.getRuntime().availableProcessors();

Important member variables

    /* ---------------- Fields -------------- */

    // The underlying Node array we keep talking about. Lazily created on the first insert;
    // its size must be a power of 2
    transient volatile Node<K,V>[] table;

    // The next table to use during a resize; non-null only while resizing
    private transient volatile Node<K,V>[] nextTable;

    // Base counter value, updated via CAS
    private transient volatile long baseCount;

    /**
     * Table initialization and resizing control.
     * Negative: the table is being initialized or resized:
     *   -1 means initialization is in progress;
     *   -N means N - 1 threads are resizing.
     * Otherwise 0 by default; after initialization it holds the threshold for the next resize.
     */
    private transient volatile int sizeCtl;

    // The next table index to split while resizing
    private transient volatile int transferIndex;

    // Spinlock (locked via CAS) used when resizing and/or creating CounterCells
    private transient volatile int cellsBusy;

    // Table of counter cells. When non-null, size is a power of 2
    private transient volatile CounterCell[] counterCells;

    // Views
    private transient KeySetView<K,V> keySet;
    private transient ValuesView<K,V> values;
    private transient EntrySetView<K,V> entrySet;

Constructors

As with HashMap, the table array is initialized only when the first element is inserted.

    /**
     * Creates a new, empty map with the default size of 16.
     */
    public ConcurrentHashMap() {
    }

    /**
     * Specify the initial capacity.
     */
    public ConcurrentHashMap(int initialCapacity) {
        if (initialCapacity < 0)
            throw new IllegalArgumentException();
        int cap = ((initialCapacity >= (MAXIMUM_CAPACITY >>> 1)) ?
                   MAXIMUM_CAPACITY :
                   // Covered in the HashMap article: returns the smallest power of 2 >= the argument
                   // https://blog.csdn.net/Sky_QiaoBa_Sum/article/details/104095675#tableSizeFor_105
                   tableSizeFor(initialCapacity + (initialCapacity >>> 1) + 1));
        // sizeCtl is initialized to the capacity
        this.sizeCtl = cap;
    }

    /**
     * Creates a map from an existing Map.
     */
    public ConcurrentHashMap(Map<? extends K, ? extends V> m) {
        this.sizeCtl = DEFAULT_CAPACITY;
        putAll(m);
    }

    /**
     * Specify the initial capacity and load factor.
     * @since 1.6
     */
    public ConcurrentHashMap(int initialCapacity, float loadFactor) {
        this(initialCapacity, loadFactor, 1);
    }

    /**
     * The most complete form: capacity, load factor, and concurrency level.
     */
    public ConcurrentHashMap(int initialCapacity,
                             float loadFactor, int concurrencyLevel) {
        if (!(loadFactor > 0.0f) || initialCapacity < 0 || concurrencyLevel <= 0)
            throw new IllegalArgumentException();
        if (initialCapacity < concurrencyLevel)   // Use at least as many bins
            initialCapacity = concurrencyLevel;   // as estimated threads
        long size = (long)(1.0 + (long)initialCapacity / loadFactor);
        int cap = (size >= (long)MAXIMUM_CAPACITY) ?
            MAXIMUM_CAPACITY : tableSizeFor((int)size);
        this.sizeCtl = cap;
    }

The tableSizeFor method used here was analyzed in the HashMap article: https://blog.csdn.net/Sky_QiaoBa_Sum/article/details/104095675#tableSizeFor_105

tableSizeFor

As the comment indicates, this method returns the smallest power of 2 that is greater than or equal to the argument (passing in 1 returns 1). How exactly is that achieved? Let's look at the source:

 static final int tableSizeFor(int cap) {
int n = cap - 1;
n |= n >>> 1;
n |= n >>> 2;
n |= n >>> 4;
n |= n >>> 8;
n |= n >>> 16;
return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
}

Honestly, when I first saw this implementation I could only sigh: whoever wrote it is good at math. After plugging numbers in and going through quite a few articles and videos on this part, here is a summary built on simple examples.

  • First, think about what we want: the smallest power of 2 not less than n just needs a single 1 one position above n's highest set bit, with 0s everywhere behind it. For example, 0101 is 5, and 1000, which is 8, is exactly what we need.

  • Now pass in a larger number; for ease of writing, take an 8-bit value as an example. Each of the successive right shifts and ORs copies the highest 1 bit downwards until every bit below it is also 1; adding 1 at the end then yields the next power of 2.

  • The first step, int n = cap - 1, guards against cap already being a power of 2. Without it, an exact power of 2 would be doubled by the operations: 8 would become 16. Subtracting 1 up front keeps the result correct.

  • The final n < 0 check handles the case where the capacity passed in is 0.

  • The n >= MAXIMUM_CAPACITY check handles the case where all bits end up 1 after the shifts and ORs (see the example after this list).
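To make the behaviour concrete, here is a standalone copy of the method with a few sample inputs (the demo class is mine; the printed results are what the bit tricks above produce):

public class TableSizeForDemo {
    static final int MAXIMUM_CAPACITY = 1 << 30;

    // Same logic as ConcurrentHashMap.tableSizeFor
    static int tableSizeFor(int cap) {
        int n = cap - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    public static void main(String[] args) {
        // prints: 1 -> 1, 8 -> 8, 9 -> 16, 17 -> 32, 100 -> 128
        for (int cap : new int[]{1, 8, 9, 17, 100}) {
            System.out.println(cap + " -> " + tableSizeFor(cap));
        }
    }
}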

The put method: storing a value

putVal

 public V put(K key, V value) {
return putVal(key, value, false);
}

/** Implementation for put and putIfAbsent */
final V putVal(K key, V value, boolean onlyIfAbsent) {
if (key == null || value == null) throw new NullPointerException();
// Hash the key: (h ^ (h >>> 16)) & HASH_BITS
int hash = spread(key.hashCode());
// binCount records the length of the bin (chain)
int binCount = 0;
for (Node<K,V>[] tab = table;;) {
Node<K,V> f; int n, i, fh;
// On the first put, initialize the table here
if (tab == null || (n = tab.length) == 0)
tab = initTable();
// Compute the array index from the hash and read the head node f of that slot;
// check whether f is null, i.e. whether the slot is occupied
else if ((f = tabAt(tab, i = (n - 1) & hash)) == null) {
// If it is empty, try to CAS the new node in; on success, break out of the for loop.
// If the CAS fails, another thread raced us, so go around the loop again
if (casTabAt(tab, i, null,
new Node<K,V>(hash, key, value, null)))
break; // no lock when adding to empty bin
}
// hash is a field of node f; MOVED means this slot is being migrated
else if ((fh = f.hash) == MOVED)
// Help with the migration (internally calls transfer, analyzed later)
tab = helpTransfer(tab, f);
else {
// Reaching this branch means the slot for this hash already holds a Node, so traverse the bin
V oldVal = null;
// Lock the head node
synchronized (f) {
// Re-check after locking, in case tab was modified by another thread
if (tabAt(tab, i) == f) {
if (fh >= 0) { // head node hash >= 0: an ordinary linked-list bin
binCount = 1; // record the chain length
// Traverse the linked list
for (Node<K,V> e = f;; ++binCount) {
K ek;
// Found an equal key
if (e.hash == hash &&
((ek = e.key) == key ||
(ek != null && key.equals(ek)))) {
oldVal = e.val;
// If onlyIfAbsent is false (the default), overwrite the old value
if (!onlyIfAbsent)
e.val = value;
// and we are done
break;
}
Node<K,V> pred = e;
// Reached the end of the list: append the new Node at the tail
if ((e = e.next) == null) {
pred.next = new Node<K,V>(hash, key,
value, null);
break;
}
}
}
else if (f instanceof TreeBin) { // a red-black tree bin
Node<K,V> p;
binCount = 2;
// Insert into the red-black tree via putTreeVal
if ((p = ((TreeBin<K,V>)f).putTreeVal(hash, key,
value)) != null) {
oldVal = p.val;
if (!onlyIfAbsent)
p.val = value;
}
}
}
}
if (binCount != 0) {
// Convert the list to a red-black tree when the node count reaches 8
if (binCount >= TREEIFY_THRESHOLD)
// treeifyBin is analyzed separately later
treeifyBin(tab, i);
if (oldVal != null)
return oldVal;
break;
}
}
}
addCount(1L, binCount);
return null;
}

If you have already read the HashMap source, the flow of ConcurrentHashMap should now feel fairly clear, and you probably have a basic picture of it. Next we analyze several key methods.

initTable

The table is lazily initialized: the first put calls initTable() to create the Node array.

    /**
     * Initializes table, using the size recorded in sizeCtl.
     */
    private final Node<K,V>[] initTable() {
        Node<K,V>[] tab; int sc;
        while ((tab = table) == null || tab.length == 0) {
            // sizeCtl < 0 means another thread is already initializing
            if ((sc = sizeCtl) < 0)
                Thread.yield(); // lost initialization race; just spin
            // Try to grab the initialization right via CAS by setting sizeCtl to -1;
            // if the CAS fails, go around the loop again
            else if (U.compareAndSwapInt(this, SIZECTL, sc, -1)) {
                try {
                    if ((tab = table) == null || tab.length == 0) {
                        // Determine the initial capacity
                        int n = (sc > 0) ? sc : DEFAULT_CAPACITY;
                        // Create an array of capacity n
                        @SuppressWarnings("unchecked")
                        Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n];
                        // Publish it through the volatile table field
                        table = tab = nt;
                        // sc = n - n/4 = 0.75 * n (12 when n is 16)
                        sc = n - (n >>> 2);
                    }
                } finally {
                    // Record the next resize threshold
                    sizeCtl = sc;
                }
                break;
            }
        }
        return tab;
    }

How is the initialization race resolved?

Through the volatile sizeCtl variable: when several threads try to initialize at the same time, each first checks whether sizeCtl is less than 0; a negative value means another thread is already initializing.

The thread that wins the initialization right has already set sizeCtl to -1 via a CAS operation, and the volatile semantics guarantee that this write is visible to the other threads.

The winner then creates an array of the appropriate capacity and sets sizeCtl to cap * loadFactor, the threshold for the next resize. A standalone sketch of this pattern follows.
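The same CAS-guarded lazy initialization can be written in isolation. A minimal sketch of the idea (an AtomicInteger stands in for sizeCtl; the class and field names are invented for illustration, not taken from the JDK):

import java.util.concurrent.atomic.AtomicInteger;

// One thread wins the CAS and initializes; the others yield until the
// volatile field becomes visible to them.
class LazyTable {
    private volatile Object[] table;                          // plays the role of ConcurrentHashMap.table
    private final AtomicInteger ctl = new AtomicInteger(0);   // plays the role of sizeCtl

    Object[] initTable(int capacity) {
        Object[] tab;
        while ((tab = table) == null) {
            if (ctl.get() < 0) {
                Thread.yield();                         // lost the race; let the winner finish
            } else if (ctl.compareAndSet(0, -1)) {      // claim the right to initialize
                try {
                    if ((tab = table) == null) {
                        tab = new Object[capacity];
                        table = tab;                    // the volatile write publishes the array
                    }
                } finally {
                    ctl.set(capacity - (capacity >>> 2));   // 0.75 * capacity, the next threshold
                }
            }
        }
        return tab;
    }
}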

treeifyBin

This part contains the logic that converts a linked list into a red-black tree; of course, some preconditions must be met first:

  1. First, the number of nodes in the bin must reach TREEIFY_THRESHOLD, which defaults to 8.
    if (binCount != 0) {
        // Convert the list to a red-black tree when the node count reaches 8
        if (binCount >= TREEIFY_THRESHOLD)
            treeifyBin(tab, i);
        if (oldVal != null)
            return oldVal;
        break;
    }
  2. There is actually a second condition: if the array length is less than MIN_TREEIFY_CAPACITY, tryPresize is called first to grow the array instead.
    private final void treeifyBin(Node<K,V>[] tab, int index) {
        Node<K,V> b; int n, sc;
        if (tab != null) {
            // If the table is shorter than MIN_TREEIFY_CAPACITY, resize first via tryPresize
            if ((n = tab.length) < MIN_TREEIFY_CAPACITY)
                tryPresize(n << 1);
            else if ((b = tabAt(tab, index)) != null && b.hash >= 0) {
                // Lock the head node of the bin
                synchronized (b) {
                    if (tabAt(tab, index) == b) {
                        // Walk the list and build a doubly linked chain of TreeNodes
                        TreeNode<K,V> hd = null, tl = null;
                        for (Node<K,V> e = b; e != null; e = e.next) {
                            TreeNode<K,V> p =
                                new TreeNode<K,V>(e.hash, e.key, e.val,
                                                  null, null);
                            if ((p.prev = tl) == null)
                                hd = p;
                            else
                                tl.next = p;
                            tl = p;
                        }
                        // Wrap the chain in a TreeBin and install it at the slot
                        setTabAt(tab, index, new TreeBin<K,V>(hd));
                    }
                }
            }
        }
    }

tryPresize

Resizing the array is usually the core of a hash table, so let's take a closer look.

 private final void tryPresize(int size) {
int c = (size >= (MAXIMUM_CAPACITY >>> 1)) ? MAXIMUM_CAPACITY :
tableSizeFor(size + (size >>> 1) + 1);// tableSizeFor(1.5*size)
int sc;
while ((sc = sizeCtl) >= 0) {
Node<K,V>[] tab = table; int n;
// Same initialization logic as in initTable, shown earlier
if (tab == null || (n = tab.length) == 0) {
n = (sc > c) ? sc : c;
if (U.compareAndSwapInt(this, SIZECTL, sc, -1)) {
try {
if (table == tab) {
@SuppressWarnings("unchecked")
Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n];
table = nt;
sc = n - (n >>> 2);
}
} finally {
sizeCtl = sc;
}
}
}
else if (c <= sc || n >= MAXIMUM_CAPACITY)
break;
else if (tab == table) {
// rs = numberOfLeadingZeros(n) | (1 << 15); for n = 16 this is 27 | 0x8000
int rs = resizeStamp(n);
// sc < 0 means some thread is already resizing
if (sc < 0) {
Node<K,V>[] nt;
if ((sc >>> RESIZE_STAMP_SHIFT) != rs || sc == rs + 1 ||
sc == rs + MAX_RESIZERS || (nt = nextTable) == null ||
transferIndex <= 0)
break;
// CAS sizeCtl up by 1; if that succeeds, join the transfer
if (U.compareAndSwapInt(this, SIZECTL, sc, sc + 1))
transfer(tab, nt);
}
// No thread is resizing yet: set sizeCtl to (rs << RESIZE_STAMP_SHIFT) + 2 and start the transfer
else if (U.compareAndSwapInt(this, SIZECTL, sc,
(rs << RESIZE_STAMP_SHIFT) + 2))
transfer(tab, null);
}
}
}

static final int resizeStamp(int n) {
// Integer.numberOfLeadingZeros(n) returns the number of leading zero bits of n;
// every time the capacity doubles, this number decreases by 1 (n = 16 gives 27)
// 1 << (RESIZE_STAMP_BITS - 1) shifts 1 left by 15: the 16th bit is 1, the low 15 bits are 0
return Integer.numberOfLeadingZeros(n) | (1 << (RESIZE_STAMP_BITS - 1));
}

For a detailed breakdown of resizeStamp(int n), see https://blog.csdn.net/tp7309/article/details/76532366.

transfer

This method performs the data migration and supports concurrent execution. The first thread to start the migration passes null for the nextTab parameter; later calls pass a non-null nextTab.

How the concurrency works: the migration is split into small tasks of stride slots each. The first thread to start the migration points transferIndex at the end of the original array; the stride slots counted backwards from there belong to the first thread, transferIndex is then moved to the new position, the next stride slots belong to the second thread, and so on. (A rough sketch of this claiming scheme follows.)
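The claiming scheme can be pictured without the migration details. A rough sketch (an AtomicInteger stands in for transferIndex, and the single-threaded loop simulates successive claims; the real code CASes the field directly):

import java.util.concurrent.atomic.AtomicInteger;

// Shows how transfer() hands out [bound, i] ranges of slots, back to front.
public class StrideClaimDemo {
    public static void main(String[] args) {
        int n = 64, stride = 16;                          // old table length and step size
        AtomicInteger transferIndex = new AtomicInteger(n);
        int nextIndex;
        while ((nextIndex = transferIndex.get()) > 0) {
            int nextBound = Math.max(nextIndex - stride, 0);
            if (transferIndex.compareAndSet(nextIndex, nextBound)) {
                // the claiming thread now owns slots [nextBound, nextIndex - 1]
                System.out.println("claimed " + nextBound + " .. " + (nextIndex - 1));
            }
        }
        // prints: claimed 48 .. 63, 32 .. 47, 16 .. 31, 0 .. 15
    }
}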

 private final void transfer(Node<K,V>[] tab, Node<K,V>[] nextTab) {
int n = tab.length, stride;
// On a multicore machine stride is (n >>> 3) / NCPU; on a single core it is the whole array length
if ((stride = (NCPU > 1) ? (n >>> 3) / NCPU : n) < MIN_TRANSFER_STRIDE)
// The minimum is 16
stride = MIN_TRANSFER_STRIDE; // subdivide range
// The first thread to initiate data migration ,nextTab Parameter passing null
if (nextTab == null) { // initiating
try {
// n<<1 Double the capacity
@SuppressWarnings("unchecked")
Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n << 1];
nextTab = nt;
} catch (Throwable ex) { // try to cope with OOME
sizeCtl = Integer.MAX_VALUE;
return;
}
// Initialize nextTable and transferIndex; migration starts from the last slot and moves backwards
nextTable = nextTab;
transferIndex = n;
}
int nextn = nextTab.length;
// Create a node with hash == MOVED to mark slots that have already been migrated
ForwardingNode<K,V> fwd = new ForwardingNode<K,V>(nextTab);
// advance == true means the current slot is done and we can claim the next position
boolean advance = true;
boolean finishing = false; // to ensure sweep before committing nextTab
for (int i = 0, bound = 0;;) {
Node<K,V> f; int fh;
// advance == true: claim the next position to process
while (advance) {
int nextIndex, nextBound;
if (--i >= bound || finishing)
advance = false;
// nextIndex takes the current value of transferIndex;
// once transferIndex <= 0, every slot of the old array has been claimed by some thread
else if ((nextIndex = transferIndex) <= 0) {
i = -1;
advance = false;
}
else if (U.compareAndSwapInt
(this, TRANSFERINDEX, nextIndex,
nextBound = (nextIndex > stride ?
nextIndex - stride : 0))) {
// bound is transferIndex - stride
bound = nextBound;
// i points at transferIndex - 1;
// the migration work proceeds from back to front
i = nextIndex - 1;
advance = false;
}
}
if (i < 0 || i >= n || i + n >= nextn) {
int sc;
// All the migration operations are done
if (finishing) {
nextTable = null;
table = nextTab; // take nextTab Assign a value to table
sizeCtl = (n << 1) - (n >>> 1); // Recalculate sizeCtl
return;
}
// sizeCtl was set to (rs << RESIZE_STAMP_SHIFT) + 2 before the migration started,
// and every thread that joins adds 1; here is the reverse: each thread that finishes
// its share subtracts 1
if (U.compareAndSwapInt(this, SIZECTL, sc = sizeCtl, sc - 1)) {
if ((sc - 2) != resizeStamp(n) << RESIZE_STAMP_SHIFT)
return;
// Reaching here means (sc - 2) == resizeStamp(n) << RESIZE_STAMP_SHIFT:
// all tasks are done, and the next iteration takes the finishing branch above
finishing = advance = true;
i = n; // recheck before commit
}
}
// The actual migration of slot i starts here
// If slot i is null, just CAS in the forwarding node (hash == MOVED)
else if ((f = tabAt(tab, i)) == null)
advance = casTabAt(tab, i, null, fwd);
// If it's already hash==MOVED , It means the location has been moved
else if ((fh = f.hash) == MOVED)
advance = true; // already processed
else {
// Lock this position separately , Handle the migration of the location
synchronized (f) {
if (tabAt(tab, i) == f) {
Node<K,V> ln, hn;
// An ordinary linked-list bin
if (fh >= 0) {
// Split the list into a "low" half and a "high" half
int runBit = fh & n;
Node<K,V> lastRun = f;
// The next loop finds lastRun: from lastRun onward all nodes go to the same half and can be reused as-is
for (Node<K,V> p = f.next; p != null; p = p.next) {
int b = p.hash & n;
if (b != runBit) {
runBit = b;
lastRun = p;
}
}
if (runBit == 0) {
ln = lastRun;
hn = null;
}
else {
hn = lastRun;
ln = null;
}
for (Node<K,V> p = f; p != lastRun; p = p.next) {
int ph = p.hash; K pk = p.key; V pv = p.val;
if ((ph & n) == 0)
ln = new Node<K,V>(ph, pk, pv, ln);
else
hn = new Node<K,V>(ph, pk, pv, hn);
}
// The low list goes to slot i of the new table
setTabAt(nextTab, i, ln);
// The high list goes to slot i + n of the new table
setTabAt(nextTab, i + n, hn);
// Mark slot i of the old table with fwd (the ForwardingNode created earlier),
// so other threads treat it as already processed
setTabAt(tab, i, fwd);
// This slot has been migrated
advance = true;
}
// Here's the migration of the red black tree
else if (f instanceof TreeBin) {
TreeBin<K,V> t = (TreeBin<K,V>)f;
TreeNode<K,V> lo = null, loTail = null;
TreeNode<K,V> hi = null, hiTail = null;
int lc = 0, hc = 0;
for (Node<K,V> e = t.first; e != null; e = e.next) {
int h = e.hash;
TreeNode<K,V> p = new TreeNode<K,V>
(h, e.key, e.val, null, null);
if ((h & n) == 0) {
if ((p.prev = loTail) == null)
lo = p;
else
loTail.next = p;
loTail = p;
++lc;
}
else {
if ((p.prev = hiTail) == null)
hi = p;
else
hiTail.next = p;
hiTail = p;
++hc;
}
}
// If a half has no more than 6 nodes, untreeify it back into a linked list
ln = (lc <= UNTREEIFY_THRESHOLD) ? untreeify(lo) :
(hc != 0) ? new TreeBin<K,V>(lo) : t;
hn = (hc <= UNTREEIFY_THRESHOLD) ? untreeify(hi) :
(lc != 0) ? new TreeBin<K,V>(hi) : t;
setTabAt(nextTab, i, ln);
setTabAt(nextTab, i + n, hn);
setTabAt(tab, i, fwd);
advance = true;
}
}
}
}
}
}

The get method: retrieving a value

get

The get method is relatively simple: compute the hash from the key, locate the corresponding slot, check whether the head node is the one we want, and if not, search the red-black tree or walk the linked list.

    public V get(Object key) {
        Node<K,V>[] tab; Node<K,V> e, p; int n, eh; K ek;
        // Compute the hash
        int h = spread(key.hashCode());
        // Locate the corresponding bucket
        if ((tab = table) != null && (n = tab.length) > 0 &&
            (e = tabAt(tab, (n - 1) & h)) != null) {
            // The head node is the one we want
            if ((eh = e.hash) == h) {
                if ((ek = e.key) == key || (ek != null && key.equals(ek)))
                    return e.val;
            }
            // Negative hash: a tree bin or forwarding node; delegate to its find()
            else if (eh < 0)
                return (p = e.find(h, key)) != null ? p.val : null;
            // Otherwise walk the linked list
            while ((e = e.next) != null) {
                if (e.hash == h &&
                    ((ek = e.key) == key || (ek != null && key.equals(ek))))
                    return e.val;
            }
        }
        return null;
    }

As you can see, the get method takes no lock; because the Node fields (val and next) are volatile, every read sees the latest value.

To repeat:

volatile guarantees the visibility of a variable across threads: multiple threads can read it at the same time without seeing a stale value, because under the Java Memory Model's happens-before rules a write to a volatile field happens-before any subsequent read of it. Even if one thread modifies the volatile variable while another reads it, get still observes the latest value.

But volatile alone only supports a single writer (multiple writers are fine only when the written value does not depend on the previous value), while reads never need a lock. A minimal sketch of this pattern follows.
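A minimal sketch of that single-writer / many-reader pattern (the class is illustrative, not from the JDK):

// One writer publishes through a volatile field; readers always see the latest
// published value without any lock. This is safe only because the written value
// does not depend on what the readers might have observed.
class VolatilePublish {
    private volatile String latest = "initial";

    void publish(String value) {   // single writer thread
        latest = value;            // the volatile write happens-before later reads
    }

    String read() {                // any number of reader threads, no lock needed
        return latest;
    }
}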

Summary

  • The implementation of ConcurrentHashMap changed substantially between JDK 1.7 and JDK 1.8:

    • JDK 1.7 used Segment + HashEntry and the idea of segment locking.
    • JDK 1.8 drops segment locking and uses Node + CAS + synchronized for concurrency, with an array + linked list + red-black tree structure.
  • A put operation makes the following decisions:
    • If the table is not initialized, initialize it first (lazy initialization); the default capacity is 16, and sizeCtl is set as well.
    • If the slot in the tab array for this hash holds no node, CAS the new node into that position; on success, exit the loop.
    • If the head node of the slot is in the migrating state (f.hash == MOVED), first help finish the migration.
    • On a hash collision, lock the head node with synchronized, determine whether the bin is a linked list or a red-black tree, find the right position, and insert or overwrite the value.
    • If the number of nodes in a bin exceeds the treeify threshold of 8 and the array has reached the treeify capacity of 64, the bin is converted to a tree.
  • When a bucket exceeds 8 nodes but the array has not yet reached 64, the table is first doubled (n * 2) via tryPresize.
  • Resizing involves the transfer data migration, which supports concurrency: the work is split into stride-sized subtasks handed out through the transferIndex and nextBound pointers.
  • A get needs no lock: compute the hash from the key, locate the slot, check whether the head node is the value we want, and otherwise search the red-black tree or the linked list. (A short usage sketch follows.)
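To close, a small usage sketch of the thread-safe behaviour summarized above (keys, values, and thread counts are arbitrary):

import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentHashMapUsage {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                // merge is atomic per key, so concurrent increments are never lost
                counts.merge("hits", 1, Integer::sum);
            }
        };
        Thread a = new Thread(task), b = new Thread(task);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(counts.get("hits"));   // always 20000
    }
}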

