Java Concurrency Source Code Learning Series: JDK 1.8 ConcurrentHashMap Source Code Analysis

Tianqiao Basha 2021-01-23 18:31:09
Java concurrency source code learning


Series navigation:

Note: this article is based on JDK 1.8.

Why use ConcurrentHashMap?

Before diving in, it is worth asking: if ConcurrentHashMap did not exist, what other containers could we choose, and what are their shortcomings?

A hash table lets us look up a value by key in O(1) time, and the containers that meet this need are Hashtable and HashMap.

Hashtable

Hashtable uses the synchronized keyword to guarantee thread safety, but the lock is exclusive: every access to the Hashtable must compete for the same lock, so performance is poor.

    public synchronized V put(K key, V value) {
        // ...
    }

HashMap

In JDK 1.8, when HashMap reads a hash slot it reads the object referenced from the thread's working memory, so in a multithreaded environment modifications made by other threads may not become visible in time.

This can lead to problems such as the following:

For example, during an insert, the thread first checks whether the slot computed from the key's hash is occupied, and inserts the value if it is not. Under concurrency, thread A may determine that the slot is empty and then lose its time slice just before writing; thread B then performs the same check and inserts its value first; when A is rescheduled it finishes its write and overwrites B's value.

Another case involves duplicate keys within the same hash slot: HashMap keeps keys unique, so on insert an existing key's value is overwritten. Under concurrency, if thread A walks to the end of the chain without finding a duplicate key and then inserts, while thread B does the same thing between A's check and A's insert, the same kind of overwrite happens and data is lost.
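
To make the lost-update risk concrete, here is a small demo of my own (the class name, thread count, and key counts are arbitrary choices, not from the original article). With distinct keys inserted from several threads, a plain HashMap usually ends up smaller than expected because concurrent bin writes overwrite each other, while ConcurrentHashMap keeps every entry. Unsynchronized HashMap use is formally undefined behavior, so exact results can vary from run to run.

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;

public class LostUpdateDemo {
    // 8 threads each insert 10_000 distinct keys into the given map.
    static Map<String, Integer> fill(Map<String, Integer> map) throws InterruptedException {
        int threads = 8, perThread = 10_000;
        CountDownLatch done = new CountDownLatch(threads);
        for (int t = 0; t < threads; t++) {
            final int id = t;
            new Thread(() -> {
                for (int i = 0; i < perThread; i++) {
                    map.put(id + "-" + i, i);   // distinct keys: no overwrite should ever happen
                }
                done.countDown();
            }).start();
        }
        done.await();
        return map;
    }

    public static void main(String[] args) throws InterruptedException {
        // Expected size is 80_000. ConcurrentHashMap reports exactly that;
        // HashMap frequently reports less because racing threads clobber each other's bin writes
        // (and, being unsynchronized, its behavior here is formally undefined).
        System.out.println("ConcurrentHashMap size: " + fill(new ConcurrentHashMap<>()).size());
        System.out.println("HashMap size:           " + fill(new HashMap<>()).size());
    }
}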

There are other concurrency problems along these lines that I won't go into here; interested readers can look them up in the reference material.

Structural characteristics of ConcurrentHashMap

Before Java 8

Before Java 8, the underlying structure was implemented with Segment + HashEntry.

It used the idea of segment locking: the bottom layer is a Segment array, and Segment extends ReentrantLock to do the locking. Each operation that needs a lock only locks one Segment, so segmentation guarantees that every segment is thread safe on its own; a conceptual sketch follows the image note below.

(Image source: "Full analysis of HashMap and ConcurrentHashMap in Java 7/8")
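
The segment idea can be sketched outside the JDK as plain lock striping: the key's hash selects one of a fixed number of locks, so writes that land on different stripes never block each other. The class below is only my illustrative sketch of the concept (the names StripedMap, STRIPES, etc. are made up); it is not the actual JDK 7 Segment code.

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative lock-striping sketch of the JDK 7 "segment" idea (not the real Segment code).
class StripedMap<K, V> {
    private static final int STRIPES = 16;                    // plays the role of the concurrency level
    private final ReentrantLock[] locks = new ReentrantLock[STRIPES];
    private final Map<K, V>[] segments;

    @SuppressWarnings("unchecked")
    StripedMap() {
        segments = new Map[STRIPES];
        for (int i = 0; i < STRIPES; i++) {
            locks[i] = new ReentrantLock();
            segments[i] = new HashMap<>();
        }
    }

    private int stripe(Object key) {
        return (key.hashCode() & 0x7fffffff) % STRIPES;        // the hash picks a segment
    }

    public V put(K key, V value) {
        int s = stripe(key);
        locks[s].lock();                                       // only this one segment is locked
        try {
            return segments[s].put(key, value);
        } finally {
            locks[s].unlock();
        }
    }

    public V get(K key) {
        int s = stripe(key);
        locks[s].lock();
        try {
            return segments[s].get(key);
        } finally {
            locks[s].unlock();
        }
    }
}

Two puts whose keys hash to different stripes can proceed in parallel, which is exactly the advantage segmentation gave over Hashtable's single lock.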

Java 8 and later

Since JDK 1.8, CAS + synchronized is used to guarantee concurrency safety.

The structure is a Node array plus linked lists plus red-black trees, similar to HashMap.

(Image source: "Full analysis of HashMap and ConcurrentHashMap in Java 7/8")

Basic constants

Just skim these; don't worry about them too much. Some fields only exist for compatibility with versions before Java 8.

    /* ---------------- Constants -------------- */
    // Maximum allowed capacity
    private static final int MAXIMUM_CAPACITY = 1 << 30;
    // Default initial capacity: 16; must be a power of 2
    private static final int DEFAULT_CAPACITY = 16;
    // Largest array size that toArray-related methods may need
    static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;
    // Kept only for compatibility with the pre-Java-8 segmented design; unused
    private static final int DEFAULT_CONCURRENCY_LEVEL = 16;
    // Load factor
    private static final float LOAD_FACTOR = 0.75f;
    // Treeify threshold: a bin is converted from a linked list to a red-black tree
    // when an element is added to a bin that already holds at least 8 nodes
    static final int TREEIFY_THRESHOLD = 8;
    // Untreeify threshold: during transfer, the lc/hc counters count the TreeNodes going into the
    // low/high bins; a count <= UNTREEIFY_THRESHOLD (6) converts that bin back to a list via untreeify()
    static final int UNTREEIFY_THRESHOLD = 6;
    // Minimum table capacity for treeification: if the table is smaller than 64 when treeifyBin
    // is called, the table is resized instead
    static final int MIN_TREEIFY_CAPACITY = 64;
    // Minimum number of bins each resizing thread handles per step
    private static final int MIN_TRANSFER_STRIDE = 16;
    // Number of bits used for the generation stamp in sizeCtl
    private static int RESIZE_STAMP_BITS = 16;
    // 2^16 - 1: maximum number of threads that can help resize
    private static final int MAX_RESIZERS = (1 << (32 - RESIZE_STAMP_BITS)) - 1;
    // 32 - 16 = 16: bit shift for the resize stamp recorded in sizeCtl
    private static final int RESIZE_STAMP_SHIFT = 32 - RESIZE_STAMP_BITS;
    // hash value of forwarding nodes
    static final int MOVED = -1;
    // hash value of tree bin roots (TreeBin)
    static final int TREEBIN = -2;
    // hash value of ReservationNode
    static final int RESERVED = -3;
    // usable bits of a normal node's hash
    static final int HASH_BITS = 0x7fffffff;
    // number of available processors
    static final int NCPU = Runtime.getRuntime().availableProcessors();

Important member variables

    /* ---------------- Fields -------------- */
    // The underlying Node array. Lazily initialized on the first insert; its size is always a power of 2.
    transient volatile Node<K,V>[] table;
    /**
     * The next table to use during a resize; non-null only while resizing.
     */
    private transient volatile Node<K,V>[] nextTable;
    /**
     * Base counter value, updated via CAS.
     */
    private transient volatile long baseCount;
    /**
     * Table initialization and resizing control.
     * Negative values mean the table is being initialized or resized:
     *   -1 means initialization is in progress;
     *   otherwise it is -(1 + the number of active resizing threads).
     * Defaults to 0; after initialization it holds the element count at which the next resize happens.
     */
    private transient volatile int sizeCtl;
    /**
     * The next table index (plus one) to split while resizing.
     */
    private transient volatile int transferIndex;
    /**
     * Spinlock (locked via CAS) used when resizing and/or creating CounterCells.
     */
    private transient volatile int cellsBusy;
    /**
     * Table of counter cells. When non-null, size is a power of 2.
     */
    private transient volatile CounterCell[] counterCells;
    // Views
    private transient KeySetView<K,V> keySet;
    private transient ValuesView<K,V> values;
    private transient EntrySetView<K,V> entrySet;

Constructors

Like HashMap, the table array is only initialized when the first element is inserted.

    /**
     * Creates a new, empty map with the default initial capacity (16).
     */
    public ConcurrentHashMap() {
    }

    /**
     * Specifies the initial capacity.
     */
    public ConcurrentHashMap(int initialCapacity) {
        if (initialCapacity < 0)
            throw new IllegalArgumentException();
        int cap = ((initialCapacity >= (MAXIMUM_CAPACITY >>> 1)) ?
                   MAXIMUM_CAPACITY :
                   // Covered in the HashMap article: returns the smallest power of 2 >= the argument
                   // https://blog.csdn.net/Sky_QiaoBa_Sum/article/details/104095675#tableSizeFor_105
                   tableSizeFor(initialCapacity + (initialCapacity >>> 1) + 1));
        // sizeCtl is initialized to the capacity
        this.sizeCtl = cap;
    }

    /**
     * Creates a map with the same mappings as the given map.
     */
    public ConcurrentHashMap(Map<? extends K, ? extends V> m) {
        this.sizeCtl = DEFAULT_CAPACITY;
        putAll(m);
    }

    /**
     * Specifies the initial capacity and load factor.
     * @since 1.6
     */
    public ConcurrentHashMap(int initialCapacity, float loadFactor) {
        this(initialCapacity, loadFactor, 1);
    }

    /**
     * The most complete form: capacity, load factor, and concurrency level.
     */
    public ConcurrentHashMap(int initialCapacity,
                             float loadFactor, int concurrencyLevel) {
        if (!(loadFactor > 0.0f) || initialCapacity < 0 || concurrencyLevel <= 0)
            throw new IllegalArgumentException();
        if (initialCapacity < concurrencyLevel)   // Use at least as many bins
            initialCapacity = concurrencyLevel;   // as estimated threads
        long size = (long)(1.0 + (long)initialCapacity / loadFactor);
        int cap = (size >= (long)MAXIMUM_CAPACITY) ?
            MAXIMUM_CAPACITY : tableSizeFor((int)size);
        this.sizeCtl = cap;
    }

The tableSizeFor method was already analyzed in the HashMap article: https://blog.csdn.net/Sky_QiaoBa_Sum/article/details/104095675#tableSizeFor_105

tableSizeFor

As its comment says, this method returns the smallest power of 2 greater than or equal to the argument (passing 1 returns 1). How does it actually achieve that? Let's look at the source:

    static final int tableSizeFor(int cap) {
        int n = cap - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

To be honest, when I first saw this implementation I couldn't help sighing: whoever wrote it is good at math! After plugging in some numbers and going through quite a few articles and videos on this part, let me summarize it with a simple example.

  • First, think about what we want: the smallest power of 2 greater than or equal to n is simply a 1 one position above the highest set bit, with zeros behind it. For example, 0101 is 5, and 1000 (8) is exactly what we need.

  • Now take a larger input; for ease of writing, the example uses 8-bit values.

  • The first step, int n = cap - 1, guards against cap already being a power of 2. Without it, a power-of-2 input would be doubled by the shifts: 8, for instance, would become 16. Subtracting 1 up front keeps the result correct.

  • The final n < 0 check handles an input capacity of 0.

  • The n >= MAXIMUM_CAPACITY check handles the case where, after the shifts and ORs, all bits are 1 because the input was too large.
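
To see the bit trick in action, here is a small trace of my own that reuses the same shift-and-OR sequence (the constant and method body mirror the JDK code; the class name and sample inputs are mine):

public class TableSizeForTrace {
    static final int MAXIMUM_CAPACITY = 1 << 30;

    // Same logic as ConcurrentHashMap.tableSizeFor, traced for cap = 17.
    static int tableSizeFor(int cap) {
        int n = cap - 1;                       // cap = 17 -> n = 16 = 0b10000
        n |= n >>> 1;                          // 0b11000
        n |= n >>> 2;                          // 0b11110
        n |= n >>> 4;                          // 0b11111: every bit below the highest set bit is now 1
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    public static void main(String[] args) {
        System.out.println(tableSizeFor(1));   // 1
        System.out.println(tableSizeFor(8));   // 8  (the cap - 1 step prevents doubling to 16)
        System.out.println(tableSizeFor(17));  // 32 (0b11111 + 1)
    }
}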

The put method: storing a value

putVal

    public V put(K key, V value) {
        return putVal(key, value, false);
    }

    /** Implementation for put and putIfAbsent */
    final V putVal(K key, V value, boolean onlyIfAbsent) {
        if (key == null || value == null) throw new NullPointerException();
        // Hash the key: (h ^ (h >>> 16)) & HASH_BITS
        int hash = spread(key.hashCode());
        // Records the chain length
        int binCount = 0;
        for (Node<K,V>[] tab = table;;) {
            Node<K,V> f; int n, i, fh;
            // On the first put, the table is initialized here
            if (tab == null || (n = tab.length) == 0)
                tab = initTable();
            // Locate the array index for this hash and read its first node f;
            // check whether f is null, i.e. whether the slot is empty
            else if ((f = tabAt(tab, i = (n - 1) & hash)) == null) {
                // If the slot is empty, try to CAS the new node in; on success, exit the loop.
                // A CAS failure means another thread raced us, so go around the loop again.
                if (casTabAt(tab, i, null,
                             new Node<K,V>(hash, key, value, null)))
                    break;                   // no lock when adding to empty bin
            }
            // hash is a property of the head node f; MOVED means the bin is being migrated
            else if ((fh = f.hash) == MOVED)
                // Help with the migration (internally calls transfer, analyzed later)
                tab = helpTransfer(tab, f);
            else {
                // Reaching this branch means the slot for the key's hash already holds a Node,
                // so we traverse the chain in that bin
                V oldVal = null;
                // Lock the head node
                synchronized (f) {
                    // Re-check after locking, since tab may have been modified by other threads
                    if (tabAt(tab, i) == f) {
                        if (fh >= 0) {       // a non-negative head hash means a linked list
                            binCount = 1;    // record the chain length
                            // Traverse the linked list
                            for (Node<K,V> e = f;; ++binCount) {
                                K ek;
                                // Found an existing node with the same key
                                if (e.hash == hash &&
                                    ((ek = e.key) == key ||
                                     (ek != null && key.equals(ek)))) {
                                    oldVal = e.val;
                                    // If onlyIfAbsent is false (the default for put), overwrite the value
                                    if (!onlyIfAbsent)
                                        e.val = value;
                                    // Done
                                    break;
                                }
                                Node<K,V> pred = e;
                                // Reached the end of the chain: append the new Node at the tail
                                if ((e = e.next) == null) {
                                    pred.next = new Node<K,V>(hash, key,
                                                              value, null);
                                    break;
                                }
                            }
                        }
                        else if (f instanceof TreeBin) { // red-black tree bin
                            Node<K,V> p;
                            binCount = 2;
                            // Insert into the red-black tree via putTreeVal
                            if ((p = ((TreeBin<K,V>)f).putTreeVal(hash, key,
                                                                  value)) != null) {
                                oldVal = p.val;
                                if (!onlyIfAbsent)
                                    p.val = value;
                            }
                        }
                    }
                }
                if (binCount != 0) {
                    // Check whether the linked list should be converted to a red-black tree
                    // (when the node count reaches 8)
                    if (binCount >= TREEIFY_THRESHOLD)
                        // Treeify; analyzed separately below
                        treeifyBin(tab, i);
                    if (oldVal != null)
                        return oldVal;
                    break;
                }
            }
        }
        addCount(1L, binCount);
        return null;
    }
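
Since the same putVal backs both put and putIfAbsent through the onlyIfAbsent flag, a quick sketch of the public API behavior (my own example) may help before we dive into the helper methods:

import java.util.concurrent.ConcurrentHashMap;

public class PutDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();

        map.put("a", 1);                       // onlyIfAbsent = false: always writes
        map.put("a", 2);                       // overwrites; put returns the previous value (1)

        map.putIfAbsent("a", 99);              // onlyIfAbsent = true: key exists, old value kept
        System.out.println(map.get("a"));      // 2

        // Null keys and values are rejected, unlike HashMap.
        try {
            map.put(null, 1);
        } catch (NullPointerException expected) {
            System.out.println("null keys are not allowed");
        }
    }
}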

If you have already read the HashMap source code, the ConcurrentHashMap flow should feel fairly familiar, and you should now have a basic picture of it. Next, let's analyze the key helper methods one by one.

initTable

The table is initialized lazily: on the first put, initTable() is called to create the Node array.

    /**
     * Initializes table, using the size recorded in sizeCtl.
     */
    private final Node<K,V>[] initTable() {
        Node<K,V>[] tab; int sc;
        while ((tab = table) == null || tab.length == 0) {
            // sizeCtl < 0 means another thread is already initializing
            if ((sc = sizeCtl) < 0)
                Thread.yield();              // lost initialization race; just spin
            // Otherwise try to win the race by CASing sizeCtl to -1 ("claiming" the initialization);
            // if the CAS fails, go around the loop again
            else if (U.compareAndSwapInt(this, SIZECTL, sc, -1)) {
                try {
                    if ((tab = table) == null || tab.length == 0) {
                        // Determine the initial capacity
                        int n = (sc > 0) ? sc : DEFAULT_CAPACITY;
                        // Create an array of capacity n
                        @SuppressWarnings("unchecked")
                        Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n];
                        // Assign it to the volatile table field
                        table = tab = nt;
                        // sc = n - n/4 = 0.75 * n
                        sc = n - (n >>> 2);
                    }
                } finally {
                    // Publish the threshold back into sizeCtl (12 when n is 16)
                    sizeCtl = sc;
                }
                break;
            }
        }
        return tab;
    }

How is the initialization race resolved?

Through the volatile sizeCtl variable. During the first initialization, if multiple threads initialize at the same time, each checks whether sizeCtl is less than 0; a negative value means another thread is already initializing.

The thread that wins the initialization race has already changed sizeCtl to -1 via CAS, and volatile guarantees that this change is visible to the other threads.

Next, an array of the appropriate capacity is created, and sizeCtl is set to cap * loadFactor (0.75 * n).
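
The pattern itself, where one CAS winner initializes while everyone else yields until the result is published through a volatile field, can be sketched independently of the map. The class below is my own simplified illustration built on AtomicInteger, not the JDK code, but it follows the same shape as initTable:

import java.util.concurrent.atomic.AtomicInteger;

// Simplified sketch of the initTable pattern: one thread wins the CAS and initializes,
// the others yield until the result is published through a volatile field.
class LazyTable {
    private volatile Object[] table;                            // plays the role of Node[] table
    private final AtomicInteger sizeCtl = new AtomicInteger(0); // 0 = not started, -1 = initializing

    Object[] initTable(int capacity) {
        Object[] tab;
        while ((tab = table) == null) {
            if (sizeCtl.get() < 0) {
                Thread.yield();                      // lost the race; let the winner finish
            } else if (sizeCtl.compareAndSet(0, -1)) {
                try {
                    if ((tab = table) == null) {     // re-check after winning the CAS
                        tab = new Object[capacity];
                        table = tab;                 // the volatile write publishes the array
                    }
                } finally {
                    sizeCtl.set(capacity - (capacity >>> 2)); // like sc = n - n/4 = 0.75 * n
                }
            }
        }
        return tab;
    }
}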

treeifyBin

This part contains the logic for converting a linked list into a red-black tree. Of course, some preconditions must be met:

  1. First, the number of nodes in the bin must reach TREEIFY_THRESHOLD, which defaults to 8.
    if (binCount != 0) {
        // Check whether the linked list should be converted to a red-black tree (node count >= 8)
        if (binCount >= TREEIFY_THRESHOLD)
            // Treeify
            treeifyBin(tab, i);
        if (oldVal != null)
            return oldVal;
        break;
    }
  2. There is in fact another condition: if the array length is less than MIN_TREEIFY_CAPACITY (64), tryPresize is called first to grow the array instead.
    private final void treeifyBin(Node<K,V>[] tab, int index) {
        Node<K,V> b; int n, sc;
        if (tab != null) {
            // If the array length is below MIN_TREEIFY_CAPACITY, grow the table first via tryPresize
            if ((n = tab.length) < MIN_TREEIFY_CAPACITY)
                tryPresize(n << 1);
            else if ((b = tabAt(tab, index)) != null && b.hash >= 0) {
                // Lock the head node
                synchronized (b) {
                    if (tabAt(tab, index) == b) {
                        // Traverse the linked list and build a chain of TreeNodes
                        TreeNode<K,V> hd = null, tl = null;
                        for (Node<K,V> e = b; e != null; e = e.next) {
                            TreeNode<K,V> p =
                                new TreeNode<K,V>(e.hash, e.key, e.val,
                                                  null, null);
                            if ((p.prev = tl) == null)
                                hd = p;
                            else
                                tl.next = p;
                            tl = p;
                        }
                        // Install a TreeBin built from that chain into the slot
                        setTabAt(tab, index, new TreeBin<K,V>(hd));
                    }
                }
            }
        }
    }

tryPresize

Array resizing is usually the core of a hash map, so let's look at it closely.

    private final void tryPresize(int size) {
        int c = (size >= (MAXIMUM_CAPACITY >>> 1)) ? MAXIMUM_CAPACITY :
            tableSizeFor(size + (size >>> 1) + 1);   // tableSizeFor(1.5 * size)
        int sc;
        while ((sc = sizeCtl) >= 0) {
            Node<K,V>[] tab = table; int n;
            // This is the same logic we saw in initTable
            if (tab == null || (n = tab.length) == 0) {
                n = (sc > c) ? sc : c;
                if (U.compareAndSwapInt(this, SIZECTL, sc, -1)) {
                    try {
                        if (table == tab) {
                            @SuppressWarnings("unchecked")
                            Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n];
                            table = nt;
                            sc = n - (n >>> 2);
                        }
                    } finally {
                        sizeCtl = sc;
                    }
                }
            }
            else if (c <= sc || n >= MAXIMUM_CAPACITY)
                break;
            else if (tab == table) {
                // for n = 16: 0b11011 | 0b1000_0000_0000_0000 = 0b1000_0000_0001_1011
                int rs = resizeStamp(n);
                // sc < 0 means another thread is already resizing
                if (sc < 0) {
                    Node<K,V>[] nt;
                    if ((sc >>> RESIZE_STAMP_SHIFT) != rs || sc == rs + 1 ||
                        sc == rs + MAX_RESIZERS || (nt = nextTable) == null ||
                        transferIndex <= 0)
                        break;
                    // CAS sizeCtl up by 1; on success, help with the transfer
                    if (U.compareAndSwapInt(this, SIZECTL, sc, sc + 1))
                        transfer(tab, nt);
                }
                // No thread is resizing yet: CAS sizeCtl to (rs << RESIZE_STAMP_SHIFT) + 2 and start the transfer
                else if (U.compareAndSwapInt(this, SIZECTL, sc,
                                             (rs << RESIZE_STAMP_SHIFT) + 2))
                    transfer(tab, null);
            }
        }
    }

    static final int resizeStamp(int n) {
        // Integer.numberOfLeadingZeros(n) returns the number of leading zero bits of n;
        // every time the capacity doubles, that number decreases by 1. For n = 16 it returns 27.
        // 1 << (RESIZE_STAMP_BITS - 1) sets bit 16 (1 shifted left 15 places), leaving the low 15 bits 0.
        return Integer.numberOfLeadingZeros(n) | (1 << (RESIZE_STAMP_BITS - 1));
    }

See https://blog.csdn.net/tp7309/article/details/76532366 for a detailed analysis of resizeStamp(int n).
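
A worked example (the numbers are mine) shows what the stamp looks like for n = 16 and how it combines with RESIZE_STAMP_SHIFT to build the negative sizeCtl value used during a resize:

public class ResizeStampDemo {
    static final int RESIZE_STAMP_BITS = 16;
    static final int RESIZE_STAMP_SHIFT = 32 - RESIZE_STAMP_BITS;

    // Same expression as ConcurrentHashMap.resizeStamp.
    static int resizeStamp(int n) {
        return Integer.numberOfLeadingZeros(n) | (1 << (RESIZE_STAMP_BITS - 1));
    }

    public static void main(String[] args) {
        int n = 16;
        int rs = resizeStamp(n);                        // 27 | 0x8000 = 32795
        System.out.println(Integer.toBinaryString(rs)); // 1000000000011011

        int sc = (rs << RESIZE_STAMP_SHIFT) + 2;        // the value written by the first resizing thread
        System.out.println(sc);                         // a large negative number
        System.out.println(sc >>> RESIZE_STAMP_SHIFT);  // recovers the stamp: 32795
    }
}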

transfer

This method performs the actual data migration and supports concurrent execution. The first thread to start the migration passes null for the nextTab parameter; later calls pass a non-null nextTab.

Concurrency is achieved by splitting the migration into small tasks of stride slots each. The first thread to start the migration points transferIndex at the end of the original array; the stride slots counted backwards from there form the first thread's subtask, then transferIndex is moved to the new position, the next stride slots belong to the second thread, and so on. A standalone sketch of this partitioning scheme follows, before the real code.
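
Here is that standalone sketch (class and thread names are my own): a few worker threads repeatedly claim stride-sized chunks from the tail of an imaginary 64-slot table by CASing a shared index downwards, which is the same partitioning scheme transfer uses with transferIndex:

import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of transfer's work partitioning: workers claim [bound, i] ranges
// from the end of the array by CASing a shared transferIndex downwards.
public class StridePartitionDemo {
    public static void main(String[] args) throws InterruptedException {
        int n = 64, stride = 16;
        AtomicInteger transferIndex = new AtomicInteger(n);

        Runnable worker = () -> {
            int nextIndex;
            while ((nextIndex = transferIndex.get()) > 0) {
                int nextBound = Math.max(nextIndex - stride, 0);
                if (transferIndex.compareAndSet(nextIndex, nextBound)) {
                    // This thread now owns slots [nextBound, nextIndex - 1] exclusively.
                    System.out.println(Thread.currentThread().getName()
                            + " migrates slots " + nextBound + ".." + (nextIndex - 1));
                }
            }
        };

        Thread a = new Thread(worker, "worker-A");
        Thread b = new Thread(worker, "worker-B");
        a.start(); b.start();
        a.join(); b.join();
    }
}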

    private final void transfer(Node<K,V>[] tab, Node<K,V>[] nextTab) {
        int n = tab.length, stride;
        // On multi-core machines, stride is (n >>> 3) / NCPU; on a single core it is the whole array
        if ((stride = (NCPU > 1) ? (n >>> 3) / NCPU : n) < MIN_TRANSFER_STRIDE)
            // but never less than 16
            stride = MIN_TRANSFER_STRIDE;    // subdivide range
        // The first thread to start the migration passes nextTab == null
        if (nextTab == null) {               // initiating
            try {
                // n << 1: double the capacity
                @SuppressWarnings("unchecked")
                Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n << 1];
                nextTab = nt;
            } catch (Throwable ex) {         // try to cope with OOME
                sizeCtl = Integer.MAX_VALUE;
                return;
            }
            // Assign nextTable and transferIndex; transferIndex starts at the end of the array
            nextTable = nextTab;
            transferIndex = n;
        }
        int nextn = nextTab.length;
        // A node with hash == MOVED, used to mark slots that have already been migrated
        ForwardingNode<K,V> fwd = new ForwardingNode<K,V>(nextTab);
        // advance == true means the current slot is done and we can move on to the next one
        boolean advance = true;
        boolean finishing = false;           // to ensure sweep before committing nextTab
        for (int i = 0, bound = 0;;) {
            Node<K,V> f; int fh;
            // advance is true: figure out the next slot (or claim the next range) to process
            while (advance) {
                int nextIndex, nextBound;
                if (--i >= bound || finishing)
                    advance = false;
                // nextIndex is read from transferIndex; once transferIndex <= 0,
                // every slot of the old array has been claimed by some thread
                else if ((nextIndex = transferIndex) <= 0) {
                    i = -1;
                    advance = false;
                }
                else if (U.compareAndSwapInt
                         (this, TRANSFERINDEX, nextIndex,
                          nextBound = (nextIndex > stride ?
                                       nextIndex - stride : 0))) {
                    // bound is transferIndex - stride
                    bound = nextBound;
                    // i points at transferIndex - 1;
                    // the migration proceeds from back to front
                    i = nextIndex - 1;
                    advance = false;
                }
            }
            if (i < 0 || i >= n || i + n >= nextn) {
                int sc;
                // The whole migration is finished
                if (finishing) {
                    nextTable = null;
                    table = nextTab;                 // install nextTab as the new table
                    sizeCtl = (n << 1) - (n >>> 1);  // recompute sizeCtl: 0.75 * 2n
                    return;
                }
                // sizeCtl was set to (rs << RESIZE_STAMP_SHIFT) + 2 before the resize started,
                // and every thread that joined the migration added 1 to it.
                // Subtracting 1 here is the reverse operation: this thread declares its task done.
                if (U.compareAndSwapInt(this, SIZECTL, sc = sizeCtl, sc - 1)) {
                    if ((sc - 2) != resizeStamp(n) << RESIZE_STAMP_SHIFT)
                        return;
                    // Reaching here means (sc - 2) == resizeStamp(n) << RESIZE_STAMP_SHIFT:
                    // all tasks are done, and the next iteration enters the finishing branch above
                    finishing = advance = true;
                    i = n;                           // recheck before commit
                }
            }
            // The actual per-slot migration starts here.
            // If slot i is null, just CAS the fwd node (hash == MOVED) into it
            else if ((f = tabAt(tab, i)) == null)
                advance = casTabAt(tab, i, null, fwd);
            // If the slot already has hash == MOVED, it has been migrated already
            else if ((fh = f.hash) == MOVED)
                advance = true;                      // already processed
            else {
                // Lock this slot's head node and migrate the bin
                synchronized (f) {
                    if (tabAt(tab, i) == f) {
                        Node<K,V> ln, hn;
                        // Linked-list bin
                        if (fh >= 0) {
                            // Split the list into two lists
                            int runBit = fh & n;
                            Node<K,V> lastRun = f;
                            // The next loop finds lastRun: from lastRun onwards, all nodes go to the same list
                            for (Node<K,V> p = f.next; p != null; p = p.next) {
                                int b = p.hash & n;
                                if (b != runBit) {
                                    runBit = b;
                                    lastRun = p;
                                }
                            }
                            if (runBit == 0) {
                                ln = lastRun;
                                hn = null;
                            }
                            else {
                                hn = lastRun;
                                ln = null;
                            }
                            for (Node<K,V> p = f; p != lastRun; p = p.next) {
                                int ph = p.hash; K pk = p.key; V pv = p.val;
                                if ((ph & n) == 0)
                                    ln = new Node<K,V>(ph, pk, pv, ln);
                                else
                                    hn = new Node<K,V>(ph, pk, pv, hn);
                            }
                            // One list goes into slot i of the new table
                            setTabAt(nextTab, i, ln);
                            // The other list goes into slot i + n
                            setTabAt(nextTab, i + n, hn);
                            // Set slot i of the old array to fwd to mark it as processed;
                            // fwd is the ForwardingNode created earlier, so the next time this
                            // slot is examined, advance will be set to true
                            setTabAt(tab, i, fwd);
                            // Declare this slot migrated
                            advance = true;
                        }
                        // Below is the migration of a red-black tree bin
                        else if (f instanceof TreeBin) {
                            TreeBin<K,V> t = (TreeBin<K,V>)f;
                            TreeNode<K,V> lo = null, loTail = null;
                            TreeNode<K,V> hi = null, hiTail = null;
                            int lc = 0, hc = 0;
                            for (Node<K,V> e = t.first; e != null; e = e.next) {
                                int h = e.hash;
                                TreeNode<K,V> p = new TreeNode<K,V>
                                    (h, e.key, e.val, null, null);
                                if ((h & n) == 0) {
                                    if ((p.prev = loTail) == null)
                                        lo = p;
                                    else
                                        loTail.next = p;
                                    loTail = p;
                                    ++lc;
                                }
                                else {
                                    if ((p.prev = hiTail) == null)
                                        hi = p;
                                    else
                                        hiTail.next = p;
                                    hiTail = p;
                                    ++hc;
                                }
                            }
                            // If a split tree holds 6 or fewer nodes, convert it back to a linked list via untreeify
                            ln = (lc <= UNTREEIFY_THRESHOLD) ? untreeify(lo) :
                                (hc != 0) ? new TreeBin<K,V>(lo) : t;
                            hn = (hc <= UNTREEIFY_THRESHOLD) ? untreeify(hi) :
                                (lc != 0) ? new TreeBin<K,V>(hi) : t;
                            setTabAt(nextTab, i, ln);
                            setTabAt(nextTab, i + n, hn);
                            setTabAt(tab, i, fwd);
                            advance = true;
                        }
                    }
                }
            }
        }
    }

The get method: retrieving a value

get

The get method is relatively simple: compute the hash from the key, find the corresponding slot, check whether the head node is the entry we want, and if not, search the red-black tree or the linked list.

    public V get(Object key) {
        Node<K,V>[] tab; Node<K,V> e, p; int n, eh; K ek;
        // Compute the hash
        int h = spread(key.hashCode());
        // Locate the corresponding bucket
        if ((tab = table) != null && (n = tab.length) > 0 &&
            (e = tabAt(tab, (n - 1) & h)) != null) {
            // The head node matches
            if ((eh = e.hash) == h) {
                if ((ek = e.key) == key || (ek != null && key.equals(ek)))
                    return e.val;
            }
            // Negative hash: a special node (e.g. a TreeBin or ForwardingNode); delegate to its find method
            else if (eh < 0)
                return (p = e.find(h, key)) != null ? p.val : null;
            // Otherwise walk the linked list
            while ((e = e.next) != null) {
                if (e.hash == h &&
                    ((ek = e.key) == key || (ek != null && key.equals(ek))))
                    return e.val;
            }
        }
        return null;
    }

Notice that get takes no locks: the Node's val and next fields are volatile, so each read sees the most recent value.

To repeat:

volatile guarantees visibility of a variable across threads: multiple threads can read it concurrently without seeing a stale value, because under the Java memory model's happens-before rules a write to a volatile field happens-before subsequent reads of it. Even if one thread modifies the variable while another reads it, get can still observe the latest value.

Writes, however, should come from a single thread (multiple writers are only safe when the new value does not depend on the old one), while reads need no locking at all; a small generic illustration follows.
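
As a tiny, generic illustration of that single-writer / many-reader use of volatile (my own example, unrelated to the map internals):

// Minimal single-writer / multi-reader volatile example: the reader never locks,
// and the volatile writes guarantee it eventually observes each new value.
public class VolatileVisibilityDemo {
    private static volatile int latest = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            for (int i = 1; i <= 5; i++) {
                latest = i;                    // single writer: the new value does not depend on the old one
                sleep(10);
            }
        });
        Thread reader = new Thread(() -> {
            int seen = 0;
            while (seen < 5) {
                int v = latest;                // lock-free volatile read
                if (v != seen) {
                    System.out.println("reader saw " + v);
                    seen = v;
                }
            }
        });
        reader.start(); writer.start();
        writer.join(); reader.join();
    }

    private static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException ignored) { }
    }
}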

Summary

  • The implementation of ConcurrentHashMap changed dramatically between JDK 1.7 and JDK 1.8:
    • JDK 1.7 used Segment + HashEntry and the idea of segment locking.
    • JDK 1.8 abandons segment locking and uses Node + CAS + synchronized for concurrency, with an array + linked list + red-black tree structure.
  • A put operation goes through the following checks:
    • If the table is not initialized, initialize it first (lazy initialization); the default capacity is 16, and sizeCtl is set as well.
    • If the slot in the tab array for this hash holds no node, CAS the new node into that position; on success, exit the loop.
    • If the node in the current slot is being migrated (f.hash == MOVED), help finish the migration first.
    • On a hash collision, synchronize on the first node, then check whether the bin is a linked list or a red-black tree, find the right position, and insert or overwrite the value.
    • If the chain length reaches the treeify threshold (8) and the array capacity has reached the treeify capacity (64), the bin is treeified.
  • When a bucket holds 8 or more nodes but the array size has not reached 64, the table is resized to n * 2 instead, via tryPresize.
  • Resizing involves the transfer data migration, which supports concurrency: the work is split into stride-sized small tasks, and the transferIndex and nextBound pointers hand tasks out to the threads.
  • A get operation needs no lock: compute the hash from the key, find the corresponding slot, check whether the head node is the wanted entry, and otherwise search the red-black tree or the linked list.
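
As a practical wrap-up (my own example, not from the original post), ConcurrentHashMap's atomic compound operations are usually what you want instead of hand-rolled check-then-act sequences:

import java.util.concurrent.ConcurrentHashMap;

public class WordCountDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();
        String[] words = {"map", "cas", "map", "tree", "map"};

        for (String w : words) {
            // merge is atomic per key: no lost updates even if many threads call it concurrently.
            counts.merge(w, 1, Integer::sum);
        }
        System.out.println(counts);             // e.g. {cas=1, tree=1, map=3} (iteration order is unspecified)

        // computeIfAbsent atomically creates the value only when the key is missing.
        counts.computeIfAbsent("lazy", k -> 0);
        System.out.println(counts.get("lazy")); // 0
    }
}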


Copyright notice
This article was written by Tianqiao Basha. Please include the original link when reposting. Thank you.
https://javamana.com/2021/01/20210123182902517E.html
