Today we continue with Java high concurrency and multithreading: locks.
The last three posts covered mostly conceptual, somewhat fragmented basics; like a liberal-arts subject, you mostly just memorize them.
But this topic is the real cornerstone of high concurrency. It takes real understanding and practice; the pieces interlock one by one. It is not hard, but it needs to be read carefully.
Okay, let's start.
-------------- Part 1: the two keywords Java uses to guarantee ordering between threads --------------
synchronized is one of the most common ways to solve concurrency problems in Java, and also the simplest.
synchronized guarantees atomicity, visibility, and ordering for the code block it protects.
- Visibility, atomicity, and ordering in the Java Memory Model
Visibility means that when one thread modifies shared state, the change is visible to other threads.
Atomicity means an operation is indivisible, like the atom, once thought to be the smallest unit in the world.
The Java language provides two keywords, volatile and synchronized, to guarantee ordering of operations between threads.
synchronized can use any non-null object as the "lock".
There are three ways to use synchronized:
- On an instance method, the monitor lock (monitor) is the object instance (this);
- On a static method, the monitor lock (monitor) is the Class object of that class (whose metadata, before Java 8, lived in the permanent generation), so a static-method lock is effectively a global lock for the class;
- On a synchronized block, the monitor lock (monitor) is the object instance named in the parentheses.
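The three forms can be sketched as follows; the class and method names here are illustrative, not from any real library:

```java
// A sketch of the three synchronized forms described above.
public class SyncForms {
    private static int staticCount = 0;
    private int count = 0;
    private final Object lock = new Object();

    // 1. Instance method: the monitor is this object instance (this).
    public synchronized void incInstance() {
        count++;
    }

    // 2. Static method: the monitor is SyncForms.class, a class-wide lock.
    public static synchronized void incStatic() {
        staticCount++;
    }

    // 3. Synchronized block: the monitor is the object in parentheses.
    public void incBlock() {
        synchronized (lock) {
            count++;
        }
    }

    public int getCount() { return count; }
    public static int getStaticCount() { return staticCount; }
}
```

Note that incInstance() and incBlock() use different monitors here, so they would not exclude each other; pick one monitor per piece of shared state.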
In general, keep the scope of a synchronized block as small as possible, because if the critical section takes a long time, other threads must wait until the lock-holding thread finishes.
If a synchronized method throws an exception, the JVM automatically releases the lock, so this does not cause a deadlock.
When choosing a lock object, do not use String constants, primitive types, or their wrapper classes.
Declare the lock object final; otherwise, if the lock reference changes, different threads may end up synchronizing on different objects, which can cause problems.
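A minimal sketch of the final-lock guideline, with illustrative names:

```java
// Why the lock reference should be final: if the field could be
// reassigned while one thread holds the old object's monitor, another
// thread could enter the "same" critical section on the new object,
// and mutual exclusion would be silently lost.
public class LockHolder {
    // Good: the reference can never change, so every thread
    // always synchronizes on the same object.
    private final Object lock = new Object();

    private int value = 0;

    public void increment() {
        synchronized (lock) {
            value++;
        }
    }

    public int value() {
        synchronized (lock) {
            return value;
        }
    }
}
```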
The Java language also provides a weaker synchronization mechanism, the volatile variable, to ensure that updates to a variable are propagated to other threads.
A volatile variable guarantees visibility across threads: when one thread changes its value, the new value is immediately visible to other threads.
Writing a volatile-modified shared variable uses the CPU's Lock-prefixed instruction, which:
- writes the current processor's cache line back to system memory;
- invalidates the copies of that variable cached by other CPUs, so they fetch it from shared memory again next time.
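The visibility guarantee can be sketched with the classic stop-flag pattern; the class and method names are illustrative:

```java
// A minimal sketch of volatile visibility. Without volatile on `stop`,
// the reader loop may never observe the writer's update, because the
// JIT is free to hoist the read of a non-volatile field out of the loop.
public class StopFlag {
    private volatile boolean stop = false;

    public void requestStop() {
        stop = true;   // volatile write: made visible to other threads
    }

    // Spins until the flag is visible, returning how many times it looped.
    public long waitUntilStopped() {
        long spins = 0;
        while (!stop) {        // volatile read: re-read every iteration
            spins++;
            Thread.onSpinWait();
        }
        return spins;
    }
}
```

In a two-thread run, one thread calls requestStop() while another sits in waitUntilStopped(); drop the volatile and the waiting loop may spin forever.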
When a non-volatile variable is read and written, each thread copies it from main memory into the CPU cache. On a machine with multiple CPUs, each thread may run on a different CPU, which means each thread's copy may live in a different CPU cache.
Once a variable is declared volatile, both the compiler and the runtime know it is shared, so operations on it are not reordered with other memory operations.
A volatile variable is never cached in a register or anywhere else invisible to other processors; the JVM guarantees that every read sees the coherent value in memory rather than a stale local copy, so reading a volatile variable always returns the most recently written value.
Accessing a volatile variable performs no locking and therefore never blocks the executing thread, which makes volatile a lighter-weight synchronization mechanism than the synchronized keyword.
Reading a volatile variable costs about the same as reading an ordinary variable, but writes are slower, because memory barrier instructions must be inserted into the native code to keep the processor from executing out of order.
A write to a volatile variable generates a special assembly instruction that triggers the MESI protocol (the cache-coherence protocol). A bus-sniffing (snooping) mechanism constantly watches the bus for changes to the variable; when one CPU changes the value, the sniffing mechanism causes the other CPUs to invalidate their cached copies and fetch the data from main memory again.
Prefer volatile on simple (primitive) types rather than reference types: for a primitive, a volatile write is immediately readable by other threads, but for reference types such as arrays or entity beans, volatile only guarantees visibility of the reference itself, not visibility of the referenced object's contents.
To summarize, volatile:
- guarantees visibility between threads (via MESI, the CPU cache-coherence protocol);
- forbids instruction reordering (via the loadfence and storefence primitive barrier instructions).
This is why double-checked locking in the singleton pattern must declare the instance field volatile: without it, the initialization of the instance may be reordered, so a second thread can observe a reference to a not-yet-initialized instance.
In practice this only shows up under very high concurrency and is hard to reproduce; it is mentioned here mainly for readers preparing for interviews.
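The double-checked-locking singleton just described looks like this:

```java
// The classic double-checked-locking singleton. Without `volatile` on
// `instance`, the write inside the constructor could be reordered with
// the publication of the reference, so another thread could see a
// non-null but partially constructed object.
public class Singleton {
    private static volatile Singleton instance;

    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {                  // first check, no lock
            synchronized (Singleton.class) {
                if (instance == null) {          // second check, under lock
                    instance = new Singleton();  // volatile write forbids reordering
                }
            }
        }
        return instance;
    }
}
```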
- synchronized vs. volatile
volatile essentially tells the JVM that the value of the variable in the register (working memory) may be stale and must be re-read from main memory;
synchronized locks the current variable: only the thread holding the lock can access it, and other threads are blocked.
volatile can only be used at the variable level;
synchronized can be used at the block, method, and class level.
volatile only provides visibility of modifications and does not guarantee atomicity, while synchronized guarantees both visibility and atomicity.
volatile never causes thread blocking; synchronized may cause thread blocking.
Variables marked volatile are not optimized (reordered) by the compiler; variables guarded by synchronized can be optimized by the compiler.
-------------- Part 2: the various kinds of locks that fill up every blog --------------
【 Pessimistic locks vs. optimistic locks 】
- Pessimistic locks
Always assume the worst: every time you touch the data, assume someone else will modify it, so lock it every time you access it.
Anyone else who wants the data then blocks until they can acquire the lock (the shared resource is used by only one thread at a time; other threads block, and the resource is handed over after use).
Traditional relational databases use this kind of locking heavily: row locks, table locks, read locks, write locks, all acquired before the operation.
In Java, exclusive locks such as synchronized and ReentrantLock are implementations of the pessimistic idea.
- Optimistic locks
Always assume the best: every time you read the data, assume no one else will modify it, so take no lock; only when updating do you check whether anyone else changed the data in the meantime. This can be implemented with a version-number mechanism or with the CAS algorithm.
Optimistic locking suits read-heavy applications and can increase throughput; the write_condition mechanism that some databases provide is essentially an optimistic lock.
In Java, the atomic variable classes under the java.util.concurrent.atomic package are optimistic-lock implementations built on CAS.
Optimistic locking fits read-mostly, write-rarely workloads, because it saves a lot of locking overhead; under heavy writes it can cause frequent retries and actually reduce performance.
Its weakness is the ABA problem: after two modifications (A to B and back to A), the value really has been modified, but CAS cannot tell. Also consider references: the pointer may be unchanged while the referenced object has changed.
The ABA problem mainly bites with objects; for primitives it usually does not matter, and it can be solved by adding a version (stamp).
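The version-stamp fix is exactly what the JDK's AtomicStampedReference provides; a minimal sketch:

```java
import java.util.concurrent.atomic.AtomicStampedReference;

// Defeating ABA with a version stamp: even though the value returns to
// "A", the stamp has advanced, so a CAS that remembered the old stamp fails.
public class AbaDemo {
    public static boolean staleCasFails() {
        AtomicStampedReference<String> ref =
                new AtomicStampedReference<>("A", 0);

        int staleStamp = ref.getStamp();          // observer remembers (A, stamp 0)

        // Meanwhile another "thread" does A -> B -> A, bumping the stamp.
        ref.compareAndSet("A", "B", 0, 1);
        ref.compareAndSet("B", "A", 1, 2);

        // A plain CAS on the value alone would succeed here; the stamped
        // CAS fails because the stamp is no longer 0.
        return !ref.compareAndSet("A", "C", staleStamp, staleStamp + 1);
    }
}
```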
- What is CAS?
Compare And Swap: compare, then exchange.
CAS is a lock-free algorithm with three operands: the memory value V, the expected old value A, and the new value B.
If and only if the expected value A equals the memory value V, the memory value V is updated to B; otherwise nothing happens.
The implementation of CAS relies on a primitive instruction supported by the CPU (it cannot be interrupted).
If the values are not equal, the shared data has been modified by someone else, so you discard what you have done and re-execute the operation.
When synchronization conflicts are rare, this assumption yields a large performance gain.
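The "discard and retry" loop can be sketched on top of AtomicInteger, whose compareAndSet compiles down to the CPU's CAS primitive; the class name here is illustrative:

```java
import java.util.concurrent.atomic.AtomicInteger;

// A minimal sketch of the CAS retry loop described above.
public class CasCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    public int increment() {
        int oldValue;
        int newValue;
        do {
            oldValue = value.get();        // read the memory value V
            newValue = oldValue + 1;       // compute the new value B
            // If V still equals the expected A (= oldValue), swap in B;
            // otherwise someone raced us, so loop and try again.
        } while (!value.compareAndSet(oldValue, newValue));
        return newValue;
    }

    public int get() { return value.get(); }
}
```

(In real code you would just call AtomicInteger.incrementAndGet(), which does exactly this internally.)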
A quick word about the class behind all this.
Java cannot access the underlying operating system directly; it goes through native methods. Even so, the JVM keeps one back door: the Unsafe class, which provides hardware-level atomic operations.
All of the CAS operations in the JDK are implemented through Unsafe.
CAS manipulates the JVM's memory directly: given a field's offset, it can locate the variable's value in memory and modify it in place. That is why CAS can be regarded as atomic.
【 Spin locks 】
Blocking and waking a thread requires the CPU to switch from user mode to kernel mode; doing this frequently is heavy work for the CPU and puts a lot of pressure on the system's concurrent performance.
At the same time, in many applications a lock is held only for a very short period, and it is not worth blocking and waking threads for such brief waits.
So when a thread tries to acquire a lock that another thread holds, it can simply loop, repeatedly checking whether the lock has been released, instead of being suspended or put to sleep.
A spin lock occupies the CPU but never calls into the operating system (never enters kernel mode), so it stays in user mode the whole time.
Spin locks suit cases where the critical section protected by the lock is very small: a small critical section means the lock is held only briefly.
The default spin count is 10 and can be adjusted with the -XX:PreBlockSpin parameter.
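The idea can be sketched as a user-mode spin lock built on CAS; this is an illustration of the concept, not production code (no fairness, and it burns CPU while waiting):

```java
import java.util.concurrent.atomic.AtomicReference;

// A minimal CAS-based spin lock: acquiring means atomically swinging the
// owner reference from null to the current thread, spinning until it works.
public class SpinLock {
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread current = Thread.currentThread();
        // Busy-wait (spin) in user mode; no kernel transition ever happens.
        while (!owner.compareAndSet(null, current)) {
            Thread.onSpinWait();   // hint to the CPU that we are spinning (Java 9+)
        }
    }

    public void unlock() {
        Thread current = Thread.currentThread();
        // Only the owner may release the lock.
        owner.compareAndSet(current, null);
    }

    public boolean isLocked() {
        return owner.get() != null;
    }
}
```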
【 Adaptive spin locks 】
If a thread's spin succeeds, the virtual machine allows it to spin longer next time: since spinning worked last time, it will probably work again, so a longer spin wait is permitted.
Conversely, if spins rarely succeed for a given lock, the spin count is reduced or the spin is skipped entirely, to avoid wasting processor resources.
-------------- Part 3: the lock optimizations and design ideas of developers and the JVM --------------
【 Lock refinement 】
Refining locks has two aspects.
First, the less data synchronized protects, the better; in some cases measures such as segment locks, hash locks, or weak-reference locks can improve synchronization efficiency.
Second, the less code synchronized covers, the better: less code means a shorter critical section and less time spent waiting for the lock.
【 Lock coarsening 】
Normally, for developers, the smaller the scope of the locked code, the better. But sometimes we need to coarsen the lock:
a series of consecutive lock and unlock operations is merged into a single lock covering a larger range,
because repeatedly locking and unlocking back to back causes unnecessary performance loss.
【 Lock elimination 】
In order to ensure the integrity of the data , During the operation, it is necessary to control this part of the operation synchronously , But in some cases ,JVM No shared data race detected , This is a JVM Meeting Lock elimination of these synchronous locks .
The basis of lock elimination is Data support for escape analysis .
Such as ：StringBuffer Of append Method .
StringBuffer.append() The lock is sb object . Virtual machine observation variables sb, It will soon be found that its dynamic scope is limited to concatString() Methods the internal .
That is to say sb All references to will never " The escape " To concatString() Out of the way , Other threads can't access it , So although there's a lock here , But it can be safely removed , After instant compilation , This code will ignore all synchronization and execute directly .
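The concatString() method referred to above looks like this:

```java
// Every append() below is a synchronized method on sb, but sb never
// escapes this method, so the JIT can eliminate all three locks.
public class ConcatDemo {
    public static String concatString(String s1, String s2, String s3) {
        StringBuffer sb = new StringBuffer(); // visible only to this thread
        sb.append(s1);                        // synchronized on sb, eliminable
        sb.append(s2);
        sb.append(s3);
        return sb.toString();
    }
}
```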
After escape analysis, objects that do not escape may even be allocated on the stack instead of on the JVM heap.
Escape analysis and lock elimination can be turned on with the parameters -XX:+DoEscapeAnalysis and -XX:+EliminateLocks (lock elimination requires -server mode), respectively.
【 Lock escalation 】
A lock has four states: lock-free, biased, lightweight, and heavyweight.
1. When there is no contention, a biased lock is used by default.
The JVM uses a CAS operation to set the thread ID in the Mark Word of the object header, indicating that the object is biased toward the current thread, so no real mutual exclusion is involved.
2. If another thread tries to lock an object that is already biased, the JVM must revoke the biased lock and switch to the lightweight-lock implementation.
A lightweight lock relies on CAS operations on the Mark Word to try to acquire the lock; if the retry succeeds, the lightweight lock is used;
3. otherwise, after spinning a certain number of times, the lock is further upgraded to a heavyweight lock.
- Biased locks
Why introduce biased locks?
Because extensive research by the HotSpot authors showed that most of the time there is no lock contention, and often the same thread acquires the same lock many times. Making it compete for the lock every time would add a lot of unnecessary cost, so biased locks were introduced to reduce the cost of acquiring a lock.
A biased lock is the mechanism used while a single thread executes the synchronized block. In a concurrent environment (thread A has not finished the synchronized block when thread B requests the lock), it must be converted to a lightweight or heavyweight lock.
Nowadays almost all locks are reentrant: a thread that has acquired the lock may lock/unlock the monitored object many times.
Under the earlier HotSpot design, every lock/unlock involved several CAS operations (for example, CAS operations on the wait queue), and CAS operations delay local calls.
So the idea of biased locking is: once a thread acquires the monitored object for the first time, the object is "biased" toward that thread, and subsequent acquisitions avoid the CAS operations. Put bluntly, a variable is set (the thread ID in the object header's Mark Word); if it matches, the whole lock/unlock procedure can be skipped.
The assumption behind this is that in many scenarios, most objects are locked by at most one thread during their lifetime, so biased locking reduces the uncontended overhead.
Biased locking is on by default, but it usually activates only a few seconds after the application starts. If you do not want that delay, use -XX:BiasedLockingStartUpDelay=0;
if you do not want biased locking at all, disable it with -XX:-UseBiasedLocking.
- Lightweight locks
For most locks there is no contention during the whole synchronization cycle, so a lightweight lock uses CAS operations to avoid the cost of a mutex.
If there is contention, then on top of the mutex cost you also pay for the CAS operations, so instead of an improvement you get a performance drop.
The lightweight-lock scenario is threads executing synchronized blocks alternately; if they access the same lock at the same time, the lightweight lock inevitably inflates into a heavyweight lock.
- Heavyweight locks
synchronized is implemented through an internal object called the monitor lock (Monitor).
A thread blocked on a heavyweight lock does not consume CPU.
But the monitor lock ultimately relies on the underlying operating system's Mutex Lock.
Having the operating system switch between threads requires a transition from user mode to kernel mode, which is very expensive; the transition between the two modes takes a relatively long time, and this is why synchronized is considered inefficient.
Hence a lock implemented via the operating system's Mutex Lock is called a "heavyweight lock".
Note: locks can only be upgraded, never downgraded.
To summarize the trade-offs of the three lock states:
- Biased lock: locking and unlocking need no extra cost, almost as fast as executing the method unsynchronized; but lock contention between threads adds the extra cost of bias revocation. Suited to the scenario where only one thread ever accesses the synchronized block.
- Lightweight lock: competing threads do not block, which improves the program's response speed; but a thread that keeps failing to get the lock consumes CPU by spinning. Suited when you pursue response time.
- Heavyweight lock: contending threads do not consume CPU; but threads block, and response time is slow. Suited when you pursue throughput.
-------------- Part 4: some other lock concepts we have not covered yet, to round out the topic --------------
【 Other lock concepts 】
- Segment locks
A segment lock (SegmentLock) is simply a fine-grained lock: one lock is split into several segments, and each thread locks and unlocks only the segment its operation touches.
This avoids meaningless waiting between threads and reduces wait time. The classic application is ConcurrentHashMap, whose internal Segment<K,V> extends ReentrantLock and divides the map into 16 segments.
Of course, since JDK 1.8, ConcurrentHashMap has switched to a CAS-based approach instead.
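The segment idea can be sketched as a striped counter, in the spirit of the old ConcurrentHashMap; the class and constant names are illustrative:

```java
// Segment (striped) locking: the counter space is split into segments,
// and each update locks only the segment its key hashes to, so threads
// touching different segments never wait for each other.
public class StripedCounter {
    private static final int SEGMENTS = 16;
    private final Object[] locks = new Object[SEGMENTS];
    private final long[] counts = new long[SEGMENTS];

    public StripedCounter() {
        for (int i = 0; i < SEGMENTS; i++) {
            locks[i] = new Object();
        }
    }

    private int segmentFor(Object key) {
        return (key.hashCode() & 0x7fffffff) % SEGMENTS;
    }

    public void add(Object key, long delta) {
        int s = segmentFor(key);
        synchronized (locks[s]) {   // lock only this key's segment
            counts[s] += delta;
        }
    }

    public long total() {
        long sum = 0;
        for (int i = 0; i < SEGMENTS; i++) {
            synchronized (locks[i]) {
                sum += counts[i];
            }
        }
        return sum;
    }
}
```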
- Exclusive locks vs. shared locks
An exclusive lock, also called a mutex or write lock, can be held by only one thread at a time; other threads must wait for it to be released before they can acquire it. ReentrantLock is an example.
A shared lock, also called a read lock, allows multiple threads to acquire it at the same time, so one lock can be held by several threads simultaneously. ReadWriteLock is an example.
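A minimal sketch of the two working together via the JDK's ReentrantReadWriteLock; the class and method names are illustrative:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// One value guarded by a shared read lock and an exclusive write lock.
public class SharedValue {
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private int value = 0;

    // Many threads may hold the read lock at once.
    public int read() {
        rw.readLock().lock();
        try {
            return value;
        } finally {
            rw.readLock().unlock();
        }
    }

    // The write lock is exclusive: it waits until all readers release.
    public void write(int v) {
        rw.writeLock().lock();
        try {
            value = v;
        } finally {
            rw.writeLock().unlock();
        }
    }
}
```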
- Fair locks
A fair lock follows the first-come, first-served principle: multiple threads acquire the lock in the order in which they requested it.
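In the JDK, fairness is just a constructor flag on ReentrantLock:

```java
import java.util.concurrent.locks.ReentrantLock;

// A fair lock hands itself to the longest-waiting thread (FIFO); the
// default, unfair lock allows barging, which usually gives better throughput.
public class FairnessDemo {
    public static boolean demo() {
        ReentrantLock fairLock = new ReentrantLock(true);   // fair: FIFO ordering
        ReentrantLock unfairLock = new ReentrantLock();     // default: unfair
        return fairLock.isFair() && !unfairLock.isFair();
    }
}
```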
- Reentrant locks
Reentrancy means: if method A takes a lock and then calls method B, and B needs the same lock, a non-reentrant lock would deadlock, while a reentrant lock lets B acquire the lock automatically.
In Java, ReentrantLock determines whether the locking thread is the same one by recording the owning thread (conceptually, a lockedBy field).
synchronized is a reentrant lock; it has to be, otherwise a subclass's synchronized method could never call its parent's synchronized method.
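A minimal sketch of synchronized reentrancy, with illustrative names:

```java
// outer() holds the monitor and calls inner(), which needs the same
// monitor; because synchronized is reentrant, this completes instead
// of deadlocking.
public class ReentrantDemo {
    private int depth = 0;

    public synchronized int outer() {
        depth++;          // monitor acquired once
        return inner();   // re-enter the same monitor
    }

    public synchronized int inner() {
        depth++;          // lock count goes up, no deadlock
        return depth;
    }
}
```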
There are a lot of concepts here, but remember: you not only need to be familiar with them at work, you will also be asked about them in interviews. So for anyone job-hunting recently, the material on locks is the most important; read more, practice more, remember more.
This article covered locks in multithreading, and almost everything lock-related has been touched on, though not in depth. In the next article we will talk about the various synchronization locks under JUC, and later, time permitting, I will try to add some source-code analysis so we can see how these locks work at the implementation level.
As a programmer, without interest or enthusiasm it is too hard to keep going. I hope that as this series continues it can see through the phenomena to the essence as much as possible; what I am writing now is trivial stuff, floating on the surface, aimed at jobs and interviews, dull and boring.