
An Ali P8 talks for half an hour: what is Java's Lock interface actually for?

Do you know what the key problems of concurrent programming are?

I smiled faintly. Luckily I work with high-concurrency architecture design all the time, or that question would have scared me!

  • Mutual exclusion
    At any given moment, only one thread is allowed to access the shared resource
  • Synchronization
    Communication and collaboration between threads

Both of these problems can be solved with the monitor pattern. JUC implements the monitor through the Lock and Condition interfaces:

  • Lock solves mutual exclusion
  • Condition solves synchronization

Seeing that the P8 was in no hurry, he went on to ask:

synchronized is also an implementation of the monitor pattern. Since the Java SDK already implements a monitor, why provide another implementation? Do the JDK authors just like "reinventing the wheel"?

There is a big difference between the two. In JDK 1.5, synchronized performed worse than Lock, but from 1.6 on, synchronized was optimized and its performance improved, so after 1.6 synchronized is generally recommended. But a performance problem only calls for optimization; it is no reason to "reinvent the wheel".

For a discussion of the deadlock problem, I recommend another article: Process Management in Operating Systems.

The key point is breaking the "no preemption" condition of deadlock, which synchronized cannot do. When a thread requests a resource with synchronized and fails to get it, the thread simply blocks; a blocked thread can do nothing, and in particular cannot release the resources it already holds. What we want instead is:

Regarding the "no preemption" condition: when a thread that already holds some resources requests further resources and fails to get them, it can actively release the resources it holds, thereby breaking the "no preemption" condition.

If we were to redesign a mutex lock to solve this problem, what would it look like? The following designs can break the "no preemption" condition:

  • Able to respond to interrupts
    With synchronized, after a thread holds lock X, if its attempt to acquire lock Y fails, the thread blocks; once a deadlock occurs, there is no way to wake the blocked thread up. But if the blocked thread can respond to an interrupt signal, i.e. it can be woken up when we interrupt it, then it has a chance to release the lock X it holds.
  • Support timeouts
    If a thread fails to acquire the lock within a period of time, instead of blocking it returns an error, so the thread also gets a chance to release the lock it holds.
  • Non-blocking lock acquisition
    If the attempt to acquire the lock fails, the thread does not block but returns immediately, so again it has a chance to release the lock it holds.

These designs make up for the shortcomings of synchronized, and they are the main reason Lock was created. This is exactly what Lock provides:

  • lockInterruptibly() supports interruption

  • tryLock(long time, TimeUnit unit) supports timeouts

  • tryLock() supports non-blocking lock acquisition
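The idea above can be sketched with a minimal hypothetical example (the class and field names are made up, not from the article): using tryLock(), a thread that fails to get the second lock releases the first one instead of blocking, which breaks the "no preemption" condition.

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch: two locks acquired with tryLock() so a thread
// that fails to get the second lock releases the first instead of blocking.
public class TryLockDemo {
    private final Lock lockX = new ReentrantLock();
    private final Lock lockY = new ReentrantLock();
    private int sharedValue = 0;

    public boolean updateBoth() {
        if (lockX.tryLock()) {              // non-blocking attempt on X
            try {
                if (lockY.tryLock()) {      // non-blocking attempt on Y
                    try {
                        sharedValue++;      // critical section guarded by both locks
                        return true;
                    } finally {
                        lockY.unlock();
                    }
                }
            } finally {
                lockX.unlock();             // released even if Y was not acquired
            }
        }
        return false;                       // caller may retry later
    }

    public int value() { return sharedValue; }

    public static void main(String[] args) {
        TryLockDemo demo = new TryLockDemo();
        System.out.println(demo.updateBoth() + " " + demo.value()); // prints "true 1"
    }
}
```

Because updateBoth() never blocks, a failed attempt leaves no lock held, so no deadlock is possible with this shape (though, as discussed later in the article, naive retry loops can livelock).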

Do you know how Lock guarantees visibility?

The classic Lock idiom is try/finally: the lock must be released in the finally block.
In Java, visibility across threads is guaranteed by the Happens-Before rules, and those rules say nothing about the Lock interface. So what gives Lock its visibility?


It relies on the Happens-Before rule for volatile variables. The inner class of ReentrantLock inherits from AQS (AbstractQueuedSynchronizer), which maintains a volatile variable called state:

  • When acquiring the lock, state is read and written
  • When releasing the lock, state is read and written

So before executing value += 1, the program reads and writes the volatile state once, and after executing value += 1 it reads and writes the volatile state again. According to the Happens-Before rules:

  • Program-order rule: thread t1's value += 1 Happens-Before thread t1's unlock()
  • volatile variable rule: unlock() writes the volatile state, and a later lock() first reads state, so thread t1's unlock() Happens-Before thread t2's lock()
  • Transitivity rule: thread t1's value += 1 Happens-Before thread t2's lock()
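The chain above can be demonstrated with a minimal sketch (the class and field names are illustrative, not from the article): value is deliberately not volatile, yet a write made under the lock is visible to a later reader of the same lock, thanks to the volatile state field inside AQS.

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the happens-before chain described above: t1's write to `value`
// becomes visible to a later reader because both lock() and unlock()
// read/write the volatile `state` field inside AQS.
public class VisibilityDemo {
    private final ReentrantLock lock = new ReentrantLock();
    private int value = 0;   // deliberately NOT volatile; the lock provides visibility

    public void increment() {
        lock.lock();          // reads/writes volatile state
        try {
            value += 1;
        } finally {
            lock.unlock();    // writes volatile state: publishes `value`
        }
    }

    public int get() {
        lock.lock();          // reading state establishes happens-before with unlock()
        try {
            return value;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        VisibilityDemo d = new VisibilityDemo();
        Thread t1 = new Thread(d::increment);
        t1.start();
        t1.join();
        System.out.println(d.get());  // prints 1
    }
}
```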

What is a reentrant lock?

A reentrant lock is a lock that the same thread can acquire repeatedly without blocking itself.
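The article's original example did not survive extraction; the following is a minimal hypothetical sketch of reentrancy, in which outer() calls inner() while already holding the same lock.

```java
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of reentrancy: the same thread acquires the same
// lock twice (outer() calls inner() while already holding it) without
// deadlocking; the lock's hold count simply goes from 1 to 2.
public class ReentrantDemo {
    private final ReentrantLock lock = new ReentrantLock();
    private int observedHoldCount;

    public void outer() {
        lock.lock();
        try {
            inner();   // re-acquires the lock the thread already holds
        } finally {
            lock.unlock();
        }
    }

    private void inner() {
        lock.lock();   // same thread: hold count goes 1 -> 2 instead of blocking
        try {
            observedHoldCount = lock.getHoldCount();
        } finally {
            lock.unlock();
        }
    }

    public int observedHoldCount() { return observedHoldCount; }

    public static void main(String[] args) {
        ReentrantDemo demo = new ReentrantDemo();
        demo.outer();                                // completes without deadlock
        System.out.println(demo.observedHoldCount()); // prints 2
    }
}
```

If the lock were not reentrant, the second lock() in inner() would block forever, because the thread would be waiting for a lock it itself holds.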

Ever heard of reentrant methods?
Orz, what on earth is that? The P8 looked at me for a moment, understood, and said: no problem, I'm just asking to gauge your knowledge.

A reentrant method is one that multiple threads can call at the same time, with every thread getting the correct result. It also tolerates thread switching in the middle of execution: no matter how many times execution is switched, the result is still correct. Since it supports simultaneous execution by multiple threads as well as thread switching, a reentrant method is thread-safe.

Can you talk about fair locks and unfair locks?

ReentrantLock, for example, has two constructors: a no-arg constructor and one that takes a fair parameter.
The fair parameter sets the lock's fairness policy: true constructs a fair lock, false constructs an unfair lock (the default).
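A minimal sketch of the two constructors; ReentrantLock.isFair() reports which policy a lock was built with:

```java
import java.util.concurrent.locks.ReentrantLock;

// The two ReentrantLock constructors: no-arg (unfair by default),
// and one taking a boolean fairness flag.
public class FairnessDemo {
    public static void main(String[] args) {
        ReentrantLock unfair = new ReentrantLock();      // no-arg: unfair (default)
        ReentrantLock alsoUnfair = new ReentrantLock(false);
        ReentrantLock fair = new ReentrantLock(true);    // fair: longest-waiting thread wins

        System.out.println(unfair.isFair());     // false
        System.out.println(alsoUnfair.isFair()); // false
        System.out.println(fair.isFair());       // true
    }
}
```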

Do you know about the lock's entry wait queue?

Every lock has a corresponding wait queue. If a thread fails to acquire the lock, it enters the wait queue; when a thread releases the lock, a waiting thread needs to be woken up from that queue.
With a fair lock, the wakeup policy is: whoever has waited longest gets woken up. That is fair.
With an unfair lock there is no such guarantee, so a thread that has waited a shorter time may be woken up first. The typical scenario: right after one thread releases the lock, a newly arriving thread can grab the lock directly without queuing at all. Under a fair lock it could not take the lock before joining the queue.

Can you share some best practices for locks?

Locks can solve concurrency problems, but the risks are real too: they may cause deadlock, and they affect performance. Doug Lea recommends three best practices for locking:

  • Always lock when updating an object's member variables
  • Always lock when accessing an object's mutable member variables
  • Never lock while calling methods of other objects
    Calling another object's method while holding a lock is too unsafe: the "other" method may contain a sleep() call, or an extremely slow I/O operation, either of which seriously hurts performance. Even scarier, the other class's method may itself take a lock, and holding two locks at once can lead to deadlock.

There are also the usual ones, such as minimizing lock holding time and reducing lock granularity. In essence: lock only where locking is truly needed.
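As one hypothetical illustration of "reduce lock granularity" (the class and fields are made up, not from the article): give independent fields their own locks, so unrelated updates never contend with each other.

```java
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of fine-grained locking: instead of one lock for the
// whole object, each independent field gets its own lock, so updates to
// `balance` and `points` never contend with each other.
public class FineGrainedAccount {
    private final ReentrantLock balanceLock = new ReentrantLock();
    private final ReentrantLock pointsLock = new ReentrantLock();
    private long balance = 0;
    private long points = 0;

    public void deposit(long amount) {
        balanceLock.lock();           // only balance readers/writers contend here
        try {
            balance += amount;
        } finally {
            balanceLock.unlock();
        }
    }

    public void addPoints(long p) {
        pointsLock.lock();            // independent of the balance lock
        try {
            points += p;
        } finally {
            pointsLock.unlock();
        }
    }

    public long balance() {
        balanceLock.lock();
        try { return balance; } finally { balanceLock.unlock(); }
    }

    public long points() {
        pointsLock.lock();
        try { return points; } finally { pointsLock.unlock(); }
    }
}
```

The trade-off: more locks mean more bookkeeping, and if any operation ever needed both fields at once, it would have to acquire both locks in a fixed order to avoid deadlock.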

Finally, here's a small piece of code. Do you see any problem with it?


Deadlock will not occur, because nothing blocks. But livelock can: with many threads, some threads may never manage to acquire both locks. Two threads may each hold their own lock, discover that the lock they need is held by the other side, release the lock they currently hold, and then pick it up again, so everyone keeps acquiring and releasing locks without ever making progress. Of course some transfers will still succeed, but it is inefficient.

A successful transfer does break out of the loop. Even with the break, there is still a lock problem, and it is not deadlock but livelock, precisely because the locks keep being released. The fix is to add a random retry delay to avoid the livelock.

Livelock is easier to resolve than deadlock: add a random waiting time, or have the client retry manually.
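The article's transfer code and its optimized version were lost in extraction; what follows is a hypothetical reconstruction under the stated idea: tryLock() on both accounts, plus a small random backoff on failure, so that two transferring threads stop releasing and re-acquiring their locks in lockstep.

```java
import java.util.Random;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical reconstruction of the transfer scenario discussed above:
// tryLock() on both accounts so nothing ever blocks (no deadlock), and a
// random backoff on failure so two threads do not retry in lockstep (no livelock).
public class Account {
    private static final Random RANDOM = new Random();
    private final ReentrantLock lock = new ReentrantLock();
    private long balance;

    public Account(long balance) { this.balance = balance; }

    public void transfer(Account target, long amount) throws InterruptedException {
        while (true) {
            if (lock.tryLock()) {
                try {
                    if (target.lock.tryLock()) {
                        try {
                            this.balance -= amount;
                            target.balance += amount;
                            return;                  // success: break out of the retry loop
                        } finally {
                            target.lock.unlock();
                        }
                    }
                } finally {
                    lock.unlock();                   // release own lock if target was busy
                }
            }
            Thread.sleep(RANDOM.nextInt(10));        // random backoff breaks the livelock
        }
    }

    public long balance() { return balance; }
}
```

The random sleep is the key change: without it, two threads transferring in opposite directions can keep failing on each other's second tryLock() forever; with it, one of them almost certainly wins within a few retries.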

Also: with notifyAll(), is the effect the same for fair and unfair locks? All threads in the wait queue get woken up and then line up in the entry queue? Don't the awakened threads need to be ordered by how long they have already waited before re-entering the entry wait queue?
They are all woken up. In theory they enter the entry queue at the same time, so their waiting times there are the same.

Atomicity at the CPU level means a single CPU instruction. At the Java level, it is mutual exclusion (the monitor) that guarantees atomicity.
These two kinds of atomicity mean different things. CPU-level atomicity means the instruction is unaffected by thread scheduling: it either executes completely or not at all. Java-level atomicity means that under the lock mechanism only one thread executes the critical section while the rest wait; the CPU can still schedule threads and make the running thread yield CPU time, but of course that thread still holds the lock.

Copyright notice
This article was created by the official account JavaEdge. Please include the original link when reposting. Thanks.
https://cdmana.com/2021/04/20210421231755181r.html
