
16 Killer Java Fundamentals Questions — the "I Want to Join a Big Tech Company" Series

The interview series was supposed to be finished — turns out it's too good to quit. Then I realized I never wrote up the Java fundamentals, so here's a sequel. Please accept part one.

Tell me about the difference between a process and a thread?

A process is a running instance of a program. It is the independent unit of resource allocation and scheduling; its role is to let programs run concurrently, improving resource utilization and throughput.

Because a process is the basic unit of resource allocation and scheduling, creating, destroying, and switching processes costs a lot of time and space, so the number of processes can't grow too large. A thread is a smaller basic unit of independent execution than a process: it is an entity within a process, it reduces the time and space overhead of concurrent execution, and it gives the operating system better concurrency.

A thread owns almost no system resources of its own — only what is essential for execution, such as a program counter, registers, and a stack — whereas the process owns the heap and the stack.

Do you know how synchronized works?

synchronized is the atomic built-in lock provided by Java. This built-in lock, invisible to the user, is also called the monitor lock. When you use synchronized, the compiler emits monitorenter and monitorexit bytecode instructions before and after the synchronized block. It relies on the operating system's underlying mutex implementation. Its main role is to provide atomic operations and to solve the memory-visibility problem of shared variables.

Executing monitorenter attempts to acquire the object's lock. If the object is not locked, or the current thread already holds the lock, the lock counter is incremented by 1. Meanwhile, other threads competing for the lock enter the wait queue.

Executing monitorexit decrements the counter by 1. When the counter reaches 0, the lock is released, and the threads in the wait queue resume competing for it.

synchronized is an exclusive lock: once a thread acquires it, other threads must wait for that thread to release the lock before they can acquire it. And because Java threads map one-to-one to native operating-system threads, blocking or waking a thread requires a switch from user mode to kernel mode, and this transition is very expensive.

From the memory-semantics perspective, acquiring the lock clears the shared variables from working memory and re-reads them from main memory, while releasing the lock writes the shared variables in working memory back to main memory.
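A minimal illustration: compiling a class like the sketch below and disassembling it with javap -c shows the monitorenter/monitorexit pair around the synchronized block (SyncDemo is an illustrative name):

```java
public class SyncDemo {
    private final Object lock = new Object();
    private int count;

    public void increment() {
        // javap -c SyncDemo shows monitorenter before this block and
        // monitorexit on both the normal and the exceptional exit paths
        synchronized (lock) {
            count++;
        }
    }
}
```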

Honestly, most of the time I think mentioning monitorenter is enough, but to describe it more clearly, let's get more specific.

If you dig deeper into the source, synchronized actually involves two queues, the waitSet and the entryList.

  1. When multiple threads enter the synchronized block, they first enter the entryList
  2. When a thread acquires the monitor lock, the monitor's owner is set to the current thread and the counter is incremented by 1
  3. If the thread calls wait(), it releases the lock, the owner is reset to null, the counter is decremented by 1, and the thread enters the waitSet to wait to be woken up; after notify() or notifyAll() it re-enters the entryList to compete for the lock (see the sketch after this list)
  4. When the thread finishes executing, it likewise releases the lock, decrementing the counter by 1 and resetting the owner to null
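Here is a minimal, hypothetical blocking-queue sketch showing how wait()/notifyAll() move threads between the waitSet and the entryList (WaitNotifyQueue is an illustrative name):

```java
import java.util.LinkedList;
import java.util.Queue;

public class WaitNotifyQueue<T> {
    private final Queue<T> items = new LinkedList<>();

    public synchronized void put(T item) {
        items.add(item);
        notifyAll();               // waiters move back to the entryList to compete for the lock
    }

    public synchronized T take() throws InterruptedException {
        while (items.isEmpty()) {  // loop guards against spurious wakeups
            wait();                // releases the lock; the thread enters the waitSet
        }
        return items.remove();
    }
}
```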

Do you understand the lock optimization mechanisms?

Since JDK 1.6, synchronized's locking has been continuously optimized, so in some cases it isn't a particularly heavyweight lock anymore. The optimizations include adaptive spinning, spin locks, lock elimination, lock coarsening, lightweight locks, and biased locks.

From low to high, the lock states are: no lock -> biased lock -> lightweight lock -> heavyweight lock. Upgrading proceeds from low to high; under certain conditions, downgrading is also possible.

Spin lock: most of the time a lock is held only briefly, and shared variables stay locked for a very short time, so there's no need to suspend the thread — the back-and-forth context switching between user mode and kernel mode would seriously hurt performance. Spinning means letting the thread run a busy loop, effectively doing nothing for a moment, to avoid the transition from user mode to kernel mode. Spin locks can be enabled with -XX:+UseSpinning; the default spin count is 10 and can be changed with -XX:PreBlockSpin.

Adaptive lock: an adaptive lock is an adaptive spin lock — the spin time is not fixed, but is determined by the previous spin time on the same lock and the state of the lock's holder.

Lock elimination: the JVM detects that certain synchronized blocks have no possible data race at all — that is, locking is unnecessary — and removes the lock.

Lock coarsening: when many consecutive operations lock the same object, the lock's synchronization scope is extended to cover the whole sequence of operations.

Biased lock: when a thread accesses a synchronized block and acquires the lock, the ID of the biased thread is stored in the object header and in the lock record in the stack frame. Afterwards, that thread no longer needs CAS operations to lock and unlock when re-entering the block. A biased lock always favors the first thread that acquired it: if no other thread ever contends for the lock, the holding thread never needs to synchronize. Conversely, once other threads contend for the lock, the holding thread revokes the biased lock. Biased locking can be enabled with -XX:+UseBiasedLocking.

Lightweight lock: the JVM's object header contains flag bits for the lock. When code enters a synchronized block, the JVM tries to acquire the lock via CAS; if the update succeeds, the lock bits in the object header are marked as a lightweight lock, and if it fails, the current thread spins trying to acquire the lock.

The full lock-upgrade process is quite complicated; I've tried to strip out the inessential steps and describe the upgrade mechanism simply.

In a nutshell: a biased lock just compares the biased thread ID in the object header, so not even a CAS is needed; a lightweight lock works mainly by CAS-updating the lock record in the object header plus spinning; and a heavyweight lock blocks every thread except the one holding the lock.

What exactly does the object header contain?

In the commonly used HotSpot VM, an object's in-memory layout actually has 3 parts:

  1. Object header
  2. Instance data
  3. Alignment padding

The object header itself consists of two parts. The contents of the Mark Word change with the lock flag bits, so I'll only describe its storage structure:

  1. The object's own runtime data, known as the Mark Word — the key piece for lightweight and biased locks. It contains the object's hashCode, generational age, lightweight-lock pointer, heavyweight-lock pointer, GC mark, biased-lock thread ID, and biased-lock epoch (timestamp).
  2. The class pointer, i.e. a pointer to the class metadata, used to determine which class the object is an instance of.

If the object is an array, the header also records the array length.
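If you want to see the header for yourself, the OpenJDK JOL tool can print it. This sketch assumes the org.openjdk.jol:jol-core dependency is on the classpath:

```java
// Assumes the org.openjdk.jol:jol-core dependency is available.
import org.openjdk.jol.info.ClassLayout;

public class HeaderDemo {
    public static void main(String[] args) {
        Object o = new Object();
        // Prints the mark word, class pointer, and any alignment padding
        System.out.println(ClassLayout.parseInstance(o).toPrintable());

        synchronized (o) {
            // Inside the block, the lock bits in the mark word have changed
            System.out.println(ClassLayout.parseInstance(o).toPrintable());
        }
    }
}
```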

Speaking of locks — tell me how ReentrantLock works? How does it differ from synchronized?

Compared with synchronized,ReentrantLock You need to explicitly acquire and release locks , It is basically used now JDK7 and JDK8 Version of ,ReentrantLock Efficiency and synchronized The difference can be basically balanced . The main differences between them are as follows :

  1. Interruptible waiting: when the thread holding the lock doesn't release it for a long time, a waiting thread can choose to give up waiting and turn to other tasks.
  2. Fair locking: both synchronized and ReentrantLock default to unfair locks, but ReentrantLock can be made fair via a constructor argument — although a fair lock can cause a sharp drop in performance.
  3. Binding multiple conditions: a ReentrantLock can bind several Condition objects at the same time (all three differences appear in the sketch after this list).
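Here is a sketch showing all three differences at once, loosely following the classic bounded-buffer example from the Condition javadoc (BoundedBuffer and its fields are illustrative names):

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedBuffer {
    private final ReentrantLock lock = new ReentrantLock(true); // true = fair lock
    private final Condition notFull  = lock.newCondition();     // multiple conditions
    private final Condition notEmpty = lock.newCondition();
    private final Object[] items = new Object[16];
    private int count, putIdx;

    public void put(Object x) throws InterruptedException {
        lock.lockInterruptibly();            // interruptible waiting
        try {
            while (count == items.length) notFull.await();
            items[putIdx] = x;
            putIdx = (putIdx + 1) % items.length;
            count++;
            notEmpty.signal();
        } finally {
            lock.unlock();                   // release must be explicit
        }
    }
}
```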

ReentrantLock is implemented on top of AQS (AbstractQueuedSynchronizer, the abstract queued synchronizer). Stop right there — I know what the next question is, so let me explain how AQS works.

AQS internally maintains a state field. An attempt to lock modifies it via CAS (compare-and-swap): if it is successfully set from 0 to 1 and the current thread is recorded as the owner, the lock has been acquired. Once one thread holds the lock, other threads that fail to acquire it are queued and blocked in the wait queue. When the holding thread releases the lock, it resets state to 0, clears the owner thread, and wakes up a thread in the wait queue.
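A minimal non-reentrant lock built on AQS makes the state/CAS description concrete; it is close to the Mutex sample in the AbstractQueuedSynchronizer javadoc (SimpleLock is an illustrative name):

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// state 0 = unlocked, 1 = locked
public class SimpleLock {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int arg) {
            if (compareAndSetState(0, 1)) {                  // CAS 0 -> 1
                setExclusiveOwnerThread(Thread.currentThread());
                return true;                                 // lock acquired
            }
            return false;                                    // caller gets queued and parked
        }

        @Override
        protected boolean tryRelease(int arg) {
            setExclusiveOwnerThread(null);                   // clear the owner
            setState(0);                                     // reset state; AQS then wakes a queued thread
            return true;
        }
    }

    private final Sync sync = new Sync();

    public void lock()   { sync.acquire(1); }
    public void unlock() { sync.release(1); }
}
```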

How does CAS work?

CAS stands for Compare-And-Swap. It relies mainly on processor instructions to guarantee the atomicity of the operation, and it involves three operands:

  1. The variable's memory address, denoted V
  2. The old expected value, denoted A
  3. The new value to be set, denoted B

When the CAS instruction executes, V is updated to B only if the value at V equals A; otherwise, no update is performed.
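A minimal sketch with AtomicInteger, whose compareAndSet is a thin wrapper over the CAS instruction (CasCounter is an illustrative name):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    public int increment() {
        int old;
        do {
            old = value.get();                         // read the current value (A)
        } while (!value.compareAndSet(old, old + 1));  // succeeds only if V still equals A
        return old + 1;
    }
}
```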

So what are the downsides of CAS?

CAS has 3 main drawbacks:

The ABA problem: during a CAS update, the value read is A, and when we're about to write it is still A — but in between, A may have been changed to B and then back to A. This CAS update loophole is called ABA. That said, in most scenarios ABA doesn't affect the final outcome of the concurrent operation.

Java provides AtomicStampedReference to solve this. It adds an expected stamp and a new stamp (version fields), so an update checks not only the value but also that the current stamp equals the expected stamp, and only updates when both match.
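A small sketch of AtomicStampedReference catching an ABA update. Small Integer values are used deliberately, because the class compares references with == and small boxed values come from the autobox cache (AbaDemo is an illustrative name):

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static void main(String[] args) {
        // value 1 with initial stamp (version) 0
        AtomicStampedReference<Integer> ref = new AtomicStampedReference<>(1, 0);

        int stamp = ref.getStamp();
        // A -> B -> A: the value returns to 1, but the stamp advances each time
        ref.compareAndSet(1, 2, stamp, stamp + 1);
        ref.compareAndSet(2, 1, stamp + 1, stamp + 2);

        // A plain value check would pass, but the stale stamp makes this CAS fail
        boolean updated = ref.compareAndSet(1, 3, stamp, stamp + 1);
        System.out.println(updated); // false: ABA detected
    }
}
```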

Long spin times are costly: if a spinning CAS keeps failing for a long time, it puts a heavy burden on the CPU.

Atomicity is guaranteed for only one shared variable: CAS can make operations on a single shared variable atomic, but not on several. For multiple variables, wrap them in an AtomicReference, or fall back to a synchronized lock.

Good. Now, how does HashMap work?

HashMap is essentially an array plus linked lists, and it is not thread-safe. The core points are the put insertion flow, the get lookup flow, and resizing. The main difference between JDK 1.7 and 1.8 is the switch from head insertion to tail insertion: head insertion can easily turn the HashMap's linked list into an infinite loop under concurrent resizing, and 1.8 also added red-black trees to improve efficiency.

The put insertion flow

When inserting an element into the map, we first hash the key and then AND it with the array length minus 1 ((n - 1) & hash). Since the capacity is always a power of 2, this is equivalent to taking the modulus, but the bit operation is more efficient. Having found the slot in the array: if it is empty, store the element directly; otherwise compare keys — an equal key is overwritten, and otherwise the element is appended to the tail of the linked list. If the list length exceeds 8, it is converted into a red-black tree. Finally, if the element count exceeds capacity × load factor (16 × 0.75 = 12 by default), the table is resized.
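The index computation is easy to verify in isolation. This sketch mirrors the JDK 1.8 hash-spreading and bucket-location logic (IndexDemo is an illustrative name):

```java
public class IndexDemo {
    // Same spreading as JDK 1.8 HashMap.hash(): XOR the high bits into the
    // low bits so they take part in the index even for small tables
    static int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int n = 16;                  // table length, always a power of two
        int h = hash("foo");
        int index = (n - 1) & h;     // equivalent to h % n, but a single AND
        System.out.println(index);
    }
}
```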

The get lookup flow

Lookup is comparatively simple: compute the hash, go to the array slot, and if it holds a red-black tree search the tree; if it holds a linked list, traverse the list.

The resize flow

Resizing recomputes each key's bucket from its hash and copies the data into the new array.

How do you use a Map in a multithreaded environment? Do you understand ConcurrentHashMap?

In a multithreaded environment you can use Collections.synchronizedMap for mutex-style locking, or Hashtable, but fully synchronized access clearly falls short on performance, and ConcurrentHashMap is better suited to high-concurrency scenarios.
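For reference, here is how the three options are constructed (SafeMaps is an illustrative name):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SafeMaps {
    public static void main(String[] args) {
        // Wraps every method in synchronized(mutex): one lock for the whole map
        Map<String, Integer> synced = Collections.synchronizedMap(new HashMap<>());

        // Legacy: every public method is synchronized on the table itself
        Map<String, Integer> table = new Hashtable<>();

        // Fine-grained locking / CAS: the usual choice under high concurrency
        Map<String, Integer> concurrent = new ConcurrentHashMap<>();

        concurrent.put("a", 1);
        concurrent.merge("a", 1, Integer::sum); // atomic read-modify-write
    }
}
```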

ConcurrentHashMap changed a lot between JDK 1.7 and 1.8. 1.7 implements segmented locking with Segment + HashEntry, while 1.8 abandons Segment in favor of CAS + synchronized + Node, and adds red-black trees to avoid the performance problems of overly long linked lists.

1.7: segmented locking

Structurally, the 1.7 version of ConcurrentHashMap uses a segmented-lock mechanism: it contains a Segment array, where Segment extends ReentrantLock and contains a HashEntry array. HashEntry itself is a linked-list node, able to store the key and value and point to the next node.

In effect, each Segment is its own HashMap. The default Segment count is 16, which supports concurrent writes from 16 threads; Segments don't affect one another.

The put flow

You'll find the whole flow is very similar to HashMap's, except that it first locates the specific Segment and then operates under its ReentrantLock. I've simplified the steps, since they are otherwise basically the same as HashMap:

  1. Compute the hash and locate the Segment; if the Segment is null, initialize it first
  2. Lock via ReentrantLock; if acquiring the lock fails, spin and retry, and after too many spins fall back to a blocking acquire, ensuring the lock is eventually obtained
  3. Traverse the HashEntry list exactly as in HashMap: if the key and hash already exist, replace the value directly; otherwise insert into the linked list, which is handled the same way

The get flow

get is also very simple: hash the key to locate the Segment, then traverse the linked list to find the element. Note that value is volatile, so get requires no locking at all.

1.8: CAS + synchronized

1.8 abandons segmented locking in favor of CAS + synchronized, renames HashEntry to Node, and adds a red-black-tree implementation. The main thing to look at is the put flow.

The put flow

  1. First compute the hash and walk the node array; if the table is null, initialize it via CAS + spinning
  2. If the slot in the array is empty, write the data directly with a spinning CAS
  3. If hash == MOVED, a resize is in progress, so help carry out the resize
  4. Otherwise, write the data under synchronized, again distinguishing linked list vs red-black tree. The linked-list write works as in HashMap: an equal key hash overwrites, otherwise tail-insert, and a list longer than 8 becomes a red-black tree

The get flow

get is simple: compute the key's hash; if the key's hash matches at the array slot, return the value; if the node is a red-black tree, search the tree; otherwise traverse the linked list.

Do you know how volatile works?

Compared with synchronized's solution to shared-variable visibility, volatile is a lighter-weight choice, with none of the extra cost of context switching. A variable declared volatile guarantees that updates to its value are immediately visible to other threads. volatile uses memory barriers to ensure that reordering does not occur, which solves the memory-visibility problem.

We know that a thread reads shared variables from main memory into working memory and writes results back to main memory when done, but that creates visibility problems. For example, suppose a dual-core CPU architecture with two cache levels: a per-core L1 cache and a shared L2 cache.

  1. Thread A first fetches the value of variable X. Since both cache levels are empty, it reads X directly from main memory. Suppose X's initial value is 0; thread A then changes X to 1 and writes it back to main memory (and its caches).

  2. Thread B also reads X. Since the L2 cache already holds X = 1, it reads directly from L2, then modifies X to 2 and writes it back to L2 and main memory.

  3. Now if thread A fetches X again, its L1 cache still holds x = 1, so the memory update is invisible to it: A has no idea that B changed the value to 2.


So if X is declared volatile, then when thread A reads X again, the CPU — following the cache-coherence protocol — forces thread A to reload the latest value from main memory into its working memory, instead of just using the stale cached value.
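The classic stop-flag sketch shows the guarantee in action; without volatile, the worker may loop forever on a stale value (VolatileFlag is an illustrative name):

```java
public class VolatileFlag {
    // Without volatile, the worker thread may keep reading a stale cached
    // value and never observe the update; volatile forces a fresh read.
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // busy work
            }
            System.out.println("worker stopped");
        });
        worker.start();

        Thread.sleep(100);
        running = false;  // volatile write: immediately visible to the worker
        worker.join();
    }
}
```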

Now for memory barriers: different memory barriers are inserted around a volatile variable to guarantee visibility and correct ordering. The barriers below are the ones described in the books, but in practice, because CPU architectures differ, their reordering strategies differ, and so do the barriers they provide — on the x86 platform, for example, only the StoreLoad barrier exists.

  1. StoreStore barrier: ensures preceding ordinary writes are not reordered with the volatile write
  2. StoreLoad barrier: ensures the volatile write is not reordered with later volatile reads or writes
  3. LoadLoad barrier: forbids reordering the volatile read with later ordinary reads
  4. LoadStore barrier: forbids reordering the volatile read with later ordinary writes

Well then, tell me about your understanding of the JMM memory model. Why do we need the JMM?

Because CPUs and memory developed at very different speeds, CPUs became far faster than memory, so modern CPUs added caches — typically three levels, L1, L2, and L3. As the example above shows, this creates cache-coherence problems, so cache-coherence protocols were added, which in turn creates memory-visibility problems; meanwhile, compiler and CPU reordering creates atomicity and ordering problems. The JMM (Java Memory Model) is a set of specification constraints on multithreaded operation. Since program code can't be written to accommodate every CPU directly, the JMM hides the memory-access differences between hardware and operating systems, ensuring that Java programs get consistent memory-access behavior across platforms while still running correctly under efficient concurrency.

Atomicity: the Java memory model guarantees atomic operations through read, load, assign, use, store, and write, plus lock and unlock, which correspond directly to the monitorenter and monitorexit bytecode instructions of the synchronized keyword.

Visibility: visibility was covered in the answer above; Java guarantees it through volatile, synchronized, and final.

Ordering: ordering problems come from processor and compiler reordering; Java guarantees ordering through volatile and synchronized.

The happens-before rules

Although instruction reordering improves concurrent performance, the Java virtual machine imposes rules on it — not every instruction's position can be changed arbitrarily. The main rules are:

  1. Program order: within a single thread, each operation happens-before any subsequent operation in that thread
  2. A volatile write happens-before subsequent reads of that variable
  3. A synchronized unlock happens-before subsequent locks of the same lock
  4. A write to a final field happens-before subsequent reads of that final field (once the object reference has been published)
  5. Transitivity: if A happens-before B and B happens-before C, then A must happen-before C (rules 2 and 5 are shown in the sketch after this list)
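A small sketch of rules 2 and 5 working together to publish ordinary data safely (HappensBefore is an illustrative name):

```java
// The write to `data` happens-before the volatile write to `ready` (program
// order), which happens-before the volatile read of `ready`, which
// happens-before the later read of `data` (transitivity).
public class HappensBefore {
    private static int data;
    private static volatile boolean ready;

    public static void main(String[] args) {
        new Thread(() -> {
            while (!ready) { }            // spin on the volatile read
            System.out.println(data);     // guaranteed to print 42
        }).start();

        data = 42;       // ordinary write
        ready = true;    // volatile write publishes it
    }
}
```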

While we're at it — what are working memory and main memory?

Main memory can be thought of as physical memory; in the Java memory model it actually corresponds to part of the virtual machine's memory. Working memory is whatever the CPU actually reads from: it could be registers, or the L1/L2/L3 caches — any of them are possible.

Talk about how ThreadLocal works?

ThreadLocal can be understood as a thread-local variable: it creates a separate copy of the variable for each thread, so each thread accesses its own internal copy and threads are isolated from one another. Compared with synchronized, the idea is to trade space for time.

ThreadLocal has a static inner class, ThreadLocalMap, which in turn holds an Entry array. Entry itself is a weak reference — its key is a weak reference to the ThreadLocal — and each Entry stores a key-value pair.

The point of the weak reference is to prevent memory leaks: with a strong reference, the ThreadLocal object couldn't be collected until the thread ends, whereas a weak reference will be collected at the next GC.

But a memory-leak problem remains: once the key — the ThreadLocal object — has been collected, an Entry with a null key but a live value still exists, yet it can never be accessed again. Once more, it lingers unless the thread finishes executing.

As long as ThreadLocal is used properly, though — calling remove() after use to delete the Entry — this won't actually happen in practice.
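A typical usage sketch, following the common per-thread SimpleDateFormat pattern (DateFormatHolder is an illustrative name):

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class DateFormatHolder {
    // One SimpleDateFormat per thread: the class is not thread-safe,
    // so each thread works on its own copy instead of synchronizing
    private static final ThreadLocal<SimpleDateFormat> FORMAT =
            ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd"));

    public static String format(Date date) {
        return FORMAT.get().format(date);
    }

    public static void cleanup() {
        // In thread pools, call this at the end of a task to drop the Entry
        // and avoid the stale-value leak described above
        FORMAT.remove();
    }
}
```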

What reference types are there? What are the differences?

There are four reference types — strong, soft, weak, and phantom (all four appear in the sketch after this list):

  1. Strong references are the ordinary assignments in code, such as A a = new A(). An object reachable through a strong reference is never reclaimed by GC.
  2. Soft references, described with SoftReference, mark objects that are useful but not essential. The system reclaims such objects just before a memory overflow would occur.
  3. Weak references, described with WeakReference, are a notch weaker than soft references: a weakly referenced object is reclaimed at the next GC, regardless of whether memory is sufficient.
  4. Phantom references, described with PhantomReference, are the weakest relationship of all and must be used together with a ReferenceQueue; likewise, phantom-referenced objects are reclaimed when GC runs. Phantom references can be used to manage off-heap memory.
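All four in one sketch (ReferenceDemo is an illustrative name; whether the weak reference is actually cleared depends on GC really running, so the output is only very likely, not guaranteed):

```java
import java.lang.ref.PhantomReference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.SoftReference;
import java.lang.ref.WeakReference;

public class ReferenceDemo {
    public static void main(String[] args) {
        Object strong = new Object();  // strong: never collected while reachable

        SoftReference<byte[]> soft = new SoftReference<>(new byte[1024]); // cleared only before OOM
        WeakReference<Object> weak = new WeakReference<>(new Object());   // cleared at the next GC

        ReferenceQueue<Object> queue = new ReferenceQueue<>();
        PhantomReference<Object> phantom =
                new PhantomReference<>(new Object(), queue);  // get() always returns null

        System.gc();
        System.out.println("weak after GC: " + weak.get());      // very likely null
        System.out.println("phantom.get(): " + phantom.get());   // always null
    }
}
```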

Do you know how thread pools work?

First, a thread pool has several core parameters:

  1. Maximum thread count maximumPoolSize
  2. Core thread count corePoolSize
  3. Keep-alive time keepAliveTime
  4. Blocking queue workQueue
  5. Rejection policy RejectedExecutionHandler

When a new task is submitted to the thread pool, the execution flow is as follows:

  1. When tasks are submitted, the thread pool creates threads up to corePoolSize to execute them
  2. When the number of tasks exceeds corePoolSize, subsequent tasks enter the blocking queue
  3. When the blocking queue is full, up to (maximumPoolSize - corePoolSize) additional threads are created to execute tasks; once the work dies down, threads beyond corePoolSize that stay idle for keepAliveTime are automatically destroyed
  4. If maximumPoolSize has been reached and the blocking queue is still full, tasks are handled according to the configured rejection policy (a construction sketch follows this list)
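A construction sketch tying the five parameters together (PoolDemo is an illustrative name):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                    // corePoolSize
                4,                                    // maximumPoolSize
                60, TimeUnit.SECONDS,                 // keepAliveTime for non-core threads
                new ArrayBlockingQueue<>(8),          // bounded blocking workQueue
                new ThreadPoolExecutor.AbortPolicy()  // default RejectedExecutionHandler
        );

        // With long-running tasks: the first 2 occupy the core threads,
        // the next 8 fill the queue, 2 more spawn non-core threads, and a
        // 13th concurrent submission would be rejected.
        for (int i = 0; i < 12; i++) {
            final int id = i;
            pool.execute(() -> System.out.println("task " + id
                    + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown();
    }
}
```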

What rejection policies are there?

There are 4 main rejection policies:

  1. AbortPolicy: discard the task and throw an exception — this is the default policy
  2. CallerRunsPolicy: run the task on the caller's own thread
  3. DiscardOldestPolicy: discard the oldest task at the head of the wait queue, then retry the current task
  4. DiscardPolicy: discard the task silently, without throwing an exception

- END -

Copyright notice
This article was created by [itread01]; please include a link to the original when reposting. Thanks.
