The concurrency model is very similar to the distributed system model. In the concurrency model, threads communicate with each other; in the distributed model, processes communicate with each other. In essence, processes and threads are much alike, which is why the two models resemble each other.
Distributed systems usually face more challenges than concurrent systems, such as inter-process communication, network failures, or remote machines going down. But a concurrent system faces analogous problems too: a CPU fault, a faulty network card, a failing hard disk, and so on.
Because the two models are so similar, they can borrow ideas from each other. For example, the model for distributing work among threads resembles load balancing in a distributed environment.
Put plainly, the ideas behind the distributed model are derived from the concurrency model.
An important aspect of a concurrency model is whether threads share state or keep independent state. Shared state means that some state, which in practice is data such as one or more objects, is shared between different threads. When threads share data, problems such as race conditions and deadlocks can arise. Of course, these problems are only potential; whether they occur depends on whether you access the shared objects safely.
Independent state means that no state is shared among threads. If threads need to communicate, they can do so by exchanging immutable objects, which is the most effective way to avoid concurrency problems, as shown in the figure below.
Using independent state makes the design simpler, because only one thread ever accesses a given mutable object; any object exchanged between threads is immutable.
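A minimal sketch of the independent-state approach: threads exchange an immutable object, so "modifying" it produces a new object and the original can never change under another thread's feet. The class name and values here are illustrative, not from the original text.

```java
// An immutable value object: all fields final, no setters.
public final class ImmutableResult {
    private final int value;

    public ImmutableResult(int value) { this.value = value; }

    public int getValue() { return value; }

    // "Modification" returns a new object instead of mutating this one.
    public ImmutableResult plus(int delta) {
        return new ImmutableResult(value + delta);
    }

    public static void main(String[] args) throws InterruptedException {
        ImmutableResult original = new ImmutableResult(10);
        Thread worker = new Thread(() -> {
            // The worker gets a new object; `original` is untouched.
            ImmutableResult updated = original.plus(5);
            System.out.println("worker sees: " + updated.getValue());
        });
        worker.start();
        worker.join();
        System.out.println("original still: " + original.getValue()); // 10
    }
}
```

Because no thread can observe a partially updated object, no synchronization is needed when such objects are passed around.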
The first concurrency model is the parallel worker model. The client hands a task to a delegator (Delegator), which assigns the work to different workers (Worker), as shown in the figure below.
The core idea of the parallel worker model involves two roles: the delegator and the workers. The Delegator receives tasks from the client and dispatches them to concrete Workers; when a Worker finishes, it returns its result to the Delegator, which aggregates the workers' results and hands the final result back to the client.
The parallel worker model is very common in Java concurrency. Many of the concurrency utilities in the java.util.concurrent package use it.
An obvious strength of the parallel worker model is that it is easy to understand: to increase the parallelism of the system, you simply add more workers.
Another benefit of the parallel worker model is that it splits a task into smaller subtasks that execute concurrently. The Delegator returns results to the Client as it receives them from the Workers, so the whole Worker -> Delegator -> Client flow is asynchronous.
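The parallel worker model can be sketched with java.util.concurrent primitives: the ExecutorService plays the Delegator, its pool threads are the Workers, and Futures carry each worker's result back for aggregation. The "work" here (squaring a number) is a made-up placeholder task.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelWorkerDemo {
    // The delegator splits the job into subtasks, farms them out,
    // and summarizes the workers' results.
    static int aggregate(List<Integer> chunks) throws Exception {
        ExecutorService delegator = Executors.newFixedThreadPool(4);
        try {
            List<Future<Integer>> results = new ArrayList<>();
            for (int chunk : chunks) {
                // Placeholder work: square the chunk value.
                results.add(delegator.submit(() -> chunk * chunk));
            }
            int sum = 0;
            for (Future<Integer> f : results) sum += f.get(); // collect results
            return sum;
        } finally {
            delegator.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(aggregate(List.of(1, 2, 3, 4))); // 1+4+9+16 = 30
    }
}
```

Adding parallelism is then just a matter of raising the pool size passed to newFixedThreadPool.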
Likewise, the parallel worker model has some hidden drawbacks.
Shared state can get complicated
Real parallel workers are more complicated than the diagram suggests, mainly because they usually access shared data in memory or in a shared database. This shared state may include work queues holding business data, data caches, database connection pools, and so on. When threads communicate, a thread must ensure that its changes to shared state become visible to other threads rather than lingering in a CPU cache, which is something the programmer must design for. Threads must also avoid race conditions, deadlocks, and the many other concurrency problems caused by shared state.
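The race-condition risk can be shown with a tiny sketch (the counter is an illustrative example, not from the original text): an unsynchronized increment is a read-modify-write sequence and can lose updates under contention, while an AtomicInteger cannot.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RaceDemo {
    static int unsafeCount = 0;                            // plain shared state
    static final AtomicInteger safeCount = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                unsafeCount++;               // read-modify-write, not atomic
                safeCount.incrementAndGet(); // atomic update
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // The unsafe counter frequently ends up below 200000 because
        // concurrent increments overwrite each other.
        System.out.println("unsafe: " + unsafeCount);
        System.out.println("safe:   " + safeCount.get());
    }
}
```

Whether the unsafe counter actually loses updates on a given run depends on scheduling, which is exactly why such bugs are hard to reproduce.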
When multiple threads access shared data, part of the parallelism is lost: it must be ensured that only one thread accesses the data at a time, which leads to contention for the shared data, and threads that fail to acquire the shared resource must block and wait.
Modern non-blocking concurrency algorithms can reduce contention and improve performance, but non-blocking algorithms are difficult to implement.
Persistent data structures (persistent data structures) are another option. A persistent data structure always preserves its previous version after a modification, so if multiple threads modify it concurrently, each modifying thread simply receives a reference to a new version of the structure.
Although persistent data structures are an elegant solution, they have issues of their own. For example, a persistent list adds a new element at the head of the list and returns a reference to the new version; other threads that still hold a reference to the previous head keep seeing the old list and never see the newly added element.
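A hand-rolled sketch of a persistent singly linked list makes the versioning behavior concrete (the class is illustrative; real projects would use a library): prepending returns a new head node that shares the old tail, and every previously published version stays valid and unchanged.

```java
public final class PersistentList {
    final int head;
    final PersistentList tail; // null marks the end of the list

    private PersistentList(int head, PersistentList tail) {
        this.head = head;
        this.tail = tail;
    }

    static PersistentList of(int value) {
        return new PersistentList(value, null);
    }

    // O(1) "modification": shares the old nodes, mutates nothing.
    PersistentList prepend(int value) {
        return new PersistentList(value, this);
    }

    int size() {
        int n = 0;
        for (PersistentList p = this; p != null; p = p.tail) n++;
        return n;
    }

    public static void main(String[] args) {
        PersistentList v1 = PersistentList.of(1);
        PersistentList v2 = v1.prepend(2); // a new version of the list
        // A thread still holding v1 sees the old, one-element list:
        System.out.println(v1.size() + " vs " + v2.size()); // 1 vs 2
    }
}
```

This is safe without locks precisely because nothing is ever mutated, but as noted, readers of an old version never observe later additions.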
Persistent data structures such as linked lists (LinkedList) also perform poorly on modern hardware. Each element in the list is a separate object, and these objects are scattered across memory. Modern CPUs are much faster at sequential access, so sequential data structures such as arrays yield higher performance: the CPU can load a large contiguous block into its cache and then access the data directly from the cache. With a linked list, whose elements are spread all over RAM, that is practically impossible.
Shared state can be modified by other threads, so a worker must re-read it every time it operates on the shared state, to make sure it is working on the latest copy. A worker that holds no state internally like this is called a stateless worker.
The order of jobs is nondeterministic
Another disadvantage of the parallel worker model is that the execution order of jobs is nondeterministic: there is no guarantee which job runs first or last. Task A may be handed to a worker before task B, yet task B may execute before task A.
The second concurrency model is the pipeline concurrency model, familiar from production workshops. The following is a flow chart of the pipeline design model.
This organization resembles the assembly line in a factory: each worker performs only part of the whole job, and once that part is done, the worker forwards the job to the next worker.
Each worker runs in its own thread and shares no state with the others, so this model is also known as the shared-nothing concurrency model.
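The assembly-line idea can be sketched with two worker threads connected by queues (the stages, doubling and formatting, are made-up placeholder work): each worker does its part and forwards the partial result to the next worker, with no shared mutable state.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PipelineDemo {
    static List<String> run() throws InterruptedException {
        BlockingQueue<Integer> stage1to2 = new ArrayBlockingQueue<>(10);
        BlockingQueue<String> stage2out = new ArrayBlockingQueue<>(10);

        // Worker 1: doubles each input, then hands it down the line.
        Thread doubler = new Thread(() -> {
            try {
                for (int i = 1; i <= 3; i++) stage1to2.put(i * 2);
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        // Worker 2: formats the doubled value for the final consumer.
        Thread formatter = new Thread(() -> {
            try {
                for (int i = 0; i < 3; i++) stage2out.put("result=" + stage1to2.take());
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        doubler.start();
        formatter.start();
        List<String> results = new ArrayList<>();
        for (int i = 0; i < 3; i++) results.add(stage2out.take());
        doubler.join();
        formatter.join();
        return results;
    }

    public static void main(String[] args) throws InterruptedException {
        run().forEach(System.out::println); // result=2, result=4, result=6
    }
}
```

Each stage only ever touches its own local data plus the queues, so no locks beyond the queues themselves are needed.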
Systems that use the pipeline concurrency model are usually designed with non-blocking I/O. In other words, when a worker has no task assigned, it can do other work. Non-blocking I/O means that when a worker starts an I/O operation, such as reading a file from the network, it does not wait for the I/O call to complete. I/O operations are slow, so waiting on them wastes time; while the I/O is in flight the CPU can do something else, and the result of the I/O operation is passed on to the next worker. Below is the flow chart for non-blocking I/O.
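One way to sketch this non-blocking style in Java is with CompletableFuture (the "remote file read" here is a stand-in that just returns a constant; a real system would issue actual I/O): the slow work is started asynchronously, the next stage is registered as a callback, and the calling thread stays free instead of waiting.

```java
import java.util.concurrent.CompletableFuture;

public class NonBlockingDemo {
    // Hypothetical slow I/O, simulated by an async supplier.
    static CompletableFuture<String> readRemoteFile() {
        return CompletableFuture.supplyAsync(() -> "raw-bytes");
    }

    public static void main(String[] args) {
        // thenApply registers the next worker stage instead of blocking:
        CompletableFuture<String> result =
                readRemoteFile().thenApply(data -> "parsed:" + data);
        // The main thread is free to do other work here...
        System.out.println(result.join()); // parsed:raw-bytes
    }
}
```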
In practice, tasks rarely flow along a single assembly line. Since most programs need to do many things, work has to flow between different workers, and a single task may require several workers to cooperate, as shown in the figure below.
Systems that use the pipeline model are sometimes called reactive systems or event-driven systems. Such a system responds to external events, which might be an HTTP request or a file finishing loading into memory.
In the actor model, every actor is essentially a worker, and every actor can process tasks.
Simply put, the Actor model is a concurrency model that defines a set of general rules for how the components of a system should behave and interact; the best-known programming language built on these rules is Erlang. An actor responds to a message it receives, and in response it can create more actors or send more messages, and then gets ready to receive the next message.
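A hand-rolled sketch of an actor: a mailbox (a queue) plus a single thread that processes one message at a time, so the actor's private state needs no locks. This toy is for illustration only; Erlang, or libraries such as Akka on the JVM, provide this machinery for real systems.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class CounterActor {
    private final BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();
    private int count = 0; // private state, touched only by the actor thread
    private final Thread loop = new Thread(this::run);

    private void run() {
        try {
            while (true) {
                String msg = mailbox.take(); // react to one message at a time
                if (msg.equals("stop")) return;
                if (msg.equals("inc")) count++;
            }
        } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }

    void send(String msg) { mailbox.add(msg); } // asynchronous message send
    void start() { loop.start(); }

    // Wait for the actor to finish, then read its final state.
    int awaitCount() throws InterruptedException { loop.join(); return count; }

    public static void main(String[] args) throws InterruptedException {
        CounterActor actor = new CounterActor();
        actor.start();
        actor.send("inc");
        actor.send("inc");
        actor.send("stop");
        System.out.println("count = " + actor.awaitCount()); // count = 2
    }
}
```

Because only the actor's own thread ever reads or writes `count`, senders never need synchronization; they just enqueue messages.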
In the channel model, workers usually do not communicate directly. Instead, they publish events to different channels (Channel), from which other workers can pick up the messages. Below is a diagram of the channel model.
Sometimes a worker does not need to know exactly who the next worker is; it just writes its output to the channel. Workers listening on the channel can subscribe and unsubscribe, which reduces the coupling between workers.
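The decoupling can be sketched with a BlockingQueue standing in for a channel (the event names are made up): the producers know only the channel, not who consumes from it, and the consumer likewise knows only the channel.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class ChannelDemo {
    static Set<String> collectTwo() throws InterruptedException {
        BlockingQueue<String> channel = new LinkedBlockingQueue<>();
        ExecutorService pool = Executors.newFixedThreadPool(2);
        // Two producers write to the channel without referencing the consumer:
        pool.submit(() -> channel.add("event-from-A"));
        pool.submit(() -> channel.add("event-from-B"));
        // The consumer takes from the channel without referencing the producers:
        Set<String> seen = new HashSet<>();
        seen.add(channel.take());
        seen.add(channel.take());
        pool.shutdown();
        return seen;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(collectTwo());
    }
}
```

Note that the arrival order of the two events is not deterministic, which is why the sketch collects them into a set.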
Compared with the parallel worker model, the pipeline model has several advantages, as follows.
No shared state
Because the pipeline design guarantees that a worker passes the job on to the next worker after processing it, no state needs to be shared between workers, and the concurrency problems caused by shared state disappear. You can even implement each worker as if it were single-threaded.
Stateful workers
Because a worker knows that no other thread modifies its data, workers in the pipeline design can be stateful. Stateful means they can keep the data they operate on in memory, and stateful workers are usually faster than stateless ones.
Better hardware integration
Since the pipeline code can be written in an effectively single-threaded style, it can work with the grain of the hardware: a stateful worker typically keeps its data hot in the CPU cache, which makes access to that data faster.
Job ordering is possible
In a pipeline concurrency model it is possible to order the tasks in the stream; this is typically used for writing and recovering logs.
The main disadvantage of the pipeline concurrency model is that the processing of a task often spans several workers, and may therefore be scattered across multiple classes in the project's code, which makes it hard to see exactly which task each worker is carrying out. Pipeline code can also be harder to write; code designed as many nested callback handlers is commonly referred to as callback hell, and callback hell is hard to trace and debug.
Functional parallelism is a more recently proposed concurrency model. Its basic idea is to implement the program with function calls: passing a message is equivalent to making a function call. Arguments passed to a function are copied, so nothing outside the function can manipulate the data inside it. This makes each function call behave like an atomic operation, and each call can execute independently of any other call.
When every function call executes independently, each function can run on a separate CPU. In other words, functional parallelism amounts to each CPU carrying out its own task on its own.
The ForkJoinPool class, added in JDK 1.7, implements a form of functional parallelism. Java 8 introduced the concept of streams, which make it possible to iterate over large collections using parallel streams.
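A minimal sketch of functional parallelism with streams (the sum-of-squares computation is an illustrative example): parallelStream() splits the work across the common ForkJoinPool, and because each element is processed by a pure function with no shared state, no locks are needed.

```java
import java.util.List;
import java.util.concurrent.ForkJoinPool;

public class FunctionalParallelDemo {
    static long sumOfSquares(List<Integer> nums) {
        return nums.parallelStream()
                   .mapToLong(n -> (long) n * n) // pure function, no shared state
                   .sum();                       // results combined by the pool
    }

    public static void main(String[] args) {
        System.out.println(sumOfSquares(List.of(1, 2, 3, 4))); // 30
        // The common pool's parallelism defaults to the CPU count minus one:
        System.out.println("pool parallelism: "
                + ForkJoinPool.commonPool().getParallelism());
    }
}
```

Because squaring is associative-friendly to combine and touches no shared state, the result is the same regardless of how the pool splits the work.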
The hard part of functional parallelism is knowing how the function calls decompose and which functions run on which CPUs; function calls that cross CPUs also bring extra overhead.
This article was written by [Programmer Xiao Gang]. Please include a link to the original when reposting. Thanks.