In 《React Optimization Techniques in Web Applications: Ray Tracing (Part 1)》, we introduced a JS operator-overloading scheme; in 《React Optimization Techniques in Web Applications: Ray Tracing (Part 2)》, we introduced the Time Slicing and Streaming Rendering optimization strategies.
The formulas in the code are no longer ugly, and the UI thread no longer freezes. The first render appears progressively, so users see content sooner. These are all good optimizations.
However, we can do more.
Advanced Technique: Schedule
React's upcoming Concurrent Mode includes the Suspense and Schedule features. Their purpose is to let UI rendering be prioritized according to the update's trigger source and the importance of each module.
On a page, not all modules are equally important. Some modules, such as the title, the hero image, or the price, matter more than others (such as the sidebar or ads).
The sources that trigger updates on a page are not equally urgent either. Responding to user input, for example, should take precedence over any other rendering request. That is why Facebook engineers worked with the Chrome team to contribute the isInputPending API, which tells you whether there is pending user input. If there is, React can interrupt the current task in time and handle the user's request first.
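As a minimal sketch of that idea, a task loop can yield whenever input is pending. Assume a queue of task thunks; `navigator.scheduling.isInputPending` is Chrome-only, so the helper falls back to a plain time budget elsewhere:

```javascript
// Process queued tasks, yielding to the browser when user input is
// pending (Chrome's isInputPending) or the time budget is spent.
function processTasks(tasks, deadlineMs = 5) {
  const start = Date.now();
  while (tasks.length > 0) {
    const inputPending =
      globalThis.navigator?.scheduling?.isInputPending?.() ?? false;
    if (inputPending || Date.now() - start > deadlineMs) {
      // Yield: resume in a new macrotask so the browser can handle input.
      setTimeout(() => processTasks(tasks, deadlineMs), 0);
      return;
    }
    tasks.shift()(); // run the next task thunk
  }
}
```

The queue and deadline here are illustrative, not React's internal scheduler, but the shape is the same: check for pending input between units of work and cut the current task short when needed.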
For our ray-tracing scene, we can draw similar priority distinctions. Compared with the background, the objects, especially the ones in focus, are clearly more important. We can spend more light-sampling computation on the pixels those objects occupy.
In particular, much of the time the background does not need its lighting computed over and over; every recomputation may yield the same value. We can detect this, skip those pixels, and put the computing resources into more important pixel locations.
The priority-division strategy varies with the scenario and the requirements. Here I adopt a fairly general approach: compute the mean squared error (MSE) between each pixel of the previous frame and the current frame, then sort by error size, largest first. Each round, the top 20000 pixels are taken for ray-tracing computation.
This adds an optimization algorithm on top of the ray tracer. Shooting many random rays is itself a Monte Carlo fit of the rendering equation: we use simulated statistical averages to approach the theoretical value the rendering equation defines (readers interested in the Monte Carlo method can see its application to game AI in 《40+ Lines of JS Code to Build an AI for Your 2048 Game》).
By arranging our Monte Carlo sampling locations according to pixel error, we can approach the theoretical value more efficiently.
First, we can no longer compute every point inside a single render function; we need to extract a renderByPosition function, as shown above. With it, we can ray-trace pixels in priority order instead of mindlessly following the for loop.
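A minimal sketch of that extraction, assuming a hypothetical `tracePixel(x, y)` that shoots one random ray and returns an `[r, g, b, a]` sample (the article's actual implementation may differ):

```javascript
// Render a single pixel instead of the whole frame, so the caller can
// choose which pixels to trace and in what order.
function renderByPosition(x, y, width, tracePixel) {
  const color = tracePixel(x, y);     // one Monte Carlo sample
  const offset = (y * width + x) * 4; // position in a flat RGBA array
  return { offset, color };
}
```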
Then we add three arrays: renderCount, prevImageData, and currImageData.
Before, when we rendered mindlessly, a single numeric variable innerCount was shared by all pixels, and dividing by it gave the average. Now, because of prioritization, the number of times each pixel has been rendered may differ, so we must record it per pixel.
We use prevImageData to remember the previous color values and currImageData to remember the current ones, which makes the error convenient to compute.
The error calculation is simple: implement a mean-squared-error function, then, from the previous frame and the current frame, compute the error between the two colors at each position (each color value contains four numbers, RGBA), and sort the errors from largest to smallest.
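A sketch of that error list, assuming `prevImageData` and `currImageData` are flat RGBA arrays of equal length (the field names are illustrative):

```javascript
// Build a per-pixel MSE list from two frames and sort it so the pixels
// with the largest error come first.
function buildErrorList(prevImageData, currImageData, width) {
  const pixelCount = prevImageData.length / 4;
  const errors = [];
  for (let i = 0; i < pixelCount; i++) {
    let mse = 0;
    for (let c = 0; c < 4; c++) {
      const diff = currImageData[i * 4 + c] - prevImageData[i * 4 + c];
      mse += diff * diff;
    }
    mse /= 4; // mean over the four RGBA channels
    errors.push({ x: i % width, y: Math.floor(i / width), value: mse });
  }
  // Largest error first: those pixels get sampled again soonest.
  return errors.sort((a, b) => b.value - a.value);
}
```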
On the data-consumer side we also implement a renderByPosition function, which takes the result of ray.renderByPosition and keeps renderCount, prevImageData, currImageData, and imageData.data recorded and in sync.
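One way this bookkeeping might look, assuming each call delivers one `[r, g, b, a]` sample and the per-pixel average is kept incrementally (a sketch of the idea, not the article's exact code):

```javascript
// Record one new sample for pixel (x, y): bump its per-pixel count,
// remember the previous color, and fold the sample into a running average.
function recordSample(x, y, sample, state) {
  const { width, renderCount, currImageData, prevImageData, imageData } = state;
  const i = y * width + x;
  const n = ++renderCount[i]; // samples so far for this pixel
  for (let c = 0; c < 4; c++) {
    const idx = i * 4 + c;
    prevImageData[idx] = currImageData[idx];
    // Running average: newAvg = oldAvg + (sample - oldAvg) / n
    currImageData[idx] += (sample[c] - currImageData[idx]) / n;
    imageData[idx] = currImageData[idx]; // what actually gets painted
  }
}
```

The running-average form avoids storing every sample while still letting each pixel accumulate a different number of samples.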
In the render function we still use innerCount to record the total number of render passes. Once it is greater than 2, we have at least two frames and can compare their errors. So instead of recursing into render, we switch to the scheduleRender function and trace rays according to priority.
The scheduleRender function first builds the error list, takes the 20000 pixels with the largest errors, and renders them in order. Along the way it also applies Time Slicing and Streaming Rendering, so the UI stays smooth.
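The scheduling loop might be sketched like this, with `buildErrorList`, `renderPixel`, and `paint` supplied by the caller (the names are illustrative, not the article's actual API):

```javascript
// Render the highest-error pixels in batches, yielding to the event loop
// (Time Slicing) and painting intermediate results (Streaming Rendering).
async function scheduleRender(state, deps, batchSize = 20000, sliceMs = 16) {
  const { buildErrorList, renderPixel, paint } = deps;
  const worst = buildErrorList(state).slice(0, batchSize); // top errors
  let sliceStart = Date.now();
  for (const { x, y } of worst) {
    renderPixel(x, y, state);
    if (Date.now() - sliceStart > sliceMs) {
      paint(state);                               // streaming: show progress
      await new Promise((r) => setTimeout(r, 0)); // time slicing: yield
      sliceStart = Date.now();
    }
  }
  paint(state);
}
```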
At the bottom of scheduleRender, we use count to record how many times scheduleRender has run. Once it exceeds 5, we switch the rendering mode back to the whole-image render function.
This is because Monte Carlo simulation is random: ranking priorities only by the error between the previous two frames will probabilistically neglect some pixels. The symptom is that the image becomes uneven, as if speckled with defects. Rendering the whole image at a certain frequency gives those unlucky pixels a chance to be re-evaluated.
By alternating between scheduleRender and render, we eliminate the statistical bias as much as possible: we get priority-driven rendering while keeping the overall image smooth.
After 1000 seconds, the rendering result is as follows. The upper half is the rendered image; the lower half shows how many times each pixel was rendered: the more times, the whiter the color. We can see very intuitively where our computing resources were allocated.
From the picture we can see that the lighting at object boundaries and in shadows is relatively complicated, so we concentrated our light-fitting effort in those places (whiter), while the background and the sky have uniform color and needed relatively little light computation (blacker).
After many iterations, we can see the value field in the error list getting closer and closer to 0, which means the fit to the theoretical value keeps improving.
As shown above, with the Schedule + MSE optimization strategy, we spend less time to get a locally HD image, and we no longer have to wait a long time to get a globally HD image.
Advanced Technique: Psychological Acceleration
Optimizing rendering performance is not the only way to optimize.
Being fast in the physical sense and feeling fast to a human are not always the same thing.
The reason we only apply Schedule rendering during the update phase is that we need at least two images to compute the error gradient.
But think about it carefully: on the very first render, can't we also distinguish priorities?
From our experience browsing human-made images, one conclusion comes easily: the content at the center of a picture usually matters more than the content at its edges. Yet our Streaming Rendering, modeled on the way React SSR streams HTML, goes from top to bottom.
For an HTML document, rendering from top to bottom suits its nature. But we are rendering a picture: we should start from the middle and spread outward, up and down.
As shown above, by merely changing the starting position and direction of rendering, users are more likely to see what interests them first, instead of first seeing the empty sky.
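The reordering can be as small as changing which rows get streamed first. A sketch, assuming rows are rendered one at a time:

```javascript
// Produce a row order that starts at the middle row and alternates
// outward (mid, mid-1, mid+1, mid-2, mid+2, ...), instead of 0..height-1.
function centerOutRows(height) {
  const mid = Math.floor(height / 2);
  const rows = [mid];
  for (let d = 1; rows.length < height; d++) {
    if (mid - d >= 0) rows.push(mid - d);
    if (mid + d < height) rows.push(mid + d);
  }
  return rows;
}
```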
Besides, human vision automatically captures patterns. We don't have to render everything: the human eye's pattern-matching of objects lets users know what is in the picture anyway. So we can skip pixels at a certain interval and render a rougher image.
The image above uses only half the pixels and took only half the time, yet we can still recognize the general content of the picture. This way, we can quickly produce a rough image as a visual placeholder, and, combined with the previous technique, expand pixels outward from the middle, refining the content for a better visual experience.
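One way to trace only half the pixels is a checkerboard pass, filling each skipped pixel from the neighbour just traced so the eye can still recognise the scene. Here `tracePixel` is a hypothetical single-sample ray tracer; the exact skip pattern in the article may differ:

```javascript
// Rough first pass: trace every other pixel in a checkerboard pattern
// and reuse the left neighbour's color for the skipped pixels.
function roughPass(width, height, tracePixel, imageData) {
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const i = (y * width + x) * 4;
      if ((x + y) % 2 === 0) {
        const [r, g, b, a] = tracePixel(x, y); // real sample
        imageData[i] = r; imageData[i + 1] = g;
        imageData[i + 2] = b; imageData[i + 3] = a;
      } else if (x > 0) {
        // Cheap placeholder: copy the pixel we just traced on the left.
        imageData[i] = imageData[i - 4];
        imageData[i + 1] = imageData[i - 3];
        imageData[i + 2] = imageData[i - 2];
        imageData[i + 3] = imageData[i - 1];
      }
    }
  }
}
```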
As shown above, users now not only see the objects at the center of their vision sooner, they also get a certain grasp of the picture as a whole. Our Schedule priority policy has been successfully applied to the first render phase as well.
Looking back, we can see that the optimization strategies of React/Vue rendering hold true elsewhere too.
Compiling +-*/ into function calls is just like translating JSX into React.createElement function calls.
Long renders that block the UI main thread can be handled with Time Slicing.
Long waits for a complete render can be handled with Streaming Rendering.
Priority differences between modules can be handled with Schedule.
These are the problems I ran into and solved while learning ray tracing. I don't yet have plans to open-source them as a babel plugin or a library, so I'm sharing the ideas here, hoping they help some readers.
It is worth mentioning that we haven't exhausted the available optimizations; the measures presented here are just a small part. For example, since the ray tracing of each pixel is independent, parallelizing the process (Parallelization) by moving it into a Web Worker or onto the GPU would bring a significant efficiency gain. Interested readers can explore this on their own.
Click to see the online DEMO of the image above. It works on mobile too.
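To hint at how the Web Worker route could look: partition the rows into bands and hand each band to a worker. Only the partitioning is shown here (the Worker wiring is browser-specific and the file name is hypothetical):

```javascript
// Split the image rows into roughly equal bands, one per worker.
function partitionRows(height, workerCount) {
  const bands = [];
  const size = Math.ceil(height / workerCount);
  for (let w = 0; w < workerCount; w++) {
    const start = w * size;
    const end = Math.min(start + size, height);
    if (start < end) bands.push({ start, end }); // [start, end) rows
  }
  return bands;
}

// In the browser, each band could then be handed to a Worker:
//   const worker = new Worker("tracer.js");
//   worker.postMessage(band);
//   worker.onmessage = (e) => paintBand(e.data);
```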