Over the years, Oracle has provided a variety of memory management schemes: from the earliest manual management of the SGA and PGA, to automatic PGA management in 9i, to automatic SGA management (ASMM) in 10g, and finally automatic memory management (AMM) in 11g. Oracle has been moving steadily toward intelligent, simplified, automated memory management. A DBA new to Oracle, seeing all these different schemes, is bound to have some questions, for example:
Oracle offers so many ways to manage memory; which one should I use? The AMM introduced in 11g? The ASMM introduced in 10g? Or should I simply manage memory the oldest way, by hand?
How large should the SGA and PGA of my instance be?
How much memory should I allocate to the buffer cache and the shared pool?
When should I use huge pages? Why has the term "huge pages" come up so often in recent years?
Is there a simple, crude rule of thumb that handles all of this?
This article grew out of exactly these very normal questions. If it can give DBAs who share these doubts some guidance and help, it will have served its purpose.
On a dedicated server running an Oracle database, memory is essentially consumed by:
OS page tables
File system cache
Other processes (RMAN, non-Oracle processes, and so on)
Strictly speaking, the OS page tables are part of kernel memory; because this article will focus on huge pages, the page tables are drawn as a separate item in the diagram to draw the reader's attention. It must be stressed that Oracle should not use all of the memory on the host: over-allocating memory causes the operating system to swap, which leads to severe Oracle performance degradation, and in a RAC environment, memory shortage can also easily lead to node eviction. For Oracle's two memory areas, the SGA and the PGA, the DBA must plan and size them carefully according to the characteristics of the system and of the business, lest the OS run short of memory.
3 A brief history of memory management automation
Let's first look at the history of Oracle memory management. Across its versions, Oracle has been making memory management ever more intelligent, automated and foolproof. DBA friends who have worked with MySQL and similar databases know this well: in those databases the DBA still largely has to decide the size of each memory component, whereas Oracle has been on the road to intelligence and automation since 9i. The picture below shows the evolution of Oracle memory management:
4 Automatic PGA management
Oracle uses a multi-process architecture, unlike MySQL, which is a single-process, multi-threaded architecture. Oracle creates a separate operating system process for each user connection to serve that user; this process is called the server process, or shadow process. It acts as the user's agent in manipulating data files and SGA memory. Because the server process code is developed by Oracle itself, Oracle fully trusts these processes, so they may operate directly on data files and SGA memory: they receive instructions from the user process, carry out the requested operations, and return results to the user process as needed.
Because a server process is an operating system process, it consumes some operating system memory of its own. Beyond that, while reading data, sorting, or hashing, it also consumes a certain amount of memory. Before Oracle 9i, server process memory was controlled by a set of parameters; each of the following parameters governs the size of a different memory region available to each server process (all of them private memory areas of the process):
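Under this manual scheme, each work area is sized per session or instance-wide. A minimal sketch of the pre-9i style (the values are arbitrary examples, not recommendations; on a modern instance the session must first opt into manual policy for them to take effect):

```sql
-- Pre-9i style manual work-area sizing; all values are example figures
alter session set workarea_size_policy = manual;
alter session set sort_area_size = 1048576;   -- 1 MB sort area per process
alter session set hash_area_size = 2097152;   -- 2 MB hash area per process
-- related parameters: bitmap_merge_area_size, create_bitmap_area_size
```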
For example, SORT_AREA_SIZE controls the sort area available to each process, and HASH_AREA_SIZE controls the hash area each process can use. All of these parameters have defaults, but whether the defaults are appropriate deserves a big question mark, because different workloads make very different demands on the various PGA regions: a sort operation needs a large sort area, for instance, and has no use for the hash area at all. When the defaults are not appropriate, the DBA can size these areas manually.
Oracle 9i introduced automatic PGA management. Instead of setting a series of parameters to control PGA usage, as earlier versions required, you simply set PGA_AGGREGATE_TARGET to a single value to control the PGA usage of all server processes; how much sort area or hash area each server process gets is left entirely to Oracle.
Generally speaking, for PGA There are several operations in the extensive use of ：
hash about hash join operation ,hash The memory used by the bucket is in the private domain of the process PGA In the memory , Instead of sharing memory SGA in , If you use PGA Manual management , Can pass HASH_AREA_SIZE Parameters to dynamically adjust the session hash The amount of memory the operation can use .
sort For sort operations , For example, in a query statement order by、 The memory occupied by the sort operation of creating index is also in PGA in , If you use PGA Manual management , Can pass SORT_AREA_SIZE Parameters dynamically adjust the amount of memory that can be used by the session sort operation .
parallel Parallel operation can be said to be PGA Memory killer , Every parallel process can use up to 2G Of PGA Memory , Of course Oracle Will make sure that all the parallel slave The use of PGA Memory cannot exceed PGA_AGGREGATE_TARGET Half of .
Oracle is now up to 12c, and automatic PGA management has been maturing for many years; if it were a person, it would be a very grown-up young man by now, even an uncle. For most database operations there is no longer any need to hand-tune PGA parameters.
Still, you can find plenty of material online showing that many DBAs worship this kind of manual PGA tuning (I used to be one of them). It is true that in some cases, manually adjusting the relevant PGA memory areas can speed up sorts and similar operations. But when the amount of data involved is very large, such tuning is often laborious and may achieve nothing at all, because the memory a process can allocate to private PGA regions such as the sort and hash areas is capped: as of 11gR2, by default each process's PGA is limited to 2 GB, and the sort area can use only 1 GB. If a sort needs more than 1-2 GB of memory, this kind of optimization is futile and can even make things slower (I have experienced such slow-down cases).
Whether the PGA is managed automatically is controlled by the parameter WORKAREA_SIZE_POLICY, which takes the values auto and manual. As the names suggest, auto means automatic PGA management and manual means manual PGA management, going back to the pre-9i way of working.
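Switching between the two modes is a one-line change. A minimal sketch (the 4G target is an arbitrary example value):

```sql
show parameter workarea_size_policy
-- Automatic PGA management: one target governs all server processes
alter system set pga_aggregate_target = 4G scope=both;
alter system set workarea_size_policy = auto scope=both;
-- A single session can still opt back into manual management
alter session set workarea_size_policy = manual;
```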
Here are some hidden parameters you may find interesting. For example, I mentioned above that each process can use at most 2 GB of PGA; that limit can be broken by modifying hidden parameters.
> A word of warning: if you intend to use these parameters in a production environment, first test them on your own database version (including the same OS and OS version).
The following applies to 11gR2; other versions have not been tested:
_PGA_MAX_SIZE: the maximum PGA memory per process. The default is 2147483648 (2 GB); the unit is bytes.
_SMM_MAX_SIZE: the work-area size per process, in KB. The default is half of _PGA_MAX_SIZE: 1048576 KB, i.e. 1 GB. Both the sort area and the hash area fall within the work area. On 64-bit systems, the real memory used by the sort area cannot exceed 4 GB.
_SMM_PX_MAX_SIZE: the total PGA usable by all parallel query slave processes, in KB. The default is half of pga_aggregate_target. In a RAC environment, each node may use this much memory.
All of the above are dynamic parameters and can be modified online at session or system level.
After adjusting these parameters, be sure to set pga_aggregate_target correspondingly, otherwise the adjustments may have no effect; I suggest setting it to twice the modified _SMM_PX_MAX_SIZE.
To recap: by default, each process cannot use more than 1 GB of work area. That is controlled by _SMM_MAX_SIZE (in KB), which defaults to half of _PGA_MAX_SIZE (in bytes). For example, creating an index at parallel degree 20 can use a total sort area of 20 * 1 GB = 20 GB, but it is also governed by _SMM_PX_MAX_SIZE: all the slaves together cannot use more memory than its value (in KB), which defaults to half of pga_aggregate_target. In addition, on 64-bit systems, the sort space each process can use cannot exceed 4 GB, so raising _SMM_MAX_SIZE above 4 GB has no effect. _SMM_PX_MAX_SIZE is the total PGA usable by all parallel query slave processes; each RAC node can use that much, the limit being what all the parallel slaves on a node may consume together.
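The current values of these hidden parameters can be checked from the x$ views (SYS only). The query below is the common pattern; the alter statements use arbitrary example values and, as warned above, must be tested before any production use:

```sql
select i.ksppinm name, v.ksppstvl value
  from x$ksppi i, x$ksppsv v
 where i.indx = v.indx
   and i.ksppinm in ('_pga_max_size', '_smm_max_size', '_smm_px_max_size');

-- Example only: raise the per-process ceiling, then resize the target to match
alter system set "_pga_max_size" = 4294967296 scope=both;  -- 4 GB, in bytes
alter system set pga_aggregate_target = 8G scope=both;
```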
How do you set a reasonable value for PGA_AGGREGATE_TARGET?
Setting PGA_AGGREGATE_TARGET for a system is often a process of trial and error. Here is the official allocation guideline:
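The guideline, as commonly quoted from Oracle's documentation, boils down to:

```
PGA_AGGREGATE_TARGET = (TOTAL_MEM * 80%) * 20%   -- OLTP systems
PGA_AGGREGATE_TARGET = (TOTAL_MEM * 80%) * 50%   -- DSS systems
```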
In the formula above, TOTAL_MEM * 80% means that the memory available to Oracle is 80% of the operating system's. Then, depending on the type of workload: on an OLTP system you can allocate 20% of that to the PGA, while on a DSS analytic system you can give the remaining memory 50%. This is only a guideline, and each situation needs its own analysis. For example, if your OLTP system has thousands of connections, you can roughly estimate the required PGA at 10 MB per process; if the system has many connections and the number of active connections is also large, estimate at least 12 MB per process. More importantly, if the system runs many ad-hoc computational tasks, reserve more PGA memory for them. For example, creating an index at parallel degree 5, each parallel process uses close to 1 GB of PGA:
So when planning PGA for a system, don't forget the memory occupied by these temporary tasks; they can be a big chunk.
When deciding how much PGA memory to give the database, besides the number of connections, also pay attention to the number of active connections. A system may have a very large connection count, yet when you look into the database, most connections have not been active for hours or even days; that is often the result of an unmanaged application connection pool (though there can be other causes). For inactive connections, each Oracle process's PGA does not take much; 10 MB per process is a reasonably safe estimate. On AIX it can be bigger; figure 15-20 MB each.
Here is another way to estimate the OS memory occupied by each idle server process. First, use the operating system command free -m to check the current free OS memory: 69590 MB.
Then look at the free OS memory again after creating 2000 connections; subtracting the free memory afterwards from the free memory before gives an estimate of roughly (59332 - 44548) / 2000 = 7 MB per process.
Readers should keep in mind that of that 7 MB, most is occupied by the process itself; only 1-2 MB of it is PGA. That is why the recommendation above is 10-12 MB: it leaves each process some headroom for PGA.
To confirm how much PGA an idle process really occupies, query the PGA_ALLOC_MEM column of the v$process view, as follows: only 1.49 MB. Yet as noted above, at the operating system level this process occupies about 7 MB of memory.
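A minimal sketch of that check against v$process (the column names are from the standard view; the MB conversion is only for readability):

```sql
select spid, pname,
       round(pga_alloc_mem / 1024 / 1024, 2) as pga_alloc_mb
  from v$process
 order by pga_alloc_mem desc;
```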
The value specified by pga_aggregate_target is not a hard limit; only in Oracle 12c did a parameter appear that enforces a hard limit on PGA usage. If you don't know what PGA value suits your own system, you can use the 'maximum PGA allocated' and 'total PGA allocated' values in the v$pgastat view as a reference: the former is the peak PGA usage since instance startup, the latter the current PGA usage. The biggest drawback of this view, though, is that you cannot see PGA usage over time, so here is a view I find more useful for your reference: v$sysmetric_history.
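A sketch of the query against v$sysmetric_history; it assumes the standard metric name 'Total PGA Allocated' (reported in bytes):

```sql
select begin_time,
       round(value / 1024 / 1024) as pga_used_mb
  from v$sysmetric_history
 where metric_name = 'Total PGA Allocated'
 order by begin_time;
```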
The output of the query above shows PGA memory usage over time (the result set is not listed in full). Based on PGA usage in each time period, readers can decide more precisely how to set the PGA for their own systems. Setting the PGA is a gradual process. And to repeat: the memory a process occupies includes not just PGA but also the process's own overhead, as discussed above.
According to a slide deck from Oracle OpenWorld 2015, in versions before 12c the PGA actually usable can reach three times the PGA_AGGREGATE_TARGET setting. Take that as hearsay, not gospel.
If a production environment really suffers from severe PGA over-use, event 10261 can limit the PGA usage of some or all processes; the unit of the level value below is KB.
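A sketch of setting the event (3145728 KB = 3 GB is an arbitrary example quota):

```sql
-- Cap each process at roughly 3 GB of PGA (level is in KB)
alter system set events '10261 trace name context forever, level 3145728';
-- Lift the cap again
alter system set events '10261 trace name context off';
```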
Once a process exceeds its PGA quota, it is killed by a background process and an error is raised; different versions report different errors:
For 11.2.0.4 and 12.1+: ORA-10260
For 11.1 through 11.2.0.3: ORA-600
Versions before 12c place no hard limit on PGA usage, and that can cause problems: without a limit, the OS may start swapping, and once swap sets in, Oracle performance drops sharply, possibly even bringing the machine down. In one case I handled, after swapping began, the LGWR process's own memory was swapped out and the database system was almost completely hung; in the end there was nothing for it but a restart.
This new 12c feature limits PGA usage online through the parameter PGA_AGGREGATE_LIMIT. It is a derived parameter: under ASMM, it takes the greatest of 2 GB, 200% of PGA_AGGREGATE_TARGET, and 3 MB times the PROCESSES parameter.
Under AMM, I have not yet worked out the rule for PGA_AGGREGATE_LIMIT's value; if you know it, please tell me.
Before 10g, SGA management meant manually setting a series of parameters; the important ones include the following:
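Under manual management, each of those components is sized explicitly. A minimal sketch with arbitrary example values:

```sql
alter system set shared_pool_size = 2G   scope=spfile;
alter system set db_cache_size    = 8G   scope=spfile;
alter system set large_pool_size  = 256M scope=spfile;
alter system set java_pool_size   = 256M scope=spfile;
-- log_buffer is also set manually, but only takes effect after a restart
```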
The SGA is a block of memory made up of the components listed above. Before 10g, the DBA had to size each SGA memory area by hand, which could place very high demands on the DBA: you needed to understand the business, the hot data volume, how the SQL was written, whether there was a lot of SQL differing only in literals, and so on, in order to decide how much memory each area should get. DBAs who used those versions have almost all met a classic error: ORA-04031, generally caused by insufficient shared pool memory. There are many reasons for the shortage: severe shared pool fragmentation, a lot of hard parsing, huge PL/SQL texts, and so on. So let's look at how, under manual SGA management, to allocate memory to the two most important components: the buffer cache and the shared pool.
The buffer cache, commonly called the block cache, caches copies of data blocks; through a modified LRU algorithm, Oracle aims to keep these blocks efficiently reusable. Data that server processes read from disk is placed in this memory, and when data is modified, the modification happens in this area too. There is no golden rule for sizing the buffer cache: how much memory to give it depends on the operating system's memory size, the business's hot data volume, the type of workload, and so on. Let's first see how to size the SGA as a whole; the official guideline for SGA allocation is as follows:
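Mirroring the PGA guideline, the SGA formula is commonly quoted as:

```
SGA = (TOTAL_MEM * 80%) * 80%   -- OLTP systems
SGA = (TOTAL_MEM * 80%) * 50%   -- DSS systems
```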
The format of this formula mirrors the PGA formula: after carving out 20% of memory for the operating system, split the rest by system type. Essentially, on an OLTP system most of the memory goes to the buffer cache, while on a DSS system the buffer cache matters less and can get 50% or even lower. As for how much of the SGA to allocate to buffer_cache, it depends on the actual situation. The DBA can get a rough idea of whether the buffer cache has enough memory by looking at its hit ratio. But friends who know me well know I am a practitioner of the OWI (Oracle Wait Interface) methodology: hit-ratio worship should no longer exist in the Oracle world, though the hit ratio still has value as an auxiliary diagnostic. If the system's SQL is all well optimized and the hit ratio is still low, it suggests to some extent either that the buffer cache is small, or that your business's data access is very scattered, with too few hot spots.
The tuning approach given here is top-down: a low hit ratio is a symptom. Does the application at the top issue a lot of unnecessary SQL? Is that SQL missing indexes? Only once the SQL is fully optimized should the DBA consider enlarging the buffer cache to improve performance; if enlarging it doesn't help, consider adding system memory to grow the buffer cache further; and only at the end consider adding disks or using SSDs to improve performance.
Top-down analysis helps you find and fix the problem at the lowest cost. If a single SQL statement missing an index makes the whole database slow and the DBA goes straight to adding disks and expanding I/O capacity, you will find in the end that it cost a great deal, treated the symptom rather than the cause, and left the problem unsolved. Also, the formula above does not fit every case: on an OLTP system in particular, if there are too many processes, the SGA's share of memory must be reduced further to reserve more memory for the PGA.
Apart from the buffer cache, Oracle's other big memory consumer is the shared pool. Its structure is very complex, and because it caches SQL and PL/SQL code (text of no standard size), it often produces a lot of memory fragmentation. The shared pool broadly contains two parts. One is the library cache, which caches SQL and PL/SQL code and saves their execution plans to improve SQL parsing efficiency; if your application never uses bind variables, this piece of memory becomes a heavy burden, and Oracle offers no way to turn the library cache off, so for OLTP systems, make sure all SQL uses bind variables. The second big area is the row cache, which caches the database's data dictionary; because the information stored in it is held row by row, it is called the row cache.
How much memory this area needs depends on how much metadata the database holds: if the database contains hundreds of thousands of objects, this area will take a lot of space, and histograms on many table columns will also drive its memory consumption up. In a stable system, this area is essentially static; few operations frequently modify the row cache. One exception is a sequence without the cache attribute: if such a sequence is called frequently, it triggers frequent modification of the sequence's attribute values, which can produce row cache lock waits. The fix is to set a sufficient cache value for every sequence.
> If the application does not use bind variables and is hard to modify, you can try setting cursor_sharing to force to work around the problem.
Here is another case from my own experience. We old DBAs may not have many tricks, but we do have stories. I once helped a customer solve a case of free buffer waits. This wait event generally indicates that the buffer cache is too small, or that there are full table scans, heavy writes, or slow dirty-block writing. In the AWR report, free buffer waits ranked first among the top events, accounting for 78% of DB time. The report also showed that the customer's shared pool usage had reached nearly 50 GB, while an AWR report from a month earlier showed the shared pool using only about 3 GB. It was essentially certain that the shared pool had grown far too large, starving the buffer cache and producing the free buffer waits. After talking with the customer, I learned the system used to be stable, and the big recent change was enabling Oracle Flash Cache for the database. Searching MOS for the keyword flashcache finally turned it up: if Oracle RAC uses flash cache, the shared pool allocates extra memory to store GCS resources. The extra memory is: 208 bytes * the total number of (flash) buffer blocks. So let this be a reminder, DBA friends: if you plan to use Oracle Flash Cache, reserve enough shared pool memory for it.
Oracle 10g introduced ASMM, automatic SGA management, which to some degree solved the SGA management problem for DBAs; it is enabled by setting the parameter sga_target to a non-zero value. But in 10gR1, and in early 10gR2 releases, ASMM was not mature and had quite a few bugs, leading to plenty of problems, so at the time many DBAs stuck with 9i-style manual SGA management in core production environments. With automatic SGA management, you no longer need to set a value for each memory component; of course, if you set sga_target and at the same time set parameters such as db_cache_size, those parameter values are treated as minimums.
If sga_target is set to 0 and memory_target is also 0, you are back to traditional manual management (MSMM). Using ASMM largely eliminates the ORA-04031 errors mentioned above. After enabling ASMM, if you look at the spfile, you will find extra memory parameters starting with a double underscore, such as __db_cache_size. Oracle generates these dynamically while the instance runs and persists them into the spfile; if the instance runs long enough for these values to settle, you effectively get a set of best-practice parameters based on your own environment, and after an instance restart, the values persisted in the spfile are used.
> On 11g, even with sga_target set to 0 and memory_target also set to 0, you may still see granules move between SGA pools; this is a new 11g feature, controlled by the parameter _memory_imm_mode_without_autosga.
Internally, ASMM works by moving granules between memory components. A word of explanation about granules: Oracle 10g introduced automatic shared memory management to answer the questions "How big should I set db_cache_size? How big should shared_pool_size be?" From 10g onward, Oracle's answer is: "You don't have to think about it any more; my algorithms will handle it."
Simply put, it compares the disk I/O time saved by enlarging the block cache against the parse time saved by enlarging the shared pool, and decides from that comparison whether memory should move between the two. To let memory move efficiently between db_cache_size and shared_pool_size, Oracle restructured the SGA in 9i to use fixed-size memory blocks called granules. The granule size varies with the operating system, the Oracle version, and the SGA size; readers can check the granule size with the following SQL statement:
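One such query reads the granule size from v$sgainfo:

```sql
select bytes / 1024 / 1024 as granule_mb
  from v$sgainfo
 where name = 'Granule Size';
```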
Recent documentation indicates that the granule size increases from 64MB to 128MB when the SGA_TARGET parameter is set 1 byte greater than 32G, and jumps to 256MB when SGA_TARGET is set 1 byte greater than 64G. A granule size of 32MB applies when SGA_TARGET is set to a value between 8G + 1 byte and 16G.
With PGA management automated and SGA management automated, Oracle, obsessive-compulsive as ever, finally rolled out AMM, automatic memory management, in 11g. Because huge pages cannot be used together with AMM, the feature has never been widely adopted. With AMM, the DBA sets just one parameter, memory_target, to configure the entire database instance's memory. Just as ASMM generates double-underscore hidden parameters, so does AMM, for example __pga_aggregate_target and __sga_target.
In earlier 11g releases, DBCA defaulted to AMM (Oracle can be quite bold sometimes); in later releases, DBCA detects the operating system's memory size when the database is created: above 4 GB it defaults to ASMM, below 4 GB it defaults to AMM. Again, if you enable AMM and also set parameters such as SGA_TARGET, those values are treated as minimums.
AMM's biggest problem is that it cannot use huge page memory; the importance of using huge pages is explained in detail later in this article.
> Doc 749851.1
> Doc 1453227.1
With AMM's arrival in 11g, if you have been a DBA long enough, you have certainly met the following error:
The message is baffling at first: "MEMORY_TARGET feature not supported". It is not that the feature is unsupported; it is that AMM uses the operating system's shared memory file system, mounted at /dev/shm, and if that file system is configured smaller than memory_target, the error is raised. On mainstream Linux systems, the shared memory file system defaults to half the size of physical memory. If you hit this problem, either reduce the memory_target parameter value, or enlarge the shared memory file system, as follows:
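A sketch of the resize (16G is an arbitrary example; choose a size at least as large as memory_target):

```shell
# Check the current size of the shared memory file system
df -h /dev/shm
# Enlarge it online, without a reboot
mount -o remount,size=16G /dev/shm
# Persist the size across reboots via /etc/fstab:
# tmpfs  /dev/shm  tmpfs  defaults,size=16G  0 0
```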
One more point: on 11g, ASM uses AMM by default, and Oracle strongly recommends not changing this default behavior. The minimum MEMORY_TARGET for ASM is 256M; if you set a lower value, Oracle automatically raises it to 256M. For ASM releases 11.2.0.3, 11.2.0.4 and 12.1, Oracle strongly recommends setting memory_target to 1536M, a value proven sufficient in most environments.
Once ASM is in use, the database instance's shared_pool memory needs to be re-evaluated; the following amounts are needed on top of the normal shared pool size:
If the ASM disk group uses external redundancy: start from 2MB and add 1MB per 100G of space.
If the ASM disk group uses normal redundancy: start from 4MB and add 1MB per 50G of space.
If the ASM disk group uses high redundancy: start from 6MB and add 1MB per 33G of space.
ASM & Shared Pool (ORA-4031) (Doc ID 437924.1)
9 Huge pages
On Linux-like systems, the CPU must translate virtual addresses into physical memory addresses before it can actually access memory. To make translation efficient, the CPU caches the most recently used virtual-to-physical mappings in a CPU-maintained mapping table (the TLB); to maximize memory access speed, you want as many mappings as possible to fit in that table. On Linux, every process holds its own copy of the page table, and if the page tables grow too large, the CPU's TLB hit ratio drops sharply. Mainstream Linux systems default to a 4K page size, which for a large memory produces an enormous number of page table entries; and as mentioned, page tables on Linux are not shared, so each process has its own. These days any host is configured with tens or hundreds of GB of memory, even TBs, and if the Oracle running on it does not use huge pages, it is basically courting death: Oracle is a multi-process architecture where every connection is a dedicated process, and large memory + many processes + no huge pages = disaster. In my eight years as a DBA, I have helped no fewer than five customers through system failures caused by not using huge pages, and all five were within the last three or four years. Why were huge pages hardly mentioned three to five years ago, while now the term shows up at every technical conference and has become a best-practice buzzword?
Because these have been the years of Linux's wide adoption, of big memory blooming everywhere, and of database systems routinely carrying hundreds or thousands of connections; all of that has pushed huge pages into the spotlight.
The benefits of huge pages
Let's look at the benefits of using huge pages:
Fewer page table entries, less page table memory
The SGA is pinned and can never be paged out
Higher TLB hit ratio, lower kernel CPU consumption
On systems that do not use huge pages, you may often find tens or hundreds of GB of page tables; in severe cases, the sys portion of system CPU runs very high. These are typical symptoms of not using huge pages.
Characteristics and drawbacks of huge pages
They must be preallocated
They are inflexible; changing them may even require a host restart
If too many are allocated, the excess is wasted and cannot be used by other programs
How to allocate huge pages
Set a reasonable huge page count by adding the vm.nr_hugepages parameter to /etc/sysctl.conf; the unit of the value is a 2MB page. Alternatively, echo a value into /proc/sys/vm/nr_hugepages to set huge pages temporarily. How large a value to set depends on the SGA configuration on the system: the general recommendation is to make the total huge page allocation larger than the sum of all the SGAs on the system, plus 2GB.
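Following that rule of thumb, the required vm.nr_hugepages can be sketched as a quick calculation (the 30 GB SGA is an assumed example, not a recommendation):

```shell
# Estimate vm.nr_hugepages: (sum of all SGAs + 2 GB headroom) / 2 MB page size
sga_total_mb=$((30 * 1024))   # assumed example: a single instance with a 30 GB SGA
headroom_mb=2048
nr_hugepages=$(( (sga_total_mb + headroom_mb) / 2 ))
echo "vm.nr_hugepages = ${nr_hugepages}"
# Then persist the value, e.g. add to /etc/sysctl.conf:  vm.nr_hugepages = 16384
```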
See HugePages on Oracle Linux 64-bit (Doc ID 361468.1). On AIX, page tables are shared, so huge pages generally do not need to be configured.
How huge pages work
The following is based on 32 A system of ,4K The size of the memory page ： 1） A table of contents , The location where the page table is stored , contain 1024 A catalog entry, Every directory entry Point to a page table location , Every directory entry,4b size , The table of contents contains 4b*1024=4K size 2） A page table , It is used to store the starting address of the physical address page , Each page table entry It's also 4b size , Each page table has 1024 Page tables entry, So the size of a page table is also 4K, common 1024 Page tables , So the maximum size of the page table is 1024*4K=4M size 3） Each page table entry Pointing to 4K Physical memory page of , Therefore, the total physical memory size that the page table can point to is ：1024（ Number of page tables ）*1024（ Of each page table entry Count ）*4K（ A page table entry Represents the page size ）=4G 4） When the operating system maps a virtual address to a physical address , Put the virtual address of 31-22 this 10 Bits are used to index from the table of contents to 1024 One of the page tables , Put the virtual address of 12-21 this 10 Bits are used to index from the page table to 1024 Page tables entry One of them . From this page table entry Get the starting address of the physical memory page in , Then the virtual address of 0-12 Bit is used as 4KB Offset in memory page , Then the starting address of the physical memory page plus the offset is the physical memory address to access .
A 32-bit system will not normally reach the full 4 MB of page tables, because a process cannot use the whole 4 GB address space: part of it is reserved, for example for the operating system kernel. Furthermore, page table entries are not created up front; they are created only when the process actually accesses that memory.
Page table memory calculation
On a 32-bit system, a process that accesses 1 GB of memory generates about 1 MB of page tables; on a 64-bit system this grows to 2 MB. It is easy to calculate: if the SGA is set to 60 GB and there are 1500 Oracle server processes on 64-bit Linux, the maximum page table footprint is 60*2*1500/1024 ≈ 175 GB. Yes, you read that right: 175 GB! The actual figure will usually be smaller, perhaps 40-50% of that, because a server process adds the corresponding page table entries to its own page table only after it accesses that particular region of the SGA.
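The worst-case arithmetic above can be sketched as:

```shell
# Worst-case page-table footprint on 64-bit Linux with 4 KB pages:
# roughly 2 MB of page tables per GB of SGA, per attached server process.
SGA_GB=60
PROCESSES=1500
PT_GB=$(( SGA_GB * 2 * PROCESSES / 1024 ))
echo "worst-case page tables: ${PT_GB} GB"
```

With 2 MB huge pages each entry maps 512 times as much memory, so the per-process page table cost shrinks accordingly, which is the main reason large SGAs should sit on huge pages.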
Before 11.2.0.2, if the number of huge pages allocated was insufficient, Oracle did not use huge pages at startup and fell back entirely to small pages. From 11.2.0.2 onward, Oracle still tries to use huge pages at startup: if there are not enough, it uses up all of the configured huge pages and takes the shortfall from small pages. This behavior is controlled by the Oracle parameter USE_LARGE_PAGES, which is explained in detail later. You can see it happening clearly in the instance's alert log:
Total Shared Global Region in Large Pages = 1024 MB (85%) means that only 85% of the SGA is in huge pages. The RECOMMENDATION section suggests increasing the number of huge pages by at least 89 so that the SGA fits entirely in huge pages.
This parameter controls how the instance uses huge page memory at startup. Before 11.2.0.3 it had 3 possible values; 11.2.0.3 added a fourth value, auto:
true: the default value. Use huge pages as much as possible; part of the SGA may end up in huge pages and part in small pages. In this case the operating system command ipcs -ma may show memory fragmentation (fragmentation can arise in several situations; enabling NUMA, for example, can also cause it).
false: do not use huge pages.
only: force huge pages; if the SGA cannot be placed in huge pages, the instance will not start.
auto (11.2.0.3): at instance startup, the background process dism performs the equivalent of echo xxx > /proc/sys/vm/nr_hugepages to configure and use as many huge pages as possible.
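A minimal sketch of setting the parameter, assuming a test instance where refusing to start on insufficient huge pages is acceptable (the only value):

```shell
# Hypothetical: force the instance to use huge pages (11.2.0.2+).
# With use_large_pages=only, startup fails if the SGA does not fit
# entirely in the configured huge pages.
sqlplus -S / as sysdba <<'EOF'
ALTER SYSTEM SET use_large_pages=only SCOPE=SPFILE;
SHUTDOWN IMMEDIATE
STARTUP
EOF
```

This is a configuration fragment, not something to paste blindly: on a production instance the safer starting point is the default true, checking the alert log afterwards for the percentage of the SGA placed in huge pages.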
The following is the alert log output when the parameter is set to auto:
You can see that during instance startup the DISM process is started first, and it completes the huge page configuration automatically. $ORACLE_HOME/bin/oradism is owned by root, because with only grid permissions it would not be able to perform echo xxx > /proc/sys/vm/nr_hugepages.
Starting with the RedHat 6, RedHat 7, OL6, OL7, SLES11 and UEK2 kernels, transparent hugepages are enabled by default. They allow huge pages to be allocated dynamically, rather than only at system startup. According to Oracle MOS Doc 1557478.1, transparent hugepages cause a number of problems and it is recommended to disable them. To check whether they are enabled: cat /sys/kernel/mm/transparent_hugepage/enabled. In output such as [always] madvise never, the bracketed value is the current setting, so this output shows transparent hugepages are enabled. They can be disabled via the /etc/grub.conf file.
Add the keyword transparent_hugepage=never to the kernel boot line to disable transparent hugepages at boot. You can also disable them dynamically at runtime by echoing the corresponding value into the corresponding file.
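For checking which mode is active, the bracketed token in the status line can be parsed like this (the sample string mimics the file's contents; on a real system read /sys/kernel/mm/transparent_hugepage/enabled instead):

```shell
# Extract the active transparent hugepage setting; the kernel marks
# the current choice with brackets, e.g. "[always] madvise never".
STATUS="[always] madvise never"   # sample; real source: /sys/kernel/mm/transparent_hugepage/enabled
ACTIVE=$(echo "$STATUS" | grep -o '\[[a-z]*\]' | tr -d '[]')
echo "$ACTIVE"
```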
Viewing huge page usage at the OS level
Hugepagesize: 2048 kB
HugePages_Total is the number of huge pages allocated; multiplying by Hugepagesize gives the allocated memory size: 43000*2/1024 ≈ 84 GB.
HugePages_Free is the number of huge pages that have never been touched. Even if Oracle has already allocated this memory for the SGA, pages that have not actually been written to still show as Free, which is easy to misunderstand.
HugePages_Rsvd is the number of pages that have been allocated (reserved) but not yet used. When an Oracle instance has just started, most of the pages are both Reserved and Free; as the SGA is used, Reserved and Free both decrease.
HugePages_Free – HugePages_Rsvd is memory that is free and not reserved by anyone. If no other Oracle instance will start on this host, this memory may never be used; in other words, it is wasted. On this system, 11.5 GB of memory is wasted that way.
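The accounting above can be reproduced with made-up but consistent numbers (43000 total pages as in the text; the Free and Rsvd values are assumptions chosen so the waste matches the quoted 11.5 GB):

```shell
# Huge page accounting: pages that are free AND not reserved may never
# be used if no further instance starts. Values are illustrative.
HP_TOTAL=43000; HP_FREE=9000; HP_RSVD=3112; HP_SIZE_MB=2
ALLOC_MB=$(( HP_TOTAL * HP_SIZE_MB ))
WASTED_MB=$(( (HP_FREE - HP_RSVD) * HP_SIZE_MB ))
echo "allocated: ${ALLOC_MB} MB (~84 GB), unused: ${WASTED_MB} MB (~11.5 GB)"
```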
Best practices
For Oracle, this is an age of best practices. But every best practice has specific applicable scenarios; none is universal. In this era of information explosion (and database explosion), many DBAs lack the patience to study one database deeply; they simply grab some best practices off the Internet and apply them to their own environments (even production) without a second thought. To some extent, the worship of best practices is a product of laziness.
For how to allocate Oracle memory on core systems, my recommendations are:
1. Use manual SGA management. It is very mature, but it demands more of the DBA, who needs to understand the characteristics of the business to some degree.
2. If you find the first method too difficult, use ASMM, but set minimum values for the buffer cache and shared pool. This is the approach I most highly recommend.
As for AMM-style memory management: because it cannot be used together with huge pages, do not use it.
As for how much memory to allocate: OLTP and OLAP scenarios follow different allocation principles, and even within OLTP, the number of processes, the degree of concurrency, and the characteristics of the SQL statements lead to different allocations. As a DBA you must master the basic principles of Oracle memory usage, then allocate memory according to the actual characteristics of the business.
Whether the system is OLTP or OLAP, it is generally recommended to reserve at least 20% of memory for the operating system. This memory is used by the kernel, the page tables, and other processes, for example RMAN backups and the file system cache. This is also Oracle's official advice.
After setting this memory aside, for an OLTP system consider reserving 20% of the remainder for the PGA and 80% for the SGA. If there are many processes on the system, and especially many simultaneously active processes, reserve more for the PGA; you can estimate about 12 MB of PGA per process.
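As a sketch of this rule of thumb (the 128 GB host size is an assumption for illustration):

```shell
# OLTP split: 20% of RAM for the OS, then 20%/80% PGA/SGA of the rest;
# cross-check the PGA against ~12 MB per server process.
TOTAL_MB=$(( 128 * 1024 ))             # host RAM, assumed 128 GB
OS_MB=$((  TOTAL_MB * 20 / 100 ))
PGA_MB=$(( (TOTAL_MB - OS_MB) * 20 / 100 ))
SGA_MB=$(( TOTAL_MB - OS_MB - PGA_MB ))
MAX_PROCS=$(( PGA_MB / 12 ))           # processes this PGA can sustain
echo "OS=${OS_MB}MB PGA=${PGA_MB}MB SGA=${SGA_MB}MB (~${MAX_PROCS} processes)"
```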
For an OLAP system, after setting aside the operating system reserve, consider reserving 50% of the remainder for the PGA and 50% for the SGA. After all, in a pure OLAP workload the buffer cache does not need that much memory.
After the SGA and PGA are allocated, consider whether you need to manually set values for the buffer cache and shared pool. As mentioned above, you can use the buffer cache hit ratio and library cache hit ratio as auxiliary references; if the hit ratios are low, the memory allocation may not be reasonable.
Of course, you also need to rely on things like AWR Tools like reporting , according to SQL The implementation plan and other contents to further make diagnosis , For example, the reason for the low hit rate may not be allocated buffer cache Small , It's a lot of inefficient full table scans , Which leads to a low hit rate , In this case , What needs to be optimized is SQL, Instead of increasing buffer cache. Another example , If you find that Library cache Hit A relatively low , It may not be shared pool The relatively small , It could also be systematic SQL There are a lot of situations where bound variables are not used , This leads to heavy hard parsing .
Optimization is often a systems engineering effort and cannot be done overnight. Oracle in particular is a huge, complex system, and the problems that appear deserve careful deliberation; conclusions do not come easily. The longer you work as a DBA, the better you will understand Tom Kyte's saying: It depends.
This article was written by [Wheat seedling best]; please include a link to the original when reposting. Thank you.