Library cache and shared pool latches

The first recourse for reducing contention on these latches is to ensure that the application reuses SQL statement representations as much as possible. If the application is already tuned, the SHARED_POOL_SIZE can be increased. Be aware that if the application does not use the library cache appropriately, contention may become worse with a larger structure to manage.
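How well statements are being reused can be gauged from the library cache statistics; a minimal sketch against V$LIBRARYCACHE (a low GETHITRATIO or PINHITRATIO, or a high RELOADS count, for the SQL AREA namespace usually points to poor reuse):

-- Library cache reuse statistics per namespace
SELECT namespace, gets, gethitratio, pins, pinhitratio, reloads
  FROM v$librarycache
 ORDER BY gets DESC;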
In 10g, mutexes were introduced for certain operations in the library cache. Mutexes are a lighter-weight and more granular concurrency mechanism than latches and take advantage of CPU architectures that offer compare-and-swap instructions (or similar). Mutexes, like latches, ensure that certain operations are properly managed for concurrency. For example, if one session is changing a data structure in memory, another session must wait to acquire the mutex before it can make a similar change; this prevents unintended concurrent changes that would lead to corruptions or crashes if not serialized.
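Starting with 10gR2, mutex sleep statistics can be examined to see where this contention occurs; a minimal sketch against V$MUTEX_SLEEP (assuming the view is present in your release):

-- Mutex contention summary by type and code location
SELECT mutex_type, location, sleeps, wait_time
  FROM v$mutex_sleep
 ORDER BY sleeps DESC;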
Less potential for false contention: there are only 67 library cache latches available to protect the 131,072 buckets created by default in the library cache, so a single latch covers multiple buckets. This creates false contention, meaning that if two processes try to access two different buckets protected by the same latch, one of them has to wait until the other completes traversing its bucket. Even though they are not colliding on the same bucket, they still block each other, because the latch mechanism serializes access at the latch level rather than at the bucket level.
A process may request a latch in the willing-to-wait or no-wait (immediate) mode. The no-wait mode is used on only a handful of latches. Latches that are acquired in the no-wait mode have statistics in the IMMEDIATE_GETS and IMMEDIATE_MISSES columns. These columns are part of the latch family of views: V$LATCH, V$LATCH_PARENT, and V$LATCH_CHILDREN. Generally, the no-wait mode is first used on latches with multiple children, such as the redo copy latches. If a process fails to get one of the child latches on the first attempt, it will ask for the next one, also in the no-wait mode. The willing-to-wait mode is used only on the last latch, when all no-wait requests against the other child latches have failed.
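The no-wait activity described above is visible in the latch views; a sketch against V$LATCH for the redo copy latches:

-- Willing-to-wait vs. no-wait (immediate) statistics for the redo copy latches
SELECT name, gets, misses, immediate_gets, immediate_misses
  FROM v$latch
 WHERE name = 'redo copy';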
The Oracle shared pool consists of many structures. The prominent ones are the dictionary cache, SQL area, and library cache. You can see other structures by querying the V$SGASTAT view. The shared pool latch protects the shared pool structures, and it is taken when allocating and freeing memory heaps. For example, it is taken when allocating space for new SQL statements (hard parsing), PL/SQL procedures, functions, packages, and triggers, as well as when aging out or freeing chunks of space to make room for new objects.
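The individual shared pool structures and their sizes can be listed from V$SGASTAT; a sketch:

-- Shared pool components, largest first
SELECT pool, name, bytes
  FROM v$sgastat
 WHERE pool = 'shared pool'
 ORDER BY bytes DESC;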
Prior to Oracle9i Database, the shared pool memory structures were protected by a solitary shared pool latch. Beginning with Oracle9i Database, up to seven child shared pool latches can be used to protect the shared pool structures. This is possible because Oracle9i Database can break the shared pool into multiple subpools if the server has at least four CPUs and the SHARED_POOL_SIZE is greater than 250MB. The number of subpools can be manually adjusted by the _KGHDSIDX_COUNT initialization parameter, which also supplies the appropriate number of child shared pool latches. If you manually increase the number of subpools, you should also increase the SHARED_POOL_SIZE because each subpool has its own structure, LRU list, and shared pool latch. Otherwise, the instance may not start due to the error ORA-04031: unable to allocate 32 bytes of shared memory ("shared pool","unknown object","sga heap(5,0)","fixed allocation callback").
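The number of subpools in use can be estimated from the child shared pool latches that show activity; a sketch (the exact contents of the view are version dependent, so treat the result as an approximation):

-- Child shared pool latches that have actually been used
SELECT count(*)
  FROM v$latch_children
 WHERE name = 'shared pool'
   AND gets > 0;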
The library cache structures are the home for cursors, SQL statements, execution plans, and parsed trees, among other things. The structures are protected by the library cache latch. Oracle processes acquire the library cache latch when modifying, inspecting, pinning, locking, loading, or executing objects in the library cache structures. The number of child library cache latches that exist in an instance can be discovered by querying the V$LATCH_CHILDREN view, as shown next. The number is usually the smallest prime number that is greater than the CPU_COUNT. This number can be modified by the _KGL_LATCH_COUNT parameter. Starting in Oracle9i Database, V$SQLAREA has a CHILD_LATCH column, which allows you to see how cursors are distributed across the library cache latches.
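A query along these lines returns the count (in releases where library cache latches have been replaced by mutexes, the row may be absent):

-- Number of child library cache latches
SELECT count(*)
  FROM v$latch_children
 WHERE name = 'library cache';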
Contention for the shared pool and library cache latches is mainly due to intense hard parsing. A hard parse applies to new cursors and to cursors that have been aged out and must be reparsed the next time they are executed. Excessive hard parsing is common among applications that primarily use SQL statements with literal values. A hard parse is a very expensive operation, and a child library cache latch must be held for the duration of the parse.
You should see a significant reduction in contention for the shared pool and library cache latches when you convert literal SQL statements to use bind variables. The conversion is best done in the application. A workaround is to set the initialization parameter CURSOR_SHARING to FORCE. This allows statements that differ only in their literal values to share a cursor, and therefore reduces latch contention, memory usage, and hard parsing.
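If the application cannot be changed, the parameter can be set at the system or session level; a sketch:

-- System-wide workaround for literal SQL
ALTER SYSTEM SET cursor_sharing = FORCE;
-- Or limit the change to a single session
ALTER SESSION SET cursor_sharing = FORCE;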
Oracle uses multiple child cursors to distinguish SQL statements that are identical in their characters but cannot be shared because they refer to different underlying objects. For example, if there are three CUSTOMER tables in the database and each table is owned by a different schema, the SELECT * FROM CUSTOMER issued by each owner will have the same hash value but different child numbers. When a hash value of a statement being parsed matches the hash value of a statement with a high number of children or version count, Oracle has to compare the statement with all the existing versions. The library cache latch must be held for the duration of the inspection, and this may cause other processes to miss on their library cache latch requests. You can minimize this problem by having unique object names in the database. The following query lists all SQL statements in the V$SQLAREA with version counts greater than 20:
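A sketch of such a query (the threshold of 20 is arbitrary; HASH_VALUE is used because it exists in both older and newer releases):

-- Statements with an unusually high number of child cursors
SELECT version_count, hash_value, sql_text
  FROM v$sqlarea
 WHERE version_count > 20
 ORDER BY version_count DESC;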
When data blocks are read into the SGA, their buffer headers are placed on linked lists (hash chains) that hang off hash buckets. This memory structure is protected by a number of child cache buffers chains latches (also known as hash latches or CBC latches). Figure 6-1 illustrates the relationships between hash latches, hash buckets, buffer headers, and hash chains.
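The number of cache buffers chains child latches in the instance can be checked the same way as the other child latches; a sketch:

-- Number of child cache buffers chains latches
SELECT count(*)
  FROM v$latch_children
 WHERE name = 'cache buffers chains';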
A process that wants to add, remove, search, inspect, read, or modify a block on a hash chain must first acquire the cache buffers chains latch that protects the buffers on the chain. This guarantees exclusive access and prevents other processes from reading or changing the same chain from underneath it. Concurrency is thereby sacrificed for the sake of integrity.
Hot blocks are another common cause of cache buffers chains latch contention. This happens when multiple sessions repeatedly access one or more blocks that are protected by the same child cache buffers chains latch. This is mostly an application issue. In most cases, increasing the number of cache buffers chains latches will do little to improve performance. This is because blocks are hashed to hash buckets and chains based on the block address and the number of hash buckets, and not the number of cache buffers chains latches. If the block address and the number of hash buckets remain the same, chances are those few hot blocks will still be covered by one cache buffers chains latch, unless the number of latches is drastically increased.
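The child latches with the most sleeps point to the hash chains that hold the hot blocks; a sketch (identifying the individual blocks requires joining to X$BH as SYS, which is left out here):

-- Top 10 cache buffers chains child latches by sleeps
SELECT *
  FROM (SELECT addr, gets, misses, sleeps
          FROM v$latch_children
         WHERE name = 'cache buffers chains'
         ORDER BY sleeps DESC)
 WHERE ROWNUM <= 10;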
Typically, foreground processes access the LRU lists when looking for free buffers. The DBWR background process accesses the LRU lists to move dirty buffers onto the LRUW list and to return cleaned buffers from the LRUW list once they have been written. All processes must acquire the cache buffers lru chain latch before performing any kind of operation on a working set.
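Contention on this latch shows up in the aggregate latch statistics; a sketch:

-- Aggregate statistics for the cache buffers lru chain latch
SELECT name, gets, misses, sleeps
  FROM v$latch
 WHERE name = 'cache buffers lru chain';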
The row cache objects latch protects the Oracle data dictionary (or row cache, because information is stored as rows instead of blocks). Processes must obtain this latch when loading, referencing, or freeing entries in the dictionary. This latch is a solitary latch up to Oracle8i Database. With the new multiple subpools architecture in Oracle9i Database, multiple copies of child row cache objects latches exist. Oracle Database 10g has a separate wait event for this latch known as latch: row cache objects.
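Dictionary cache activity, and therefore pressure on this latch, can be reviewed per cache from V$ROWCACHE; a sketch:

-- Row cache (dictionary cache) activity, busiest caches first
SELECT parameter, gets, getmisses, modifications
  FROM v$rowcache
 ORDER BY gets DESC;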
A hard parse rate of more than 100 a second indicates that there is a very high amount of hard parsing on the system. High hard parse rates cause serious performance issues and must be investigated. Usually, a high hard parse rate is accompanied by latch contention on the shared pool and library cache latches.
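The cumulative parse counters are in V$SYSSTAT; a rate requires two samples taken some seconds apart (or an AWR/Statspack interval). A sketch of the counters involved:

-- Cumulative parse counters since instance startup
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('parse count (total)', 'parse count (hard)');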
Check whether the sum of the wait times for the library cache and shared pool latch events (latch: library cache, latch: library cache pin, latch: library cache lock, and latch: shared pool) is significant compared to the DB time statistic found in V$SYSSTAT. If so, examine the SQL ordered by Parse Calls section of the Automatic Workload Repository report.
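A rough instance-level comparison can be made from V$SYSTEM_EVENT and V$SYSSTAT; a sketch (both figures are cumulative since startup and are assumed here to be reported in centiseconds, which should be verified on your release):

-- Cumulative wait time for the latch events of interest
SELECT event, time_waited
  FROM v$system_event
 WHERE event IN ('latch: library cache',
                 'latch: library cache pin',
                 'latch: library cache lock',
                 'latch: shared pool');

-- Cumulative DB time for comparison
SELECT value AS db_time
  FROM v$sysstat
 WHERE name = 'DB time';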
There is a dedicated latch-related wait event for the more popular latches that often generate significant contention. For those events, the name of the latch appears in the name of the wait event, such as latch: library cache or latch: cache buffers chains. This enables you to quickly figure out if a particular type of latch is responsible for most of the latch-related contention. Waits for all other latches are grouped in the generic latch free wait event.
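A sketch that lists both the dedicated latch wait events and the generic one, ordered by time waited:

-- Latch-related wait events, heaviest first
SELECT event, total_waits, time_waited
  FROM v$system_event
 WHERE event LIKE 'latch:%'
    OR event = 'latch free'
 ORDER BY time_waited DESC;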