E.2 Statistics Descriptions
Table E-1 describes the statistics stored in the V$SESSTAT and V$SYSSTAT views. The statistics are listed in alphabetical order.
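Because V$SESSTAT records each statistic only by number and value, it is typically joined to V$STATNAME to obtain the statistic name and class. The following query is a minimal sketch of that join; the session ID in the WHERE clause (42) is only a placeholder.

```sql
-- Minimal sketch: list statistic names, classes, and values for one session.
-- The SID value (42) is a placeholder; substitute the session of interest.
SELECT sn.name,
       sn.class,
       ss.value
  FROM v$sesstat  ss
  JOIN v$statname sn
    ON sn.statistic# = ss.statistic#
 WHERE ss.sid = 42
 ORDER BY sn.name;
```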
The CLASS column contains a number representing one or more statistics classes. The following class numbers are additive:
- 1, User
- 2, Redo
- 4, Enqueue
- 8, Cache
- 16, OS
- 32, Real Application Clusters
- 64, SQL
- 128, Debug
- 256, Instance level
For example, a class value of 72 represents a statistic that relates to SQL statements and caching.
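Because the class values are additive bit flags, membership in a particular class can be tested with BITAND. The following query is a minimal sketch that lists the statistics whose class includes the Cache bit (8); a statistic with a class value of 72 (64 + 8) would also be returned.

```sql
-- Minimal sketch: statistics whose CLASS includes the Cache bit (8).
-- A CLASS value of 72 (64 + 8) satisfies BITAND(class, 8) = 8.
SELECT name, class, value
  FROM v$sysstat
 WHERE BITAND(class, 8) = 8
 ORDER BY name;
```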
Some statistics are populated only if the TIMED_STATISTICS initialization parameter is set to true. Those statistics are flagged with a Y in the right-hand column.
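One way to confirm whether those timed statistics are being collected is to inspect the parameter before querying the views, and to enable it at the session level if required; for example:

```sql
-- Check the current setting of TIMED_STATISTICS.
SELECT name, value
  FROM v$parameter
 WHERE name = 'timed_statistics';

-- Enable timed statistics for the current session if needed.
ALTER SESSION SET timed_statistics = TRUE;
```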
Table E-1 Database Statistics Descriptions
Name | Class | Description | TIMED_STATISTICS |
---|---|---|---|
1 |
The total wait time (in centiseconds) for waits that belong to the Application wait class |
|
|
8 |
Number of checkpoints completed by the background process. This statistic is incremented when the background process successfully advances the thread checkpoint. |
|
|
8 |
Number of checkpoints started by the background process. This statistic can be larger than "background checkpoints completed" if a new checkpoint overrides an incomplete checkpoint or if a checkpoint is currently under way. This statistic includes only checkpoints of the redo thread. It does not include:
|
|
|
128 |
This is a count of the times where a background process has set an alarm for itself and the alarm has timed out rather than the background process being posted by another process to do some work. |
|
|
128 |
Number of times an index branch block was split because of the insertion of an additional value |
|
|
72 |
Number of times a buffer was free when visited. Useful only for internal debugging purposes. |
|
|
72 |
Number of times a buffer was pinned when visited. Useful only for internal debugging purposes. |
|
|
1 |
Total number of bytes received from the client over Oracle Net Services |
|
|
1 |
Total number of bytes received from a database link over Oracle Net Services |
|
|
1 |
Total number of bytes sent to the client from the foreground processes |
|
|
1 |
Total number of bytes sent over a database link |
|
|
128 |
Useful only for internal debugging purposes |
|
|
32 |
Number of times a snapshot system change number (SCN) was allocated. The SCN is allocated at the start of a transaction. |
|
|
128 |
Number of calls to routine kcmgas to get a new SCN |
|
|
128 |
Number of calls to routine kcmgcs to get a current SCN |
|
|
128 |
Number of calls to routine kcsgrs to get a recent SCN |
|
|
8 |
Elapsed redo write time for changes made to |
Y |
|
128 |
Number of consistent gets that require both block rollbacks and block cleanouts. See Also: "consistent gets" |
|
|
128 |
Number of consistent gets that require only block cleanouts, no rollbacks. See Also: "consistent gets" |
|
|
64 |
Number of blocks obtained in a cluster scan |
|
|
64 |
Number of cluster scans that were started |
|
|
1 |
The total wait time (in centiseconds) for waits that belong to the Cluster wait class |
|
|
8 |
Number of buffers that were read through the least recently used end of the recycle cache with fast aging strategy |
|
|
8 |
Number of times Oracle attempted a cleanout at commit but could not find the correct block due to forced write, replacement, or switch |
|
|
8 |
Number of times Oracle attempted a cleanout at commit, but the buffer was currently being written |
|
|
8 |
Number of times the cleanout callback function returns |
|
|
8 |
Total number of times a commit cleanout was performed but failed because the block could not be pinned |
|
|
8 |
Number of times Oracle attempted block cleanout at commit during hot backup. The image of the block must be logged before the buffer can be made dirty. |
|
|
264 |
Number of times Oracle attempted a cleanout at commit, but could not modify PMEM buffers directly without a DRAM version |
||
8 |
Number of times a cleanout block at commit was performed but the writes to the database had been temporarily disabled |
|
|
8 |
Total number of times the cleanout block at commit function was performed |
|
|
8 |
Number of times the cleanout block at commit function completed successfully |
|
|
1 |
The number of asynchronous commits that were actually performed. These commits did not wait for the commit redo to be flushed and be present on disk before returning. |
|
|
1 |
The number of no-wait commit or asynchronous commit requests that were made either using SQL or the OCI transaction control API |
|
|
128 |
Number of times the system change number of a commit operation was cached |
|
|
1 |
The number of asynchronous/synchronous commits that were actually performed |
|
|
1 |
The number of no-wait or wait commits that were made either using SQL or the OCI transaction control API |
|
|
1 |
The number of synchronous commits that were actually performed. These commits waited for the commit redo to be flushed and be present on disk before returning. |
|
|
1 |
The number of waiting or synchronous commit requests that were made either using SQL or the OCI transaction control API |
|
|
1 |
The total wait time (in centiseconds) for waits that belong to the Concurrency wait class |
|
|
8 |
Number of times a user process has applied rollback entries to perform a consistent read on the block. Workloads that produce a great deal of consistent changes can consume a great deal of resources. The value of this statistic should be small in relation to the "consistent gets" statistic. |
|
|
8 |
Number of times a consistent read was requested for a block. See Also: "consistent changes" and "session logical reads" statistics |
|
|
8 |
Number of times a consistent read was requested for a block bypassing the buffer cache (for example, direct load operation). This is a subset of "consistent gets" statistics value. |
|
|
8 |
Number of times a consistent read was requested for a block from buffer cache. This is a subset of "consistent gets" statistics value. |
|
|
8 |
Number of direct-mapped blocks accessed in CR (consistent read) mode from PMEM. This is a subset of "consistent gets" statistics value. |
||
8 |
Number of CR gets from pmem in direct read (e.g. tablescan) code path. This is a subset of "consistent gets from pmem" statistics value. |
||
8 |
Number of direct-mapped blocks accessed and examined in CR mode from PMEM, via regular path. This is a subset of "consistent gets from pmem" statistics value. |
||
8 |
Number of direct-mapped blocks accessed and examined in CR mode from PMEM, via fast path. This is a subset of both "consistent gets from pmem" and "consistent gets pmem examination" statistics values. |
||
8 |
Number of direct-mapped blocks accessed and pinned in CR mode from PMEM, via regular path. This is a subset of "consistent gets from pmem" statistics value. |
||
8 |
Number of direct-mapped blocks accessed and pinned in CR mode from PMEM, via fast path. This is a subset of both "consistent gets from pmem" and "consistent gets pmem pin" statistics values. |
||
1 |
Amount of CPU time (in 10s of milliseconds) used by a session from the time a user call starts until it ends. If a user call completes within 10 milliseconds, the start and end user-call time are the same for purposes of this statistic, and 0 milliseconds are added. A similar problem can exist in the reporting by the operating system, especially on systems that suffer from many context switches. |
Y |
|
128 |
The CPU time used when the call is started. See Also: "CPU used by this session" |
Y |
|
8 |
Number of |
|
|
32 |
Total number of begin requests received by the Oracle database allowing the database to detect and process requests |
||
288 |
Total database time (DB time, in microseconds) within requests to the Oracle database, measuring DB time used from the beginning to the end of each request. (See also the DB time statistic.) |
||
288 |
Total database time (DB time, in microseconds) within requests to the Oracle database protected by Application Continuity or Transparent Application Continuity, measuring DB time used from the beginning to the end of each request. (See also the DB time statistic.) This is a subset of the "cumulative DB time in requests" value, which enables you to calculate the ratio of total time in requests to protected time in requests. See also Application Continuity Protection Check (ACCHK). |
||
32 |
Total number of end requests received by the Oracle database allowing the database to detect and process requests |
||
32 |
Total wall clock time (in microseconds) within requests to the Oracle database, measuring time spent from the beginning to the end of each request. This statistic is used by draining and planned failover. (See |
||
32 |
Total number of user calls received by the Oracle database within requests to the Oracle database |
||
32 |
Total number of user calls received by the Oracle database within requests to the Oracle database that were protected by Application Continuity or Transparent Application Continuity |
||
8 |
Number |
|
|
128 |
Number of privilege checks conducted during execution of an operation |
|
|
128 |
Number of undo records applied to data blocks that have been rolled back for consistent read purposes |
|
|
8 |
Number of times that cooling occurred on this instance |
|
|
8 |
Number of times that objects got evicted by automatic big table caching on this instance |
|
|
8 |
Number of times that objects got evicted on this instance due to a cooling action |
|
|
8 |
Number of times that objects got evicted due to caching replacement, that is, when an object is evicted because a hotter object forces it to be evicted from the cache |
|
|
8 |
Number of blocks scanned by automatic big table caching on this instance using parallel query |
|
|
8 |
Number of blocks scanned by automatic big table caching on this instance by direct read from disk |
|
|
8 |
Number of blocks scanned by automatic big table caching on this instance by cache read from memory |
|
|
8 |
Number of blocks scanned by automatic big table caching on this instance by Exadata offloading |
|
|
8 |
Number of times the objects in automatic big table caching are scanned using parallel query |
|
|
8 |
Closely related to "consistent changes", this statistic counts the total number of changes that were part of an update or delete operation that were made to all blocks in the SGA. Such changes generate redo log entries and hence become permanent changes to the database if the transaction is committed. This approximates total database work. This statistic indicates the rate at which buffers are being dirtied (on a per-transaction or per-second basis, for example). |
|
|
8 |
Number of times a See Also: "consistent gets" |
|
|
8 |
Number of times a |
|
|
8 |
Number of times a |
|
|
8 |
Number of direct-mapped blocks accessed in CURRENT mode from PMEM, via regular path |
||
8 |
Number of direct-mapped blocks accessed in CURRENT mode from PMEM, via fast path |
||
8 |
Number of buffers that were written for checkpoints |
|
|
8 |
Number of times the DBWR was asked to scan the cache and write all blocks marked for a checkpoint or the end of recovery. This statistic is always larger than "background checkpoints completed". |
|
|
8 |
Number of times that DBWR scans the LRU queue looking for buffers to write. This count includes scans to fill a batch being written for another purpose (such as a checkpoint). |
|
|
8 |
Number of times that DBWR tried to save a buffer for writing and found that it was already in the write batch. This statistic measures the amount of "useless" work that DBWR had to do in trying to fill the batch. Many sources contribute to a write batch. If the same buffer from different sources is considered for adding to the write batch, then all but the first attempt will be "useless" because the buffer is already marked as being written. |
|
|
8 |
Number of rollback segment headers written by DBWR. This statistic indicates how many "hot" buffers were written, causing a user process to wait while the write completed. |
|
|
8 |
Number of rollback segment blocks written by DBWR |
|
|
32 |
Number of DDL statements that were executed in parallel |
|
|
128 |
Number of times cleanout records are deferred, piggyback with changes, always current get |
|
|
32 |
Number of times a serial execution plan was converted to a parallel plan |
|
|
8 |
Number of dirty buffers found by the user process while it is looking for a buffer to reuse |
|
|
32 |
Number of DML statements that were executed in parallel |
|
|
64 |
When a long-running DML is executing, the cursor may get invalidated due to some concurrent DDL on one of the cursor's dependencies. In this case, an internal ORA-14403 error is thrown and is caught and cleared in one of the calling functions. The current work is rolled back and the DML is restarted without the user being notified of this. The statistic counts the number of times that the thrown, caught, and cleared (ORA-14403) sequence occurred for DML statements. Should a DML vary widely in execution time, check this statistic to see if it increments during the DML execution. If so, then concurrent DDL may be the cause of the extra elapsed time. |
|
|
4 |
Total number of conversions of the state of table or row lock |
|
|
4 |
Total number of deadlocks between table or row locks in different sessions |
|
|
4 |
Total number of table or row locks released |
|
|
4 |
Total number of table or row locks acquired |
|
|
4 |
Total number of table and row locks (acquired and converted) that timed out before they could complete |
|
|
4 |
Total number of waits that occurred during an enqueue convert or get because the enqueue get was deferred |
|
|
8 |
Number of times that a process detected a potential deadlock when exchanging two buffers and raised an internal, restartable error. Index scans are the only operations that perform exchanges. |
|
|
64 |
Total number of calls (user and recursive) that executed SQL statements |
|
|
128 |
Number of times the flashback data archive background process was woken up to do archiving |
|
|
1 |
Total time spent in wait (in microseconds) for I/O to datafiles, excluding the service time for such I/O. This is cumulative for all I/Os for all datafiles. The service time for one I/O operation is estimated as the minimum time spent in the I/O call seen so far. This service time is subtracted from the time spent in each I/O call to get the wait time for that I/O. |
|
|
8 |
Flash cache buffer is aged out of the Database Smart Flash Cache |
|
|
8 |
Database Smart Flash Cache buffer is invalidated due to object or range reuse, and so on. The Database Flash Cache Buffer was in use at the time of eviction. |
|
|
8 |
Database Smart Flash Cache buffer is invalidated due to object or range reuse, and so on. The Database Smart Flash Cache buffer was not in use at the time of eviction. |
|
|
8 |
In-memory buffer was skipped for insertion into the Database Smart Flash Cache because the buffer was corrupted |
|
|
8 |
In-memory buffer was skipped for insertion into the Database Smart Flash Cache because DBWR was busy writing other buffers |
|
|
8 |
In-memory buffer was skipped for insertion into the Database Smart Flash Cache because it was already in the flash cache |
|
|
8 |
In-memory buffer was skipped for insertion into the Database Smart Flash Cache because it was being modified |
|
|
8 |
In-memory buffer was skipped for insertion into the Database Smart Flash Cache because it was not current |
|
|
8 |
In-memory buffer was skipped for insertion into the Database Smart Flash Cache because the type of buffer was not useful to keep |
|
|
8 |
Total number of in-memory buffers inserted into the Database Smart Flash Cache |
|
|
2 |
Total size in bytes of flashback database data written by RVWR to flashback database logs |
|
|
2 |
Total number of writes by RVWR to flashback database logs |
|
|
128 |
Number of transactions modifying tables enabled for flashback data archive which were archived by a foreground process |
|
|
8 |
Number of buffers skipped over from the end of an LRU queue to find a reusable buffer. The difference between this statistic and "dirty buffers inspected" is the number of buffers that could not be used because they had a user, a waiter, or were being read or written, or because they were busy or needed to be written after rapid aging out. |
|
|
8 |
Number of PMEM buffers skipped over from the end of an LRU queue to find a reusable PMEM buffer |
||
8 |
Number of times a reusable buffer or a free buffer was requested to create or load a block |
|
|
8 |
Number of times a reusable PMEM buffer or a free PMEM buffer was requested to directly access a block |
||
40 |
A read wait is when a CR server waits for a disk read to complete before serving a block to another instance. This statistic displays the number of times a read wait ended in failure, that is, after waiting it was unable to serve a block. |
|
|
40 |
A read wait is when a CR server waits for a disk read to complete before serving a block to another instance. This statistic displays the number of times a read wait timed out, that is, the disk read did not complete in time, so the wait was aborted. |
|
|
40 |
The number of times a CR server waited for a disk read, and then successfully served a block |
|
|
32 |
Amount of CPU time (in 10s of milliseconds) used by synchronous and asynchronous global enqueue activity in a session from the time a user call starts until it ends. If a user call completes within 10 milliseconds, the start and end user-call time are the same for purposes of this statistic, and 0 milliseconds are added. |
|
|
32 |
Total elapsed time in 10s of milliseconds of all synchronous and asynchronous global enqueue gets and converts |
|
|
32 |
Total number of asynchronous global enqueue gets and converts |
|
|
32 |
Total number of synchronous global enqueue gets and converts |
|
|
32 |
Total number of synchronous global enqueue releases |
|
|
8 |
When a hot buffer reaches the tail of its replacement list, Oracle moves it back to the head of the list to keep it from being reused. This statistic counts such moves. |
|
|
8 |
When a hot PMEM buffer reaches the tail of its replacement list, Oracle moves it back to the head of the list to keep it from being reused. This statistic counts such moves. |
||
264 |
Number of hot PMEM buffers that were attempted to be exchanged with colder DRAM buffers, to ensure both blocks are still cached |
||
264 |
Number of hot PMEM buffers that were successfully exchanged with colder DRAM buffers, with both blocks still cached |
||
8 |
Number of hot PMEM buffer cache blocks that were attempted to be migrated to DRAM to reduce access latency |
||
264 |
Number of hot PMEM buffer cache blocks that were successfully migrated to DRAM to reduce access latency |
||
128 |
Number of times cleanout records are applied immediately during consistent-read requests |
|
|
128 |
Number of times cleanout records are applied immediately during current gets. Compare this statistic with "deferred (CURRENT) block cleanout applications" |
|
|
384 |
Number of in-memory vectorized hash joins attempted |
||
384 |
Number of in-memory vectorized hash joins completed |
||
384 |
Number of blocks on the hybrid In-Memory scan list |
||
384 |
Number of rows on the hybrid In-Memory scan list |
||
128 |
The amount of memory by which the column store got resized |
|
|
128 |
Total amount of DB time (in milliseconds) spent populating CUs into the IM column store due to segment scans |
|
|
128 |
Size in bytes of in-memory EU data populated due to segment scans |
|
|
128 |
Uncompressed size in bytes of in-memory EU data populated due to segment scans |
|
|
128 |
Number of CUs populated in the IM column store due to segment scans |
|
|
128 |
Number of CUs populated in the IM column store due to segment scans using memcompress for capacity high |
|
|
128 |
Number of CUs populated in the IM column store due to segment scans using memcompress for capacity low |
|
|
128 |
Number of CUs populated in the IM column store due to segment scans using memcompress for DML |
|
|
128 |
Number of CUs populated in the IM column store due to segment scans using memcompress for query high |
|
|
128 |
Number of CUs populated in the IM column store due to segment scans using memcompress for query low |
|
|
128 |
Number of CUs populated in the IM column store due to segment scans without compression |
|
|
128 |
Number of CUs requested to be populated due to segment scans |
|
|
128 |
Number of EUs populated in the IM column store due to segment scans |
|
|
128 |
Total amount of DB time (in milliseconds) spent populating EUs into the IM column store due to segment scans |
|
|
128 |
Number of columns populated in EUs due to segment scans |
|
|
128 |
Number of EUs populated in the IM column store due to segment scans at memcompress for capacity high |
|
|
128 |
Number of EUs populated in the IM column store due to segment scans at memcompress for capacity low |
|
|
128 |
Number of EUs populated in the IM column store due to segment scans at memcompress for dml |
|
|
128 |
Number of EUs populated in the IM column store due to segment scans at memcompress for query high |
|
|
128 |
Number of EUs populated in the IM column store due to segment scans at memcompress for query low |
|
|
128 |
Number of EUs populated in the IM column store without compression due to segment scans |
|
|
128 |
Number of EUs requested to be populated in the IM column store due to segment scans |
|
|
128 |
Number of CUs that fail to populate due to lack of contiguous space in In-Memory area |
|
|
128 |
Number of segments populated due to segment scan |
|
|
128 |
Number of segments requested to be populated due to segment scan |
|
|
128 |
Total amount of wall clock time (in milliseconds) spent populating CUs into the IM column store due to segment scans |
|
|
128 |
Total amount of DB time (in milliseconds) spent prepopulating CUs into the IM column store priority |
|
|
128 |
Size in bytes of in-memory EU data populated due to priority |
|
|
128 |
Uncompressed size in bytes of in-memory EU data populated due to priority |
|
|
128 |
Number of CUs prepopulated in the IM column store due to priority |
|
|
128 |
Number of CUs prepopulated in the IM column store due to priority using memcompress for capacity high |
|
|
128 |
Number of CUs prepopulated in the IM column store due to priority using memcompress for capacity low |
|
|
128 |
Number of CUs prepopulated in the IM column store due to priority using memcompress for DML |
|
|
128 |
Number of CUs prepopulated in the IM column store due to priority using memcompress for query high |
|
|
128 |
Number of CUs prepopulated in the IM column store due to priority using memcompress for query low |
|
|
128 |
Number of CUs prepopulated in the IM column store due to priority without compression |
|
|
128 |
Number of CUs requested to be prepopulated due to priority |
|
|
128 |
Number of EUs populated in the IM column store due to priority |
|
|
128 |
Total amount of DB time (in milliseconds) spent populating EUs into the IM column store due to priority |
|
|
128 |
Number of columns populated in EUs due to priority |
|
|
128 |
Number of EUs populated in the IM column store due to priority at memcompress for capacity high |
|
|
128 |
Number of EUs populated in the IM column store due to priority at memcompress for capacity low |
|
|
128 |
Number of EUs populated in the IM column store due to priority at memcompress for dml |
|
|
128 |
Number of EUs populated in the IM column store due to priority at memcompress for query high |
|
|
128 |
Number of EUs populated in the IM column store due to priority at memcompress for query low |
|
|
128 |
Number of EUs populated in the IM column store without compression due to priority |
|
|
128 |
Number of EUs requested to be populated in the IM column store due to priority |
|
|
128 |
Number of segments prepopulated due to priority |
|
|
128 |
Number of segments requested to be prepopulated due to priority |
|
|
128 |
Total amount of DB time (in milliseconds) spent repopulating CUs into the IM column store due to DML changes |
|
|
128 |
Size in bytes of in-memory EU data repopulated due to EU reaching staleness threshold |
|
|
128 |
Total number of CUs requested to be repopulated due to CU reaching staleness threshold |
|
|
128 |
Number of CUs repopulated in the IM column store using memcompress for capacity high due to CU reaching staleness threshold |
|
|
128 |
Number of CUs repopulated in the IM column store using memcompress for capacity low due to CU reaching staleness threshold |
|
|
128 |
Number of CUs repopulated in the IM column store using memcompress for DML due to CU reaching staleness threshold |
|
|
128 |
Number of CUs repopulated in the IM column store using memcompress for query high due to CU reaching staleness threshold |
|
|
128 |
Number of CUs repopulated in the IM column store using memcompress for query low due to CU reaching staleness threshold |
|
|
128 |
Number of CUs repopulated in the IM column store without compression due to CU reaching staleness threshold |
|
|
128 |
Total number of CUs requested to be repopulated due to CU reaching staleness threshold |
|
|
128 |
Number of CUs repopulated with double-buffering enabled on the earlier version of the CUs |
|
|
128 |
Number of CUs requested to be repopulated with double-buffering enabled on the earlier version of the CUs |
|
|
128 |
Number of EUs requested to be repopulated due to EU reaching staleness threshold |
|
|
128 |
Total amount of DB time (in milliseconds) spent repopulating EUs into the IM column store due to DML changes |
|
|
128 |
Number of columns repopulated in EUs due to EU reaching staleness threshold |
|
|
128 |
Number of EUs repopulated in the IM column store at memcompress for capacity high due to EU reaching staleness threshold |
|
|
128 |
Number of EUs repopulated in the IM column store at memcompress for capacity low due to EU reaching staleness threshold |
|
|
128 |
Number of EUs repopulated in the IM column store at memcompress for DML due to EU reaching staleness threshold |
|
|
128 |
Number of EUs repopulated in the IM column store at memcompress for query high due to EU reaching staleness threshold |
|
|
128 |
Number of EUs repopulated in the IM column store at memcompress for query low due to EU reaching staleness threshold |
|
|
128 |
Number of EUs repopulated in the IM column store without compression due to EU reaching staleness threshold |
|
|
128 |
Total number of EUs requested to be repopulated due to EU reaching staleness threshold |
|
|
128 |
Number of CUs repopulated incrementally from earlier versions of the CUs |
|
|
128 |
Number of CUs requested to be repopulated incrementally from earlier versions of the CUs |
|
|
128 |
Number of EUs repopulated using unchanged data from the current EU due to EU reaching staleness threshold |
|
|
128 |
Number of EUs requested to be repopulated using unchanged data from the current EU due to EU reaching staleness threshold |
|
|
128 |
Number of CUs that failed to repopulate due to lack of contiguous space in In-Memory area |
|
|
128 |
Number of CUs repopulated in the IM column store due to scans |
|
|
128 |
Number of CUs requested to be repopulated in the IM column store due to scans |
|
|
128 |
Number of EUs repopulated in the IM column store that were triggered by scans on the EU |
|
|
128 |
Number of EUs requested for repopulation in the IM column store that were triggered by scans on the EU |
|
|
128 |
Number of segments repopulated |
|
|
128 |
Indicates the number of segments requested to be repopulated |
|
|
128 |
Total amount of DB time (in milliseconds) spent trickle repopulating CUs into the IM column store due to DML changes |
|
|
128 |
Size in bytes of in-memory EU data repopulated due to DML changes |
|
|
128 |
Uncompressed size in bytes of in-memory EU data repopulated due to EU reaching staleness threshold |
|
|
128 |
Number of CUs trickle repopulated in the IM column store due to DML changes |
|
|
128 |
Number of CUs trickle repopulated in the IM column store using memcompress for capacity high due to DML changes |
|
|
128 |
Number of CUs trickle repopulated in the IM column store using memcompress for capacity low due to DML changes |
|
|
128 |
Number of CUs trickle repopulated in the IM column store using memcompress for DML due to DML changes |
|
|
128 |
Number of CUs trickle repopulated in the IM column store using memcompress for query high due to DML changes |
|
|
128 |
Number of CUs trickle repopulated in the IM column store using memcompress for query low due to DML changes |
|
|
128 |
Number of CUs trickle repopulated in the IM column store without compression due to DML changes |
|
|
128 |
Total number of CUs requested to be trickle repopulated due to DML changes |
|
|
128 |
Number of CUs trickle repopulate tasks submitted |
|
|
128 |
Number of EUs trickle repopulated in the IM column store due to DML changes |
|
|
128 |
Total amount of DB time (in milliseconds) spent trickle repopulating EUs into the IM column store due to DML changes |
|
|
128 |
Number of columns repopulated in EUs due to DML changes |
|
|
128 |
Number of EUs trickle repopulated in the IM column store due to DML changes at memcompress for capacity high |
|
|
128 |
Number of EUs trickle repopulated in the IM column store due to DML changes at memcompress for capacity low |
|
|
128 |
Number of EUs trickle repopulated in the IM column store due to DML changes at memcompress for dml |
|
|
128 |
Number of EUs trickle repopulated in the IM column store due to DML changes at memcompress for query high |
|
|
128 |
Number of EUs trickle repopulated in the IM column store due to DML changes at memcompress for query low |
|
|
128 |
Number of EUs trickle repopulated in the IM column store without compression due to DML changes |
|
|
128 |
Number of EUs requested to be trickle repopulated in the IM column store due to DML changes |
|
|
128 |
Number of extents that could not be read from the IM column store because one of the columns required was not in memory |
|
|
128 |
Number of extents where no IMCU exists |
|
|
128 |
Number of memcompress for query high CUs scanned in the IM column store |
|
|
128 |
Number of memcompress for query low CUs scanned in the IM column store |
|
|
128 |
Number of memcompress for capacity low CUs scanned in the IM column store |
|
|
128 |
Number of memcompress for capacity high CUs scanned in the IM column store |
|
|
128 |
Number of memcompress for DML CUs scanned in the IM column store |
|
|
128 |
Number of where clause predicates applied to the In-Memory storage index |
|
|
128 |
Number of where clause predicates applied to the IM column store for which either all rows pass min/max pruning via an In-Memory storage index or no rows pass min/max pruning |
|
|
128 |
Number of CUs pruned by the storage index |
|
|
128 |
Number of In-Memory table scans which benefited from In-Memory dynamic scans |
|
|
128 |
Number of IMCUs processed normally because of Resource Manager limit |
|
|
128 |
Number of IMCUs processed in parallel by a worker thread |
|
|
128 |
Number of rows processed by In-Memory dynamic scans |
|
|
128 |
Size in bytes of in-memory EU data accessed by scans |
|
|
128 |
Uncompressed size in bytes of in-memory EU data accessed by scans |
|
|
128 |
Number of rows scanned from EUs in the IM column store before where clause predicate applied |
|
|
128 |
Number of columns in the EUs accessed by scans |
|
|
128 |
Number of columns in the EUs decompressed by scans |
|
|
128 |
Number of columns that would have been accessed from the EU if the scans looked at all columns |
|
|
128 |
Number of memcompress for capacity high EUs scanned in the IM column store |
|
|
128 |
Number of memcompress for capacity low EUs scanned in the IM column store |
|
|
128 |
Number of memcompress for DML EUs scanned in the IM column store |
|
|
128 |
Number of memcompress for query high EUs scanned in the IM column store |
|
|
128 |
Number of memcompress for query low EUs scanned in the IM column store |
|
|
128 |
Number of uncompressed EUs scanned in the IM column store |
|
|
128 |
Number of split EU pieces among all IM EUs |
|
|
128 |
Number of rows in scanned In-Memory Compression Units (IMCUs) |
|
|
128 |
Number of rows that were not scanned in the IM column store as they were pruned via a number of optimizations such as min/max pruning via In-Memory storage indexes |
|
|
128 |
Number of rows returned from the IM column store |
|
|
128 |
Number of rows scanned from the IM column store after applying valid vector |
|
|
128 |
Number of CUs that are eligible for min/max pruning via storage index |
|
|
128 |
Number of In-Memory bytes allocated |
|
|
128 |
Number of space requests for CUs |
|
|
128 |
Number of In-Memory extents allocated |
|
|
128 |
Number of snapshot segments created |
|
|
128 |
Number of snapshot segments deleted |
|
|
128 |
Number of transactions that triggered data to be journaled in the IM column store |
|
|
128 |
Number of CUs in the IM column store invalidated by transactions |
|
|
128 |
Number of rows in the IM column store invalidated by transactions |
|
|
128 |
Number of rows logged in the transaction journal |
|
|
1 |
The total wait time (in microseconds) for waits that belong to the Idle wait class. See Also: "non-idle wait count" and "non-idle wait time" |
|
|
128 |
Number of CUs created with uncompressed sentinels |
|
|
128 |
Number of times CU lock structure expanded |
|
|
128 |
Number of times a row migrated from a CU |
|
|
128 |
Number of times a CU row was unpurged during insert |
|
|
128 |
Number of times dummy CU purged from leaf block |
|
|
128 |
Number of times leaf block split for CU lock expansion |
|
|
128 |
Number of leaf block splits due to CU row migration |
|
|
128 |
Number of times load created a well sized CU, no space for uncompressed rows |
|
|
128 |
Number of times load created a well sized CU, with space for uncompressed rows |
|
|
128 |
Number of times load CU gave negative compression |
|
|
128 |
Number of times load created an oversized CU |
|
|
128 |
Number of times load created a small CU |
|
|
128 |
Number of times shrink CU attempts resulted in uncompressed rows |
|
|
128 |
Number of leaf blocks flushed by load |
|
|
128 |
Number of leaf blocks flushed with small CU |
|
|
128 |
Number of leaf blocks flushed without a CU |
|
|
128 |
Number of leaf blocks flushed without uncompressed rows |
|
|
128 |
Number of times CU was resized after creating an oversized CU |
|
|
128 |
Number of rows compressed by load |
|
|
128 |
Number of rows left uncompressed by load |
|
|
128 |
Number of blocks decompressed for fast scan |
|
|
128 |
Number of times decompressed CU buffer was reused by fast scan |
|
|
128 |
Number of rows in decompressed buffer for fast scan |
|
|
128 |
Number of rows used from decompressed buffer for fast scan |
|
|
128 |
Number of times decompressing a CU was not possible for fast scan |
|
|
128 |
Number of times 90-10 leaf block CU splits were made 50-50 |
|
|
128 |
Number of times leaf block recompression reached the recompression limit |
|
|
128 |
Number of times leaf block recompression returned negative compression |
|
|
128 |
Number of times leaf block recompression returned not enough space |
|
|
128 |
Number of times leaf block recompression not attempted |
|
|
128 |
Number of times leaf block recompression returned an oversized CU |
|
|
128 |
Number of times leaf block recompression returned a CU with 0 rows |
|
|
128 |
Number of CUs created with reduced number of sentinels |
|
|
128 |
Number of CUs created with zero sentinels |
|
|
128 |
Number of times a leaf block CU was recompressed |
|
|
128 |
Number of times reorg found a neighboring block prefix count mismatch |
|
|
128 |
Number of times prefix compression was not applied to avoid negative compression |
|
|
128 |
Number of times prefix count was changed to an optimal value |
|
|
128 |
Number of times prefix count was already the optimal value |
|
|
128 |
Number of times a block reorg avoided a new block being created during load |
|
|
128 |
Number of times a block reorg avoided a block split during DML |
|
|
64 |
Number of fast full scans initiated using direct read |
|
|
64 |
Number of fast full scans initiated for full segments |
|
|
64 |
Number of fast full scans initiated with rowid endpoints specified |
|
|
128 |
For tables tracked by flashback data archive, the number of transactions modifying rows in those tables which are large in terms of size or number of changes |
|
|
128 |
Number of times an index leaf node was split because of the insertion of an additional value |
|
|
8 |
Number of LOB API read operations performed in the session/system. A single LOB API read may correspond to multiple physical/logical disk block reads. |
|
|
8 |
Number of LOB API write operations performed in the session/system. A single LOB API write may correspond to multiple physical/logical disk block writes. |
|
|
8 |
Number of LOB API write operations whose start offset or buffer size is not aligned to the internal chunk size of the LOB. Writes aligned to chunk boundaries are the most efficient write operations. The internal chunk size of a LOB is available through the LOB API (for example, DBMS_LOB.GETCHUNKSIZE()). |
|
|
1 |
Total number of logons since the instance started. Useful only in V$SYSSTAT. It gives an instance overview of all processes that logged on. |
|
|
1 |
Total number of current logons. Useful only in V$SYSSTAT. |
|
|
128 |
Total failed puts on hash index |
|
|
128 |
Total lookup failures due to read failure on blocks because of concurrent changes |
|
|
128 |
Total lookup failures due to concurrent hash bucket changes |
|
|
128 |
Total hits on hash index – primary key found |
|
|
128 |
Total lookup failures due to block pointed to by hash index being no longer the current version |
|
|
128 |
Total number of lookups on hash index |
|
|
128 |
Total misses on hash index due to primary key not found |
|
|
128 |
Total puts on hash index |
|
|
128 |
Total successful puts on hash index |
|
|
128 |
Number of messages sent and received between background processes |
|
|
128 |
Number of messages sent and received between background processes |
|
|
72 |
Number of times a visit to a buffer attempted, but the buffer was not found where expected. Like "buffer is not pinned count" and "buffer is pinned count", this statistic is useful only for internal debugging purposes. |
|
|
128 |
Number of consistent gets that require neither block cleanouts nor rollbacks. See Also: "consistent gets" |
|
|
1 |
The total number of waits performed with wait events that were not part of the Idle wait class. See Also: "in call idle wait time" and "non-idle wait time" |
|
|
1 |
The total wait time (in microseconds) for waits that do not belong to the Idle wait class. See Also: "in call idle wait time" and "non-idle wait count" |
|
|
64 |
The number of times the |
|
|
64 |
The number of times an |
|
|
64 |
The number of times the |
|
|
64 |
The number of times an OLAP table function issues a custom member limit |
|
|
64 |
The total number of OLAP transactions executed within the session. This value provides a general indication of the level of OLAP activity in the session. |
|
|
64 |
The number of times an OLAP table function issues a fast limit |
|
|
64 |
The number of times an OLAP table function issues a full limit |
|
|
64 |
The number of times an OLAP table function issues a Cube Grouping ID (CGID) limit. Typically, this type of limit occurs for query rewrite transformations that resolve to a cube organized materialized view. |
|
|
64 |
The number of OLAP import rows loaded. This statistic provides the number of rows of the source cursor that are actually loaded into an Analytic Workspace (AW). The difference between the OLAP Import Rows Pushed and OLAP Import Rows Loaded provides the number of rejected rows. |
|
|
64 |
The number of OLAP import rows pushed. This statistic refers to the number of rows encountered from a source cursor and is useful during cube build operations. |
|
|
64 |
The number of times an OLAP table function issues an in-hierarchy limit. This type of limit can occur when you use cube dimension hierarchy views. |
|
|
64 |
The total time taken by all the OLAP Limit operations that were performed during the last call to the OLAP table function |
|
|
64 |
The number of times the OLAP page pool is changed for any attached AW. |
|
|
64 |
The number of times a requested page is found in the OLAP page pool. Use this statistic in conjunction with OLAP Paging Manager Cache Miss to determine the OLAP page pool efficiency ratio. |
|
|
64 |
The number of times a requested page is not found in the OLAP page pool. Use this statistic in conjunction with OLAP Paging Manager Cache Hit to determine the OLAP page pool efficiency ratio. |
|
|
64 |
The number of times the OLAP paging manager writes to a page in the OLAP page pool. |
|
|
64 |
The number of newly-created pages in the OLAP page pool that have not yet been written to the workspace LOB |
|
|
64 |
Size, in bytes, of the OLAP page pool allocated to a session and the sum of all OLAP page pools in the system. |
|
|
64 |
The number of times data was read from the table where the AW is stored. These are permanent LOB reads. |
|
|
64 |
The number of times an OLAP table function issues a row Id limit. |
|
|
64 |
The total time spent loading rows into an AW during cube build and OLAP SQL import operations. Use this statistic along with the OLAP engine elapsed time to measure time spent running OLAP engine routines that involve loading data into AWs from a SQL source. This statistic has the following levels of precision:
|
|
|
64 |
The number of rows processed by the OLAP row source |
|
|
64 |
The number of times the requested, dynamically aggregated value of an AW object was found in the OLAP session cache. |
|
|
64 |
The number of times the requested, dynamically aggregated value of an AW object was not found in the OLAP session cache. |
|
|
64 |
The number of times data was read from a temporary segment and not from the OLAP page pool |
|
|
64 |
The number of OLAP pages stored in temporary segments for analytic workspaces |
|
|
64 |
The number of times an OLAP table function issues a unique key attribute limit |
|
|
1 |
In V$SYSSTAT: Total number of cursors opened since the instance started. In V$SESSTAT: Total number of cursors opened since the start of the session. |
|
|
1 |
Total number of current open cursors |
|
|
1 |
The time a session spends on the CPU run queue (in microseconds), waiting to get the CPU to run |
|
|
16 |
Number of context switches that were enforced by the operating system |
|
|
16 |
Number of signals received |
|
|
16 |
Number of swap pages |
|
|
16 |
Number of voluntary context switches (for example, when a process gives up the CPU by a SLEEP() system call) |
|
|
32 |
Number of times parallel execution was requested and the degree of parallelism was reduced because of insufficient parallel execution servers |
|
|
32 |
Number of times parallel execution was requested and the degree of parallelism was reduced because of insufficient parallel execution servers |
|
|
32 |
Number of times parallel execution was requested and the degree of parallelism was reduced because of insufficient parallel execution servers |
|
|
32 |
Number of times parallel execution was requested and the degree of parallelism was reduced because of insufficient parallel execution servers |
|
|
32 |
Number of times parallel execution was requested but execution was serial because of insufficient parallel execution servers |
|
|
32 |
Number of times parallel execution was executed at the requested degree of parallelism |
|
|
64 |
Total number of parse calls on a describe cursor. This operation is less expensive than a hard parse and more expensive than a soft parse. |
|
|
64 |
Total number of parse calls (real parses). A hard parse is a very expensive operation in terms of memory use, because it requires Oracle to allocate a workheap and other memory structures and then build a parse tree. |
|
|
64 |
Total number of parse calls (hard, soft, and describe). A soft parse is a check on an object already in the shared pool, to verify that the permissions on the underlying object have not changed. |
|
|
64 |
Total CPU time used for parsing (hard and soft) in 10s of milliseconds |
Y |
|
64 |
Total elapsed time for parsing, in 10s of milliseconds. Subtract "parse time cpu" from this statistic to determine the total waiting time for parse resources. |
Y |
|
264 |
Number of direct-mapped references acquired from FsDirect. Note that in steady state this is approximately the same as the number of unmaps, because each map requires an unmap (e.g. reuses a PMEM buffer). |
||
8 |
Total size in bytes of all disk reads by application activity (and not other instance activity) only. |
|
|
8 |
Total number of reads from flash cache instead of disk |
|
|
8 |
Number of read requests for application activity (mainly buffer cache and direct load operation) which read one or more database blocks per request. This is a subset of "physical read total IO requests" statistic. |
|
|
8 |
Number of read requests that read one or more database blocks from the Database Smart Flash Cache or the Exadata Smart Flash Cache. |
|
|
8 |
Total size in bytes of disk reads by all database instance activity including application reads, backup and recovery, and other utilities. The difference between this value and "physical read bytes" gives the total read size in bytes by non-application workload. |
|
|
8 |
Number of read requests which read one or more database blocks for all instance activity including application, backup and recovery, and other utilities. The difference between this value and "physical read total multi block requests" gives the total number of small I/O requests which are less than 128 kilobytes down to single block read requests. |
|
|
8 |
Total number of Oracle instance read requests which read 128 kilobytes or more in two or more database blocks per request for all instance activity including application, backup and recovery, and other utilities. |
|
|
8 |
Total number of data blocks read from disk. This value can be greater than the value of "physical reads direct" plus "physical reads cache" because reads into process private buffers are also included in this statistic. |
|
|
8 |
Total number of data blocks read from disk into the buffer cache. This is a subset of "physical reads" statistic. |
|
|
8 |
Number of contiguous and noncontiguous blocks that were prefetched. |
|
|
8 |
Number of reads directly from disk, bypassing the buffer cache. For example, in high bandwidth, data-intensive operations such as parallel query, reads of disk blocks bypass the buffer cache to maximize transfer rates and to prevent the premature aging of shared data blocks resident in the buffer cache. |
|
|
8 |
Number of buffers that were read directly for LOBs |
|
|
8 |
Number of buffers that were read directly from temporary tablespaces |
|
|
8 |
Number of blocks read for newing (that is, preparing a data block for a completely new change) blocks while flashback database is enabled |
|
|
8 |
Number of blocks read/copied from PMEM into DRAM cache |
||
8 |
Number of blocks read/copied from PMEM into DRAM cache due to the need for decryption. This is a subset of "physical reads pmem" statistics value. |
||
8 |
Number of blocks read from PMEM into PGA memory due to direct path access. This is a subset of "physical reads pmem" statistics value. |
||
8 |
Number of blocks read/copied from PMEM into DRAM cache due to exclusive access requests, which prevents other processes from accessing the block on PMEM directly. This is a subset of "physical reads pmem" statistics value. |
||
8 |
Number of blocks read/copied from PMEM into DRAM cache due to modification. This is a subset of "physical reads pmem" statistics value. |
||
8 |
Number of blocks read/copied from PMEM into DRAM cache due to hot block promotion. This is a subset of "physical reads pmem" statistics value. This statistic is further categorized into "hot pmem block migration to dram successes" and "hot pmem block exchange with dram successes" statistics. |
||
8 |
Number of blocks read/copied from PMEM into DRAM cache due to CR rollback. This is a subset of "physical reads pmem" statistics value. |
||
8 |
Number of data blocks that were read from the disk during the automatic prewarming of the buffer cache. |
|
|
264 |
Number of forfeited direct-mapped references requested by FsDirect due to PMEM memory pressure |
||
8 |
Total size in bytes of all disk writes from the database application activity (and not other kinds of instance activity). |
|
|
8 |
Number of write requests for application activity (mainly buffer cache and direct load operation) which wrote one or more database blocks per request. |
|
|
8 |
Total size in bytes of all disk writes for the database instance including application activity, backup and recovery, and other utilities. The difference between this value and "physical write bytes" gives the total write size in bytes by non-application workload. |
|
|
8 |
Number of write requests which wrote one or more database blocks from all instance activity including application activity, backup and recovery, and other utilities. The difference between this stat and "physical write total multi block requests" gives the number of single block write requests. |
|
|
8 |
Total number of Oracle instance write requests which wrote two or more blocks per request to the disk for all instance activity including application activity, recovery and backup, and other utilities. |
|
|
8 |
Total number of data blocks written to disk. This statistics value equals the sum of "physical writes direct" and "physical writes from cache" values. |
|
|
8 |
Number of writes directly to disk, bypassing the buffer cache (as in a direct load operation) |
|
|
8 |
Number of buffers that were directly written for LOBs |
|
|
8 |
Number of buffers that were directly written for temporary tablespaces |
|
|
8 |
Total number of data blocks written to disk from the buffer cache. This is a subset of "physical writes" statistic. |
|
|
8 |
Number of times a buffer is written for reasons other than advancement of the checkpoint. Used as a metric for determining the I/O overhead imposed by setting the |
|
|
8 |
Number of times a user process, when scanning the tail of the replacement list looking for a buffer to reuse, encountered a cold buffer that was pinned or had a waiter that was about to pin it. This occurrence is uncommon, because a cold buffer should not be pinned very often. |
|
|
8 |
Number of times a user process, when scanning the tail of the replacement list looking for a PMEM buffer to reuse in DRAM, encountered a cold buffer that was pinned or had a waiter that was about to pin it. This occurrence is uncommon, because a cold buffer should not be pinned very often. |
||
8 |
Number of contiguous and noncontiguous blocks that were prefetched but aged out before use |
|
|
128 |
The last time this process executed |
Y |
|
32 |
Number of local messages received for parallel execution within the instance local to the current session |
|
|
32 |
Number of local messages sent for parallel execution within the instance local to the current session |
|
|
32 |
Number of remote messages received for parallel execution within the instance local to the current session |
|
|
32 |
Number of remote messages sent for parallel execution within the instance local to the current session |
|
|
32 |
Number of SELECT statements executed in parallel |
|
|
8 |
Elapsed time of I/O during recovery |
|
|
8 |
Number of reads performed during recovery |
|
|
8 |
Number of blocks read during recovery |
|
|
8 |
Number of blocks read for lost write checks during recovery. |
|
|
8 |
Number of Block Read Records that skipped the lost write check during recovery. |
|
|
1 |
Number of recursive calls generated at both the user and system level. Oracle maintains tables used for internal processing. When Oracle needs to make a change to these tables, it generates an internal SQL statement, which in turn generates a recursive call. |
|
|
1 |
Total CPU time used by non-user calls (recursive calls). Subtract this value from "CPU used by this session" to determine how much CPU time was used by the user calls. |
|
|
2 |
Number of exclusive redo blocks that were checksummed by the generating foreground processes. An exclusive redo block is the one whose entire redo content belongs to a single redo entry. |
|
|
2 |
Number of redo blocks that were checksummed by the LGWR. |
|
|
2 |
Total number of redo blocks written. This statistic divided by "redo writes" equals number of blocks per write. |
|
|
2 |
Total number of retries necessary to allocate space in the redo buffer. Retries are needed either because the redo writer has fallen behind or because an event such as a log switch is occurring. |
|
|
2 |
Number of times a redo entry is copied into the redo log buffer |
|
|
2 |
Number of times a Block Read Record is copied into the log buffer. |
|
|
2 |
Number of times the active log file is full and Oracle must wait for disk space to be allocated for the redo log entries. Such space is created by performing a log switch. Log files that are small in relation to the size of the SGA or the commit rate of the work load can cause problems. When the log switch occurs, Oracle must ensure that all committed dirty buffers are written to disk before switching to a new log file. If you have a large SGA full of dirty buffers and small redo log files, a log switch must wait for DBWR to write dirty buffers to disk before continuing. Also examine the log file space and log file space switch wait events in |
|
|
2 |
Total time waited in centiseconds for available space in the redo log buffer. See also "redo log space requests" |
Y |
|
2 |
Number of times that a system change number was allocated to force a redo record to have a higher SCN than a record generated in another thread using the same block |
|
|
2 |
Total amount of redo generated in bytes |
|
|
2 |
Total amount of Block Read Records generated in bytes. |
|
|
8 |
Elapsed time of all "redo synch writes" calls in 10s of milliseconds |
Y |
|
8 |
Number of times the redo is forced to disk, usually for a transaction commit. The log buffer is a circular buffer that LGWR periodically flushes. Usually, redo that is generated and copied into the log buffer need not be flushed out to disk immediately. |
|
|
2 |
Number of bytes wasted because redo blocks needed to be written before they are completely full. Early writing may be needed to commit transactions, to be able to write a database buffer, or to switch logs. |
|
|
2 |
Number of times a commit broadcast acknowledgment has not been received by the time when the corresponding log write is completed. This is only for Oracle RAC. |
|
|
2 |
Total amount of the latency associated with broadcast on commit beyond the latency of the log write (in microseconds). This is only for Oracle RAC. |
Y |
|
2 |
Total elapsed time of the write from the redo log buffer to the current redo log file in 10s of milliseconds |
Y |
|
2 |
Total number of writes by LGWR to the redo log files. "redo blocks written" divided by this statistic equals the number of blocks per write |
|
|
128 |
Number of undo records applied to user-requested rollback changes (not consistent-read rollbacks) |
|
|
128 |
Number of consistent gets that require only block rollbacks, no block cleanouts. See Also: "consistent gets" |
|
|
64 |
Rows fetched via callback. Useful primarily for internal debugging purposes. |
|
|
1 |
The total wait time (in microseconds) for waits that belong to the Scheduler wait class |
|
|
128 |
SCN increments due to communication with another database |
|
|
1 |
Number of times a SQL statement in a serializable isolation level had to abort |
|
|
1 |
The connect time for the session in 10s of milliseconds. This value is useful only in |
Y |
|
64 |
Total number of cursors cached. This statistic is incremented only if |
|
|
64 |
Number of hits in the session cursor cache. A hit means that the SQL (including recursive SQL) or PL/SQL statement did not have to be reparsed. Subtract this statistic from "parse count (total)" to determine the real number of parses that occurred. |
|
|
1 |
The sum of "db block gets" plus "consistent gets". This includes logical reads of database blocks from either the buffer cache or process private memory. (A query sketch of this sum appears after Table E-1.) |
|
|
128 |
Number of database blocks read from the IM column store (number of blocks in IMCU - number of blocks with invalid rows) |
|
|
1 |
Current PGA size for the session. Useful only in |
|
|
1 |
Peak PGA size for the session. Useful only in |
|
|
1 |
Amount of memory this session is using for stored procedures |
|
|
1 |
Current UGA size for the session. Useful only in |
|
|
1 |
Peak UGA size for a session. Useful only in |
|
|
8 |
A shared hash latch upgrade is when a hash latch is upgraded from shared mode to exclusive mode. This statistic displays the number of times the upgrade completed immediately. |
|
|
8 |
A shared hash latch upgrade is when a hash latch is upgraded from shared mode to exclusive mode. This statistic displays the number of times the upgrade did not complete immediately. |
|
|
128 |
Number of unsuccessful buffer gets from the shared I/O pool from instance startup time. |
|
|
128 |
Number of successful buffer gets from the shared I/O pool from instance startup time. |
|
|
128 |
Number of transactions modifying tables enabled for flashback data archive which were archived by a slave process |
|
|
64 |
Number of sort operations that required at least one disk write. Sorts that require I/O to disk are quite resource intensive. Try increasing the size of the initialization parameter |
|
|
64 |
Number of sort operations that were performed completely in memory and did not require any disk writes. You cannot do much better than memory sorts, except maybe no sorts at all. Sorting is usually caused by selection criteria specifications within table join SQL operations. |
|
|
64 |
Total number of rows sorted |
|
|
1 |
Total number of Oracle Net Services messages sent to and received from the client |
|
|
1 |
Total number of Oracle Net Services messages sent over and received from a database link |
|
|
8 |
The sum of the dirty LRU queue length after every write request. Divide by write requests to get the average queue length after write completion. |
|
|
8 |
Number of times the CURRENT PMEM block moved to a DRAM buffer, leaving a CR block in the original PMEM buffer. This happens when one Oracle process pins a CURRENT PMEM buffer for read, and another Oracle process wants to pin the same buffer for modification. The latter process clones the PMEM buffer into a new DRAM buffer for modification, while converting the original PMEM current buffer into a PMEM CR buffer. |
||
8 |
Number of times the CURRENT block moved to a different buffer, leaving a CR block in the original buffer |
|
|
64 |
Number of rows that are fetched using a ROWID (usually recovered from an index). This occurrence of table scans usually indicates either non-optimal queries or tables without indexes. Therefore, this statistic should increase as you optimize queries and provide indexes in the application. |
|
|
64 |
Number of times a chained or migrated row is encountered during a fetch. Retrieving rows that span more than one block increases the logical I/O by a factor that corresponds to the number of blocks that need to be accessed. Exporting and re-importing may eliminate this problem. Evaluate the settings for the storage parameters PCTFREE and PCTUSED. This problem cannot be fixed if rows are larger than database blocks (for example, if the |
|
|
64 |
During scanning operations, each row is retrieved sequentially by Oracle. This statistic counts the number of blocks encountered during the scan. This statistic tells you the number of database blocks that you had to get from the buffer cache for the purpose of scanning. Compare this value with the value of "consistent gets" to determine how much of the consistent read activity can be attributed to scanning. |
|
|
128 |
Number of rows fetched from the buffer cache because they were not present in the IM column store (in a scan that was otherwise performed in memory) |
|
|
128 |
Number of rows fetched during non-In-Memory scan |
|
|
64 |
Number of rows that are processed during scanning operations |
|
|
64 |
Number of range scans performed on tables that have the CACHE option enabled |
|
|
64 |
Number of table scans performed with direct read (bypassing the buffer cache) |
|
|
128 |
Number of segments / granules scanned using In-Memory |
|
|
64 |
Long (or conversely short) tables can be defined as tables that do not meet the short table criteria as described in "table scans (short tables)" |
|
|
64 |
During parallel query, the number of table scans conducted with specified ROWID ranges |
|
|
64 |
Long (or conversely short) tables can be defined by optimizer hints coming down into the row source access layer of Oracle. The table must have the CACHE option set. |
|
|
128 |
Number of rows modified in tables enabled for flashback data archive |
|
|
128 |
Number of transactions which modified a table enabled for flashback data archive |
|
|
128 |
Useful only for internal debugging purposes |
|
|
128 |
Useful only for internal debugging purposes |
|
|
128 |
Useful only for internal debugging purposes |
|
|
128 |
Useful only for internal debugging purposes |
|
|
128 |
Number of transactions being successfully rolled back |
|
|
128 |
Number of times rollback segment headers are rolled back to create consistent read blocks |
|
|
128 |
Number of undo records applied to transaction tables that have been rolled back for consistent read purposes |
|
|
1 |
Number of user calls such as login, parse, fetch, or execute. When determining activity, the ratio of user calls to RPI calls gives you an indication of how much internal work gets generated because of the type of requests the user is sending to Oracle. |
|
|
1 |
Number of user commits. When a user commits a transaction, the redo generated that reflects the changes made to database blocks must be written to disk. Commits often represent the closest thing to a user transaction rate. |
|
|
1 |
The total wait time (in centiseconds) for waits that belong to the User I/O wait class |
|
|
1 |
Number of times users manually issue the |
|
|
128 |
For tables tracked by flashback data archive, number of transactions modifying those tables which are very large in terms of size or number of changes |
|
|
8 |
Number of times a background or foreground process clones a |
|
|
8 |
Number of times a background or foreground process clones a |
|
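As an illustration of the relationship noted for the "session logical reads" entry above, the following query is a minimal sketch that sums its two component statistics from V$SYSSTAT (statistic names as they appear in this table):

```sql
-- Minimal sketch: derive instance-wide "session logical reads" from its
-- two components, "db block gets" and "consistent gets", in V$SYSSTAT.
SELECT SUM(value) AS session_logical_reads
  FROM v$sysstat
 WHERE name IN ('db block gets', 'consistent gets');
```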