In Oracle, "block buffers" and "DB cache" mean the same thing; we will use these terms interchangeably. The instance cache should be able to accommodate the working set of the database - the most frequently accessed blocks. This may not be feasible for a data warehouse database, because its very large tables may be routinely accessed in big portions; even the largest SGA will not be able to cache them entirely. Meanwhile, even data warehouse databases contain certain data of relatively small size that is accessed very frequently. This is what should be sitting in the cache. For OLTP systems, this is even more critical: it is desirable that almost all blocks accessed by small, client-facing queries can be found in the cache, even though some OLTP databases can be fairly large - in the many-TB range. "Almost all" means that a small percentage of small queries still has to go very wide, perhaps accessing historical records, which inevitably have to be read from disk. We do not want to cache all these historical rows, because they are accessed very sparsely. (The disk system should, of course, be properly sized to serve these I/O requests, but that is a subject for another discussion.)
The goal of tuning / troubleshooting block buffers is to achieve a match between the working set and the size of the DB cache. This can be approached from two opposite directions: reducing the working set, or increasing the DB cache. The Buffer Explorer helps to find ways to reduce the working set, or provides a justification for increasing the SGA / memory on the host. The Buffer Explorer exposes the largest segments consuming the instance cache. You should scrutinize whether these large consumers are legitimate. For example, if there are full scans of even a small table (say, 10,000 blocks) but the data needed is only in a few recently added blocks, then this is obviously wasteful. Even worse, because all blocks of this table are accessed very often, Oracle will retain them in the cache, potentially starving other queries. With all blocks in the cache, the query probably executes fast, effectively masking the problem. But as you can see, it is still a problem. A much better solution would be to create an index, or enable the use of an existing one, to go directly to these recent blocks. This spares block buffers for other queries and, by the way, may use much less CPU.
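To make the example concrete, here is a sketch. The ORDERS table, its CREATED_AT column, and the index name are hypothetical:

  -- Hypothetical case: ORDERS occupies 10,000 blocks, but the query only
  -- needs rows added today. Without a suitable index, Oracle full-scans
  -- the table and keeps all 10,000 blocks hot in the cache:
  SELECT *
  FROM   orders
  WHERE  created_at >= TRUNC(SYSDATE);

  -- An index on the filter column lets Oracle visit only the few recent
  -- blocks, sparing the rest of the cache for other queries:
  CREATE INDEX orders_created_at_ix ON orders (created_at);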
The Buffer Explorer window shows what is in the instance cache (block buffers). To make the information easy to digest, data from the v$bh view is aggregated along different dimensions and the result is presented as a tree. In this window, you can see the distribution of buffers between different types of objects, tablespaces / datafiles, object owners, and the objects themselves. The buffer distribution per block status and various buffer flags can be viewed as well. You can also observe changes in the buffer distribution from one v$bh snapshot to the next.
The idea behind the Buffer Explorer is to capture a snapshot of the v$bh view and store it locally in Lab128. The snapshot is then aggregated, producing a smaller dataset (cube) that can be efficiently presented to the user. As the user defines and changes the set of dimensions, Lab128 re-aggregates the data internally without querying Oracle. To make things even more interesting, the aggregated snapshots of v$bh taken at different times are stored, so you can see changes in the distribution: both absolute and delta values can be calculated. By joining other information stored in Lab128 (such as segment extent info, object info, etc.) with v$bh, additional dimensions (for example, object name) can be used, dramatically increasing the usability of the generated cubes.
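For orientation, one slice of such a cube can be approximated with plain SQL. Lab128 computes this in memory from its stored snapshot rather than re-querying Oracle, and the 8 KB block size below is an assumption:

  -- Buffers and approximate memory per block status - one cube slice
  SELECT status,
         COUNT(*)                             AS buffers,
         ROUND(COUNT(*) * 8192 / 1024 / 1024) AS mbytes  -- assumes 8 KB blocks
  FROM   v$bh
  GROUP  BY status
  ORDER  BY buffers DESC;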
In order to make this concept feasible, efficient compression should be used to store snapshots, and high-performance algorithms should be used to aggregate data. The practical results achieved in Lab128 show that the implemented engine scales well for large SGAs up to several dozen GB.
The information about block buffers is presented in the form of a tree, where every level of the tree corresponds to a dimension from the set of selected dimensions. Each node of the tree contains summarized information per dimension (the distribution along that dimension), sorted in descending order by buffer count. The summarized information includes the number of buffers and the corresponding memory size. If two snapshots are taken and the selected dimensions do not change in between, there will also be a delta value - the change in the number of buffers since the previous snapshot. As the next level is expanded, the distribution along the next selected dimension is presented. The set of selected dimensions is shown in 4 boxes - the number of dimensions that can be chosen at a time is limited to four. The dimensions in the boxes can be changed in two ways: by selecting each dimension individually in its box using the drop-down list, or by selecting from the pre-defined list in the "Stored Combinations" box. Either way, the values in the four boxes show the dimensions effective at the moment.
As soon as dimensions are changed, Lab128 re-aggregates data and refreshes the tree view automatically. The status of this window is shown in the bottom bar.
Some levels of the tree view may have long lists. In order to limit the number of records shown, two options can be used: limiting the number of records in the "Top, Rows" box, or defining the cumulative-percentage threshold in the "Top, %" box. The number of records is limited by whichever criterion shows the fewest rows: for example, with "Top, Rows" set to 20 and "Top, %" set to 90, if the first 12 records already cover 90% of the buffers, only those 12 are shown. The records that are filtered out and not shown in the tree view are represented by the summary entry "OTHER".
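The cumulative-percentage cutoff can be pictured with a plain-SQL sketch. The 20-row and 90% limits are example values, and this only approximates the idea - it is not Lab128's exact rule:

  -- Owner-level breakdown cut off at 20 rows or 90% cumulative share,
  -- whichever comes first
  SELECT * FROM (
    SELECT o.owner,
           COUNT(*) AS buffers,
           ROW_NUMBER() OVER (ORDER BY COUNT(*) DESC) AS rn,
           ROUND(100 * SUM(COUNT(*)) OVER (ORDER BY COUNT(*) DESC
                                           ROWS UNBOUNDED PRECEDING)
                     / SUM(COUNT(*)) OVER (), 1) AS cum_pct
    FROM   v$bh b, dba_objects o
    WHERE  o.data_object_id = b.objd
    GROUP  BY o.owner
  )
  WHERE rn <= 20 AND cum_pct <= 90;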
It is advisable to make the x$bh fixed table accessible to Lab128 - more specifically, to the Oracle account used to log in from Lab128. This avoids unnecessary decoding of values, so fewer CPU resources are used on the Oracle side. Lab128 automatically senses whether x$bh is accessible.
In order to see the breakdown by buffer pools, another fixed table, x$kcbwds, should be accessible as well. This is because there is no direct way to connect pool names in v$buffer_pool to v$bh (even in Oracle version 10.1); instead, the SET_DS column of x$bh is used to join x$bh to v$buffer_pool through x$kcbwds. This may change in future versions.
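As a sketch, the join looks roughly like this (column names as commonly documented for x$kcbwds and v$buffer_pool; verify against your Oracle version, and run as SYS or through the x$ views described below):

  -- Attach pool names to buffers: x$bh -> x$kcbwds -> v$buffer_pool
  SELECT bp.name  AS pool,
         COUNT(*) AS buffers
  FROM   x$bh          bh,
         x$kcbwds      ws,
         v$buffer_pool bp
  WHERE  bh.set_ds = ws.addr
  AND    ws.set_id BETWEEN bp.lo_setid AND bp.hi_setid
  GROUP  BY bp.name;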
See the Installation and Setup help topic on creating the suggested x$ views.
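The usual technique is to wrap the x$ fixed tables in views owned by SYS and grant access to the monitoring account. The statements below only illustrate this; LAB128 is a hypothetical account name, and the exact view names Lab128 expects are listed in the Installation and Setup help:

  -- Run as SYS
  CREATE OR REPLACE VIEW x_$bh     AS SELECT * FROM x$bh;
  CREATE OR REPLACE VIEW x_$kcbwds AS SELECT * FROM x$kcbwds;
  GRANT SELECT ON x_$bh     TO lab128;
  GRANT SELECT ON x_$kcbwds TO lab128;

The dimensions available in the Buffer Explorer are listed below, each with its underlying Oracle source: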
Status - Status of the buffer: FREE - not currently in use; XCUR - exclusive; SCUR - shared current; CR - consistent read; READ - being read from disk; MREC - media recovery mode; IREC - instance recovery mode (v$bh.STATUS or derived from x$bh.STATE);
Cache Pool - Name of the buffer pool: DEFAULT; KEEP; RECYCLE (v$buffer_pool.NAME joined to x$bh through x$kcbwds, see also the x$ views note above);
Object Type - Object type (dba_objects.OBJECT_TYPE);
Owner - Owner of the object (dba_objects.OWNER);
Object - Object name (combination of all_objects.OBJECT_NAME and all_objects.SUBOBJECT_NAME);
Tablespace - Tablespace name where the object's block is stored (v$tablespace.NAME);
Datafile - Datafile where the object's block is stored (dba_data_files.FILE_NAME);
LRU Flag - LRU Flag (x$bh.LRU_FLAG);
Class - Class number (v$bh.CLASS# or x$bh.CLASS#);
Dirty - Y - block modified (v$bh.DIRTY or derived from x$bh.FLAG);
Temp - Y - temporary block (v$bh.TEMP or derived from x$bh.FLAG);
Ping - Y - block pinged (v$bh.PING or derived from x$bh.FLAG);
Stale - Y - block is stale (v$bh.STALE or derived from x$bh.FLAG);
Direct - Y - direct block (v$bh.DIRECT or derived from x$bh.FLAG);
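As a cross-check of the Owner / Object / Object Type dimensions, a similar breakdown can be produced manually. v$bh.OBJD matches dba_objects.DATA_OBJECT_ID, and buffers that belong to no segment are dropped by this join:

  -- Top 20 cache consumers by object, roughly matching the Buffer
  -- Explorer's Owner / Object / Object Type levels
  SELECT * FROM (
    SELECT o.owner, o.object_name, o.object_type,
           COUNT(*) AS buffers
    FROM   v$bh b, dba_objects o
    WHERE  o.data_object_id = b.objd
    GROUP  BY o.owner, o.object_name, o.object_type
    ORDER  BY buffers DESC
  )
  WHERE ROWNUM <= 20;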