
Customer Review

on September 10, 2011
I bought the alpha copy of this book from the Apress site in May 2011 when I heard about the book's pending release, and was able to obtain the final version of the book at the end of August 2011. I have read a couple of the other "Recipe" books from Apress, so I was excited to see how the recipe format could be applied to the task of Oracle Database performance tuning, specifically in versions 11.1 and above.

The authors of this book all have at least 10-15 years of experience with Oracle Database (and likely several years of IT related experience prior to that), are accomplished writers of several books (at least three of the "Recipes" books were co-authored by this book's authors), and at least two of the authors teach at the university level. The technical reviewer of the book is an Oracle Ace with an OCP certificate and also has an Oracle Certified RAC Expert certificate. Apress, based on my past experience, publishes some of the best Oracle Database material on the market. All of the critical recipe elements for a very successful, extremely useful book were initially in place for this book.

So, how well did the book's final contents meet the level of expectations set by the book's front cover and the publisher's description of the book? Simply put, some of the chapters appeared as though someone tried to hammer a square cut nail into a finely machined and tapped machine-screw hole. The end result shows that the topic of performance tuning Oracle Databases simply does not fit well into a recipe format. The number of technical/logical errors per page in some of the chapters makes it a difficult challenge to find any useful/reliable information in those chapters without spending significant time verifying that the material presented by the book is accurate. Some of the other chapters did provide useful information that was mostly correct, but that information sometimes lacked sufficient depth, likely constrained by the recipe format of the book. The topic selection for the book is reasonably good, although in execution the book sometimes drifted far away from performance tuning, into a lesson on what to do if a user cannot connect to the database, how to write basic SQL statements, how to perform basic database administration (such as handling snapshot too old errors), and how to issue operating system commands. Some sections of the book also seemed to be more specific to Oracle Database 9.2 and below than to 11.1 and above. In short, there is good material in the book, but it is difficult to determine what is accurate and what is completely wrong.

This review is very long (20+ type-written pages for the first seven chapters in the book), as that level of detail is required to justify the rating attached to this book review. As such, this review will exceed the length limit permitted by Amazon, so see my blog for the full review. I have divided this review into three sections: interesting dictionary views, parameters, functions, hints, and Oracle error messages mentioned in the book; comments, corrections, and problems for the contents of the book; and interesting and/or well-above-average bits of information found in the book.

Data Dictionary Views:
* USER_SEGMENTS (page 12)
* USER_EXTENTS (page 13)
* V$TRANSACTION (page 18)
* DBA_INDEXES (page 29)
* USER_INDEXES (page 55)
* V$OBJECT_USAGE (page 75)
* DBA_HIST_WR_CONTROL (page 116)
* DBA_HIST_BASELINE (page 125)
* DBA_EXTENTS (page 160)
* V$LOCK, V$LOCKED_OBJECT (page 164)
* V$SYS_TIME_MODEL (page 175)
* V$UNDOSTAT (page 212)

Parameters:
* AUDIT_TRAIL (page 40)
* DDL_LOCK_TIMEOUT (page 80)
* RESULT_CACHE_MODE (page 100)
* LOG_BUFFER (page 111)
* COMMIT_WRITE (page 159)
* LOG_BUFFER (page 160)
* OPEN_CURSORS (page 222)


Hints:
* APPEND (page 15)
* INDEX (page 70)

Error Numbers:
* Chapter 1: ORA-01578: ORACLE data block corrupted, ORA-26040: Data block was loaded using the NOLOGGING option (page 16); ORA-02266: unique/primary keys in table referenced by enabled foreign keys (page 18)
* Chapter 2: ORA-00001: unique constraint violated (page 58), ORA-02270: no matching unique or primary key for this column-list (page 59), ORA-10635: Invalid segment or tablespace type (page 81)
* Chapter 3: ORA-00838: Specified value of MEMORY_TARGET is too small (page 86)
* Chapter 4: ORA-20200: The instance was shutdown between snapshots (page 120), ORA-13541: system moving window baseline size (page 124), ORA-13539: invalid input for modify baseline window size (page 124)
* Chapter 6: ORA-09817: Write to audit file failed (page 187)
* Chapter 7: ORA-01555: snapshot too old (page 211)


Comments, Corrections, and Problems:
* In recipe 1-2 the following note appears, "As of Oracle Database 11g R2, the EXTENT MANAGEMENT DICTIONARY clause has been deprecated." I could not locate confirmation in the Oracle Database documentation that that clause was deprecated in 11g R2, but the documentation does state that dictionary managed tablespaces were deprecated in Oracle Database 9i R2, even though Oracle Database 11g R2 apparently still supports (to an extent) dictionary managed tablespaces (and the deprecated DICTIONARY keyword) as long as the system tablespace is not locally managed. This recipe also states that the authors recommend using ASSM tablespaces, without describing the potential negative performance impact of such tablespaces. (pages 5-7)
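For context, a minimal sketch of the locally managed, ASSM tablespace creation the recipe recommends (the tablespace and datafile names here are hypothetical, not taken from the book):

```sql
-- Hypothetical example: a locally managed tablespace with Automatic
-- Segment Space Management (ASSM), as the recipe recommends. The
-- deprecated alternative would specify EXTENT MANAGEMENT DICTIONARY.
CREATE TABLESPACE users_data
  DATAFILE '/u01/oradata/ORCL/users_data01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;
```

Note that ASSM changes how free space within segments is tracked, which is exactly why its potential performance side effects deserved a mention in the recipe.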
* In recipe 1-4 the authors state "Specify a separate tablespace for the table and indexes" with the justification that doing so "simplifies administration and maintenance" - it would have been helpful for the authors to provide supporting evidence for this claim (pages 9-12). In recipe 2-2 on page 51, the authors suggest that backup and recovery is easier with separate tablespaces for data and indexes.
* In recipe 1-5 (page 13), an apparently unintentional word substitution: "An extent will not be allocated until the initial record is inserted into a given extent" - the second "extent" in the sentence should probably be replaced with the word "segment". The recipe should mention that deferred segment creation is an Enterprise Edition only feature, and explain the potential problems that restriction might cause. (pages 12-14)
* In recipe 1-8 the authors suggest checking the DBA_ADVISOR_EXECUTIONS, DBA_ADVISOR_FINDINGS, and DBA_ADVISOR_OBJECTS views without providing a warning about the licensing requirements associated with the DBA_ADVISOR views. (pages 19-23)
* Recipe 1-11 demonstrates using several approaches to reduce the number of rows that span more than one data block. It would have been helpful if the book differentiated between chained rows and migrated rows, and discussed why the ANALYZE TABLE LIST CHAINED ROWS command might still report a count greater than 0 in the CHAINED_ROWS table (table has more than 255 columns and must be broken into multiple row pieces, table row is larger than a table block, etc.). Since it is the absolute file number that appears most commonly in the various data dictionary views, it probably would have been a better approach to use the DBMS_ROWID.ROWID_TO_ABSOLUTE_FNO function rather than the ROWID_RELATIVE_FNO function when decoding the ROWID values. (pages 28-32)
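A sketch of the absolute-file-number approach suggested above (the table name is hypothetical):

```sql
-- Decode ROWIDs using the ABSOLUTE file number, which joins cleanly
-- to views such as DBA_EXTENTS; the relative file number does not.
SELECT
  DBMS_ROWID.ROWID_TO_ABSOLUTE_FNO(rowid, user, 'EMPLOYEES') AS abs_fno,
  DBMS_ROWID.ROWID_BLOCK_NUMBER(rowid)                       AS block_no
FROM employees
WHERE ROWNUM <= 10;
```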
* Recipe 2-5 includes a SQL statement that attempts to identify unindexed foreign key columns. This same SQL statement appears on page 243 of the book "Oracle SQL Recipes: A Problem-Solution Approach". While the SQL statement only works for the current user's schema, and A, B, and C generic aliases are used for table aliases, there are more significant problems with the SQL statement and the recipe as a whole. The recipe gives the impression that the primary reason for indexes on the foreign key columns is to improve performance - specifically the performance of SELECT statements where the foreign key columns are included in the WHERE clause. When you consider that the tables with the foreign key columns are likely detail (child) tables, there is a bit of a threat that each of the unique values found in the foreign key columns will be repeated many times through the various rows (scattered among the various rows, thus the CLUSTERING_FACTOR for the index might be close to the number of rows), making it a bit less likely that an index on the foreign key columns will help performance significantly (unless an [bitmap] index join is possible). The more common performance reason for indexing foreign key columns is to reduce the threat of TM enqueue contention on the child table when rows are updated or deleted in the parent table - this reason was not mentioned in this recipe. The recipe also gives the impression that all foreign keys should be indexed, while it is primarily those cases where the unique/primary column(s) in the parent table is subject to change that will benefit the most from indexing the foreign key columns. The problem with the SQL statement found in the recipe appears when concatenated (multi-column) indexes reference the foreign key columns. The SQL statement found in the book does not consider the foreign key column's position in the concatenated index. 
While the optimizer could select to use an index skip scan (consider what might happen if the foreign key column is the sixth column in a concatenated index) for a SELECT statement that does not reference the leading columns of the index in the WHERE clause, the same is not true for TM enqueues. Thus, the SQL statement misidentifies cases where the foreign key columns should be indexed, but are not. A DISTINCT clause may be necessary to eliminate duplicate rows returned by the query. (pages 59-60)
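To illustrate the leading-column point, here is a simplified sketch of an unindexed-foreign-key check that, unlike the book's query, only accepts an index when the foreign key column occupies the matching leading position (single-column keys only; for multi-column keys the leading columns may appear in any order, which this sketch does not handle):

```sql
-- Foreign key constraints ('R') whose column is not the leading column
-- of any index - the condition relevant for avoiding TM enqueue waits.
SELECT DISTINCT cc.table_name, cc.constraint_name
FROM   user_cons_columns cc
       JOIN user_constraints c
         ON c.constraint_name = cc.constraint_name
WHERE  c.constraint_type = 'R'
AND NOT EXISTS (
  SELECT 1
  FROM   user_ind_columns ic
  WHERE  ic.table_name      = cc.table_name
  AND    ic.column_name     = cc.column_name
  AND    ic.column_position = cc.position );
```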
* In Recipe 2-6 on page 62, the final paragraph states the following, "A concatenated index that is used for skip-scanning is more efficient than a full table scan. However, if you're consistently using only a lagging edge column of a concatenated index, then consider creating a single-column index on the lagging column." This statement along with other statements in this chapter, if generalized as implied, suggests that index access paths are always more efficient than full table scans - this is simply not true often enough to make this statement in such black and white wording (pages 60-62). The ordering of the rows in the table's blocks compared to the sort order of the index (externalized as the CLUSTERING_FACTOR index statistic), the percentage of the table to be retrieved through the index access path, and the cardinality of the leading index column (in the case of the index skip scan) are all factors that must be considered. It is also important to point out that while adding an extra column to a concatenated index may help the performance of one SQL statement, doing so may hurt the performance of other SQL statements that had been using that index due to the likely increased CLUSTERING_FACTOR and the increased index size (number of leaf blocks required). (pages 60-62)
* Recipe 2-11 claims to show a method of adding an index to a database such that adding that index will not affect third party vendor applications. The recipe provides a SQL statement that sets the OPTIMIZER_USE_INVISIBLE_INDEXES parameter to TRUE at the SYSTEM level (ALTER SYSTEM), and then states, "use a hint to tell the optimizer that the index exists". The hint is entirely unnecessary; changing the parameter at the SYSTEM scope is sufficient to allow all sessions to use the invisible index, thus potentially affecting the third party vendor application. Creating a UNIQUE invisible index will still affect the third party application on inserts, updates, and deletes regardless of the setting of the OPTIMIZER_USE_INVISIBLE_INDEXES parameter. Regardless of the setting of the parameter, the invisible indexes may still be used by the optimizer for cardinality calculations (thus adding the index could change execution plans, even if the session cannot use the index), and to prevent TM enqueues on foreign key columns. The recipe also states that one of the uses of invisible indexes is as a test to determine whether or not it is safe to later drop the index - however, for the reason just mentioned, this is not a complete/safe test to determine whether or not an index may be safely dropped. (pages 70-71 reference reference2)
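A short sketch of the session-scoped alternative to the book's ALTER SYSTEM approach (index and table names are hypothetical):

```sql
-- An invisible index is ignored by the optimizer by default, but it is
-- still maintained on every insert/update/delete, and a UNIQUE
-- invisible index still enforces uniqueness.
CREATE INDEX emp_dept_idx ON employees (department_id) INVISIBLE;

-- Enabling the index for one test session only - safer than the
-- ALTER SYSTEM change the recipe shows, which exposes the index
-- to every session in the instance.
ALTER SESSION SET optimizer_use_invisible_indexes = TRUE;
```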
* Recipe 2-12 The recipe does not explain why "bitmap indexes are ideal for low-cardinality columns (few distinct values)" - additional clarification is needed. Ideally, bitmap indexes on low-cardinality columns would be bit ANDed together with other bitmap indexes, thus making two or more indexes on low-cardinality columns much more selective before touching the table. While bitmap indexes cannot be used on columns defined as unique, those types of indexes can be used even on columns that are not low-cardinality. Thus the need for additional clarification. (page 72)
* Recipe 2-15 describes using index monitoring and V$OBJECT_USAGE to determine if an index was used, so that unused indexes may be dropped. The same risks are present with this approach as with marking an index as invisible, as described in recipe 2-11. (pages 75-76)
* The introduction to chapter 3 states that the results of SQL queries and PL/SQL functions may be stored in the shared pool when a results cache is used. It might be a better idea to be much more specific, and state that Oracle Database is using the Result Cache Memory in the shared pool as indicated in the Oracle Database documentation, but maybe this is explained later. (page 83)
* In recipe 3-1 the authors mention that they would set the MEMORY_MAX_TARGET to 2000M, and then show a SQL statement that sets that parameter to 2G (that is 2048M rather than 2000M). This recipe seems to be mirroring a page from the Oracle documentation library, except that in the book the MEMORY_TARGET parameter was set to a value after setting SGA_TARGET and PGA_AGGREGATE_TARGET to 0, while in the documentation the MEMORY_TARGET was set first (note that MEMORY_TARGET will be set to the value of MEMORY_MAX_TARGET if not explicitly set, so changing the order of the commands may or may not be a problem). This recipe states that "automatic memory management is the recommended approach to managing Oracle's memory allocation," but does not describe any of the potential pitfalls or operating system specific problem areas/limitations (reference1 reference2 reference3 reference4 reference5 reference6). Beyond telling the reader to simply add the current PGA_AGGREGATE_TARGET and SGA_TARGET values to determine a starting point for the MEMORY_MAX_TARGET parameter, the recipe gives little insight into the "best" value for this parameter. (pages 84-87)
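For reference, a sketch of the parameter ordering as the documentation presents it (the 2G value here simply mirrors the recipe's example; MEMORY_MAX_TARGET is static and requires a restart):

```sql
-- MEMORY_MAX_TARGET can only be changed in the spfile (instance restart
-- required); MEMORY_TARGET is dynamic once the ceiling is in place.
ALTER SYSTEM SET memory_max_target = 2G SCOPE = SPFILE;
-- ... restart the instance, then:
ALTER SYSTEM SET memory_target = 2G;
ALTER SYSTEM SET sga_target = 0;
ALTER SYSTEM SET pga_aggregate_target = 0;
```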
* In recipe 3-2 the authors state, "if you want a segment to be cached (or pinned) in the buffer pool, you must specify the KEEP buffer pool [when creating the segment]." Something just doesn't seem to be right (or complete) about this statement - for one, I wonder what the CACHE and NOCACHE clauses might affect? Secondly, this statement might be implying that blocks belonging to an object will only remain in the buffer cache if those objects are assigned to the KEEP buffer cache. The recipe states that the approximate size of the KEEP buffer pool can be determined by "summing up the size of all candidate objects for this pool" - great, now what happens when Oracle Database creates up to six (and potentially many more) consistent read versions of the objects' blocks (reference)? The last two paragraphs of this recipe seem to indicate that it is the segments themselves (apparently all blocks) that are aged out of the buffer cache when space is needed to cache other blocks, rather than the segments' blocks that are aged out of the cache - additional clarity is needed in these two paragraphs. The second to last paragraph also makes a claim that I am not sure how to interpret: "Oracle figures that if a single segment takes up more than 10% of (nonsequential) physical reads, it's a large segment, for the purpose of deciding if it needs to use the KEEP or RECYCLE buffer pools." The SQL statement provided in recipe 3-2 is also found in the Oracle Database 9.2 Performance Tuning Manual (with a small, but notable correction), but that SQL statement potentially produces misleading information. (pages 87-88)
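For clarity, the syntax in question is an attribute assignment, not a pin (object names hypothetical):

```sql
-- Assigning segments to the KEEP pool is a request, not a guarantee:
-- blocks are still aged out block by block if DB_KEEP_CACHE_SIZE is
-- undersized - for example, by consistent-read clones of the blocks.
ALTER TABLE employees STORAGE (BUFFER_POOL KEEP);
ALTER INDEX emp_pk    STORAGE (BUFFER_POOL KEEP);
```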
* In recipe 3-3, the book states, "The database will continue to automatically allocate memory to the various components of the SGA..." Technically, while the intention of this statement is headed in the right direction, confusing the technical terms is best avoided. The wording could have been cleaned up, keeping in mind that the "database" is actually the files that are stored in disk - the database does not allocate memory. (page 89)
* In recipe 3-4, the book describes the V$MEMORY_DYNAMIC_COMPONENTS view as showing the current allocations of memory when the MEMORY_TARGET parameter is in use. The book then shows how to query the view; unfortunately, the example in the book shows querying the V$MEMORY_TARGET_ADVICE view rather than the V$MEMORY_DYNAMIC_COMPONENTS view. The Oracle Database 11.2 documentation library indicates that the V$MEMORY_DYNAMIC_COMPONENTS view only shows information about dynamic SGA components, but a query of the view also shows an entry for "PGA Target", indicating that the Oracle documentation is not completely correct in this case. It probably would have been helpful if the query of V$MEMORY_RESIZE_OPS found in the book included the END_TIME column, since the book indicates that this view may report up to 800 memory resizing operations from various time periods. The book states that the operation type is one of two modes: grow or shrink, while the Oracle documentation library indicates that the operation type (the OPER_TYPE column) will report one of six values: STATIC, INITIALIZING, DISABLED, GROW, SHRINK, SHRINK_CANCEL. (pages 90-91)
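A sketch of the queries the recipe presumably intended, with the END_TIME column included:

```sql
-- Current allocations - note the view name: V$MEMORY_DYNAMIC_COMPONENTS,
-- not V$MEMORY_TARGET_ADVICE as shown in the book's example.
SELECT component, current_size/1024/1024 AS size_mb
FROM   v$memory_dynamic_components
WHERE  current_size > 0;

-- Resize history; END_TIME distinguishes operations from different
-- time periods among the up-to-800 rows the view can retain.
SELECT component, oper_type, start_time, end_time
FROM   v$memory_resize_ops
ORDER BY start_time;
```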
* In recipe 3-5, the steps listed in the solution appear to be missing step 1.5, "Click the Memory Advisors" link. The book makes the following statement regarding the estimated improvement in DB time "The higher the value of improvement in DB time, the better off will be the performance." - that statement seems to suggest that the performance will definitely improve by adjusting the parameter, rather than the performance is estimated to improve. The description of the Memory Size Advice graph states that the maximum memory for the instance is indicated by a dotted straight line, while the graph itself seems to indicate that the maximum is indicated by a solid red line. This recipe recommends checking the ADDM reports, without first indicating the licensing requirements needed to access those reports. The book states, "Ideally, you'd want the PGA cache ratio somewhere upwards of around 70%.", but it does not state why 70% is ideal rather than 90% or 50% (the Oracle documentation states that "the optimal zone for cache hit percentage is between 75 and 100 percent.") (pages 91-93)
* In recipe 3-6, the book states "The memory you allocate to the PGA component is used by various SQL work areas running in the database" - the SQL work areas are memory allocations, not processes, so the SQL work areas cannot "run". The book states that a low value, such as 33.37% for the PGA Cache Hit Percentage is a definite indication of the need to increase the PGA_AGGREGATE_TARGET. No explanation is provided as to why the first query of V$SQL_WORKAREA_HISTOGRAM excluded workarea executions that required less than 64KB (note that this SQL statement is identical to a SQL statement found in the Oracle Database 9.2 Performance Tuning Guide - the second query of V$SQL_WORKAREA_HISTOGRAM is also found in the same Oracle documentation, as is the query of V$SYSSTAT). The book states, "One pass is slower than none at all, but a multi-pass operation is a sure sign of trouble in your database, especially if it involves large work areas." - this is stated as an absolute, while in reality there may be times when the only option is a multi-pass workarea execution due to the data size. The recipe mentions checking ASH, AWR, and ADDM reports without mentioning the licensing requirements. (pages 93-95)
* Recipes 3-7 through 3-11 are about the result cache, which is an Enterprise Edition-only feature - the book should have mentioned that limitation.
* Recipe 3-7 seems to closely mirror section 7.6.1 Managing the Server Result Cache of the Oracle 11.2 Performance Tuning Guide. (pages 95-97)
* Recipe 3-8 seems to have paraphrased sections of the Oracle 11.2 Performance Tuning Guide and sections of the PL/SQL Reference. (pages 97-99)
* Recipe 3-9 states: "When you run this query [with the NO_RESULT_CACHE hint], the server won't cache the results of this query any longer, because you've specified the MANUAL setting for the RESULT_CACHE_MODE initialization parameter." This statement is technically incorrect (or just confusing) - the hint does not change the RESULT_CACHE_MODE for the SQL statement to MANUAL, it simply prevents the SQL statement results from being added to the result cache if the RESULT_CACHE_MODE is set to FORCE, or the RESULT_CACHE property (annotation) of the table(s) involved in the query is set to FORCE. The recipe also states the following about the table-level RESULT_CACHE mode: "The mode value DEFAULT is, of course, the default value, and this merely removes any table annotations you may have set and doesn't permit caching of results that involve this table" - this statement is misleading; the query results may still be cached depending on the value of the RESULT_CACHE_MODE parameter and whether or not the RESULT_CACHE hint is specified. (pages 99-103 reference reference2 reference3)
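A minimal sketch of the hint's actual behavior (table name hypothetical):

```sql
-- With RESULT_CACHE_MODE = MANUAL (the default), results are cached
-- only when requested by a hint or table annotation; the
-- NO_RESULT_CACHE hint merely suppresses caching when the mode or
-- annotation is FORCE - it does not change the parameter's value.
SELECT /*+ RESULT_CACHE */ department_id, COUNT(*)
FROM   employees
GROUP BY department_id;
```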
* Recipe 3-10 states: "You can query the V$CLIENT_RESULT_CACHE_STATS view for details such as the number of cached result sets, number of cached result sets invalidated, and the number of cache misses." That view does not exist - the intended view name is likely CLIENT_RESULT_CACHE_STATS$. (pages 103-104)
* Recipe 3-11, much like the Oracle documentation, the book lists four views (V$RESULT_CACHE_STATISTICS, V$RESULT_CACHE_MEMORY, V$RESULT_CACHE_OBJECTS, and V$RESULT_CACHE_DEPENDENCY) that may be used to monitor the result cache. However, the recipe does not describe the individual views or show examples of querying those views. The comment about the RESULT_CACHE_REMOTE_EXPIRATION parameter in this recipe seems to be out of place, while a comment about RESULT_CACHE_RELIES_ON might have been a better approach to explain DML against which specific tables causes invalidation of PL/SQL function cache results. The bullet points under the heading Restrictions on the "PL/SQL Function Cache" are the same as the restrictions found in the Oracle documentation with one critical difference, the book states "An IN parameter of a function has one of these types: BLOB, CLOB, ..., and Record" while the Oracle documentation states, "No IN parameter has one of the following types: BLOB, CLOB, ..., Record". (pages 105-108)
* Recipe 3-12 states that the flash cache feature is for the Enterprise Edition only and that there are only two supported operating systems, so that is a positive for this recipe. The Oracle recommended conditions in the book for enabling Flash Cache are missing this potentially important point from the Oracle documentation: "The Buffer Pool Advisory section of your Automatic Workload Repository (AWR) report or STATSPACK report indicates that doubling the size of the buffer cache would be beneficial." While the documentation states "Any multiplier [of the buffer cache size] less than two would not provide any benefit," the book states that trial and error should be used to pick a multiplier between 1 and 10. (pages 109-110)
* In recipe 3-13 we learn how to tune the redo log buffer. The book states, "since there's no cost whatsoever to increasing the LOG_BUFFER size, feel free to set it to higher than the suggested maximum of 1MB for this parameter." Starting with Oracle Database 10.2 the LOG_BUFFER parameter is automatically set to a value that is slightly smaller than the granule size - the most common granule size for Oracle Database 10.2 databases was likely 16MB (the maximum granule size). The granule size rules changed again with the release of Oracle Database 11.1 and again with the release of Oracle Database 11.2. With recent Oracle Database releases it is quite possible that the LOG_BUFFER could be auto-set to just under 512MB in size, depending on the size of the SGA (reference). The ratio of the "redo entries" statistic value divided by the "redo log space requests" statistic value, as shown in the SQL statement, probably yields little useful information when adjusting the size of the LOG_BUFFER parameter. (pages 110-112)
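Two quick queries illustrate the granule-size point above:

```sql
-- The granule size for this instance; from 10.2 onward the auto-set
-- LOG_BUFFER value is derived from this, not from the old 1MB guideline.
SELECT bytes/1024/1024 AS granule_mb
FROM   v$sgainfo
WHERE  name = 'Granule Size';

-- The value actually in effect for LOG_BUFFER.
SELECT value/1024/1024 AS log_buffer_mb
FROM   v$parameter
WHERE  name = 'log_buffer';
```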
* Recipe 4-10 incorrectly states that the "Parse CPU to Parse Elapsd" statistic found in an AWR report is "how much time the CPU is spending parsing SQL statements. The lower this metric is, the better. In the following example, it is about 2%, which is very low. If this metric ever gets to 5%, it may mean investigation is warranted to determine why the CPU is spending this much time simply parsing SQL statements." The book's definition of this statistic is incorrect - the statistic actually indicates delays (wait events) in the parsing of SQL statements, very likely due to contention between sessions (or possibly excessive competition for the server's CPUs, however such competition probably would not be explicitly captured in an Oracle wait event). Ideally, this statistic in an AWR report should be close to 100%. It appears that the book authors attempted to describe the "PARSE TIME CPU" statistic, which is not found in this section of an AWR report, or attempted to describe a derivative of the "Non-Parse CPU" statistic which does appear in the Instance Efficiency Percentages section of an AWR report. (pages 133-134)
* Recipe 4-13 contains three SQL statements that attempt to calculate the SUM of the WAIT_TIME column plus the TIME_WAITED column when selecting from V$ACTIVE_SESSION_HISTORY - this approach can lead to misleading information for a couple of reasons, including: the session may actually be running on the CPU and this view will still be capturing the last wait event as if it were a currently active wait; the wait time is cumulative, thus the query would show a wait time of 3 seconds after the second second, 6 seconds after the third second, 10 seconds after the fourth second, etc. (pages 142-145 reference)
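A sketch of the sample-counting alternative, which avoids the cumulative-time problem:

```sql
-- ASH samples roughly once per second, so COUNT(*) of samples
-- approximates seconds of activity per session and event; summing the
-- cumulative WAIT_TIME/TIME_WAITED columns double-counts the time.
-- (Querying this view requires the Diagnostics Pack license.)
SELECT   session_id, event, COUNT(*) AS approx_seconds
FROM     v$active_session_history
WHERE    sample_time > SYSDATE - 30/1440     -- last 30 minutes
GROUP BY session_id, event
ORDER BY approx_seconds DESC;
```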
* Chapter 5 is a little loose with terminology, as shown by this quote found on page 148: "It's easy to find out the percentage of time a database has spent waiting for resources instead of actually executing." The database does not wait, and it does not execute - it is a set of files on disk.
* In recipe 5-1, when describing the V$SESSION_EVENT view, the book states, "The data in this view is available only so long as a session is active." - considering that one of the columns in V$SESSION (another view described in this recipe) is named ACTIVE, I can see where the quoted statement could lead to confusion. The recipe seems to rely too much on the information provided by the ratios from V$SYSMETRIC, ignoring the possibility that a session simply burning CPU (through spinning on latches, excessive parsing, or performing unnecessary levels of consistent gets due to poorly selected execution plans) could make it appear that the instance is working optimally; the book states: "On the other hand, high values for the Database CPU Time Ratio indicate a well-running database, with few waits or bottlenecks." The last paragraph of the recipe states, "Note that you can query the V$WAITSTAT view for the same information as well." - that view was not previously introduced in the book and in actuality, the V$WAITSTAT view, related to buffer busy waits, does not retrieve information similar to the V$SYSTEM_EVENT view that was described a sentence or two earlier. (pages 147-150)
* Recipe 5-2 queries V$ACTIVE_SESSION_HISTORY without first providing a warning regarding the licensing requirements to access this view. SUM(WAIT_TIME + TIME_WAITED) when grouped by SQL_ID and USER_ID could lead to incorrect assumptions regarding the wait time per SQL statement for the last 30 minutes. There is a risk of double-counting, or counting incidental CPU usage immediately after a wait event completes, as part of the time spent in wait events for a specific SQL statement. Using COUNT is likely a better approach than using SUM. I am left wondering why the authors did not introduce the readers to the APPLICATION_WAIT_TIME, CONCURRENCY_WAIT_TIME, CLUSTER_WAIT_TIME, USER_IO_WAIT_TIME, PLSQL_EXEC_TIME, and JAVA_EXEC_TIME columns of V$SQL (or V$SQLAREA), periodically querying those columns and calculating the delta (change) values for specific SQL statements - those columns have existed as long as ASH (since the release of Oracle Database 10.1). (pages 150-151)
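A sketch of the V$SQL approach suggested above; these columns report microseconds, and sampling the query periodically and differencing the values gives per-statement wait time without any ASH licensing requirement:

```sql
-- Per-statement wait classification from V$SQL; capture this output at
-- intervals and compute deltas per SQL_ID to see where time was spent.
SELECT sql_id, executions,
       application_wait_time, concurrency_wait_time,
       cluster_wait_time, user_io_wait_time,
       plsql_exec_time, java_exec_time
FROM   v$sql
WHERE  users_executing > 0;
```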
* In Recipe 5-3, as of Oracle Database 10.1, "enqueue waits" probably should not be listed among the most common wait events since the most frequently occurring enqueue type waits are now broken out into separate wait events: "enq: TX - row lock contention", "enq: TM - contention", etc. Additionally, it would be helpful to determine how the seven wait events printed were selected as the most common, and not one of the following: "rdbms ipc message", "pmon timer", "SQL*Net message from client", "RMAN backup & recovery I/O" or "direct path read". It might have been helpful for the book to provide a little information about the seven selected wait events. (pages 151-152)
* Recipe 5-4 states, "Here are some examples of typical waits in some of these classes," however the book simply describes some of the characteristics of wait events found in four wait event groups, rather than listing and describing the actual wait events. The book states that only the "log file sync" wait event is in the Commit class, while Oracle Database 11.1 and above also include the "enq: BB - 2PC across RAC instances" wait event in the Commit class. The recipe states, "You can ignore the idle waits." - that statement is mostly true at the instance level, but mostly not true at the session level. This recipe did not provide much more information than what a query of V$EVENT_NAME would have provided. (pages 152-153)
* In recipe 5-5, the first query should include the STATE column in the WHERE clause to eliminate all of those waits that have completed in less than 0.01 seconds and those waits that have already completed with the session now burning server CPU time. The recipe probably should have stated that since Oracle Database 10.1 the columns mentioned in the V$SESSI