This blog contains experience gained over the years of implementing (and de-implementing) large-scale IT applications and software.

Oracle Home Space Cleanup (how random that sounds)

Just a quick post to make you aware of a fantastic option I recently discovered in opatch.
I look after a large number of database systems, some with multiple Oracle homes on the same drive (I know, I know, but when you’re limited on drives…).
These homes hold things like the Oracle RDBMS (the obvious one), the Oracle 10g EM Agent and finally Oracle 10gAS (and 10g OEM, if you like to think of the OMS as separate, but it’s a 10gAS so I don’t).

Generally the Oracle home for a 10gR2 RDBMS Enterprise Edition binary codeset seems to be around 1GB in size.
Apply a couple of opatches and you could be looking at 2GB!
All of a sudden, the partition is looking quite full (it used to be 9i on there when it was originally designed!!).

Let’s bring in a new opatch command that arrived as part of an opatch upgrade in 10gR2 (OPatch patch 6880880).
The command is part of “opatch util” and is a sub-command called “cleanup”.
It supposedly removes the duplicated binary backup files taken during an opatch patch application.

Oracle doc 550522.1 states: “Starting with 10.2, OPatch does not backup only the affected modules, it also takes a backup of the complete affected libraries to $ORACLE_HOME/.patch_storage//backup//.”

The cleanup command only removes files that are not required as part of a patch rollback.

Syntax: “$ opatch util cleanup“.

You will be prompted to select the Oracle home and to confirm the cleanup. It will also tell you how much space has been saved.
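If you want to see the saving for yourself, check the size of the patch backup area before and after running it (a minimal sketch, assuming a Unix-style host with $ORACLE_HOME set):

$ du -sh $ORACLE_HOME/.patch_storage
$ opatch util cleanup
$ du -sh $ORACLE_HOME/.patch_storage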
Enjoy the space you get back, while you can!

Checking Lack of Bind Variables & Ad-hoc SQL in Oracle

The following Oracle SQL is a crude method of checking the V$SQLAREA view for SQL statements that share the same execution plan but have unique SQL text, e.g. statements where only a literal in the WHERE clause predicate changes. These are otherwise known as ad-hoc queries (something that SQL Server 2008 is supposed to be very good at detecting).

The statement was constructed based on the “Understanding Shared Pool Memory” document by Oracle (https://www.oracle.com/technetwork/database/focus-areas/manageability/ps-s003-274003-106-1-fin-v2-128827.pdf).

The doc basically says that comparing a count of each PLAN_HASH_VALUE against the number of SQL_IDs using that plan indicates how badly the SQL needs bind variables, since each plan should be shared by only one or two SQL statements (the lower the better).
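To illustrate the point (the ORDERS table and literals here are hypothetical), these two statements get two distinct SQL_IDs yet share one PLAN_HASH_VALUE:

select * from orders where order_id = 100;
select * from orders where order_id = 101;

whereas the bind variable form collapses them into a single shared cursor:

select * from orders where order_id = :order_id;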

In my opinion, systems where more than 60% of statements fall under the ACTION “Could use bind variables.” should either revise the application SQL code (the permanent fix) or set the CURSOR_SHARING=FORCE init parameter (a temporary fix!).
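For reference, the temporary fix can be applied on the fly (a sketch, assuming an spfile is in use so the change can be persisted):

ALTER SYSTEM SET cursor_sharing = FORCE SCOPE = BOTH;
-- and to revert to the default behaviour:
ALTER SYSTEM SET cursor_sharing = EXACT SCOPE = BOTH;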

SELECT 'Multiple SQLs in SQLAREA using same plan:' DESCRIPTION,
       SUM(pct_hash_use) PCT,
       'Could use bind variables.' ACTION
FROM (SELECT hc.plan_hash_value,
             hc.hash_use_count,
             sc.sql_count,
             ROUND((hc.hash_use_count / sc.sql_count * 100), 2) pct_hash_use
      FROM (SELECT s.plan_hash_value,
                   COUNT(s.plan_hash_value) hash_use_count
            FROM v$sqlarea s
            WHERE s.plan_hash_value > 0
            GROUP BY s.plan_hash_value) hc,
           (SELECT COUNT(1) sql_count
            FROM v$sqlarea t
            WHERE t.plan_hash_value > 0) sc)
WHERE hash_use_count > 1
UNION
SELECT 'Single SQLs in SQLAREA using same plan:' DESCRIPTION,
       SUM(pct_hash_use) PCT,
       'No action needed.' ACTION
FROM (SELECT hc.plan_hash_value,
             hc.hash_use_count,
             sc.sql_count,
             ROUND((hc.hash_use_count / sc.sql_count * 100), 2) pct_hash_use
      FROM (SELECT s.plan_hash_value,
                   COUNT(s.plan_hash_value) hash_use_count
            FROM v$sqlarea s
            WHERE s.plan_hash_value > 0
            GROUP BY s.plan_hash_value) hc,
           (SELECT COUNT(1) sql_count
            FROM v$sqlarea t
            WHERE t.plan_hash_value > 0) sc)
WHERE hash_use_count <= 1;


You can also use the following query to determine roughly how much memory is wasted on SQL statements that have been executed only once (run this on a well bedded-in system that has been up for a while):

select count(1) num_sql,                                                       -- total statements in the shared pool
       sum(decode(executions, 1, 1, 0)) num_1_use_sql,                         -- statements executed only once
       sum(sharable_mem)/1024/1024 mb_sql_mem,                                 -- total sharable memory (MB)
       sum(decode(executions, 1, sharable_mem, 0))/1024/1024 mb_1_use_sql_mem  -- memory held by single-use SQL (MB)
from v$sqlarea
where sharable_mem > 0;


You should double-check the output against the SQL text in V$SQLAREA to ensure that the SQL is truly ad-hoc.
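A quick way to eyeball the candidates (a sketch; the ordering simply brings the biggest memory consumers to the top):

select sql_text, executions, sharable_mem
from v$sqlarea
where executions = 1
order by sharable_mem desc;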

Remember to get some hard and fast values for the “Concurrency” wait class before and after the change (bind variables introduced or init parameter changed).

You can use the following SQL to determine the average wait time:

select * from v$system_event where wait_class# = 4 order by average_wait desc;

I would expect the library cache operation wait times to reduce as hard parses are reduced.
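To focus on just those events, you can narrow the same query down (a sketch; exact event names vary slightly between releases):

select event, total_waits, average_wait
from v$system_event
where event like '%library cache%'
order by average_wait desc;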

Here is an example of a system that is not using bind variables to optimise SQL (screenshot of the script output omitted).

You should note that an OLTP system such as SAP ERP should have a much lower count of ad-hoc SQL than an SAP BW system, where users can write their own queries.

Oracle Explain Plans: Cardinality & Histograms

This is one of the better white papers on how to read an Oracle explain plan:
https://www.oracle.com/technetwork/database/focus-areas/bi-datawarehousing/twp-explain-the-explain-plan-052011-393674.pdf
The most important point is made on cardinality on page 10 when explaining “Data Skew”.
The term “Data Skew” refers to an imbalance in the distribution of values compared with a normal distribution (https://en.wikipedia.org/wiki/Skewness).
The imbalance shows up in the Mode (https://en.wikipedia.org/wiki/Mode_(statistics)) of the values in a column of a table.

Example:
SELECT * FROM A_TABLE;

COL1 | COL2
-----|-----
   1 | A <== Mode = most frequent.
   2 | A
   3 | A <== Median = middle one
   4 | B
   5 | B

Histogram for COL2:

Value |Number
------|------
A     |###
B     |##

Data Skew means that more of the values in COL2 (usually a lot more, e.g. 60%+) are “A” than “B”.
As the Oracle whitepaper points out: “One of the simplest formulas is used when there is a single equality predicate in a single table query (with no histogram). In this case the Optimizer assumes a uniform distribution and calculates the cardinality for the query by dividing the total number of rows in the table by the number of distinct values in the column used in the where clause predicate.”

In the example “SELECT * FROM A_TABLE WHERE COL2='A'” this equates to: Cardinality = 5 / 2
A cardinality value of 2.5 would be rounded to 3.
Accurate enough, since three rows really do have COL2 = 'A'.
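You can reproduce the optimizer’s no-histogram estimate straight from the dictionary statistics (a sketch, assuming statistics have already been gathered on A_TABLE):

select t.num_rows / c.num_distinct no_histogram_cardinality
from user_tables t,
     user_tab_col_statistics c
where t.table_name = 'A_TABLE'
and c.table_name = 'A_TABLE'
and c.column_name = 'COL2';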

The problem with this is that the larger and more skewed the table, the further the uniform-distribution estimate drifts from the real row count.
Imagine the table with values like so:

Value |Number
------|---------------
A     |###############
B     |##

Now in example “SELECT * FROM A_TABLE WHERE COL2='A'” this would equate to: Cardinality = 17 / 2
A cardinality value of 8.5 would be rounded to 9.
Definitely not the 15 rows that would actually be returned.

To solve this problem, Oracle has the capability to collect statistics for individual columns.
You can use the DBMS_STATS.GATHER_TABLE_STATS command to collect stats on the column (https://www.dba-oracle.com/t_histograms.htm).

The statement below will collect stats on the COL2 column of A_TABLE in the OE schema, using up to 20 buckets. These column stats are histograms of the distribution of values.

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    OWNNAME    => 'OE',
    TABNAME    => 'A_TABLE',
    METHOD_OPT => 'FOR COLUMNS SIZE 20 COL2');
END;
/
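As an aside, if you would rather let Oracle decide which columns need histograms and how many buckets to use, METHOD_OPT => 'FOR ALL COLUMNS SIZE AUTO' delegates that choice to the database based on column usage and data distribution.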

Two types of HISTOGRAM are available in Oracle: FREQUENCY and HEIGHT.
Oracle uses HEIGHT based histograms *only* when the number of distinct values of a column is greater than the number of buckets specified: “In a height-balanced histogram, the column values are divided into bands so that each band contains approximately the same number of rows. The useful information that the histogram provides is where in the range of values the endpoints fall.”

Oracle will use FREQUENCY based histograms by default when the number of distinct values of a column is less than or equal to the number of buckets specified: “In a frequency histogram, each value of the column corresponds to a single bucket of the histogram.”

https://download.oracle.com/docs/cd/B19306_01/server.102/b14211/stats.htm

It is possible to view the collected histogram for a column:

SELECT column_name, num_distinct, num_buckets, histogram
FROM USER_TAB_COL_STATISTICS
WHERE table_name = 'A_TABLE' AND column_name = 'COL2';
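To go one level deeper and inspect the buckets themselves, you can query USER_TAB_HISTOGRAMS (a sketch; note that for character columns ENDPOINT_VALUE holds a numeric encoding of the value):

SELECT endpoint_number, endpoint_value
FROM USER_TAB_HISTOGRAMS
WHERE table_name = 'A_TABLE' AND column_name = 'COL2'
ORDER BY endpoint_number;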

Oracle vs SQL Server – I like it!

I found this today https://www.oaktable.net/sites/default/files/deathmatch.pdf
It shows a good comparison of basic syntactical differences in SQL Server and Oracle when looking to compare performance / features.
I especially like the use of blockrecover in RMAN towards the end. Nice.

Just take a look at the tags on its hosting blog page and it’s obvious which one will win without even reading the slides.

Use of Oracle AWR / ASH leading to bad coding?

I had a brief email exchange with another Oracle guru the other day.
He suggested that the quality of Oracle coding in PL/SQL and Plain Jane (www.medicaltextbooksrevealed.com/blog/2010/02/plain-jane/) SQL had gone downhill.

This could be attributed to two factors:
1, The level of coding experience has dropped. Older, more experienced coders have moved into architect roles, and the void is being filled quickly by newer, inexperienced coders.
2, The rigour with which debugging, testing and tuning is performed has become somewhat lax (www.thefreedictionary.com/lax) because there’s just no emphasis on the developers to tune their code when the DBA has such great tools to do it for them.

Is it possible that the additionally licensed tools such as AWR (Automatic Workload Repository) and ASH (Active Session History), introduced in Oracle 10g, have provided an easy mechanism for DBAs to seek out better performance?
I don’t think these tools are just for DBAs, but the way they are marketed makes me feel they are pushed that way.