This blog contains experience gained over the years of implementing (and de-implementing) large-scale IT applications/software.

NetBackup Oracle Database bplist output columns

Scenario: When backing up an Oracle database using NetBackup and RMAN, the backup details can be listed using the NetBackup command bplist.

You wonder what the columns of output data represent.

With “VERBOSE = 5” set in the /usr/openv/netbackup/bp.conf file, the log file created in the /usr/openv/netbackup/logs/dbclient directory during the backup shows that the following data is stored for the backup:

14:17:22.370 [5210] <4> VxBSAGetEnv: INF - returning -
14:17:22.370 [5210] <4> bsa_printObjectDescriptor: INF - objectOwner.bsa_ObjectOwner:
14:17:22.370 [5210] <4> bsa_printObjectDescriptor: INF - objectOwner.app_ObjectOwner:
14:17:22.370 [5210] <4> bsa_printObjectDescriptor: INF - objectName.objectSpaceName: Oracle Database
14:17:22.370 [5210] <4> bsa_printObjectDescriptor: INF - objectName.pathName: /bk_986_1_806854642
14:17:22.370 [5210] <4> bsa_printObjectDescriptor: INF - createTime: 0
14:17:22.370 [5210] <4> bsa_printObjectDescriptor: INF - copyType: 3
14:17:22.370 [5210] <4> bsa_printObjectDescriptor: INF - copyid: 0.0
14:17:22.370 [5210] <4> bsa_printObjectDescriptor: INF - restoreOrder: 0.0
14:17:22.370 [5210] <4> bsa_printObjectDescriptor: INF - estimatedSize: 0.100
14:17:22.370 [5210] <4> bsa_printObjectDescriptor: INF - resourceType: Oracle Backup
14:17:22.370 [5210] <4> bsa_printObjectDescriptor: INF - objectType: 4
14:17:22.370 [5210] <4> bsa_printObjectDescriptor: INF - objectStatus: 0
14:17:22.370 [5210] <4> bsa_printObjectDescriptor: INF - objectDescription:
14:17:22.370 [5210] <4> VxBSACreateObject: objectOwner: -default_user-
14:17:22.371 [5210] <4> VxBSACreateObject: objectGroup: dba 1
14:17:22.371 [5210] <4> VxBSAGetEnv: INF - entering GetEnv - NBBSA_IMAGE_PERMISSION
14:17:22.371 [5210] <4> VxBSAGetEnv: INF - returning -

Therefore, bplist outputs the following:

> /usr/openv/netbackup/bin/bplist -C <clientname> -t 4 -k <policyname> -Listpolicy -R /

9175040 0 0 0 9175040 1 0 -1 -3 33200 orasa1 dba 9175040 1360333082 1360333042 1360333042 1360333042 2 18 POLICY_TST 30 /bk_986_1_806854642

These relate to:

9175040    : The size (bytes) of the TAR of the backup piece.
0
0
0
9175040    : The size (bytes) of the TAR of the backup piece.
1
0
-1
-3
33200
orasa1     : The UNIX Oracle username used to run RMAN.
dba        : The UNIX Oracle group name used to run RMAN.
9175040    : The size (bytes) of the TAR of the backup piece.
1360333082 : Epoch timestamp, possibly showing created, modified, accessed?
1360333042 : Epoch timestamp, possibly showing created, modified, accessed?
1360333042 : Epoch timestamp, possibly showing created, modified, accessed?
1360333042 : Epoch timestamp, possibly showing created, modified, accessed?
2
18
POLICY_TST : The name of the NetBackup policy used.
30
/bk_986_1_806854642 : The RMAN backup piece file name.


I don’t know what some of the columns are for.  Maybe someone else knows more?
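If you only want the known columns, the output above can be cut down with awk. This is a minimal sketch, assuming the column positions shown in the sample output (field 13 = TAR size in bytes, field 14 = the first epoch timestamp, last field = the backup piece name); the sample line is simply the one from the listing above.

```shell
# Sample bplist output line (copied from the listing above).
bplist_line='9175040 0 0 0 9175040 1 0 -1 -3 33200 orasa1 dba 9175040 1360333082 1360333042 1360333042 1360333042 2 18 POLICY_TST 30 /bk_986_1_806854642'

# Field 13 = TAR size (bytes), field 14 = first epoch timestamp,
# last field ($NF) = RMAN backup piece name.
echo "$bplist_line" | awk '{ printf "%s  %s bytes  epoch %s\n", $NF, $13, $14 }'
```

In practice you would pipe the bplist command itself straight into awk rather than a saved line.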

HowTo: Use AWR snapshot data to provide a history of sequential read values

Scenario: You have changed your back-end storage hardware (SAN array or SATA disk storage) and you want to see a historical overview of sequential read times of your database.

It’s possible to query the Oracle AWR data (provided you are licensed for the Diagnostics Pack) to produce a historical list of sequential read times according to the snapshots taken by AWR.
You are obviously limited to the amount of data retained by AWR and the frequency of the AWR snapshots.

set linesize 400 pagesize 400
SELECT event_start.snap_id,
       to_char(snap.begin_interval_time,'DD-MM-YY HH24:MI') as begin_time,
       to_char(snap.end_interval_time,'HH24:MI') as end_time,
       round(decode(
                 (event_end.total_waits - nvl(event_start.total_waits, 0)),0, to_number(NULL),
       ((event_end.time_waited_micro -    nvl(event_start.time_waited_micro,0))/1000) / (event_end.total_waits - nvl(event_start.total_waits,0))
),0) avgwait,
       event_end.event_name event_name,
      (event_end.time_waited_micro - nvl(event_start.time_waited_micro,0))/1000 total_ms,
      (event_end.total_waits - nvl(event_start.total_waits,0)) total_waits
FROM dba_hist_system_event event_start,
     dba_hist_system_event event_end,
     dba_hist_snapshot snap
WHERE event_end.snap_id = event_start.snap_id + 1
  AND event_end.event_name = 'db file sequential read'
  AND event_start.event_name = event_end.event_name
  AND event_start.dbid = event_end.dbid
  AND event_start.instance_number = event_end.instance_number
  AND event_start.snap_id = snap.snap_id
  AND event_start.dbid = snap.dbid
  AND event_start.instance_number = snap.instance_number
  AND snap.begin_interval_time > SYSDATE - 14            -- max 14 days history.
-- AND to_char(snap.begin_interval_time,'HH24') IN ('09','10','11','12','13','14','15','16','17')
-- AND to_char(snap.begin_interval_time,'MI') = '50'
ORDER BY event_start.snap_id;

NOTE: You can restrict the snapshot intervals used to provide “hourly” values by uncommenting the two commented-out lines.

impdp to Import Packages but No Overwrite / Create or Replace

Oracle say that you should be using impdp for all 11g import operations.

If you’re using impdp to import a dump file which contains packages, be aware that the package source is not automatically recreated: the export will have captured the package header as “CREATE PACKAGE …” and not “CREATE OR REPLACE PACKAGE …”.

Consequently, when you run the import of the package(s), if they already exist in the database, they will not be overwritten or replaced.
There is no command line option for impdp to change this feature (like “table_exists_action=replace” for TABLES).
You have to manually extract the package source from the exported dump file using the impdp “SQLFILE” command line option, which will create a separate SQL file containing the package sources.
Then modify the generated script file to change the “CREATE PACKAGE …” to “CREATE OR REPLACE PACKAGE …”.

If you’re running on UNIX, you can use the power of SED to accomplish this task:

> impdp <user>/<password> DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp SQLFILE=scriptfile.sql

> sed 's/^CREATE PACKAGE/CREATE OR REPLACE PACKAGE/' scriptfile.sql > scriptfile2.sql

NOTE: You need the “^” in the command so that the match is anchored to the start of the line; only lines beginning with “CREATE PACKAGE” are changed, so you won’t accidentally replace the text where it appears elsewhere in the source.
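A quick way to satisfy yourself that the anchor behaves as described (the sample lines below are made up for illustration):

```shell
# A line starting with CREATE PACKAGE is rewritten...
printf 'CREATE PACKAGE my_pkg AS\n' |
  sed 's/^CREATE PACKAGE/CREATE OR REPLACE PACKAGE/'

# ...but a line merely mentioning CREATE PACKAGE mid-line is left alone.
printf '  -- text mentioning CREATE PACKAGE mid-line\n' |
  sed 's/^CREATE PACKAGE/CREATE OR REPLACE PACKAGE/'
```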

Estimating Oracle Export Space Requirements

To estimate the space required for a full Oracle export, you can use Data Pump.
The command below produces an estimate for a full export of the database.
It makes use of the ESTIMATE_ONLY option on the expdp command line.

First you need to create your directory in the database:

> sqlplus / as sysdba

SQL> create directory dp_export as '/your_path';

SQL> quit;

> expdp "/ as sysdba" full=y directory=DP_EXPORT logfile=full_exp_estimate.log estimate_only=yes
...

Total estimation using BLOCKS method: 2.418 GB

Job "SYS"."SYS_EXPORT_FULL_01" successfully completed at 18:50:48

The BLOCKS method used above runs through each object to be exported and, using each object’s block size, calculates the number of blocks that will be required on disk.

If you have a large database and your statistics are up-to-date, then you could use the additional “ESTIMATE=STATISTICS” option, which uses the optimizer statistics already gathered to estimate the space required. This is a lot quicker, but it needs accurate stats.

The example above took 1 min.

With the “ESTIMATE=STATISTICS” option, it took 46 seconds, but estimated that only 991.3 MB would be required (less than half of the BLOCKS estimate).  There are obviously some missing or stale stats on objects in my DB.
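For reference, the statistics-based estimate is simply the same expdp invocation with the extra ESTIMATE parameter (the logfile name here is illustrative):

```shell
expdp "/ as sysdba" full=y directory=DP_EXPORT \
      logfile=full_exp_estimate_stats.log \
      estimate_only=yes estimate=statistics
```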

How to Patch an Oracle Database Under SAP

Are you thinking of patching an Oracle database which sits under an SAP system?
If you have a specific bug and you’ve identified the Oracle patch number that fixes the bug, you’d be tempted to just download the patch from Oracle.

According to SAP, you should not download any patches directly from Oracle.  As you may know, the Oracle binaries themselves are slightly different for an SAP system.
Instead, if you have the Oracle patch number, search the README files that come with the SAP Bundle Patch (SBP) downloads for Oracle on the SAP Service Marketplace (https://service.sap.com/oracle-download) to see whether the Oracle patch is included in the bundle patch.

If you can’t see it in there, then it may be worth asking SAP to clarify if/when they may include it in the next bundle patch.
Bundle patches are released monthly, but that does not guarantee that a relevant Oracle patch, even one older than a month, has made it into the latest bundle.

The bundle patches themselves are cumulative, so you only need to apply the latest one.  It includes specific Oracle patches, plus a CPU patch (dependent on the date/time of the released SBP).

Remember to re-check the SAP notes about Oracle database parameters after applying SBPs, since SAP usually update the notes at each SBP release, to include any relevant _fix_control or event parameter settings.