This blog contains experience gained over the years of implementing (and de-implementing) large scale IT applications/software.

SAP SLD Change Log Cleanup Table Reorg

When you apply changes or content updates to the SLD (system landscape directory) in SAP, the change log grows.
It’s possible to see the change log entries from the SLD administrator page.
Due to this growth, the underlying database table will benefit from reorganisation at some point.
In our scenario, this was on DB2.
We decided to run the cleanup, but first here was the size of the table BC_SLD_CHANGELOG:

db2prd> db2 "SELECT TABNAME, NPAGES FROM SYSCAT.TABLES ORDER BY NPAGES DESC"
TABNAME                       NPAGES
----------------------------- --------------------
BC_SLD_CHANGELOG              14162
BC_SLD_INST                   12830
J2EE_CONFIGENTRY              5870
BC_SLD_ASSINST                3905
EP_ATTR_VALUES3               3082
J2EE_CONFIG                   2163
DBH_STG_PKG_CACHE_METRICS     1421
SYSCOLDIST                    1193

Then delete the change log from https://<server>/sld by selecting “Administration -> Maintenance -> Clean Up Change Log -> Remove Entries”.
I then scheduled a standard job to do this work from the “Cleanup Task Configuration” tab.
Finally, reorg the following table to release the space:

db2 "REORG INDEXES ALL FOR TABLE SAPPODDB.BC_SLD_CHANGELOG"
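
The index reorg takes care of the index space; if the table's data pages also need handing back to the tablespace, a table-level reorg along these lines could be added as well (an optional extra, not required for the result shown below):

db2 "REORG TABLE SAPPODDB.BC_SLD_CHANGELOG"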

db2 "RUNSTATS ON TABLE SAPPODDB.BC_SLD_CHANGELOG AND INDEXES ALL"

db2 "SELECT TABNAME, NPAGES FROM SYSCAT.TABLES WHERE TABNAME = 'BC_SLD_CHANGELOG'"

TABNAME                       NPAGES
----------------------------- --------------------
BC_SLD_CHANGELOG              1205

That’s it.

Estimating Oracle Export Space Requirements

To estimate the space requirements of a full Oracle export, you can use Data Pump.
The command below estimates the space needed for a full export of the database, making use of the ESTIMATE_ONLY option on the expdp command line.

First you need to create your directory in the database:

> sqlplus / as sysdba

SQL> create directory dp_dump as '/your_path';

SQL> quit;

> expdp "/ as sysdba" full=y directory=DP_DUMP logfile=full_exp_estimate.log estimate_only=yes
...

Total estimation using BLOCKS method: 2.418 GB

Job "SYS"."SYS_EXPORT_FULL_01" successfully completed at 18:50:48

The method above runs through each object to be exported and, based on each object's block allocation and block size, calculates the number of blocks that will be required on disk.

If you have a large database and your statistics are up-to-date, then you could use the additional “ESTIMATE=STATISTICS” option, which uses the data gathered from the Oracle statistics collections to estimate the space required. This is a lot quicker but needs accurate stats.

The example above took 1 min.
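
The statistics-based estimate is the same command with the extra parameter added; something like this (the log file name is just an example):

> expdp "/ as sysdba" full=y directory=DP_DUMP logfile=full_exp_estimate_stats.log estimate_only=yes estimate=statistics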

With the “ESTIMATE=STATISTICS” option, it took 46 seconds, but estimated only 991.3 MB would be required (less than half of the BLOCKS estimate).  There are obviously some missing stats on objects in my DB.
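
To see which objects the stats are missing on, a quick check like this (just a sketch, run as a DBA user) lists tables that have never been analysed:

SQL> SELECT owner, table_name FROM dba_tables WHERE last_analyzed IS NULL AND owner NOT IN ('SYS', 'SYSTEM');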

SAP PI RSXMB_DELETE_MESSAGES – Copy job failed

Whilst implementing the “switch” deletion of XML messages from the SAP PI persistence layer, you may at some point find yourself in a situation where the deletion job simply fails to run.

The contents of the job log state: “Repeat terminated copy job first”. No further information is displayed and there are no errors in the system log (SM21).

By executing the ABAP program RSXMB_DELETE_MESSAGES (the same program used in the delete job step) in SE38, you can then double-click the error at the bottom of the screen:

Finally you will get an explanation and a process for resolution:

“Call transaction SXMB_MONI and choose Job Overview. Repeat the incomplete job. If the job cannot be completed due to database errors, first correct the database errors, and then repeat the job again.”

It seems the jobs displayed using “Job Overview” in SXMB_MONI are not simply a view of SM37 jobs, but a view of the underlying job control and enqueue function specific to the messaging framework.

SAP PI – Persistence Layer Deletion – Oracle Stats

When configuring persistence layer deletion in SAP PI 7.0, you should be aware that the very first time you enable the “switch” procedure and execute the delete jobs, the new tables will be created in the database.

These tables (SXMS*2) will NOT have any database statistics on them.

Therefore, once the switch procedure is completed, until your stats gathering job is executed in DB13, your PI system may run very slowly indeed!

Either schedule a one-off stats gathering after the very first “switch”, or schedule the stats gathering frequently (e.g. daily).
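
If you'd rather not wait for DB13, a one-off gather can be done straight from SQL*Plus; something along these lines should work, assuming the standard SAPSR3 schema owner and that the new tables follow the SXMS*2 naming (adjust both for your system):

BEGIN
  -- SAPSR3 and the SXMS%2 name pattern are assumptions; adjust them for your system
  FOR t IN (SELECT table_name
            FROM   dba_tables
            WHERE  owner = 'SAPSR3'
            AND    table_name LIKE 'SXMS%2')
  LOOP
    DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SAPSR3',
                                  tabname => t.table_name,
                                  cascade => TRUE);
  END LOOP;
END;
/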

Monitoring Index Usage In Oracle 10G Using V$OBJECT_USAGE

Have you got some large indexes creeping around?
Some of my systems have 10GB indexes.
These take valuable resources away from the Oracle database:
– DML time (INSERT, UPDATE etc).
– Stats generation time.
– Segment space (hard disk).
– Structure validation time.
– Recovery time.

Removing the index may not be an option.  But what if it’s not actually used!!

How can we tell if it’s used?
Well, you could monitor all SQL in the shared pool using the AWR capture script https://www.remote-dba.net/oracle_10g_tuning/t_oracle_index_usage_tracking.htm, or you could use Jonathan Lewis’s script https://jonathanlewis.wordpress.com/segment-scans/ to see whether the index segment(s) have been scanned (full scan).  But neither of these gives you a definitive answer.
There could be holes in your monitoring.

Instead, you could use the V$OBJECT_USAGE view to monitor index usage.
Although it’s a very basic method, if you only want to know definitively whether an index has been used or not, it gives you the answer.
It is described in great detail here: https://wiki.oracle.com/page/Monitor+Index+Usage

Turn on:
alter index <INDEX> monitoring usage;

ALTER INDEX SAPSR3."JCDS~Z01" monitoring usage;

Turn off:
alter index <INDEX> nomonitoring usage;

ALTER INDEX SAPSR3."JCDS~Z01" nomonitoring usage;

Check usage (must be as owner of monitored objects):
select count(1) from v$object_usage;
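
For a single index, you can query the same view to see whether it has been used and when monitoring started (index name as per the example above):

select index_name, table_name, monitoring, used, start_monitoring
from v$object_usage
where index_name = 'JCDS~Z01';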

Or use query below to see all monitored objects:

SELECT
u.name owner ,
io.name index_name ,
t.name table_name ,
DECODE(bitand(i.flags, 65536), 0, 'NO', 'YES') monitoring ,
DECODE(bitand(ou.flags, 1), 0, 'NO', 'YES') used ,
ou.start_monitoring start_monitoring ,
ou.end_monitoring END_MONITORING
FROM
sys.user$ u ,
sys.obj$ io ,
sys.obj$ t ,
sys.ind$ i ,
sys.object_usage ou
WHERE
i.obj# = ou.obj#
AND io.obj# = ou.obj#
AND t.obj# = i.bo#
AND u.user# = io.owner#;
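
To narrow that down to candidates for removal, a filtered variant of the query above (just a sketch) lists only the indexes that are being monitored but have not yet been used:

SELECT
u.name owner ,
io.name index_name ,
ou.start_monitoring start_monitoring
FROM
sys.user$ u ,
sys.obj$ io ,
sys.ind$ i ,
sys.object_usage ou
WHERE
i.obj# = ou.obj#
AND io.obj# = ou.obj#
AND u.user# = io.owner#
AND bitand(i.flags, 65536) > 0   -- monitoring switched on
AND bitand(ou.flags, 1) = 0;     -- but never used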