This blog contains experience gained over years of implementing (and de-implementing) large-scale IT applications and software.

DBCLONE Is Still Running, Running & Running…

Scenario: You’re running through an upgrade of SAP on Oracle, either applying an EHP or a support package stack update.  You’re using the “Standard” downtime-minimized approach and you’ve got to the SUM stage MAIN_SHDCRE/SUBMOD_SHDDBCLONE/DBCLONE, and it has just bailed out!

During the DBCLONE step, a number of background jobs are created that copy certain tables, program sources, etc. from the current SAP database schema to the shadow instance schema (SAPSR3SHD on Oracle).
The copies of these tables, sources, etc. are placed into the new tablespace that you were asked to create in the earlier steps.
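If you want to see what has landed in the shadow schema so far, a quick look at DBA_SEGMENTS does the job.  This is a minimal sketch, assuming the shadow schema is called SAPSR3SHD (check your own system; the name follows your original schema):

SQL> -- Objects and space cloned into the shadow schema so far
SQL> select segment_type, count(*) objects,
            round(sum(bytes)/1024/1024) mb
     from dba_segments
     where owner = 'SAPSR3SHD'
     group by segment_type
     order by 3 desc;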

During this copy process, the database generates a lot of redo information (it is performing a lot of INSERTs), which means it also generates a lot of archive logs.  Most systems remain in archive log mode throughout, as this is the safest way of upgrading a production system.
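To get a feel for the rate of redo being generated while DBCLONE runs, you can total the archive logs per hour.  A rough sketch (it is only as granular as your log switches), assuming you are indeed in archive log mode:

SQL> -- Archive log volume per hour over the last day
SQL> select trunc(completion_time,'HH24') hour,
            round(sum(blocks*block_size)/1024/1024,1) mb
     from v$archived_log
     where completion_time > sysdate - 1
     group by trunc(completion_time,'HH24')
     order by 1;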

The DBCLONE step can take a long time depending on a few factors:

  • Size of your SAP system master data.  Since transactional data is not copied, most SAP systems will be roughly similar in size for the master data tables and sources (e.g. tables D010TAB, D010INC, REPOSRC).  Don’t forget: once the tables are cloned, DBCLONE also needs to build the required indexes on them, then gather stats on the tables and indexes.
  • Quality of your database.  If your Oracle database is highly fragmented, the indexes are not in good shape, or too little memory is allocated to the database, the clone will take longer.
  • Redo disk write times.  The faster the write times for redo, the quicker this is going to go.
  • Number of parallelised jobs.  The SUM tool recommends 3 jobs in parallel.  Prior to this SUM step, you will have been asked to configure the number of parallel jobs (and also your number of background work processes).  If you configure fewer than 3, it will take longer.  I would personally recommend n+3 background work processes, where n is your normal production number, so that you do not hamper day-to-day usage by blocking background jobs.  The 3 DBCLONE jobs are created with high priority (class A), so they get all the background processing they need.
  • Whether you elected to pre-size the new shadow tablespace data files.
    Setting them to autoextend is fine, but by default the SAP brspace commands create the files at only 200MB.  By creating the files as big as they need to be up front (no autoextend), you will save time (see the sketch after this list).
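For the pre-sizing itself, something along these lines works.  A sketch only: the tablespace name PSAPSR3SHD and the data file path are assumptions here, so check DBA_DATA_FILES for your real names first:

SQL> -- Check the current shadow tablespace files (names are assumptions)
SQL> select file_name, bytes/1024/1024 mb, autoextensible
     from dba_data_files
     where tablespace_name = 'PSAPSR3SHD';

SQL> -- Size a file up front so it does not have to extend mid-clone
SQL> alter database datafile
     '/oracle/SID/sapdata4/sr3shd_1/sr3shd.data1' resize 10g;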

During the DBCLONE step, the SUM tool monitors progress via an RFC connection into the SAP system, checking that the DBCLONE background jobs complete (and complete successfully).
If you have limited space available in your archive log area and it fills up, the RFC connection from SUM stops working (the classic “archiver stuck” issue).
This causes SUM to report that the step has aborted, even though DBCLONE is still running.
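You can confirm the archiver situation from the database side before touching SUM.  A quick sketch; the second query only applies if you archive into the fast recovery area:

SQL> -- A destination in error usually means the archiver is stuck
SQL> select dest_name, status, error
     from v$archive_dest
     where status <> 'INACTIVE';

SQL> -- Recovery area usage, if you archive to it
SQL> select name, space_limit, space_used
     from v$recovery_file_dest;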

Once you resolve the archiver stuck issue, you will still see DBCLONE running in the background.
At this point, you could choose to manually cancel the jobs with “Cancel without Core” in SM50 against the busy background work processes where DBCLONE is running.  However, since the jobs are still running happily, simply waiting until they finish and then restarting SUM will continue from where it left off.
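If you prefer to watch the jobs from the database side rather than SM50/SM37, you can query the job status table directly.  A sketch, assuming the standard SAPSR3 schema; in TBTCO, status ‘R’ is running, ‘F’ finished and ‘A’ aborted:

SQL> -- Status of the DBCLONE background jobs
SQL> select jobname, status, count(*)
     from sapsr3.tbtco
     where jobname like 'DBCLONE%'
     group by jobname, status;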

During the cloning process, the tables to be cloned are assigned to background jobs via the CLONSTATE column of the SAPSR3.PUTTB_SHD table.
SUM knows where it got to because, as each table is cloned, its CLONSTATE is set to ‘X’.

You can monitor the cloning progress by using the following SQL:

SQL> select count(*) num, clonstate from SAPSR3.PUTTB_SHD group by clonstate;

You will notice that the CLONSTATE column will contain:
‘X’  – Tables cloned.
‘1’  – Tables on the work list of background job DBCLONE1.
‘2’  – Tables on the work list of background job DBCLONE2.
‘n’  – Tables on the work list of background job DBCLONEn.
‘ ’  – Tables not being cloned.

As tables are cloned, their CLONSTATE changes from ‘n’ to ‘X’.
It appears that the larger tables are cloned first.
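To estimate how much work is left in data volume terms (not just table counts), you can join the work list to the segment sizes.  A rough sketch; I’m assuming here that the table-name column in PUTTB_SHD is NAME and the source schema is SAPSR3, and tables without a segment are simply skipped:

SQL> -- Remaining clone volume per DBCLONE job
SQL> select p.clonstate, count(*) tables,
            round(sum(s.bytes)/1024/1024) mb
     from sapsr3.puttb_shd p, dba_segments s
     where s.owner = 'SAPSR3'
       and s.segment_name = p.name
       and p.clonstate not in ('X', ' ')
     group by p.clonstate
     order by 1;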

The method used to clone each table is an “INSERT INTO <table> SELECT <fields> FROM <source>;”, followed by a “CREATE INDEX” for each required index.
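In other words, for each table the generated statements look roughly like the following.  This is purely illustrative (the real DDL is generated by the upgrade tools, with full storage clauses); REPOSRC’s key and the tablespace name are only there for the example:

SQL> -- Illustrative only: clone one table, then build its primary index
SQL> insert into "SAPSR3SHD"."REPOSRC"
     select * from "SAPSR3"."REPOSRC";

SQL> create unique index "SAPSR3SHD"."REPOSRC~0"
     on "SAPSR3SHD"."REPOSRC" (PROGNAME, R3STATE)
     tablespace PSAPSR3SHD;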

It’s also worth noting that you may need enough PSAPTEMP space to accommodate the sorts during index creation.
For scale: in a Solution Manager 7.1 SPS10 system, there are 13,109 tables to be cloned.
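To keep an eye on the temp headroom while the indexes are built, a quick sketch:

SQL> -- PSAPTEMP usage during the index builds
SQL> select tablespace_name,
            round(sum(bytes_used)/1024/1024) used_mb,
            round(sum(bytes_free)/1024/1024) free_mb
     from v$temp_space_header
     group by tablespace_name;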

As a final parting comment, if you are on Oracle 11gR2, you should consider the database compression options available to you.  Reducing the I/O requirements will massively speed up the process.
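If you are licensed for the Advanced Compression Option, one approach (my own suggestion, not something SUM configures for you) is to set a compression default on the shadow tablespace before DBCLONE starts, so the cloned tables inherit it.  OLTP compression is the relevant flavour here, since the clone appears to use conventional (not direct-path) INSERT … SELECT statements:

SQL> -- 11gR2, requires the Advanced Compression Option licence
SQL> -- Tablespace name is an assumption; use your shadow tablespace
SQL> alter tablespace PSAPSR3SHD default compress for oltp;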

What’s in a SAP EHP Anyway…

For some time I’ve wondered how you can determine what functionality exists in an Enhancement Package (EHP).
I’ve supported SAP ERP 6.00 systems for a while, but I’ve never been involved in the business functionality decisions (I’m a technical guy, and this is a Solution Architect’s job).
So it made me wonder: if you were implementing a brand new SAP ERP 6.00 system, how would you determine what level of EHP was right for you?
Well, the answer is simple, once I’d done a little reading.

All SAP EHPs are cumulative, i.e. SAP ERP 6.00 EHP 5 includes all the new features of EHPs 1 to 4, plus some shiny new ones.
So you just need to implement the latest and greatest to get “everything”, then switch on the parts you want.  You can use SAP’s new Business Function Prediction (BFP) service to see what’s in each EHP.

So, why do SAP continue to dish out the older EHPs, or even the base ERP 6.0 release?
That is down to software release management and the way SAP develop the code.
Each EHP is effectively a branch off the main code-set.  Each of these branches still needs patches, so SAP can’t just kill off EHP 4 when EHP 5 is out; they must maintain the support.
For this reason, the Support Package Stack (SP-Stack) schedule always has the latest SP-Stack for the base release out before the first EHP’s, the first EHP’s before the second’s, and so on.

Like this:
ERP 6.0
  -> ERP 6.0 EHP 1
       -> ERP 6.0 EHP 2
            -> ERP 6.0 EHP 3 …

The release schedule simply follows the way the support packages are tested and released up through the code-set branches.

The time difference between the base release SP-Stack and an EHP SP-Stack could be a good indication of the amount of change between the base release and that EHP.

So is there a technical reason why you wouldn’t implement the latest and greatest EHP (as recommended by SAP)?

My answer is YES.  I don’t buy a car with everything included, because I know I can get a SATNAV cheaper elsewhere.  Now, I guess this depends on your software strategy.
Plus, the more tech you have, the more that could go wrong during the application of support packages and upgrades.  Lastly, even if you’re not using the EHP functionality, you still have to apply the code during the patch process, increasing upgrade time.

I maintain: if you don’t need it or want it, why apply it?
This is obviously my view, and the counter argument would be that you keep the option of “just in time” enabling of new business functionality.  In my experience, though, the decisions alone take longer than the actual implementation, even without “just in time”.