This blog contains experience gained over the years of implementing (and de-implementing) large scale IT applications/software.

Power Notes Searcher Updated to v1.1

The Chromium project has recently (May 2014) fixed a bug in the Chrome web browser which means that users of my Power Notes Searcher Google Chrome extension may have seen an issue with the table ordering in the SAP Notes history table.

I have now made a slight correction to the extension and v1.1 is now available for update/install in the Google Chrome Extensions Web Store (or from the link on my initial blog post here).

If you haven’t already installed my extension, give it a go.  You don’t know what you’re missing!

DBCLONE Is Still Running, Running & Running…

Scenario: You’re running through an upgrade of SAP on Oracle, either applying an EHP or a support package stack update.  You’re using the “Standard” downtime-minimized approach, you’ve got to the SUM stage MAIN_SHDCRE/SUBMOD_SHDDBCLONE/DBCLONE, and it has just bailed out!

During the DBCLONE step, a number of background jobs are created that copy certain tables, program sources, etc., from the current SAP database schema to the shadow instance schema (SAPSR3SHD on Oracle).
The copies of the tables, sources, etc., are placed into the new tablespace that you were asked to create in the earlier steps.

During this copy process, the database will be generating a lot of redo information (it is performing a lot of INSERTs).  This means that it will also be generating a lot of archive logs.  Most systems are in archive log mode by default, as this is the safest way of upgrading a production system.

The DBCLONE step can take a long time depending on a few factors:

  • Size of your SAP system master data.  Since the transactional data is not copied, most SAP systems will be roughly the same sort of size for the master data tables and sources etc. (e.g. tables D010TAB, D010INC, REPOSRC).  Don’t forget, once the tables are cloned, DBCLONE also needs to build the required indexes on them, then gather statistics on the tables and indexes.
  • Quality of your database.  If your Oracle database is highly fragmented, the indexes are not in good shape, or there is a lack of memory allocated to the database, the cloning will run noticeably slower.

  • Redo disk write times.  The faster the write times for redo, the quicker this is going to go.
  • Number of parallelised jobs.  The SUM tool recommends 3 jobs in parallel.  Prior to this SUM step, you would have been asked to configure the number of parallel jobs (and also your number of background work processes).  If you configure fewer than 3, then it will take longer.  I would personally recommend having n+3, where n = your normal production number of background work processes.  This means you will not be hampering day-to-day usage by blocking background jobs.  The 3 jobs are created with high priority (Class A) so they get all the background processing they need.
  • Whether you elected to pre-size the new shadow tablespace data files.
    Setting them to autoextend is fine, but by default, the SAP brspace commands create the files with only 200MB.  By setting these files to be as big as they need to be (no autoextend) then you will save time.
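A minimal sketch of the pre-sizing idea, using standard Oracle DDL (the data file path and size here are entirely hypothetical; use the file names brspace actually created for your shadow tablespace):

```sql
-- Hypothetical file path and size: grow the shadow tablespace data file
-- up front so DBCLONE does not stall on repeated autoextend operations.
ALTER DATABASE DATAFILE '/oracle/SID/sapdata4/sr3shd_1/sr3shd.data1' RESIZE 20G;

-- With the file pre-sized, autoextend is no longer needed.
ALTER DATABASE DATAFILE '/oracle/SID/sapdata4/sr3shd_1/sr3shd.data1' AUTOEXTEND OFF;
```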

During the DBCLONE step, the SUM tool monitors progress via an RFC connection into the SAP system.  It checks to see when the DBCLONE background jobs complete (and that they complete successfully).
If you have limited space available in your archive log area and it fills up, then the RFC connection from SUM stops working (the classic “archiver stuck” issue).
This causes SUM to report that the step has aborted, even though DBCLONE is actually still running.
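If your archive destination is the flash recovery area, a quick way to see how close you are to filling it is the standard v$recovery_file_dest view (if you archive to a plain file system instead, check the log_archive_dest directory at OS level):

```sql
-- How full is the flash recovery area? (values in MB)
SELECT name,
       ROUND(space_limit / 1024 / 1024) AS limit_mb,
       ROUND(space_used  / 1024 / 1024) AS used_mb
  FROM v$recovery_file_dest;
```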

You will still see DBCLONE running in the background when you resolve the archiver stuck issue.
At this point, you could choose to manually cancel the jobs via “Cancel without Core” in SM50 for the busy background work processes where DBCLONE is running.  However, the jobs are still running perfectly well: simply wait until they have finished, then restart SUM, and it will continue from where it left off.

During the cloning process, the tables to be cloned are assigned to background jobs using the CLONSTATE column in the SAPSR3.PUTTB_SHD table; as each table is cloned, its CLONSTATE is set to ‘X’, which is how SUM knows where it got to.

You can monitor the cloning progress by using the following SQL:

SQL> select count(*) num, clonstate from SAPSR3.PUTTB_SHD group by clonstate;

You will notice that the CLONSTATE column will contain:
‘X’  – Tables cloned.
‘1’  – Tables on the work list of background job DBCLONE1.
‘2’  – Tables on the work list of background job DBCLONE2.
‘n’  – Tables on the work list of background job DBCLONEn.
‘  ‘  – Tables not being cloned.

As tables are cloned, the CLONSTATE changes from ‘n’ to ‘X’.
It seems that larger tables are cloned first.

The method used to clone the tables is: “INSERT into <table> SELECT (<fields>) FROM <source>;”.
Then a “CREATE INDEX” is performed.
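As a sketch of what each DBCLONE job effectively runs per table (using REPOSRC, one of the tables mentioned earlier; the index name and columns are illustrative, not taken from an actual trace):

```sql
-- Copy the table contents into the shadow schema.
INSERT INTO SAPSR3SHD.REPOSRC
  SELECT * FROM SAPSR3.REPOSRC;

-- Then re-create the required indexes on the shadow copy
-- (this is the part that needs PSAPTEMP sort space).
CREATE UNIQUE INDEX SAPSR3SHD."REPOSRC~0"
  ON SAPSR3SHD.REPOSRC (PROGNAME, R3STATE);
```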

It’s also worth noting that you may need enough PSAPTEMP space to account for the index creation.
In a Solution Manager 7.1 SPS10 system, there are 13109 tables to be cloned.

As a final parting comment, if you have Oracle 11gR2, you should consider the database compression options available to you.  Reducing the I/O requirements will massively help speed up the process.
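For example, if the shadow tablespace is created with OLTP compression as its default, the cloned tables are compressed as the INSERTs run (a sketch only; the tablespace name and data file are hypothetical, and COMPRESS FOR OLTP requires the separately licensed Advanced Compression option):

```sql
-- Hypothetical: create the shadow tablespace with OLTP compression as
-- the default, so every table cloned into it is compressed on insert.
CREATE TABLESPACE PSAPSR3SHD
  DATAFILE '/oracle/SID/sapdata5/sr3shd_1/sr3shd.data1' SIZE 20G
  DEFAULT COMPRESS FOR OLTP;
```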

R/3 to ECC – Benefits of not upgrading anything?

This is a question that will probably be asked by many IT people over the coming months, as SAP draws support for the SAP R/3 4.7 system to a close (see the SAP Product Availability Matrix).
Whilst upgrading to ECC will mean a SAP-supported system, what other options are out there?
Let’s look at just a few, so that you may have some ideas that you maybe hadn’t considered.

– Stay where you are and pay for extended support.
This is an interesting option.  Let’s face it, if you use SAP as a basic product, e.g. for accounting or sales transactions, then exactly what else will you need from a product in the future?  Why not save the upgrade costs and simply pay for extended support, and keep paying each time it expires?
Whilst the initial support costs may be known, the future costs are not, and SAP could hike these.  Also, there may be a fairly straight upgrade path to a newer product at the moment, but in the future you may have to follow that path, plus the additional paths and intricacies of upgrades to later versions, in order to reach something more modern (Unicode, anyone?).
Things like OS support may bite you eventually, and those of you on HP-UX Itanium are already seeing what happens to non-x86 based operating systems when companies like Oracle decide to stop supporting you.  Your future upgrade path could involve skill-sets no longer available/costly, or even more lengthy processes because you’re moving from older hardware.
On the positive side, the future could hold hope in the form of faster systems, smarter tools and cheaper processes that could make future upgrades/migrations faster and cheaper than doing it now.  A big database in the future may not be so big in relational terms.

– Stay where you are and don’t pay for extended support.
You will lose all access to standard SAP support sites and tools, plus you will not benefit from any DB updates or DB vendor support.
This could be very problematic if your business needs to apply SAP legal patches for changes to HR related functions within the SAP modules.
I’m not entirely sure if you will still be able to request SSCR keys for modifying SAP objects, or even be able to develop your own ABAP code in your own system.  Maybe someone can let me know on that one.
Oracle Database support contracts contain wording stating that you may have to back-pay for support if you decide to re-enable it at a later date.  I’m not sure if SAP would be the same.
You would potentially suffer during external audits if additional security related legislation comes along (SOX for example) and you are not able to apply the updates/functionality to provide that security.

There are some common issues with both options above.  These mainly centre around the IT resources that are supporting those systems.  Nobody likes to stay still in IT.  Not unless they are happy in the knowledge that retirement is looming and they just need to keep rolling in the meantime.
The constant need to keep abreast of the latest technical enhancements/changes is one of the most difficult aspects of the IT profession.
However, with the advent of off-shore IT resources, it should be possible to secure long-term support resources even if you can’t secure them on-shore.  Having said that, I don’t yet know of any off-shore company that has a high retention level.  Maybe this is coming…

In summary, there are some cost advantages in the short term for not upgrading an SAP system.  But unfortunately those costs may hit you in the end in some form or another.

What’s in a SAP EHP Anyway…

For some time I’ve often wondered how you can determine what functionality exists in an Enhancement Package (EHP).
I’ve supported SAP ERP 6.00 systems for a while, but I’ve never been involved in the business functionality decisions (I’m a technical guy and this is a Solution Architect’s job).
So it made me wonder: if you were implementing a brand new SAP ERP 6.00 system, how would you determine what level of EHP was right for you?
Well the answer is simple, once I’d done a little reading.

All SAP EHPs are cumulative, i.e. SAP ERP 6.00 EHP 5 includes all the new features included in EHPs 1 to 4, plus some shiny new ones.
So you see, you just need to implement the latest and greatest to get “everything”, then switch it on if you want it.  You can use the new SAP BFP (Business Function Prediction) to see what’s in each EHP.

So, why do SAP continue to dish out the older EHPs, or even the base ERP 6.0 release?
Well that is down to software release management and the way that SAP develop the code.
Each EHP is effectively a branch off the main code-set.  Each of these branches will need patches, so SAP can’t just kill off EHP 4 when EHP 5 is out.  They must maintain the support.
For this reason, you will see why the Support Package Stack (SP-Stack) schedule always has the latest sp-stack for the base release out before the first EHP’s, which in turn comes before the second’s, and so on.

Like this:
ERP 6.0
  -> ERP 6.0 EHP 1
    -> ERP 6.0 EHP 2
      -> ERP 6.0 EHP 3 …

The release schedule simply follows the way the support packages are tested and released up through the code-set branches.

The time difference between the base release sp-stack and an EHP sp-stack could be a good indication of the amount of change between the base release and an EHP.

So is there a technical reason why you wouldn’t implement the latest and greatest EHP (as recommended by SAP)?

My answer is YES.  I don’t buy a car with everything included, because I know that I can get a SATNAV cheaper elsewhere.  Now I guess this depends on your software strategy.
Plus, the more tech you have, the more that could go wrong during application of support patches and upgrades.  Lastly, even if you’re not using the EHP functionality, you still have to apply the code during the patch process, increasing upgrade time.

I maintain, if you don’t need it or want it, why apply it?
This is obviously my view, and the counter-argument would be that you have the option of “just in time” enabling of new business functionality.  In my experience, the decisions alone take longer than the actual implementation time, “just in time” or not.

DBUA “You do not have OS authentication” – Fat Fingers

The other day, I was upgrading an Oracle database using DBUA (it has its benefits and provides a nice consistent approach when under pressure, provided it’s used with care).

I constantly got the request to enter the password for a DB user with SYSDBA privileges.
I didn’t have the SYS or SYSTEM user passwords (really!), but the OS user I was using definitely was a member of the group that is compiled into the config.o library (see My Oracle Support note “SYSDBA and SYSOPER Privileges in Oracle [ID 50507.1]”):

For reference, you check on UNIX by:

cd $ORACLE_HOME/rdbms/lib
cat config.[cs]

Check the values of SS_DBA_GRP and SS_OPER_GRP against the UNIX groups your user is a member of (use the “groups” command).

The solution was simple: the Oracle home had been manually added to the /etc/oratab file and had been misspelled.  If only it didn’t take 45 minutes to find it!
Correcting /etc/oratab and restarting DBUA fixed the problem.

I guess DBUA was just plain lying and it didn’t have the heart to tell me that I was about to upgrade a database when it couldn’t find the Oracle home.  Shame on you.