This blog contains experience gained over the years of implementing (and de-implementing) large scale IT applications/software.

Heterogeneous Migration – Kernel Params for ASE 16 Parallel Index Creation

During an SAP BW 7.40 heterogeneous migration (using R3load) from ASE 16.0 to ASE 16.0, it’s possible to use an ASE parallel index creation feature to improve the speed of the index creation process (during import) on the target system.

In this post, I show the required parameter(s) to enable this parallel creation feature, instead of having to manually edit the TPL files to add it.
I am not going to go into the “why use the heterogeneous copy method for migration of an ASE to ASE database” question, except to say that certain benefits can be had in certain migration conditions.

The steps that are impacted during the system copy are:

  • the creation of the BW specific table SQL files (SMIGR_CREATE_DDL) in the source system.
  • the import of the BW specific tables and the creation of their indexes in the target system.
  • the “update statistics” job execution selection within SWPM on the target system.

Best Practice

You should note that this post and process were put together with reference to the SAP on ASE Best Practice Guide, titled “SAP Applications on SAP Adaptive Server Enterprise – Best Practices for Migration and Runtime”, dated 26-01-2018, which is attached to SAP note 1680803.

In the Best Practice document, it specifically says:

  • see SAP Note 1952189: Parallel Index Creation with SMIGR_CREATE_DDL.
  • Changes to the template file DDLSYB_LRG.TPL will not affect the migration of special SAP BW tables

When we follow the mentioned SAP note 1952189, we see that simply selecting the correct option during execution of report SMIGR_CREATE_DDL (in the source system) should allow parallel index creation.
It does not! That is the reason for this post.
I have a theory as to why this selection does not work: it was simply never tested by SAP for ASE to ASE migrations. But that’s just my theory.

Parallel Index Creation

During the subsequent import into the target ASE 16.0 database, the generated DDL (from SMIGR_CREATE_DDL) is used to create the indexes. In SAP ASE it is possible to use an additional clause on the “CREATE INDEX” statement (“consumers = n“) to use more than 1 worker process when performing the sort operation required for the index build.
Strictly speaking, this is not parallel creation but parallel sorting during creation; it still speeds up the index build.
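As a sketch (the table and index names are invented for illustration; verify the exact clause syntax against the CREATE INDEX reference for your ASE 16.0 version):

```sql
-- Hypothetical example: use 4 worker processes for the sort phase
-- of the index build (names are placeholders, not migration output).
create index ZSALES_IDX1
    on ZSALESDATA (doc_date, doc_number)
    with consumers = 4
go
```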

The actual parallelism of the index creation is controlled, as usual, by the number of R3load processes you configure in SWPM (MIGMON).
However, we will stick with the terminology offered by the best practice document.

Hash Statistics

During the migration process, we really want to get through the downtime phase as fast as possible so that post-processing, which can be shared across more than one person (or robot), can begin.
For this reason, we want to avoid the “update ASE statistics” step of SWPM. In large BW systems this step can take hours, and it prevents SWPM from completing while it is running.

Instead, as well as the “consumers” clause on the index creation statement, there is also a “statistics” clause, which forces index statistics to be created at the point of index creation.
This feature allows us to skip the update statistics step entirely when we combine it with the DDLSYB_LRG.TPL template, which also includes the clause.
It makes complete sense to update statistics immediately after an index has been created, because the chances are that some of the required index pages are still in memory.
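A sketch of the two clauses combined (again with invented names; check the exact “statistics hashing” keyword form against the ASE 16.0 CREATE INDEX documentation):

```sql
-- Hypothetical example: sort with 4 consumers and build hash-based
-- statistics during creation, avoiding a separate update statistics run.
create index ZSALES_IDX1
    on ZSALESDATA (doc_date, doc_number)
    with consumers = 4, statistics hashing
go
```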

Creating the DDL

BW specific tables may be partitioned and have all sorts of additional features applied, which cannot be easily handled by the usual TABART based DDL generation process (in ASE this uses the DDLSYB.TPL and DDLSYB_LRG.TPL template files).

NOTE: SAP note 1680803 contains an updated DDLSYB_LRG.TPL with “fixes” described in the Best Practice document, also attached to that note.
NOTE: Follow SAP note 1836598 for instructions on how to use DDLSYB_LRG.TPL.

Because of those special BW uses, we are required to execute the SMIGR_CREATE_DDL report in the source system prior to starting the export process.
The report simply creates the necessary SQL DDL to re-create these special BW tables and their indexes.

It is at this point that we can influence the way that these BW tables and their indexes are created.
Specifically, the number of consumers and the creation of hash statistics.

The Parameter

While running the SMIGR_CREATE_DDL report with the correct selection options for the target database “Sybase” with “parallel index creation”, you will notice that the SQL files generated do not contain either the “consumers” or the “statistics” clauses.

After tracing the ABAP code through execution, it was found that the code was checking for 2 specific SAP profile parameters.

To enable the parallel index creation feature, you need to set the parameter “dbs/syb/pll_idx_creation”. The additional parameter related to consumers did not seem to work at Kernel 7.40, but we set it anyway:

dbs/syb/pll_idx_creation = X
dbs/syb/consumers = 4

Once set in RZ10 on the instance profile, the SAP instance needs a restart for the parameters to take effect.
Because we found the consumers parameter was not picked up at all, we manually edited the generated SQL files to adjust the consumer number from the default of 3 to our chosen number (see the best practice PDF document for comments on how to decide this number).
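For illustration only (the BW table name and key columns here are placeholders, not real generated output), an edited statement in the generated SQL files looked roughly like this:

```sql
-- Illustrative sketch of DDL generated by SMIGR_CREATE_DDL with
-- dbs/syb/pll_idx_creation = X; the default "consumers = 3" was
-- edited by hand to the chosen value.
create index "/BIC/E100071~P"
    on "/BIC/E100071" ("KEY_100071P", "KEY_100071T")
    with consumers = 8, statistics hashing
go
```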

Just be aware, we are not certain if these parameters influence the standard BW process chain steps for index creation after load.
However, one thing I would say is that the best practice document also mentions that BW process chains should not drop indexes when running BW on ASE. This probably goes against the process BW developers are used to with most other databases.
For the above reason, it is best to set these parameters on only one application server (the one where you run SMIGR_CREATE_DDL) and to revert them once the report has finished.


Summary

We have discussed how ASE to ASE migrations using R3load with NW 7.40 do not seem to call the ABAP code that generates the required “consumers” clause in the SQL output of the SMIGR_CREATE_DDL report.

I showed how to manually force this using the Kernel parameters, which also enable the hashed statistics capability.

I explained how the two parameters combined can affect the subsequent R3load import timings, significantly reducing downtime.

SAP Index is in the Customer Namespace

Scenario: You have your own SAP customer namespace (e.g. /MYSPACE/) and you are trying to add a new custom index Z01 to one of your tables in SE11.

When you try to “check” the index or activate it in SE11, you keep getting the error “Index xxxx is in the customer namespace”.

The cause is that the index name is only the three-character identifier at the end; the table name (and its namespace) is not part of the index name.

As an example:

/MYSPACE/MYTABLE~Z01 means the index name is Z01.

Because your table is in your customer namespace /MYSPACE/, you cannot use the Y or Z development namespaces in your index name.

Therefore, remove the Z or Y from the index name.
You should be able to create an index /MYSPACE/MYTABLE~C01 instead.

SAP Performance FBL3n – Oracle Execution Plan Detail

Here’s a good reason to ensure that you always output (and read) the predicate information of the execution plan at the Oracle level and not just at the SAP level.

Inside the ST01 SQL Trace screen (in R/3 4.7 anyway), it doesn’t show how the predicates are accessed/filtered.

But in the Oracle view of the execution plan (I used autotrace, or DBMS_XPLAN) you can clearly see the “Predicate Information” section:

| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
| 0 | SELECT STATEMENT | | 1 | 281 | 10 (0)|
| 1 | TABLE ACCESS BY INDEX ROWID| BSAS | 1 | 281 | 10 (0)|
|* 2 | INDEX RANGE SCAN | BSAS~Z2 | 1 | | 9 (0)|

Predicate Information (identified by operation id):
2 - access("MANDT"=:A0 AND "BUKRS"=:A1 AND "HKONT"=:A2 AND

This allowed me to see that the index BSAS~Z2 was not being accessed correctly due to a “feature” in the Oracle 10g CBO (see note: 176754, question #5) which SAP has documented as:

“If you specify indexed columns with LIKE, >, <, >=, <=, or BETWEEN, the system can no longer use the columns that follow to restrict the index range scan.”

After creating a new index BSAS~Z3 which contains exactly the same columns, but changing the order of the last two (BUDAT and XBLNR), I was able to get the execution plan to look like this:

| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
| 0 | SELECT STATEMENT | | 1 | 281 | 1 (0)|
| 1 | TABLE ACCESS BY INDEX ROWID| BSAS | 1 | 281 | 1 (0)|
|* 2 | INDEX RANGE SCAN | BSAS~Z3 | 1 | | 1 (0)|

Predicate Information (identified by operation id):
2 - access("MANDT"=:A0 AND "BUKRS"=:A1 AND "HKONT"=:A2 AND

What’s the difference I hear you ask. Just look at the COST column.
In a table with 100 million records, the difference is about 20 seconds.
Filter predicates are not as efficient as access predicates (just ask Tom).

This example was fairly specific to transaction FBL3n performance, since I noted that whenever this transaction was used to query open items, it always adds a predicate of “BUDAT < 99991231” before any of the dynamic selection predicates. Bad SAP!
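If you want to reproduce the “Predicate Information” output yourself, one way (the literal values here are illustrative, and a PLAN_TABLE must exist, e.g. created via utlxplan.sql) is:

```sql
-- Sketch: explain a statement and display the full plan, including
-- the access/filter predicate section, via DBMS_XPLAN.
EXPLAIN PLAN FOR
  SELECT * FROM bsas
   WHERE mandt = '100' AND bukrs = '1000' AND hkont = '0000113100';

SELECT * FROM table(DBMS_XPLAN.DISPLAY);
```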

Proving new Oracle Index in SAP Without a Transport

Let’s suppose you have created a new Oracle level index on a SAP table using SE11.
You’ve created it in DEV and proven that it changes the EXPLAIN PLAN using SQL Trace or Oracle level tools.

Unfortunately you don’t have a significant amount of data in your DEV instance, but you do have data in your UAT or integration test system.
You could release the workbench request you created when adding the new index in SE11, but this means a flurry of potential additional transports to remove it again if it doesn’t work.
You just want a proof of concept that can be removed again.

Since it’s entirely possible to create an index at the Oracle level below SAP, you just need to ensure that you create the index exactly as SAP would.
You can choose a couple of methods to ensure this, but we will discuss the SAP-biased method.

Using SE11, find the table on which the new index exists in DEV.

NOTE: We’re showing R/3 Enterprise, but it should be similar for Netweaver based systems.

Click the “Indexes…” button to list the indexes:

Double click the new index you created earlier.
Now on the menu, select “Utilities -> Activation Log”:

Click the magnifying glass icon next to the “Technical log for mass activation” option on the main screen.

Expand all the possible expansion icons:

Look for the text “CREATE” to find the create statement:

Copy the text and paste it into SQL*Plus on any other SAP system’s Oracle database to create the index.
You’re done.

Just remember that this index does not exist in the SAP data dictionary, so you must remove it once you have proven the EXPLAIN PLAN is working for the larger amount of data.
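For illustration only, the statement copied from the activation log will look something like this (the index name, columns, tablespace and storage values here are placeholders, not output from a real system):

```sql
-- Hypothetical example of a SAP-generated CREATE INDEX statement.
CREATE INDEX "BSAS~Z3" ON "BSAS"
    ("MANDT", "BUKRS", "HKONT", "XBLNR", "BUDAT")
    TABLESPACE PSAPBTABI
    STORAGE (INITIAL 16384 NEXT 16384);

-- Clean-up once the EXPLAIN PLAN has been proven:
DROP INDEX "BSAS~Z3";
```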

Oracle Index Rows Per Leaf Block

The query below can be used to obtain the number of rows per index leaf block.
You will need to know the index name, the table that the index references, the PK column in the index, and the object ID of the index:

SELECT object_id
  FROM user_objects
 WHERE object_name = 'MGMT_VERSIONS_PK';

SELECT rows_per_block, count(*) blocks
  FROM (
        SELECT /*+ cursor_sharing_exact */
               sys_op_lbid(349440,              -- << INDEX OBJECT ID HERE
                           'L', t.rowid) block_id,
               count(*) rows_per_block
          FROM MGMT_VERSIONS t                  -- << TABLE NAME HERE
         WHERE pk_column IS NOT NULL            -- << PK COLUMN HERE
         GROUP BY sys_op_lbid(349440,           -- << INDEX OBJECT ID HERE
                              'L', t.rowid)
       )
 GROUP BY rows_per_block;

This is useful for determining sparse index blocks.

It could be combined with the script below, which helps to detect full table scans and index fast full scans:



select owner, object_name, value scans
  from v$segment_statistics
 where statistic_name = 'segment scans'
   and value != 0
 order by owner, value;

(When I get time, I’ll combine it and post it here).

Running the first SQL statement and then checking whether the table has “high” segment scans would give a good indication that the index is used frequently for large multi-block operations, is very sparse (block-wise), and is potentially a candidate for a rebuild.