During an SAP BW 7.40 heterogeneous migration (using R3load) from ASE 16.0 to ASE 16.0, it’s possible to use an ASE parallel index creation feature to improve the speed of the index creation process (during import) on the target system.
In this post, I show the parameters required to enable this parallel creation feature, avoiding the need to manually edit the TPL files to add it.
I am not going to go into why you would use the heterogeneous copy method for an ASE-to-ASE migration, except to say that it offers certain benefits under certain migration conditions.
The steps that are impacted during the system copy are:
- the creation of the BW-specific table SQL files (SMIGR_CREATE_DDL) in the source system.
- the import of the BW-specific tables and the creation of their indexes in the target system.
- the "update statistics" job execution selection within SWPM on the target system.

Best Practice
You should note that this post and process were put together with reference to the SAP on ASE Best Practice Guide, titled "SAP Applications on SAP Adaptive Server Enterprise – Best Practices for Migration and Runtime", dated 26-01-2018, which can be found attached to SAP note 1680803.
In the Best Practice document, it specifically says:
"see SAP Note 1952189: Parallel Index Creation with SMIGR_CREATE_DDL."
"Changes to the template file DDLSYB_LRG.TPL will not affect the migration of special SAP BW tables."
When we follow SAP note 1952189, we see that simply selecting the correct option during execution of report SMIGR_CREATE_DDL (in the source system) should allow parallel index creation.
It does not! That is the reason for this post. I have a theory as to why this selection does not work: it was never tested by SAP for ASE-to-ASE migrations. It's just my theory.

Parallel Index Creation
During the subsequent import into the target ASE 16.0 database, the generated DDL (from SMIGR_CREATE_DDL) is used to create the indexes. In SAP ASE it is possible to use an additional clause on the "CREATE INDEX" statement ("consumers = n") to use more than one worker process for the sort operation required by the index build. This is not so much parallel creation as parallel sorting during creation, but it still speeds up index creation.
The actual parallelism of the index creation is controlled, as usual, by the number of R3load processes you configure in SWPM (MIGMON).
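As an illustration, a generated "CREATE INDEX" statement using the "consumers" clause might look like the sketch below. The table and index names are hypothetical, and the exact form SAP emits may differ; check the SQL files generated by SMIGR_CREATE_DDL for the real statements.

```sql
-- Illustrative sketch only: object names are hypothetical.
-- "consumers = 4" allows ASE to use up to 4 worker processes
-- for the sort phase of the index build.
create index "/BIC/FZSALES~010"
    on "/BIC/FZSALES" ("KEY_ZSALES1")
    with consumers = 4
```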
However, we will stick with the terminology used by the best practice document.

Hash Statistics
During the migration process, we really want to get through the downtime phase as fast as possible so that post-processing, which can be carried out by more than one person (or robot), can begin.
For this reason, we want to avoid the "update ASE statistics" step of SWPM. In large BW systems this step can take hours, and it prevents the completion of SWPM while it is running.
As well as the "consumers" clause on the index creation statement, there is also a "statistics" clause that forces the creation of index statistics at the point the index is created.
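Going by the ASE 16.0 "create index" syntax, the two clauses can be combined in a single statement. The sketch below uses hypothetical object names, and the exact clause text that SMIGR_CREATE_DDL generates should be verified in the SQL files:

```sql
-- Illustrative sketch: object names are hypothetical.
-- "statistics hashing" creates hash-based column statistics during
-- the index build, so a separate "update statistics" run on this
-- index is not required afterwards.
create index "/BIC/EZSALES~040"
    on "/BIC/EZSALES" ("KEY_ZSALES2")
    with consumers = 4, statistics hashing
```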
This feature allows us to skip the "update statistics" task completely when we combine it with the use of the DDLSYB_LRG.TPL template, which also includes this clause. It makes complete sense to create statistics at the point an index is created, because the chances are that some of the required index pages are still in memory.

Creating the DDL
BW-specific tables may be partitioned and have all sorts of additional features applied, which cannot easily be handled by the usual TABART-based DDL generation process (in ASE this uses the DDLSYB.TPL and DDLSYB_LRG.TPL template files).
NOTE: SAP note 1680803 contains an updated DDLSYB_LRG.TPL with "fixes" described in the Best Practice document, which is also attached to that note.
NOTE: Follow SAP note 1836598 for instructions on how to use DDLSYB_LRG.TPL.
Because of those special BW uses, we are required to execute the SMIGR_CREATE_DDL report in the source system prior to starting the export process. The report simply creates the necessary SQL DDL to re-create these special BW tables and their indexes.
It is at this point that we can influence the way that these BW tables and their indexes are created.
Specifically, the number of consumers and the creation of hash statistics.

The Parameter
When running the SMIGR_CREATE_DDL report with the correct selection options for target database "Sybase" with "parallel index creation", you will notice that the generated SQL files contain neither the "consumers" nor the "statistics" clause.
After tracing the ABAP code through execution, it was found that the code was checking for 2 specific SAP profile parameters.
To enable the parallel index creation feature, you need to set the profile parameter "dbs/syb/pll_idx_creation". The additional parameter related to consumers did not seem to work on Kernel 7.40, but we set it anyway:
dbs/syb/pll_idx_creation = X
dbs/syb/consumers = 4
Once set in RZ10 on the instance profile, the SAP system needs a restart for the parameters to take effect.
Because we found the consumers parameter did not get picked up at all, we just manually edited the SQL files to adjust the consumer number from the default of 3, to our chosen number (see best practice PDF document for comments on how to decide this number).
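Concretely, the manual edit amounted to changing the "with" clause in each generated SQL file along these lines (values illustrative; we chose 4 as our consumer count, decided per the best practice guidance):

```sql
-- As generated by SMIGR_CREATE_DDL (illustrative):
with consumers = 3
-- Manually edited to our chosen value:
with consumers = 4
```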
Just be aware, we are not certain if these parameters influence the standard BW process chain steps for index creation after load.
However, one thing I would say is that the best practice document also mentions that BW process chains should not be dropping indexes when running BW on ASE. This probably goes against the process BW developers are used to performing with most other databases. For the above-mentioned reason, it is best to set these parameters on only one application server (the one where you run SMIGR_CREATE_DDL) and to revert them once the report has finished.

Summary
We have discussed how ASE-to-ASE migrations using R3load with NW 7.40 do not seem to call the ABAP code that generates the required "consumers" clause in the SQL output of the SMIGR_CREATE_DDL report.
I showed how to force this manually using the Kernel parameters, which also enabled the hashed statistics capability.
I explained how the two parameters combined can affect the subsequent R3load import timings, significantly reducing downtime.