This blog contains experience gained over the years of implementing (and de-implementing) large scale IT applications/software.

Dropping Empty SAP BW Table Partitions in SAP ASE

In a SAP BW 7.4 on SAP ASE database, table partitions are used as a method of storing data, primarily for query performance but also for object (table) management.

In this post I show a simple way to identify tables with many empty partitions, so that you can more quickly drop those empty partitions.
Fewer empty partitions reduce the downtime of database migrations, and can also increase the performance of business-as-usual queries.

Partitioning in SAP ASE

To use SAP BW on SAP ASE, the “partitioning” license needs to be bought/included in the database license.
The license is automatically included in the runtime license for ASE for SAP Business Suite.

SAP note 2187579 “SYB: Pros and cons of physical partitioning of fact tables” lists all of the benefits and options of partitioning for ASE 15.7 and ASE 16.0.

During normal business usage, the database can use less effort to obtain data from a partitioned table when the partition key column is used as a predicate in the query.
This is because the database knows exactly where the data is held: in its own table partition, which is therefore more manageable.

A good analogy is to imagine that you have two holding pens, one with all cats, and one with all dogs. The partition key is “animal type” and each holding pen is a partition.
Both holding pens together make the table.
If we wanted to get all data from the table where the occupant was a cat, we simply go to the pen with all the cats and read the data.
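In ASE terms, the two holding pens could be modelled as a list-partitioned table. The sketch below is illustrative only (the table, column, and partition names are my own invention), but it follows the ASE “partition by list” syntax:

```sql
-- Illustrative list-partitioned table: "animal_type" is the partition key,
-- and each partition is one holding pen.
create table pets (
    pet_id      int         not null,
    animal_type varchar(10) not null,
    pet_name    varchar(30) not null
)
partition by list (animal_type)
( pen_cats values ('cat'),
  pen_dogs values ('dog') )

-- With the partition key as a predicate, ASE only needs to read one pen:
select pet_name from pets where animal_type = 'cat'
```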

Now imagine that we had 3 partitions that make up our table, but one of those partitions was actually empty.
In some cases, depending on the database settings, certain types of data queries will still scan for data in that empty partition.
These additional scans do not take a huge amount of time individually, but they cause extra effort nevertheless.

If we upscale our scenario to a large multi-terabyte SAP BW system, we could have a BW FACT table with thousands of partitions.
Imagine if thousands of those partitions were empty and we were accessing all records of the table (a full table scan); this would give a noticeable delay before the query could return the results.
For this exact reason, SAP provides a standard BW report.

The Standard SAP BW Report

The standard ABAP report SAP_DROP_EMPTY_FPARTITIONS is specifically for the FACT tables of a BW system, and the ASE Best Practices document recommends running this report before a BW system migration/copy is performed.

By reducing the empty partitions, we also reduce the system export time. Easy winner.

The problem with the SAP standard report is that you need to go through each individual BW InfoCube and execute the report in “show” mode.
This is really, really painfully slow.

A Better Way

Instead of the standard report, I decided to go straight to the DB layer and use SQL.
The example below is for SAP ASE 16.0 (should work on 15.7 also):

select distinct 
       convert(varchar(20), as tabname, 
       t_spc.numparts - 1 as num_parts, 
       t_spn.numparts - 1 as num_emptyparts 
from sysobjects so, 
     (select, 
             count(sp1.partitionid) as numparts 
      from syspartitions sp1 
      where sp1.indid = 0 
      group by 
     ) as t_spc,
     (select, 
             count(sp2.partitionid) as numparts 
      from syspartitions sp2, 
           systabstats sts 
      where sp2.indid = 0 
        and sp2.partitionid = sts.partitionid
        and sts.indid = 0 
        and sts.rowcnt = 0 
      group by 
     ) as t_spn 
where like '/BIC/F%' 
  and = 
  and = 
  and so.loginame = 'SAPSR3' 
  and t_spn.numparts > 1 
order by t_spn.numparts asc,

It’s fairly crude, because it restricts the tables to those owned by SAPSR3 (change this if your schema/owner is different) and it identifies FACT tables by their name (“/BIC/F%”), which may not be conclusive.

Below is an example output of the SQL versus the report SAP_DROP_EMPTY_FPARTITIONS in “show” mode:

You can see we are only 1 count out (I’ve corrected in the SQL now) but essentially we get a full list of the tables on which we can have the most impact!

Let’s look at a sample SELECT statement against that table:

We used the following SQL:

set statistics time on 
select count(*) from [SAPSR3./BIC/FZPSC0201]

Execution time on that statement was 25.9 seconds (elapsed time of 25931 ms).
We spent 2 seconds parsing and compiling the SQL statement (lots of partitions probably doesn’t help this either).
Since the CPU time is only 7 seconds, we have to assume that I/O was the reason for the delay while ASE scanned over the partitions.

Dropping The Partitions

Let’s go ahead and actually drop those empty partitions, using the same standard ABAP report SAP_DROP_EMPTY_FPARTITIONS (this time not in “show” mode).

NOTE: See SAP note 1974523 “SYB: Adaption of SAP_DROP_EMPTY_FPARTITIONS for SAP ASE” for more details on how to use the report.

We need to run this in the background, because dropping over 1,000 partitions will take a while.

Once dropped, we can re-run our select statement:

Total elapsed time is now down to just 6 seconds.
Admittedly, some of the time saving could be due to the data cache and plan cache already being populated for this table, so I ran the ASE stored procedure sp_unbindcache, which seemed to have done something.
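For reference, the unbind call was along these lines. The database name “SID” is a placeholder for my system’s database; note that unbinding an object also flushes its pages from cache, which is the side effect I was after:

```sql
-- Remove the cache binding for the table; as a side effect, the table's
-- pages are flushed from the data cache.
-- Syntax: sp_unbindcache <dbname>, <owner.tablename>
sp_unbindcache 'SID', 'SAPSR3./BIC/FZPSC0201'
```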
Then I re-ran the query:

Being unsure if the unbind worked or not (I could not bounce the DB to be absolutely sure), I’m going to just accept that we have improved the result by dropping those empty partitions.

Heterogeneous Migration – Kernel Params for ASE 16 Parallel Index Creation

During an SAP BW 7.40 heterogeneous migration (using R3load) from ASE 16.0 to ASE 16.0, it’s possible to use an ASE parallel index creation feature to improve the speed of the index creation process (during import) on the target system.

In this post, I show the required parameter(s) to enable this parallel creation feature, instead of having to manually edit the TPL files to add it.
I am not going to go into why you would use the heterogeneous copy method for migration of an ASE to ASE database, except to say that certain benefits can be had in certain migration conditions.

The steps that are impacted during the system copy are:

  • the creation of the BW specific table SQL files (SMIGR_CREATE_DDL) in the source system.
  • the import of the BW specific tables and the creation of their indexes in the target system.
  • the “update statistics” job execution selection within SWPM on the target system.

Best Practice

You should note that this post and process were put together with reference to the SAP on ASE Best Practice guide titled “SAP Applications on SAP Adaptive Server Enterprise – Best Practices for Migration and Runtime”, dated 26-01-2018, which can be found attached to SAP note 1680803.

In the Best Practice document, it specifically says:

  • see SAP Note 1952189: Parallel Index Creation with SMIGR_CREATE_DDL.
  • Changes to the template file DDLSYB_LRG.TPL will not affect the migration of special SAP BW tables

When we follow the mentioned SAP note 1952189, we see that simply selecting the correct option during execution of report SMIGR_CREATE_DDL (in the source system) should enable parallel index creation.
It does not! That is the reason for this post.
I have a theory as to why this selection does not work: it was never tested by SAP for ASE to ASE migrations. It’s just my theory.

Parallel Index Creation

During the subsequent import into the target ASE 16.0 database, the generated DDL (from SMIGR_CREATE_DDL) is used to create the indexes. In SAP ASE it is possible to use an additional clause on the “CREATE INDEX” statement (“consumers = n“) to use more than 1 worker process when performing the sort operation required for the index build.
This is not so much parallel creation, but parallel sorting during creation. It still speeds up the index creation.
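As a sketch, an index creation statement with the clause would look something like this (the index and table names are illustrative, not taken from a real generated file):

```sql
-- "consumers = n" lets ASE use up to n worker processes for the sort
-- phase of the index build (assumes quoted_identifier is on, as in the
-- R3load-generated DDL).
create index "/BIC/FZPSC0201~010"
    on "/BIC/FZPSC0201" ("KEY_ZPSC02011")
    with consumers = 4
```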

The actual parallelism of the index creation is controlled, as usual, by the number of R3load processes you configure in SWPM (MIGMON).
However, we will stick with the terminology offered by the best practice document.

Hash Statistics

During the migration process, we really want to get through the downtime phase as fast as possible so that post-processing can begin, which can be done by more than 1 person/robot.
For this reason, we want to avoid the “update ASE statistics” step of SWPM. In large BW systems this step can take hours, and it prevents the completion of SWPM while it is running.

Instead, as well as the “consumers” clause on the index creation statement, there is also a “statistics” clause to force the creation of index statistics at the point of creation.
This feature allows us to completely skip the update statistics step when we also combine it with the use of the DDLSYB_LRG.TPL template, which includes this clause too.
It makes complete sense to create statistics right after an index has been built, because the chances are that some of the required index pages are still in memory.
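Combining the two clauses, the generated DDL should end up looking something like this sketch (names are illustrative; check the ASE 16.0 “create index” reference for the exact clause combinations supported):

```sql
-- Build hash-based index statistics at creation time and use 4 consumers
-- for the sort, removing the need for a separate "update statistics" run.
create index "/BIC/FZPSC0201~020"
    on "/BIC/FZPSC0201" ("SID_0CALMONTH")
    with statistics hashing, consumers = 4
```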

Creating the DDL

BW specific tables may be partitioned and have all sorts of additional features applied, which cannot be easily handled by the usual TABART based DDL generation process (in ASE this uses the DDLSYB.TPL and DDLSYB_LRG.TPL template files).

NOTE: SAP note 1680803 contains an updated DDLSYB_LRG.TPL with “fixes” described in the Best Practice document, also attached to that note.
NOTE: Follow SAP note 1836598 for instructions on how to use DDLSYB_LRG.TPL.

Because of those special BW uses, we are required to execute the SMIGR_CREATE_DDL report in the source system prior to starting the export process.
The report simply creates the necessary SQL DDL to re-create these special BW tables and their indexes.

It is at this point that we can influence the way that these BW tables and their indexes are created.
Specifically, the number of consumers and the creation of hash statistics.

The Parameter

While running the SMIGR_CREATE_DDL report with the correct selection options for target database “Sybase” with “parallel index creation”, you will notice that the generated SQL files contain neither the “consumers” nor the “statistics” clause.

After tracing the ABAP code through execution, I found that the code checks for 2 specific SAP profile parameters.

To enable the parallel index creation feature, you need to set the parameter “dbs/syb/pll_idx_creation”. The additional parameter related to consumers did not seem to work on Kernel 7.40, but we set it anyway:

dbs/syb/pll_idx_creation = X
dbs/syb/consumers = 4

Once set in RZ10 on the instance profile, the SAP system needs a restart for the parameters to take effect.
Because we found the consumers parameter did not get picked up at all, we manually edited the generated SQL files to adjust the consumer number from the default of 3 to our chosen number (see the Best Practice PDF document for comments on how to decide this number).

Just be aware that we are not certain whether these parameters influence the standard BW process chain steps for index creation after load.
However, one thing I would say is that the best practice document also mentions that BW process chains should not be dropping indexes when running BW on ASE. This probably goes against the process BW developers are used to with most other databases.
For the above reason, it is therefore best to set those parameters on only one app server (where you run SMIGR_CREATE_DDL) and to revert them once the report has finished.


We have discussed how ASE to ASE migrations using R3load with NW 7.40 do not seem to call the ABAP code that generates the required “consumers” clause in the SQL output of the SMIGR_CREATE_DDL ABAP report.

I showed how to manually force this using the Kernel parameters, which also included the hashed statistics capability.

I explained how the two parameters combined can affect the subsequent R3load import process timings, significantly reducing downtime.

BW Table Types & Descriptions

A useful list of table types in a SAP BW system, showing the table naming convention for each type and the SAP dictionary table in which tables of that type are listed:

  • SID table /BI0/S<characteristic>: RSDMSIDTAB
  • Text table /BI0/T<characteristic>: RSDMTXTTAB
  • Master data table (time-independent) /BI0/P<characteristic>: RSDMCHNTAB
  • Master data table (time-dependent) /BI0/Q<characteristic>: RSDMCHTTAB
  • View of master data tables /BI0/M<characteristic>: RSDMCHKTAB
  • Attribute SID table (time-independent) /BI0/X<characteristic>: RSDMASITAB
  • Attribute SID table (time-dependent) /BI0/Y<characteristic>: RSDMASTTAB
  • SID view /BI0/R<characteristic>: RSDMSIDVIEW
  • Hierarchy table /BI0/H<characteristic>: RSDMHIETAB
  • Hierarchy SID table /BI0/K<characteristic>: RSDMHSITAB
  • Hierarchy structure SID table /BI0/I<characteristic>: RSDMINCTAB
  • Hierarchy interval table /BI0/J<characteristic>: RSDMHINTAB
  • Hierarchy SID view /BI0/Z<characteristic>: RSDMHSIVIEW

Testing SAP BI Java Query Outside iView

To test a SAP BI Java query outside of an Enterprise Portal iView call, you can use the URL:

https://[EP Server]/irj/servlet/prt/portal/prtroot/pcd!3aportal_content!!2fplatform_add_ons!!2fiViews!
You should substitute “[EP Server]” with your SAP Enterprise Portal server hostname, and change “TEMP1” to the name of the BEx query you wish to run (use the same name as that used in the iView).

HowTo: Find Version of SAP BWA/BIA (Accelerator)

The SAP BWA (BW Accelerator) is based on the TRex search service and uses dedicated hardware to provide an additional in-memory index search capability for an existing SAP BW system.  NOTE: This is not to be confused with the SAP HANA DB, which is also in-memory, except that HANA is a more advanced and fully rounded product and not related to TRex.

Scenario: You may know there is a BWA connected to your BW system, but you don’t know where it is and what version it is.  You may need to consider this information in preparation for an upgrade.
The BWA details can be seen from the BW system via transaction TREXADMIN.
The “Summary” tab shows all the revision details and the make and model of the dedicated hardware:


Additional version information can be seen on the “Version” tab, you can also see any additional load balancing nodes in the TRex landscape:


Connectivity to TRex is performed either via RFC server on the TRex server (BWA 700) or via the ICM (BWA 720+).
The TRex Web Service can be accessed via “https://<trex server>:3xx05/TREX”.
The “Connectivity” tab allows you to perform connectivity tests for RFC and HTTP to the BIA.
For RFC based connections, once registered at the gateway, you can see the detail in transaction SMGW (select “Goto -> Logged on Clients”):


You can see the TRex connections based on the “TP Name” column:


For ICM based connections, you will see the HTTP requests going out via the ICM in transaction SMICM.
For SAP notes searches, the component for the BWA is BC-TRX-BIA.