This blog contains experience gained over the years of implementing (and de-implementing) large scale IT applications/software.

HowTo: Calculate SAP ASE HADR Replication Agent Buffer Pool Maximum Size

Scenario: In a SAP ASE HADR system, the primary ASE database log contains the following warning:
RepAgent(X): Autoscaling: Warning: The maximum configured buffer pool size:8 is less than the allocated buffer pool size:200. The maximum is set to 200.

You would like to know what a good value for the buffer pool maximum size is, and how to check whether enough memory is assigned to the Rep Agent.

If we consult the SAP documentation, we can see that the “buffer pool” number is actually the number of buffers, which is also known as the “number of packages”.

If your current parameter values are as follows:
“buffer pool maximum size” = 200
“stream buffer size” = 1572864 bytes (1.5MB)
“rep agent memory size” = 500MB

We can calculate the amount of memory needed for the Rep Agent like so:
max 200 packages x 1.5MB = 300MB of Rep Agent memory.

Now that we know how to work forwards in the calculation, we can work backwards, starting with the Rep Agent memory:
300MB of Rep Agent memory / 1.5MB = 200 packages (buffer pool max size).
Since the 300MB needed for 200 packages is comfortably within the configured 500MB of “rep agent memory size”, the Rep Agent has enough memory for the new maximum.
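The forwards and backwards calculations can be sketched in a few lines of Python, using the example parameter values above:

```python
# Example parameter values from the scenario above.
stream_buffer_size = 1572864           # "stream buffer size" in bytes (1.5MB)
rep_agent_memory = 500 * 1024 * 1024   # "rep agent memory size" (500MB)
buffer_pool_max = 200                  # "buffer pool maximum size" (packages)

# Forwards: memory needed for the configured maximum number of packages.
memory_needed = buffer_pool_max * stream_buffer_size
print("Memory needed: %dMB" % (memory_needed // (1024 * 1024)))   # 300MB

# Backwards: how many packages fit in the configured Rep Agent memory.
max_packages = rep_agent_memory // stream_buffer_size
print("Packages that fit: %d" % max_packages)                     # 333
```

With these values, the 200-package maximum needs 300MB, well inside the 500MB of Rep Agent memory, and up to 333 packages would fit if you ever needed to raise the maximum further.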

New SAP ASE Audit Logging Destination in 16.0.4

Let’s face it, auditing in SAP ASE 16 is difficult to configure: you have to write your own stored procedure and correctly size a rotating table setup, with multiple database segments for each of the multiple audit tables. Once configured, you then realise that to obtain the records you need to extract them from the database somehow, which raises further questions: who does this task, what privileges do they need, should they themselves be audited, and so on.

Good news! With the introduction of ASE 16.0 SP04, configuring auditing in the SAP ASE database just got dramatically easier and far more usable!

Introducing “audit trail type”

In previous versions of the SAP ASE database, database audit records were stored only in the sybsecurity database in a specific set of tables that you had to size and create and rotate yourself (or with a stored procedure).

Once the records were in the database, it was then up to you to define how and when those records would be analysed.
How complex the extraction process is depends on whether or not your SIEM tool supports direct ODBC/JDBC access to SAP ASE.

In SP04 a new parameter was introduced called “audit trail type” where you can now set the audit store to be “syslog”.

When setting the store to be “syslog”, the audit records are pushed out to the Linux syslogd daemon (or rsyslogd or syslog-ng) and written to the O/S defined location according to the configuration of syslogd.

Each audit record gets a tag/program name of “SAP_ASE_AUDIT”, which means you can define a custom syslogd log file to hold the records, and also then specify a custom rotation should you wish.
Your syslogd logs may already be pulled into your SIEM tools, in which case you will simply need to classify and store those records for analysis.
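If your syslog implementation is rsyslog, a dedicated log file for the audit records can be defined with a property-based filter on the program name. The rule below is a sketch only: the drop-in filename and destination path are assumptions for illustration.

```text
# /etc/rsyslog.d/30-sap-ase-audit.conf  (filename/path are assumptions)
# Route records tagged with program name "SAP_ASE_AUDIT" to a dedicated
# file, then stop processing so they do not also land in the default log.
:programname, isequal, "SAP_ASE_AUDIT"    /var/log/sap_ase_audit.log
& stop
```

You can then point a logrotate configuration at the same file to control retention.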

With the new parameter set to “syslog” and the audit records being stored as file(s) on the file system, you will need to ensure that the file system has adequate space and establish a comfortable file retention (logrotate) configuration to ensure that audit records do not cause the file system to fill (preventing persistence of additional audit records).

Of course, should you enjoy torture, you can always continue to use the database to store the audit records: simply set the new parameter “audit trail type” to “table”, and the audit records will be stored in the database just like in previous versions of ASE.
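For reference, the parameter is switched with sp_configure like any other ASE configuration option. This is a sketch; the value names come from the SP04 feature description, but check the SP04 documentation for whether a restart is required:

```sql
-- Send audit records to syslogd (new in 16.0 SP04):
sp_configure 'audit trail type', 0, 'syslog'
go
-- Revert to the classic sybsecurity table-based store:
sp_configure 'audit trail type', 0, 'table'
go
```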

Useful Links

What’s new in ASE 16.0.4

Parameter: audit trail type

HowTo: Extract SAP PI/PO Message Payload from SAP ASE DB

Sometimes you may need to directly extract the SAP PO message payload from the underlying database tables, such as BC_MSG_LOG, in a SAP ASE 16.0 database.
This could also be described as: extracting hex-encoded ASCII data from an ASE image column, because the SAP PO tables use the ASE image data type to store the payload as an off-row LOB.

There are plenty of examples for doing this extraction in Oracle, but in ASE it is not so easy, because the message size could be larger than that supported by the page size of ASE (usually 16KB in an ASE database for Business Suite).
This means you won’t be able to store it into a T-SQL variable and use the ASE functions.

Instead, we can use the below simple approach to extract the raw hex, and then use Python 2 to convert it to ASCII:

1. Execute the selection SQL using isql at the Linux command prompt on the database server:

isql -USAPSR3DB -S<SID> -w999 -X

select MSG_BYTES
from BC_MSG_LOG
where MSG_ID='<YOUR MSG ID>'
go


The output will consist of hexadecimal data, starting with “0x”.

Copy & paste into a text file on the Linux server (use your favourite text editor) and call the file data.txt.

Edit the data.txt file and remove the first “0x” characters from the data.
Remove all newlines and carriage returns in the file.

Now create a simple Python script to read the data from our file data.txt and translate from hex to ASCII then print to the screen:

with open('data.txt', 'r') as file:
    data = file.read().strip()
print data.decode('hex')

Run the Python script:

python ./

The output should contain a header and a footer which start with: “--SAP_”.
If you get an error from the Python script, then it could be because there are additional newlines or carriage returns in the data.txt file.
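Note that str.decode('hex') only exists in Python 2. If your server only has Python 3, bytes.fromhex() is the equivalent; this sketch wraps the decode step in a small helper:

```python
# Python 3 equivalent of the Python 2 decode step above.
def decode_hex_file(path):
    """Read a file of hex characters (no leading 0x) and return the bytes."""
    with open(path, 'r') as f:
        data = f.read().strip()      # tolerate a trailing newline
    return bytes.fromhex(data)       # ValueError on stray non-hex characters
```

Usage would then be, for example: print(decode_hex_file('data.txt').decode('ascii', errors='replace')).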

SAP ASE HADR Overview – Part5

In this multi-part post, I’m explaining the basics behind how SAP Replication Server works when replicating from a SAP ASE database to a SAP ASE database as part of an HADR (ASE always-on) setup for a SAP Business Suite system.
The post will be based on SAP ASE (Adaptive Server Enterprise) 16.0 HADR with SAP Replication Server (SRS) 16.0.

In Part 1 we started with:

  • What is SRS.
  • The basic premise of HADR with SRS.
  • What a transaction is.

In Part 2 we went on to discuss:

  • What is the ASE transaction log.
  • Which databases are replicated.
  • How do transactions move to the SRS.

In Part 3 we covered:

  • What is the Active SRS.
  • What are the Key Parts of SRS.
  • Replication Synchronisation Modes
  • Open Transaction Handling
  • What are the Implications of Replication Delay

In Part 4 we stepped through the replication of Bob’s data change and saw how the transactional data was replicated first to the SRS and eventually to the companion (secondary) database.

This is the last part of my ASE 16.0 HADR mini-series, and in this final part I will discuss possible issues that can impact an ASE 16.0 HADR system, which might be useful when planning operational acceptance testing.

Companion ASE Unavailable

When the companion (secondary) database is unavailable for whatever reason (maintenance, failure, etc.), the replicated transactions will still continue to move through the SRS until they reach the outbound queue (OBQ).
The assumption is that the active SRS (on same server as the companion DB) is still up and working.

In the OBQ the transactions will wait until the companion is available again.
As soon as the companion is available, the transactions will move from the OBQ into the companion.
The primary database will be unaffected during this time and transactions will continue through the SRS until the OBQ becomes full.

If the OBQ fills up, then transactions will start to accumulate in the inbound queue (IBQ).

If the companion database is down for a long period of time, you may need to make a decision:

  • Increase the stable queue partition space to hold more transactions.
    With the hope that the companion can be brought back online.
  • Disable replication, removing the Secondary Truncation Point (STP) from the primary database and acknowledging that the companion will need re-materialisation to bring it back in-sync with primary.

Inbound Queue Full

When the inbound queue becomes full, transactions will start to build up in the simple persistent queue (SPQ).
You should note that the IBQ, by default, is only allowed to use up to 70% of the stable queue partition size. The rest is for the OBQ. So “full” is not actually 100% of the queue space.

There can be two common reasons for the IBQ to fill:

  1. The OBQ is also full due to an issue with the connection to the companion ASE database, or the companion is unavailable.
  2. There is an open transaction in the IBQ and the SRS is waiting for the “commit” or “rollback” command to come through from the SPQ for the open transaction.

To resolve the full IBQ, you are going to need to establish which of the two issues is occurring.
An easy way to do this is to check the OBQ fill level.
If transactions are moving from the OBQ to the companion, then the issue is an open transaction.
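An easy way to inspect the queues is with the SRS admin commands, run from an isql session connected to the SRS itself (not the ASE database). The commands below exist in SRS, though the exact output columns vary by version:

```sql
-- Overall stable queue partition usage:
admin disk_space
go
-- Per-queue statistics from the stable queue manager (IBQ and OBQ):
admin who, sqm
go
```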

If an open transaction has caused the IBQ to fill, then the “commit” or “rollback” transaction could now be stuck in the SPQ. Since there is no space in the IBQ, the SRS is also unable to process the SPQ records, which leaves the IBQ open transaction in a stalemate situation.
You will need to make a decision:

  • Add more space to the stable queues to increase the IBQ size.
  • Increase the proportion of stable queue size that the IBQ can use (if OBQ is empty).
  • Zap (remove) the open transaction from the IBQ (will mean data-loss on companion so a rematerialise may be needed).

Normally, you can just add more space by adding another partition to the stable queues, hopefully resolve the issue, and then remove the extra space again. How much is needed? That is hard to predict in advance.
However, if you have to zap the open transaction, then make sure you dump the queue contents out first, so you can see what DML was open, you can then make a decision on how the missing transaction will affect the companion database integrity (could negate the need for rematerialisation).
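Adding (and later removing) temporary queue space is done with the SRS partition commands. The following is a sketch only: the logical name, file path and 2048MB size are illustrative assumptions, and the exact syntax should be checked against the reference manual for your SRS version:

```sql
-- On the SRS: add extra stable queue space (names/sizes are examples).
create partition p_extra on '/sybase/SID/repdata/p_extra.dat' with size 2048
go
-- Later, once the queues have drained back to normal levels:
drop partition p_extra
go
```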

During this problematic period, the SPQ has remained functional, which has meant that the primary database has been able to continue to send transactions to the active SRS and therefore allowed it to continue to commit data in a timely manner. The primary database will have no issues.

Simple Persistent Queue Full

This is probably the most serious of the scenarios.
Once the SPQ becomes full, it immediately impacts the Replication Agent running in the primary ASE database.

Transactions are unable to move from the primary database transaction log to the active SRS.
You will start to get urgent warnings in the primary ASE database error log, informing you that the SPQ is full and that the buffers in the primary Replication Agent are full.

You will also see that the Replication Agent will be producing error messages and informing you that it has switched from synchronous to asynchronous replication mode.

The integrity of your production ASE database is now at risk!

It is possible you can add more space to the SPQ if you think you can resolve the problem in the IBQ and have the time to wait for the IBQ to empty!

You should note that this scenario has the same symptoms as when the active SRS is not working at all. If the SPQ is not available, then you need to troubleshoot via the SRS error log. It’s also possible you have issues with the Replication Agent in the primary ASE.

Primary Transaction Log Full

If your active SRS has now filled up all the way back to the SPQ, your primary ASE database is at serious risk.
With the Replication Agent unable to move the Secondary Truncation Point (STP), the transaction log of the primary ASE will start to fill up.
You will not be able to release the log segments, even with a transaction log dump, because they are still needed by the Replication Agent.

You have the following options available:

  • Add more space to the transaction log by adding a new device (recommended, instead of expanding the existing device).
    This will give you more time to maybe fix the SRS.
  • Disable replication, which removes the STP and allows the transaction log to be dumped. A rematerialise of the companion will be needed at a later date.
  • If the transaction log is already full, then even trying to disable replication will not work (no transactions will be permitted).
    You will have to use DBCC to remove (ignore) the “LTM” (Last Truncation Marker), which will have a similar outcome to disabling replication.
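The last option above classically uses dbcc settrunc from within the affected database. A sketch, with <dbname> standing in for your database; in an HADR system, prefer the SAP-documented procedure for disabling replication where possible:

```sql
use <dbname>
go
-- Tell ASE to ignore the secondary truncation point so that a
-- "DUMP TRANSACTION" can finally free the log segments:
dbcc settrunc('ltm', 'ignore')
go
```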

At this point, if all else fails and your transaction log is full DO NOT RESTART THE PRIMARY ASE!
If you restart the primary ASE with a full transaction log, then you will be in a world of pain trying to get it to start up again.
You need to stop all processing (it won’t be working anyway), then try and resolve the issue.


This concludes the 5 part mini-series on SAP ASE 16.0 HADR.
I hope it has given you enough of an overview to be able to explain a typical setup, the way replication occurs and the most common issues that put your production ASE database at risk.

Feel free to contact me with any questions and feedback.

SAP ASE HADR Overview – Part4

In this multi-part post, I’m explaining the basics behind how SAP Replication Server works when replicating from a SAP ASE database to a SAP ASE database as part of an HADR (ASE always-on) setup for a SAP Business Suite system.
The post will be based on SAP ASE (Adaptive Server Enterprise) 16.0 HADR with SAP Replication Server (SRS) 16.0.

In Part 1 we started with:

  • What is SRS.
  • The basic premise of HADR with SRS.
  • What a transaction is.

In Part 2 we went on to discuss:

  • What is the ASE transaction log.
  • Which databases are replicated.
  • How do transactions move to the SRS.

In Part 3 we covered:

  • What is the Active SRS.
  • What are the Key Parts of SRS.
  • Replication Synchronisation Modes
  • Open Transaction Handling
  • What are the Implications of Replication Delay

In this penultimate part we step through the process of replication for an individual transaction, looking at how each of the previously discussed components plays its part along the way.

Step 1 – Begin Transaction

In the first step, our SAP Business Suite application is being used by Bob, our end-user.
Bob has a screen open and saves a change to an item of data.
The Business Suite application calls the Netweaver application stack to persist the change to the data according to the application’s dictionary model.
The dictionary dictates the relationship of the business object (e.g. an invoice) to the relational database tables.
The Netweaver code uses the SAP Kernel supplied libraries to save the change to the necessary system database tables.

There could be many tables involved in Bob’s one change.
It would not be right to update just one table without also updating the others that are related. Otherwise we would not have consistency.
For this reason, a single database transaction can include many data manipulation language (DML) statements.
By grouping the DML statements for the related table updates into a single database transaction, the SAP Kernel can enforce consistency.

In our example, our transaction will include updates to 2 different tables: tableA and tableB.
To designate the start of the database transaction, the SAP Kernel calls the database library to “BEGIN TRANSACTION”.

The effect of the database call to “BEGIN TRANSACTION” is that a new transaction log record is opened.

Step 2 – Replication Agent – Open Transaction

Once a transaction has started (opened), the ASE HADR Replication Agent will see it.
Remember, we discussed the Replication Agent in Part 2.

The Replication Agent sends the transaction data across the network to the target SAP Replication Server (SRS). It knows where to send the data because of the configuration applied to the Replication Agent during the HADR setup process.

The SRS receives the data from the Replication Agent and writes it to the Simple Persistent Queue (SPQ), then sends an acknowledgement back to the Replication Agent.

Step 3a – Update Tables – DML

So far, all that has happened is a new transaction has been opened.
Now Netweaver will apply the required DML to the transaction:

UPDATE tableA SET column1='ABC'
UPDATE tableB SET column1='123'

The Kernel will apply the required DML to the opened transaction; this also updates the transaction log, which is seen by the Replication Agent and sent across to the SRS as before.

You will notice that at this point, we are still using transaction log space, but we are also consuming space in the SPQ.

At this step, if one of the required “UPDATE” statements was to fail, then the whole transaction could be cancelled (a rollback) and no changes would be permanently made to any of the tables.
This is one of the requirements of the ACID principles.

Step 3b – The SRS Inbound Queue

At the same time as the DML is being applied to the open transaction in step 3a, the SRS continues to process the open transaction.

Inside the SRS, there are various component modules that process the incoming transactions from the SPQ, placing them in the correct order and compacting them (grouping) into larger, more efficient transactions.
Once this initial processing has completed, the new transaction is placed into the inbound queue (IBQ).

Something you will notice, is that we now have consumed space in:

  • Primary database transaction log.
  • Simple Persistent Queue.
  • Inbound Queue.

Once the transaction is safely persisted into the IBQ, the record in the SPQ is now free for re-use.

Step 4 – End Transaction

In steps 3a and 3b, we see the opening transaction record and the DML move across to the SRS.
At this point in time, Bob’s changes are still not replicated to the companion (secondary/standby) database.
In fact, Bob’s changes are not even visible to other users of the primary database, because Bob’s changes are not yet committed.

Once all the DML in Bob’s transaction has been applied at the primary database successfully (still not committed), then the SAP Kernel can issue the “END TRANSACTION”.
This signifies that this group of changes is finished.
After the “END TRANSACTION”, the SAP Kernel can issue one of two things: a “COMMIT” or a “ROLLBACK”.
In our case, the Kernel issues a “COMMIT”.

The “COMMIT” on the primary database is now performed; at the same time, the “COMMIT” record is also sent by the Replication Agent to the SPQ of the SRS.

I can hear the DB experts gasp at this point!
Yes, in Part 3 I mentioned that the commit is not allowed on the primary until after the Replication Agent has successfully sent the commit to the SPQ. In actual fact, this is not a hard and fast rule. The Replication Agent will attempt to send the commit record to the SPQ; it will wait for a given amount of time, before switching to asynchronous replication mode (see Part 3 for a description of this mode).
The commit to the primary database is therefore allowed to happen, even if it has not yet been acknowledged by the SPQ. This is the trade-off between performance and protection. The HADR solution has flexibility that allows a configurable amount of replication delay before synchronous replication is switched to asynchronous.

The acknowledgement that the “COMMIT” record has been successfully stored in the SRS SPQ, allows the primary database Replication Agent to move the Secondary Truncation Point (STP) forward and release the transaction log record, which allows it to be freed when the next “DUMP TRANSACTION” (transaction log backup) is performed.

The primary database data change becomes visible to all users of the database.
Bob’s screen returns a message telling him that his data is saved.

Step 5 – The SRS Outbound Queue

Inside the SRS IBQ, all of our transaction is now complete: we have a “BEGIN” and an “END”, a “COMMIT”, and some DML in between that contains the updates to the required tables.

Once the “COMMIT” record is seen by the Inbound Queue (IBQ) of the SRS, then the SRS will process the re-packaged transaction from the IBQ to the Outbound Queue (OBQ).

This processing could involve adjusting the SQL language used, if the target database is not ASE.

From the OBQ, the Data Server Interface (DSI) component of the SRS, applies the transaction to the target database.

Finally, the replicated transaction data is applied to the target secondary database and harmony is achieved.
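As a recap, the journey of Bob’s transaction through the components can be caricatured in a few lines of Python. This is a toy model for intuition only, not SRS internals: each stage simply hands the records on, and nothing leaves the IBQ until the COMMIT has been seen:

```python
# Toy model of the replication pipeline for a single transaction.
# Stages: primary tx log -> SPQ -> IBQ -> OBQ -> companion DB.
tx = ["BEGIN TRAN",
      "UPDATE tableA SET column1='ABC'",
      "UPDATE tableB SET column1='123'",
      "END TRAN",
      "COMMIT"]

spq, ibq, obq, companion = [], [], [], []

for record in tx:                    # Rep Agent ships each log record to the SPQ
    spq.append(record)

ibq.extend(spq); spq.clear()         # SRS sorts/compacts SPQ records into the IBQ

if "COMMIT" in ibq:                  # only committed transactions leave the IBQ
    obq.extend(ibq); ibq.clear()

companion.extend(obq); obq.clear()   # DSI applies the transaction to the target

print(companion)
```

If the COMMIT never arrives (an open transaction), the records simply sit in the IBQ, which is exactly the “Inbound Queue Full” scenario discussed in Part 5.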

Bob’s View

Throughout this whole replication cycle, Bob had no idea that his data was being replicated to the target secondary database hundreds of kilometres away.
The response time of the database to the SAP Kernel was only slightly impacted by the addition of the Replication Agent processing time, plus the network transfer time to the active SRS, plus the processing and persistence time to the SPQ of the SRS.

As well as the response time, we noticed how the storage requirements of the SRS are bigger than the actual transaction log of the primary database due to the way the transaction is processed/transformed and re-processed through the SRS, then queued for application to the target database.

In the final part 5, I will discuss some common issues that can occur with HADR, allowing you to comprehensively plan your operational acceptance testing.