This blog contains experience gained over the years of implementing (and de-implementing) large-scale IT applications and software.

HowTo: Call sp_dump_history with Purge in ASE 16.0 SP03

Due to a bug in ASE 16.0 SP03 PL08 (and possibly earlier versions), the sp_dump_history call does not work properly.
It just returns an empty screen.
When you look at the procedure code, you will see that it fails to find tempdb, for whatever reason.

We can use the sp_dump_history procedure to look at the dump history (backup history) for an ASE database.
From ASE 15.7 onwards we can also use it to purge the history, reducing the size of the dumphist file and in some cases improving the performance of backups themselves (see SAP note 2753062 for details on issues).

By prefixing the tempdb onto the call from the master database, you can get sp_dump_history to work as it should.

Below is an example that purges all backup records from the dump history file (dumphist) from before January 29th 2020 at 15:00:00:

use master
go

tempdb..sp_dump_history @operation='purge', @until_time="january 29,2020 15:00:00:00"
go

Tip: On Linux, the correct date format can be obtained with the date command:

date "+%b %d,%Y %T"

As another tip, you can use the sp_helptext procedure to look at the source code of any stored procedure. Take a look on help.sap.com for the parameters of sp_helptext.
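For example, to view the source of sp_dump_history itself, something like the following should work (system procedures live in the sybsystemprocs database):

use sybsystemprocs
go
sp_helptext sp_dump_history
go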

ASE Issue During Setuphadr Removes Logins on Companion

There’s a specific scenario that can occur during the setup of SAP ASE HADR, which can leave the companion (secondary) database with no logins or roles. You will be unable to log into it. It will be completely unusable.

This issue could apply to any ASE SP03 where auditing is turned on (the default for PL06 and above), but I have seen it on two systems: one on ASE SP03 PL08 and one on PL08-HF1.

The symptoms are seen during setuphadr in the step for materialisation of the master database; the setuphadr screen output doesn’t change. Eventually it displays a timeout.
When doing a “ps” on the primary ASE server, you can see a number of BCP processes that appear to be “stuck”. They are connecting to the companion database.
Inside the repdata directory location, the BCP data and out files are present, but they do not change.

When attempting to restart the setuphadr, the process fails as it cannot log into the companion ASE.
When trying manually to log into the companion ASE using isql, you are unable to log in as sapsa or sapsso. You can log in as “sa” only, but you have no roles/privileges to do anything.
If you check the list of logins and roles in the companion master DB (table master..syslogins), you will find the tables are empty; all logins have been removed!
You may also see login errors mentioned in the RMA log file on the companion server (the active SRS) with “SQLState: ZZZZZ”.

From what I can see, the cause of the issue is ASE auditing. It is enabled on the companion DB (the sp_configure parameter “auditing” has a value of “1”).
The audit table, the sybsecurity database, or the sybsecurity database tran log is full or nearly full.
This prevents the BCP processes from successfully loading the table data from the primary.
It doesn’t seem to prevent the truncation of the companion master DB tables, however, leaving the master DB with no data in its login and role tables.
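To confirm this state on the companion, a minimal check could look like the following (run with a login holding sso_role; in this broken state “sa” may be the only login left that works):

use master
go
-- Is auditing enabled? A run value of 1 means on.
sp_configure 'auditing'
go
-- How full are the sybsecurity database and its tran log?
sp_helpdb sybsecurity
go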

The workaround that I have found that works most effectively is to completely disable ASE auditing (auditing = 0) in the companion ASE instance, to prevent the issue from happening in the first place.
There are a couple of parameters that can change the way auditing works, and perhaps adjusting the filtering to prevent auditing of system users would also solve the problem, but this is the setup of replication, so I don’t see why you would want that process audited at all. You can always re-enable auditing after the HADR setup is completed.
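A minimal sketch of the disable/re-enable (changing the “auditing” parameter requires sso_role):

use master
go
-- Disable auditing before running setuphadr.
sp_configure 'auditing', 0
go

-- ...then, once HADR setup is complete, re-enable it:
sp_configure 'auditing', 1
go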

Another prevention tip is to ensure that the sybsecurity database and its tran log have free space before the setuphadr process is started.

What if you have already hit the above issue and are looking for help right now?
If the above issue is hit, then the companion master DB could be restored from backup, except you cannot log into it to start the LOAD command.
You could copy the master.dat from primary, but the inbuilt GUID that gets set on DB creation would then be identical to primary (companion and primary should be different).

This leaves two possible options:

  • Re-create the companion master db following SAP note 2631728 “How to rebuild master device and master database from backup – SAP ASE”.
  • Restore master DB from backup (could be a backup from primary master DB).
  • If the backup from primary was used, then you may need to re-create the SID database (to be on the safe side), using the DDLGEN commands (see the sketch after this list).

or:

  • Blast the companion ASE away and re-install using SWPM.
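For the DDLGEN step, a one-liner along these lines could regenerate the SID database DDL from the primary. The flags and placeholders here are illustrative only; check the ddlgen documentation for your version:

ddlgen -Usapsa -P<password> -S<primaryHost>:<port> -TDB -N<SID> -O<SID>_db.ddl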

Good luck, and make sure you take regular backups 🙂

Tuning SAP ASE – Decompressing Small Lookup Tables

By default, for SAP applications based on SAP Netweaver, SAP ASE 16.0 will compress the data pages within the tables inside the database.
In some scenarios this will not help with performance.

In this brief post I explain the scenario, correct a SAP note, and show you how to check and adjust the tables in a safe way (do not just follow the note; you could corrupt the table data).

Data Page Compression

In ASE 16.0, table data pages can be compressed.

For SAP Netweaver based SAP systems such as SAP ERP, the use of table data page compression can help improve performance in two ways:

  1. Reduced I/O
    Reading and writing compressed data pages to disk is more efficient. Imagine that it’s a bit like copying a 4MB zip file on your computer, instead of an uncompressed 40MB text file!
  2. Reduced Memory Consumption
    In memory (in the data cache), the data pages can be held in compressed form and may not need to be uncompressed in order to satisfy certain queries.

The above sounds good and indeed it is good.
But there are some borderline cases where compression on a table may not be beneficial.

The Drawback with Compression

In some cases, compression can affect the response time of queries in very specific circumstances.

If you have “small” tables and those tables are accessed frequently with SQL queries that perform full table scans, then the extra CPU time required to decompress the table contents in the data cache can impact the response time.

How do We Identify Candidate Tables?

If there are tables which are adversely affected by having compression turned on, how do we identify those tables?

We can follow SAP note 1775764, which provides a nice simple SQL statement to list likely candidates. Great!

However, the provided SQL statement actually does not do what it should!
Oh dear!
Instead, the SQL incorrectly lists tables that are definitely not good candidates for decompression. In fact, some of them are the complete opposite of good candidates!

The SQL taken from the SAP note is:

--- DO NOT RUN THIS IT IS WRONG!!! ---
use <SID>
go
select top 10 ObjectName, 
LogicalReads 
from master..monOpenObjectActivity 
where DBID = db_id() 
and LogicalReads > 1000000 
and data_pages(DBID,0) < 6400      -- <<< WRONG LINE HERE! 
and object_attr(DBName || '.SAPSR3.' || ObjectName,'compression') <> 'none' 
order by LogicalReads desc
go

You will see in the above SQL taken from the SAP note that the “data_pages” function call has 2 parameters: the ID of the current database and an object id. Except the object id on the marked line has been hard-coded to “0”.

This causes a value of “0” to be returned from the “data_pages” call, which is obviously always less than 6400.

Essentially, any table that has compression enabled, with high numbers of LogicalReads (table scans in the data cache), will be reported as a candidate! How confusing.

Correcting the SQL

We can see that we need to correct the SQL.
Here’s how:

--- This is the corrected statement ---
use <SID>
go
select top 10 ObjectName, 
LogicalReads 
from master..monOpenObjectActivity 
where DBID = db_id() 
and LogicalReads > 1000000 
and data_pages(DBID,ObjectID) < 6400
and object_attr(DBName || '.SAPSR3.' || ObjectName,'compression') <> 'none' 
order by LogicalReads desc
go

It’s simple enough: we just substitute the incorrect “0” with the “ObjectID” column from the monOpenObjectActivity table we are selecting from.
This correction then allows the correct output from the call to the “data_pages” function.
With the correction in place, tables with a data page count of less than 6400 pages (at 16KB per page, that means tables less than 100MB in size), with compression enabled and a high number of LogicalReads, will be listed as candidates. Yay!

Adjusting the Candidate Tables

Once you have your table list, you might now wish to turn off compression.
This is simple enough.
You can use the additional SQL supplied in the SAP note:

--- WARNING: check SAP note 2614712 first! ---
use <SAPSID>
go
setuser 'SAPSR3'
set quoted_identifier on
set chained on
go
alter table SAPSR3.<tablename> set compression = none
reorg rebuild SAPSR3.<tablename> [with online]  -- check SAP note 2614712 first!
commit
go

As you can see, you will need to perform a REORG on the table.
The SQL statement specifies the optional “with online” option.
BE WARNED: There are lots of SAP notes about issues with REORG ONLINE; please ensure that you check SAP note 2614712 first! Shame on SAP for not mentioning this in the SAP note!
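Once the reorg completes, a quick check (sketched here with placeholder names) should confirm that compression is now off for the table; it should return “none”:

use <SID>
go
select object_attr('<SID>.SAPSR3.<tablename>', 'compression')
go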

Checking the Result

We’ve gone through the effort of adjusting tables we think might benefit from no compression, but how do we now confirm that we have positively impacted the response time?

Well that’s a difficult one. I’ll leave that for another post.

Summary

The summary for this post is a little bit “Boris Johnson”:

  • Follow the SAP notes.
  • But don’t follow the SAP notes.
  • Check and double check what is output from commands, don’t just take the output as being correct!
  • Check your results.
  • Compressing tables is a good thing.
  • Compressing specific tables may not be a good thing.

Simple in-Cloud SAP LaMa DR Setup

When running the SAP Landscape Management tool (LaMa) in the cloud, you need to be aware of the tool’s importance in your SAP landscape in the context of disaster recovery (DR).

In this post I will highlight the DR strategies for hosting SAP LaMa with your favourite cloud provider.

What is SAP LaMa?

For those not yet accustomed to SAP LaMa, it is SAP’s complete SAP/non-SAP landscape management and orchestration tool for both on-premise and cloud.

SAP LaMa comes in two guises:

  • Standard Edition
  • Enterprise Edition

The Enterprise edition comes with many additional features, but crucially, it includes the “Cloud Connectors” for all the mainstream cloud vendors.
A “Cloud Connector” allows seamless start/stop/provisioning of cloud hosted VMs.

Using SAP LaMa to execute a pre-configured, ordered startup of VMs and the applications on those VMs can be a huge time saving during a disaster.

What Installation Patterns Can We Use with SAP LaMa?

SAP LaMa is a software component installed inside a standard SAP Netweaver Java stack. Therefore, you may use the standard Netweaver Java installation patterns such as single-system or distributed system.
SAP LaMa will work in either pattern.

What is a Normal Installation Pattern in the Cloud?

In the cloud (e.g. Azure, GCP, AWS etc), when installing SAP Netweaver, you would usually want to use the distributed system architecture pattern, to prevent a single VM outage from disrupting the SAP Netweaver application too much. The distributed system pattern is preferred because you have slightly less control over the patching of the physical host systems, so it affords you that little bit of extra up-time.

This usually means having: a Web Dispatcher tier, at least 2 application servers in the application tier, the Central Services (SCS) instance having failover and using Enqueue Replication Server (ERS), plus database replication technology on the database tier.


How is DR catered for in SAP LaMa?

For large organisations with business critical SAP systems like SAP S/4HANA, SAP ECC etc, you would usually have a “hot” DR database server (i.e. running and actively replicating from the primary database) in your designated DR cloud region.
This means there is minimal data loss, as the DR database is mere minutes behind the primary database in transactional consistency.
The application tier and Web Dispatcher tier would use the cloud provider’s VM replication technology (e.g. in Azure this is called Azure Site Recovery), ensuring that the application patching and config is also replicated.

I would designate the above pattern as a “hot” DR architecture pattern.

For SAP LaMa the situation is slightly more flexible because:

  1. It is not business critical, only operations critical.
  2. The database is only a repository for configuration and monitoring data. Therefore, transactional data loss is not critical.
    In fact, the configuration data in SAP LaMa can be exported into a single XML file and re-imported into another LaMa system.

Due to the above, we have some different options that we can explore for Disaster Recovery.
Excluding the “hot” DR architecture pattern, we could classify the DR architecture pattern options for SAP LaMa as “restore”, “cold”, “cool” and finally “warm”. (These are my own designators, you can call them what you like really).

What is a “restore” DR pattern for SAP LaMa?

A “restore” DR setup for SAP LaMa is when you have no pre-existing VM in your cloud DR region. Instead, you are replicating your VM-level backups into a geo-replicated storage service (in Azure this is a Recovery Services vault).

In this setup, during a DR scenario, the VM backups from your primary region would need to be accessible to restore to a newly built VM in the DR region.

This is the most cost-friendly option, but there is a significant disadvantage here. Your system administrators will not have the benefit of LaMa to see the current state of the landscape, and they will not be able to make use of the start/stop technology.

Instead they will need a detailed DR runbook with start/stop commands and system/VM startup priority, to be able to start your critical systems in a DR scenario. You are also placing your trust in the VM backup and restore capability to get LaMa back online.

The VM backup timing could actually be an issue depending on the state of the running database at the time of backup. Therefore, you may need to also replicate and restore the database backup itself.

During a DR scenario, the pressure will be immense and time will be short.

Cost: $
Effort: !!!! (mainly all during DR)
Bonus: 0

What is a “cold” DR pattern for SAP LaMa?

A “cold” DR setup for SAP LaMa is when you have a duplicate SAP LaMa system that is installed in the DR cloud region, but the duplicate system is completely shut down, including the VM(s).

In this setup, during a DR scenario, the VM would need to be started using the cloud provider tools (or other method) and then the SAP LaMa system would be started.

Once running, the latest backup of the LaMa configuration would need restoring (it’s an XML file) and the cloud connectors connecting to the cloud provider. After connecting to the cloud provider, LaMa can then be used to start/provision the other software components of the SAP landscape, into the DR cloud region.

Compared to the “restore” pattern, we can have our DR LaMa system up and running and start using it to start the VMs and applications in a pre-defined DR operation template (like a runbook).
However, we need a process in place to export and backup the export of the configuration from the primary LaMa system, so that the configuration file is available during a DR scenario.

In Azure, for example, we would store the configuration file export on a geo-replicated file storage service that was accessible from multiple-regions. We also have the associated hosting costs and the required patching/maintenance of the DR VM and LaMa system. As an added bonus, this pattern allows us to apply patches first to the DR LaMa system, which could remove the need for a Development LaMa system.

Cost: $$
Effort: !!! (some during DR, patching)
Bonus: +

What is a “cool” DR pattern for SAP LaMa?

A “cool” DR setup for SAP LaMa is when you have a duplicate SAP LaMa system that is installed in the DR cloud region, and the duplicate system is frequently started (maybe daily) and the configuration synchronised with the primary SAP LaMa system.

The synchronisation could be using the in-built configuration synchronisation of the LaMa software layer, or it could be a simple automated configuration file import from a shared file location where the configuration file has previously been exported from the primary LaMa system.

In this setup, during a DR scenario, the VM *may* need to be started (depends on when the failure happens), using the cloud provider tools (or other method) and then the SAP LaMa system *may* need to be started. Once running, the latest backup of the LaMa configuration would probably not need restoring (it’s an XML file), because for your business critical systems, they would already exist and be configured as a result of the frequent synchronisation. The cloud connectors would need connecting to the cloud provider.
After connecting to the cloud provider, LaMa can then be used to start/provision the other software components of the SAP landscape, into the DR cloud region.

Compared to the “cold” pattern, we save a little time by having the frequent configuration file synchronisation already done. We can also choose to have a process in place to export and back up the configuration from the primary LaMa system, should we want to use that configuration file as well.
There is an obvious cost to the frequent starting of the VM, since you pay while the VM is running.

As an added bonus, this pattern allows us to apply patches first to the DR LaMa system, which could remove the need for a Development LaMa system.

Cost: $$$
Effort: !! (a little during DR, patching)
Bonus: +

What is a “warm” DR pattern for SAP LaMa?

A “warm” DR setup for SAP LaMa is when you have a duplicate SAP LaMa system that is installed in the DR cloud region, and the duplicate system is constantly running, with frequent (could be hourly) synchronisation with the primary SAP LaMa system.
The synchronisation could be using the in-built configuration synchronisation in the LaMa software component, or it could be a simple automated file import from a shared file location where the configuration file has been exported from the primary LaMa system.

In this setup, during a DR scenario, the cloud connectors would need connecting to the cloud provider. After connecting to the cloud provider, LaMa can then be used to start/provision the other software components of the SAP landscape, into the DR cloud region.

Like the “cool” pattern, we get an added bonus that this pattern allows us to apply patches first to the DR LaMa system, which could remove the need for a Development LaMa system.

Compared to the other patterns, we gain the immediate advantage of being able to start/stop VMs and SAP systems in the DR region. However, there is a constant cost for the VM to be running (if using a PAYG VM pricing model).

Cost: $$$$
Effort: ! (hardly any during DR, patching)
Bonus: +

Summary

Depending on your strategy, you may choose to stick to your existing architecture patterns.

You could choose to use a “hot” DR pattern, and ensure that your DR LaMa system is in sync with the primary.
However, for the most risk-averse, I would be inclined to calculate the costs/benefits of the “warm” pattern.
A “warm” pattern also means you could forgo the distributed system installation pattern for the DR system, choosing the more cost-effective single-system pattern and removing the extra complexity of database-level replication.

For SMEs, I would favour the “cool” pattern. This could remove the need for a Development system, allowing testing of patching on the DR system instead. I feel it represents the middle ground between using the technology and the cost.

SAP ASE Instance Log File Rotation

SAP ASE does a bare minimum of rotating its text log files.
I’m talking about the logs that you can use to see what’s going on under the covers.
Each time the ASE instance is restarted, it pushes the previous instance log file to a date-stamped file and starts afresh. This is not great if you have had issues with the database and the log has grown huge, because who restarts the database every day?

ASE also does not preserve the backupserver log or the job server agent log, which could be critical in solving problems.
Instead, it’s better to devise your own custom daily log rotation process.

In this post I describe the main SAP ASE log files, and my recommendation for a rotation configuration on Linux (yay!) and Windows (here be dragons!).

What Needs to Be Rotated?


The following are standard ASE 16.0 text log files that just keep on growing:

  • ASE database instance log: /sybase/<SID>/ASE-16_0/install/<SID>.log
  • ASE backup server log: /sybase/<SID>/ASE-16_0/install/<SID>_BS.log
  • ASE Job Server log: /sybase/<SID>/ASE-16_0/install/<SID>_JSAGENT.log

I do not include the dumphist file, which (if dumphistory is enabled) gets backed up daily from /sybase/<SID>/ASE-16_0/dumphist to /sybase/<SID>/ASE-16_0/dumphist.nnnnnnnnn.
These can also accumulate, so it might be best to just clear them down, especially if (as is good practice) you copy them off with your database backups.

NOTE: For ASE instances with HADR installed (a.k.a. SAP Replication Server) there are also an additional 4 log files which should be rotated, which I won’t go into in this post. (Hire me, and I’ll tell you.)

What Happens When ASE Rotates Those Logs?

When the ASE instance is restarted, some of the above standard logs are rotated and some are cleared down. For the ones that are rotated, there is no end to the rotation, they keep building up. They are never removed.

As part of your log rotation solution, you should also rotate the ASE rotated logs, making sure that they don’t build up over the years of ASE instance restarts (especially if starting/stopping systems frequently in The Cloud).

This is what happens to the auto-rotated logs:

  • <SID>.log is moved to <SID>.log.nnnnnnnn_nnnnnn
  • <SID>_BS.log is cleared down.
  • <SID>_JSAGENT.log is cleared down.

We can see that for those log files that are cleared down, a daily rotation should capture them, but it will be hit and miss. If the ASE instance is restarted part way through the day, then the morning logs will not have been preserved (unless you remember to manually run your log rotation). There’s not much we can do about this.

For the log that is auto-rotated by ASE (on restart), then we also need to capture that in our rotation schedule. We have a choice to maybe just compress it and only retain a specific number.

What Log Retention Works Best?

For production systems, I usually say 30 rotations is ideal (1 month).
It’s good to keep the logs so you can see when a problem may have started. Critically, this might help you decide how far back you may need to restore a database (in the dire event of corruption). If you keep 30 days of database backups, then having the <SID>_BS.log for 30 days might just help.

Rotation Schedule

With tools like logrotate, you can decide to rotate logs on a time basis, or even on a size basis.

Since SAP ASE automatically rotates the <SID>.log file on instance restart, it would get a bit messy if we also have a daily rotation on top of this.
Instead, it works a little bit better to rotate this file on a size basis. Keeping the <SID>.log within a defined file size means you prevent the log file from growing huge, while also reducing the number of rotations within rotations.

We’ve already mentioned that the ASE backup server log is quite important for helping us to understand when backups were taken and any issues during that backup. This is the reason we rotate this file daily, no matter what size it is.

The ASE job server log is a little less critical, so we really don’t need it rotated every day. However, it can grow quite big if the job server experiences a database space issue. So instead we can choose to rotate on size.

With the above in mind, this is what I would recommend for the ASE log files:

  • <SID>.log rotated on size >5MB with 30 rotations.
  • <SID>_BS.log rotated daily with 30 rotations.
  • <SID>_JSAGENT.log rotated on size >5MB with 30 rotations.

For ASE rotated copies of <SID>.log (called <SID>.log.nnnnnnnn_nnnnnn), it’s best to retain something like 10 rotations for production systems.
This captures the logs from the last 10 restarts. If you are restarting the ASE instance every day, then you may wish to retain more than 10.

Log File Truncation

During the rotation process, you have an option to keep the existing log file or truncate it. My preference is to truncate the original log file, to prevent the log from growing.
If you don’t truncate the original, then you will have a cumulative collection of logs for 30 days, with each day’s copy being bigger than the last and including the previous day’s content.

The exception to this rule is obviously the ASE auto-rotated copies of the <SID>.log. We don’t want to be truncating that at all.

Log File Compression

After the copy of the logs has been taken, the copy can then be compressed. This makes complete sense, because text files (these are ASCII text logs) compress really, really well. You should expect ~90% compression ratios.

Just make sure that you choose a compression tool/utility that has an easy method to access the compressed contents.

For example, in Linux I can use gzip. Then if I need to scan the log for a particular error I can use zcat and pipe through grep to check the contents (like this: “zcat <theGZlog>|grep error”).
I would not want to first uncompress the log, then go through it. That would be painful.

On Windows, I would need to use PowerShell to stream the zipped contents and pattern match. So a script or a separate utility would be needed to help with this.

Log Copy Naming

Once the log file has been copied to a new file and compressed, the name of the copied log file should be appended with the date time stamp.
This allows you to easily identify when a log file was copied.
Just bear in mind that if you run your log rotation at exactly midnight, the date time stamp will be for the current day, but the log contents will be for the previous day!

The exception to this rule is the ASE auto-rotated copies of <SID>.log.
These copies already have a date time stamp on the end, so we don’t need another one.

Sample Log Rotation Config

Here’s a sample logrotate configuration file that includes all that we have mentioned above.
As part of the logrotate setup we have a designated “status” file which is used by logrotate to record its current state of play, in preparation for its next execution.

The call to logrotate looks like this: logrotate --state <state-file> <config-file>

# logrotate configuration file. 
# This file defines the config which logrotate uses.
################################################ 
# The global defaults are established first.
# compress all files using default gzip -9. 
compress
# Don't rotate a file if it's empty. 
notifempty
# Ignore if missing. 
missingok
# Add date as extension instead of just a number. 
dateext 
################################################
# Each section below is specific to a log file match. 
# The contents of the section can override the global defaults.
#
# The ASE dataserver log. 
/sybase/[A-Z][0-9][A-Z]/ASE-[0-9][0-9]*/install/[A-Z][0-9][A-Z].log { 
# Truncate the original log once rotated. 
copytruncate 
# Permit 30 rotations. 
rotate 30 
# Rotate the file based on size. 
size 5M 
}
#
# The auto-rotated ASE dataserver log after a restart of ASE. 
/sybase/[A-Z][0-9][A-Z]/ASE-[0-9][0-9]*/install/[A-Z][0-9][A-Z].log.[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]_[0-9][0-9][0-9][0-9][0-9][0-9] { 
# We just zip this file. 
# We don't truncate it. 
nocopytruncate 
# We don't copy it. 
nocopy 
# We don't add the date on the end; it's already there. 
nodateext 
# Check daily for these files. 
daily 
# Permit 10 rotations = 10 restarts of ASE. 
rotate 10 
}
#
# The ASE backup server log. 
/sybase/[A-Z][0-9][A-Z]/ASE-[0-9][0-9]*/install/*_BS.log { 
# Truncate the original log once rotated. 
copytruncate 
# Permit 30 rotations. 
rotate 30 
# Rotate the file daily. 
daily 
}
#
# The ASE jsagent server log. 
/sybase/[A-Z][0-9][A-Z]/ASE-[0-9][0-9]*/install/*_JSAGENT.log { 
# Truncate the original log once rotated. 
copytruncate 
# Permit 30 rotations. 
rotate 30 
# Rotate the file based on size. 
size 5M 
}
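To run this daily, a crontab entry along the following lines (for a user with write access to the ASE log directories) would do the job; the paths here are illustrative:

0 1 * * * /usr/sbin/logrotate --state /sybase/<SID>/logrotate.status /sybase/<SID>/logrotate.conf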

The above is just the start.
As I mentioned right at the beginning, if you have an ASE instance with HADR configured, then you should also include the 4 other log files (1 for RMA and 3 for RepServer).

There is also the issue of the <SID>.cfg configuration file, which is copied to <SID>.nnnnnnnnn on each ASE instance configuration change. You can have a lot of these files build up.
It’s good practice to include this file in your backup regime, but you can also include it in a rotation schedule and remove the ASE created copies.

Executing File Removal

In the logrotate configuration file, in every log rotation file block, it is possible to include a script block that gets executed every time a successful file rotation is performed.
In the logrotate manual, look at the “lastaction” block and include the removal of old dumphist files in there using the “rm” command.
A word of warning when using these script blocks: logrotate has the ability to pass the path and name of the rotated file as the 1st parameter (i.e. as “$1”). I have found that in my specific version it includes a space on the end. So, when rotating “somefile.ext.001”, a script block in “lastaction” with a pattern like “rm $1.[0-9][0-9][0-9]” is seen by the shell as two separate arguments; it would remove the file passed in as “$1” (somefile.ext) plus anything it finds matching the pattern “.[0-9][0-9][0-9]”. Quite problematic and dangerous.
Instead, I found it more reliable to code the file pattern manually, just like in the main part of the file block.
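As a sketch, here is what a dumphist block with such a “lastaction” clean-up could look like (paths and retention are illustrative; note the manually coded pattern rather than “$1”):

# The dumphist file, plus removal of the date-stamped copies that ASE creates. 
/sybase/[A-Z][0-9][A-Z]/ASE-[0-9][0-9]*/dumphist { 
# Truncate the original file once rotated. 
copytruncate 
# Permit 30 rotations. 
rotate 30 
# Rotate the file daily. 
daily 
# After a successful rotation, remove the ASE-created dumphist copies. 
lastaction 
rm -f /sybase/[A-Z][0-9][A-Z]/ASE-[0-9][0-9]*/dumphist.[0-9]* 
endscript 
}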

Rotation Fun with Windows

Contrary to the header, it’s not fun to try and rotate logs on Windows.
There are generally 2 issues I’ve found:

  1. Windows locks files and can cause issues with log file truncation unless you use a utility able to handle this (LogRotateWin can).
  2. There is no built-in log rotation utility on Windows.

If you have a heterogeneous landscape you generally do not want to be maintaining too many different tools and configurations.
I needed to rotate ASE logs on Linux and Windows, so I found that using the free LogRotateWin sort of worked well. It is mostly compatible with the same configuration options as the Linux version.
It does not support all the configuration settings of the Linux version. For example, “notifempty” is not supported and throws an error in the debug output.
It also does not handle pattern matching properly. What it seems to do is traverse the whole directory structure from the moment it sees a wildcard, looking for files at each branch, instead of following my specified pattern.
An example being pattern: F:\sybase\???\ASE-16_0\???.log.
The LogRotateWin tool seems to look through all directories under “F:\sybase”, trying to match *files* and folders with wildcard “???”, when it is clearly a folder that I have specified.
For this reason, I actually found it far more reliable to use PowerShell to transform an existing configuration file and find and replace those wildcards with the actual file names.
I used PowerShell to find the files I needed to be rotated and adjust a temporary version of the LogRotateWin configuration.
In fact this worked so well that I went a step further! I created some PowerShell to download my Linux logrotate configuration file and adjust it to make it LogRotateWin compatible: swapping the slash directions, pre-finding the files (by traversing the system drives) and replacing the wildcards, then also removing unsupported features like “notifempty”.
It also swapped my Linux “rm” syntax for a call to “pwsh” (PowerShell) and the short-hand call to the “remove-item” (rm) cmdlet.
The PowerShell script then calls the LogRotateWin with the temporary configuration file.

Not fun, but it works and it means I only have 1 configuration file I need to worry about.

Summary

A good log rotation strategy for SAP ASE databases is important to maintain the ASE text log files effectively.
It’s not just about disk space management and housekeeping, but it is also about retaining vital information to help resolve issues and having easy and quick access to that information.

Maintaining a consistent approach across heterogeneous landscapes is not easy, and may require you to be creative in your method.
You should seek to harmonise configuration as much as possible and reduce maintenance effort.

As part of the ASE log rotation configuration, you should also look to include the dumphist file and also the HADR log files (from SRS and RMA).

A final comment: in some cases, I’ve witnessed an ASE system error causing thousands of retries of “sp_configure”, which in turn created thousands of versions of the <SID>.cfg file. So many, that it took multiple executions of “rm” to remove them.
You may wish to look at including these <SID>.cfg file copies in a rotation configuration as a preventative measure.
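A hedged sketch of such a block (the path and retention are illustrative; the numbered copies usually sit alongside the <SID>.cfg file itself):

# The ASE-created copies of the <SID>.cfg file (one per configuration change). 
/sybase/[A-Z][0-9][A-Z]/[A-Z][0-9][A-Z].[0-9]* { 
# We just zip these files; we don't truncate or copy them. 
nocopytruncate 
nocopy 
# No date extension; the copy number already identifies each file. 
nodateext 
# Check daily for these files. 
daily 
# Keep the last 30 configuration-change copies. 
rotate 30 
}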