This blog contains experience gained over years of implementing (and de-implementing) large-scale IT applications and software.

HowTo: Call sp_dump_history with Purge in ASE 16.0 SP03

Due to a bug in ASE 16.0 SP03 PL08 (and possibly earlier versions), the sp_dump_history call does not work properly.
It just returns an empty screen.
When you look at the procedure code, you will see that it fails to find the tempdb for whatever reason.

We can use the sp_dump_history procedure to look at the dump history (backup history) for an ASE database.
From ASE 15.7 onwards we can also use it to purge the history, reducing the size of the dumphist file and in some cases improving the performance of backups themselves (see SAP note 2753062 for details on issues).

By prefixing the tempdb onto the call from the master database, you can get sp_dump_history to work as it should.

Below is an example that purges all backup records from the dump history file (dumphist) from before January 29th at 15:00:00:

use master
go

tempdb..sp_dump_history @operation='purge', @until_time="january 29,2020 15:00:00:00"
go

Tip: In Linux the correct date format can be obtained with the date command:

date "+%b %d,%Y %T"

As another tip, you can use the sp_helptext procedure to look at the source code of any stored procedure. Take a look at help.sap.com for the parameters of sp_helptext.
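For example, to view the source of sp_dump_history itself (system procedures live in the sybsystemprocs database):

use sybsystemprocs
go
sp_helptext 'sp_dump_history'
go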

ASE Issue During Setuphadr Removes Logins on Companion

There’s a specific scenario that can occur during the setup of SAP ASE HADR, which can leave the companion (secondary) database with no logins or roles. You will be unable to log into it. It will be completely unusable.

This issue could apply to any ASE 16.0 SP03 where auditing is turned on (the default for PL06 and above), but I have seen it on two systems: one on ASE 16.0 SP03 PL08 and one on PL08-HF1.

The symptoms are seen during setuphadr in the step for materialisation of the master database; the setuphadr screen output doesn’t change. Eventually it displays a timeout.
When doing a “ps” on the primary ASE server, you can see a number of BCP processes that appear to be “stuck”. They are connecting to the companion database.
Inside the repdata directory location, the BCP data and out files are present but do not change.

When attempting to restart the setuphadr process, it fails because it cannot log in to the companion ASE.
When you try manually to log in to the companion ASE using isql, you are unable to log in as sapsa or sapsso. You can log in as “sa” only, but you have no roles/privileges to do anything.
If you check the list of logins and roles in the companion master DB (table master..syslogins), the tables are empty; all logins have been removed!
You may also see login errors mentioned in the RMA log file on the companion server (the active SRS) with “SQLState: ZZZZZ”.

From what I can see, the cause of the issue is ASE auditing. It is enabled on the companion DB (the sp_configure parameter “auditing” has a value of “1”).
The audit table is full or nearly full, and/or the sybsecurity database or its transaction log is full or nearly full.
This prevents the BCP processes from successfully loading the table data from the primary.
It doesn’t seem to prevent the truncation of the companion master DB tables, though, which leaves the master DB with no data in its roles and logins tables.
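If you suspect you are in this state, a quick check might look something like this (sp_configure displays the current auditing value, and sp_helpdb shows the space usage of sybsecurity and its log):

sp_configure 'auditing'
go
sp_helpdb sybsecurity
go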

The workaround that I have found that works most effectively is to completely disable ASE auditing (auditing = 0) in the companion ASE instance, to prevent the issue from happening in the first place.
There are a couple of parameters that can change the way auditing works, and perhaps adjusting the filtering to prevent auditing of system users would also solve the problem. But this is the setup of replication, so I don’t see why you would want that process audited at all. You can always re-enable auditing after the HADR setup is complete.
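As a sketch, the disable/re-enable would look something like this (auditing changes normally need the sso role, e.g. sapsso):

sp_configure 'auditing', 0
go
-- ...run setuphadr, then once HADR setup is complete:
sp_configure 'auditing', 1
go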

Another prevention tip is to ensure that the sybsecurity database and its transaction log have free space before the setuphadr process is started.

What if you have already hit the above issue and are looking for help right now?
If the above issue is hit, then the companion master DB could be restored from backup, except that you cannot log in to it to start the LOAD command.
You could copy the master.dat from the primary, but the in-built GUID that gets set on DB creation would then be identical to the primary's (companion and primary should be different).

This leaves two possible options.

Option 1:

  1. Re-create the companion master DB following SAP note 2631728 “How to rebuild master device and master database from backup – SAP ASE”.
  2. Restore the master DB from backup (this could be a backup from the primary master DB).
  3. If a backup from the primary was used, you may need to re-create the SID database (to be on the safe side) using the DDLGEN commands.

Option 2:

  • Blast the companion ASE away and re-install using SWPM.

Good luck, and make sure you take regular backups 🙂

Tuning SAP ASE – Decompressing Small Lookup Tables

By default, for SAP applications based on SAP Netweaver, SAP ASE 16.0 will compress the data pages within the tables inside the database.
In some scenarios this will not help with performance.

In this brief post I explain the scenario, correct a SAP note, and show you how to check and adjust the tables in a safe way (do not just follow the note blindly; you could corrupt the table data).

Data Page Compression

In ASE 16.0, table data pages can be compressed.

For SAP Netweaver based SAP systems such as SAP ERP, the use of table data page compression can help improve performance in two ways:

  1. Reduced I/O
    Reading and writing compressed data pages to and from disk is more efficient. Imagine it’s a bit like copying a 4MB zip file on your computer instead of an uncompressed 40MB text file!
  2. Reduced Memory Consumption
    In memory (in the data cache), the data pages can be held in compressed form and may not need to be uncompressed in order to satisfy certain queries.

The above sounds good and indeed it is good.
But there are some borderline cases where compression on a table may not be beneficial.

The Drawback with Compression

In some cases, compression can affect the response time of queries in very specific circumstances.

If you have “small” tables and those tables are accessed frequently with SQL queries that perform full table scans, then the extra CPU time required to decompress the table contents in the data cache can impact the response time.

How do We Identify Candidate Tables?

If there are tables which are adversely affected by having compression turned on, how do we identify those tables?

We can follow SAP note 1775764, which provides a nice simple SQL statement to list likely candidates. Great!

However, the provided SQL statement actually does not do what it should!
Oh dear!
Instead, the SQL incorrectly lists tables that are definitely not good candidates for decompression. In fact, some of them are the complete opposite of good candidates!

The SQL taken from the SAP note is:

--- DO NOT RUN THIS IT IS WRONG!!! ---
use <SID>
go
select top 10 ObjectName, 
LogicalReads 
from master..monOpenObjectActivity 
where DBID = db_id() 
and LogicalReads > 1000000 
-->   and data_pages(DBID,0) < 6400      <-- WRONG LINE HERE! 
and object_attr(DBName || '.SAPSR3.' || ObjectName,'compression') <> 'none' 
order by LogicalReads desc
go

You will see in the above SQL taken from the SAP note that the “data_pages” function call has 2 parameters: the ID of the current database and an object ID. Except the object ID, on the line marked above, has been hard-coded to “0”.

This causes a value of “0” to be returned from the “data_pages” call, which is obviously always less than 6400.

Essentially, any table that has compression enabled, with high numbers of LogicalReads (table scans in the data cache), will be reported as a candidate! How confusing.

Correcting the SQL

We can see that we need to correct the SQL.
Here’s how:

--- This is the corrected statement ---
use <SID>
go
select top 10 ObjectName, 
LogicalReads 
from master..monOpenObjectActivity 
where DBID = db_id() 
and LogicalReads > 1000000 
and data_pages(DBID,ObjectID) < 6400
and object_attr(DBName || '.SAPSR3.' || ObjectName,'compression') <> 'none' 
order by LogicalReads desc
go

It’s simple enough: we just substitute the incorrect “0” with the “ObjectID” column from the monOpenObjectActivity table we are selecting from.
This correction then allows the “data_pages” function to return the true page count for each table.
With the correction in place, tables with a data page count of less than 6400 pages (with 16KB pages, that means tables less than 100MB in size), with compression enabled and a high number of LogicalReads, will be listed as candidates. Yay!
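If you want to sanity-check a single table, you can call the same two functions directly. A quick sketch, using T000 purely as an illustration:

use <SID>
go
select data_pages(db_id(), object_id('SAPSR3.T000')),
       object_attr(db_name() || '.SAPSR3.T000', 'compression')
go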

Adjusting the Candidate Tables

Once you have your table list, you might now wish to turn off compression.
This is simple enough.
You can use the additional SQL supplied in the SAP note:

--- WARNING: check SAP note 2614712 first! ---
use <SAPSID>
go
setuser 'SAPSR3'
set quoted_identifier on
set chained on
go
alter table SAPSR3.<tablename> set compression = none
reorg rebuild SAPSR3.<tablename> [with online]  <-- check 2614712 first!
commit
go

As you can see, you will need to perform a REORG on the table.
The SQL statement specifies the optional “with online” option.
BE WARNED: there are lots of SAP notes about issues with REORG ONLINE, so please ensure that you check SAP note 2614712 first! Shame on SAP for not mentioning this in the SAP note!
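Once the REORG is complete, you can at least confirm that the attribute change took effect, using the same object_attr function from the candidate SQL; it should now return 'none':

select object_attr('<SID>.SAPSR3.<tablename>', 'compression')
go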

Checking the Result

We’ve gone through the effort of adjusting tables that we think might benefit from no compression, but how do we now confirm that we have positively impacted the response time?

Well that’s a difficult one. I’ll leave that for another post.

Summary

The summary for this post, is a little bit “Boris Johnson”:

  • Follow the SAP notes.
  • But don’t follow the SAP notes.
  • Check and double check what is output from commands, don’t just take the output as being correct!
  • Check your results.
  • Compressing tables is a good thing.
  • Compressing specific tables may not be a good thing.

SAP ASE Instance Log File Rotation

SAP ASE does a bare minimum of rotating its text log files.
I’m talking about the logs that you can use to see what’s going on under the covers.
Each time the ASE instance is restarted, it pushes the previous instance log file to a date-stamped file and starts afresh. This is not great if you have had issues with the database and the log has grown huge, because who restarts the database every day?

ASE also does not preserve the backupserver log or the job server agent log, which could be critical in solving problems.
Instead, it’s better to devise your own custom daily log rotation process.

In this post I describe the main SAP ASE log files, and my recommendation for a rotation configuration on Linux (yay!) and Windows (here be dragons!).

What Needs to Be Rotated?

The following are standard ASE 16.0 text log files that just keep on growing:

  • ASE database instance log: /sybase/<SID>/ASE-16_0/install/<SID>.log
  • ASE backup server log: /sybase/<SID>/ASE-16_0/install/<SID>_BS.log
  • ASE Job Server log: /sybase/<SID>/ASE-16_0/install/<SID>_JSAGENT.log

I do not include the dumphist file, which (if dumphistory is enabled) gets backed up daily from /sybase/<SID>/ASE-16_0/dumphist to /sybase/<SID>/ASE-16_0/dumphist.nnnnnnnnn.
These can also accumulate, so it might be best to just clear them down, especially if (as is good practice) you copy them off with your database backups.

NOTE: For ASE instances with HADR installed (a.k.a. SAP Replication Server) there are an additional 4 log files which should be rotated, but I won’t go into them in this post. (Hire me, and I’ll tell you.)

What Happens When ASE Rotates Those Logs?

When the ASE instance is restarted, some of the above standard logs are rotated and some are cleared down. For the ones that are rotated, there is no end to the rotation, they keep building up. They are never removed.

As part of your log rotation solution, you should also rotate the ASE rotated logs, making sure that they don’t build up over the years of ASE instance restarts (especially if starting/stopping systems frequently in The Cloud).

This is what happens to the auto-rotated logs:

  • <SID>.log is moved to <SID>.log.nnnnnnnn_nnnnnn
  • <SID>_BS.log is cleared down.
  • <SID>_JSAGENT.log is cleared down.

We can see that for those log files that are cleared down, a daily rotation should capture them, but it will be hit and miss. If the ASE instance is restarted part-way through the day, then the morning's logs will not have been preserved (unless you remember to manually run your log rotation). There’s not much we can do about this.

For the log that is auto-rotated by ASE (on restart), we also need to capture that in our rotation schedule. We may choose to just compress it and only retain a specific number of copies.

What Log Retention Works Best?

For production systems, I usually say 30 rotations is ideal (1 month).
It’s good to keep the logs so you can see when a problem may have started. Critically, this might help you decide how far back you may need to restore a database (in the dire event of corruption). If you keep 30 days of database backups, then having the <SID>_BS.log for 30 days might just help.

Rotation Schedule

With tools like logrotate, you can decide to rotate logs on a time basis, or even on a size basis.

Since SAP ASE automatically rotates the <SID>.log file on instance restart, it would get a bit messy if we also had a daily rotation on top of this.
Instead, it works a little better to rotate this file on a size basis. Keeping <SID>.log within a defined file size means that you prevent the log file from growing huge, while also reducing the number of rotations within rotations.

We’ve already mentioned that the ASE backup server log is quite important for helping us to understand when backups were taken and whether there were any issues during a backup. This is the reason we rotate this file daily, no matter what size it is.

The ASE job server log is a little less critical, so we really don’t need it rotated every day. However, it can grow quite big if the job server experiences a database space issue, so instead we can choose to rotate it on size.

With the above in mind, this is what I would recommend for the ASE log files:

  • <SID>.log rotated on size >5MB with 30 rotations.
  • <SID>_BS.log rotated daily with 30 rotations.
  • <SID>_JSAGENT.log rotated on size >5MB with 30 rotations.

For ASE rotated copies of <SID>.log (called <SID>.log.nnnnnnnn_nnnnnn), it’s best to retain something like 10 rotations for production systems.
This captures the logs from the last 10 restarts. If you are restarting the ASE instance every day, then you may wish to retain more than 10.

Log File Truncation

During the rotation process, you have an option to keep the existing log file or truncate it. My preference is to truncate the original log file, to prevent the log from growing.
If you don’t truncate the original, then you will have a cumulative collection of logs for 30 days, with each day’s copy being bigger than the last and including the previous day’s content.

The exception to this rule is obviously the ASE auto-rotated copies of the <SID>.log. We don’t want to be truncating that at all.

Log File Compression

After the copy of the logs has been taken, the copy can then be compressed. This makes complete sense, because text files (these are ASCII text logs) compress really, really well. You should expect ~90% compression ratios.

Just make sure that you choose a compression tool/utility that has an easy method to access the compressed contents.

For example, in Linux I can use gzip. Then if I need to scan the log for a particular error I can use zcat and pipe through grep to check the contents (like this: “zcat <theGZlog>|grep error”).
I would not want to first uncompress the log, then go through it. That would be painful.

On Windows, I would need to use PowerShell to stream the zipped contents and pattern-match, so a script or a separate utility would be needed to help with this.

Log Copy Naming

Once the log file has been copied to a new file and compressed, the name of the copied log file should be appended with the date time stamp.
This allows you to easily identify when a log file was copied.
Just bear in mind that if you run your log rotation at exactly midnight, the date time stamp will be for the current day, but the log contents will be for the previous day!

The exception to this rule is the ASE auto-rotated copies of <SID>.log.
These copies already have a date time stamp on the end, so we don’t need another one.

Sample Log Rotation Config

Here’s a sample logrotate configuration file that includes all that we have mentioned above.
As part of the logrotate setup we have a designated “status” file which is used by logrotate to record its current state of play, in preparation for its next execution.

The call to logrotate looks like this: logrotate --state <state-file> <config-file>
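For example (the paths and schedule here are just assumptions for illustration), a daily cron entry for the sybase software owner might be:

# Hypothetical cron entry: run the ASE log rotation daily at 23:55.
55 23 * * * /usr/sbin/logrotate --state /sybase/admin/logrotate.status /sybase/admin/ase-logrotate.conf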

# logrotate configuration file. 
# This file defines the config which logrotate uses.
################################################ 
# The global defaults are established first.
# compress all files using default gzip -9. 
compress
# Don't rotate a file if it's empty. 
notifempty
# Ignore if missing. 
missingok
# Add date as extension instead of just a number. 
dateext 
################################################
# Each section below is specific to a log file match. 
# The contents of the section can override the global defaults.
#
# The ASE dataserver log. 
/sybase/[A-Z][0-9][A-Z]/ASE-[0-9][0-9]*/install/[A-Z][0-9][A-Z].log { 
# Truncate the original log once rotated. 
copytruncate 
# Permit 30 rotations. 
rotate 30 
# Rotate the file based on size. 
size 5M 
}
#
# The auto-rotated ASE dataserver log after a restart of ASE. 
/sybase/[A-Z][0-9][A-Z]/ASE-[0-9][0-9]*/install/[A-Z][0-9][A-Z].log.[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]_[0-9][0-9][0-9][0-9][0-9][0-9] { 
# We just zip this file. 
# We don't truncate it. 
nocopytruncate 
# We don't copy it. 
nocopy 
# We don't add the date on the end; it's already there. 
nodateext 
# Check daily for these files. 
daily 
# Permit 10 rotations = 10 restarts of ASE. 
rotate 10 
}
#
# The ASE backup server log. 
/sybase/[A-Z][0-9][A-Z]/ASE-[0-9][0-9]*/install/*_BS.log { 
# Truncate the original log once rotated. 
copytruncate 
# Permit 30 rotations. 
rotate 30 
# Rotate the file daily. 
daily 
}
#
# The ASE jsagent server log. 
/sybase/[A-Z][0-9][A-Z]/ASE-[0-9][0-9]*/install/*_JSAGENT.log { 
# Truncate the original log once rotated. 
copytruncate 
# Permit 30 rotations. 
rotate 30 
# Rotate the file based on size. 
size 5M 
}

The above is just the start.
As I mentioned right at the beginning, if you have an ASE instance with HADR configured, then you should also include the 4 other log files (1 for RMA and 3 for RepServer).

There is also the issue of the <SID>.cfg configuration file, which is copied to <SID>.nnnnnnnnn on each ASE instance configuration change. A lot of these files can build up.
It’s good practice to include this file in your backup regime, but you can also include it in a rotation schedule and remove the ASE-created copies.

Executing File Removal

In the logrotate configuration file, in every log rotation file block, it is possible to include a script block that gets executed every time a successful file rotation is performed.
In the logrotate manual, look at the “lastaction” block and include the removal of old dumphist files in there using the “rm” command.
A word of warning when using these script blocks: logrotate has the ability to pass the path and name of the rotated file as the 1st parameter (i.e. as “$1”). I have found that in my specific version it includes a space on the end. Rotating “somefile.ext.001” and then having a script block in “lastaction” with a pattern like “rm $1.[0-9][0-9][0-9]” would therefore actually remove the file passed in as “$1” (somefile.ext), plus anything that matches the pattern “.[0-9][0-9][0-9]”. Quite problematic and dangerous.
Instead, I found it more reliable to code the file pattern manually, just like in the main part of the file block.
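As a sketch (paths assumed, and following my advice above to avoid relying on “$1”), a dumphist file block with a “lastaction” clean-up might look like this:

# The dumphist file, plus clean-up of the ASE-created copies. 
/sybase/[A-Z][0-9][A-Z]/ASE-[0-9][0-9]*/dumphist { 
# Truncate the original once rotated. 
copytruncate 
# Permit 30 rotations. 
rotate 30 
# Rotate the file daily. 
daily 
# After a successful rotation, remove the copies made by ASE itself. 
lastaction 
rm -f /sybase/[A-Z][0-9][A-Z]/ASE-[0-9][0-9]*/dumphist.[0-9]* 
endscript 
}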

Rotation Fun with Windows

Contrary to the header, it’s not fun to try and rotate logs on Windows.
There are generally 2 issues I’ve found:

  1. Windows locks files and can cause issues with log file truncation unless you use a utility able to handle this (LogRotateWin can).
  2. There is no built-in log rotation utility on Windows.

If you have a heterogeneous landscape you generally do not want to be maintaining too many different tools and configurations.
I needed to rotate ASE logs on Linux and Windows, and I found that the free LogRotateWin worked reasonably well. It is mostly compatible with the same configuration options as the Linux version.
It does not support all the configuration settings of the Linux version, though. For example, “notifempty” is not supported and throws an error in the debug output.
It also does not handle pattern matching properly. What it seems to do is traverse the whole directory structure from the moment it sees a wildcard, looking for files at each branch, instead of following my specified pattern.
An example being the pattern: F:\sybase\???\ASE-16_0\???.log.
The LogRotateWin tool seems to look through all directories under “F:\sybase”, trying to match *files* and folders with the wildcard “???”, when it is clearly a folder that I have specified.
For this reason, I actually found it far more reliable to use PowerShell to transform an existing configuration file, finding and replacing those wildcards with the actual file names.
I used PowerShell to find the files that needed to be rotated and adjust a temporary version of the LogRotateWin configuration.
In fact, this worked so well that I went a step further! I created some PowerShell to download my Linux logrotate configuration file and adjust it to make it LogRotateWin compatible: swapping the slash directions, pre-finding the files (by traversing the system drives) and replacing the wildcards, then removing unsupported features like “notifempty”.
It also swapped my Linux “rm” syntax for a call to “pwsh” (PowerShell) and the short-hand alias for the “Remove-Item” (rm) cmdlet.
The PowerShell script then calls LogRotateWin with the temporary configuration file.

Not fun, but it works and it means I only have 1 configuration file I need to worry about.

Summary:

A good log rotation strategy for SAP ASE databases is important to maintain the ASE text log files effectively.
It’s not just about disk space management and housekeeping, but it is also about retaining vital information to help resolve issues and having easy and quick access to that information.

Maintaining a consistent approach across heterogeneous landscapes is not easy, and may require you to be creative in your method.
You should seek to harmonise configuration as much as possible and reduce maintenance effort.

As part of the ASE log rotation configuration, you should also look to include the dumphist file and also the HADR log files (from SRS and RMA).

A final comment: in some cases, I’ve witnessed an ASE system error that caused thousands of retries of “sp_configure”, which in turn created thousands of versions of the <SID>.cfg file. So many, that it took multiple executions of “rm” to remove them.
You may wish to look at including these <SID>.cfg file copies in a rotation configuration as a preventative measure.

Korn Shell Calling SAP HANA – Hello Hello!

So you’ve automated SAP HANA stuff huh?
What tools do you use?
Python? Chef? Puppet? Ansible? DSC/Powershell?

No. I use Korn shell. ¯\_(ツ)_/¯

Me, Trying to Support Korn…

I find Korn shell is a lowest common denominator across many Linux/Unix systems, and also extremely simple to support.
It does exactly what I need.

For me it’s readable, fairly self-explanatory, easily editable and dependable.

…and Failing?

But I do know what you’re saying:

  • there’s no built in version control
  • it’s not easy to debug
  • it’s definitely not very cool 🤓
  • you can’t easily do offline development
  • my Grandad told me about something called Korn shell

Have You Considered…

If you have no budget for tools, then you can start automating by using what you already have. Why not?
Don’t wait for the right tool; start somewhere, and only then will you understand what works for you and what doesn’t.

Sometimes it’s not about budget. There are companies out there that do not allow certain sets of utilities and tools to be installed on servers, because they can provide too much help to hackers. Guess what they do allow? Yep, Korn shell is allowed.

Let’s Get Started

Here’s a sample function to run some SQL in HANA by calling hdbsql (delivered with the HANA client) and returning the output:

#!/bin/ksh
function run_sql {
   typeset -i l_inst="${1}" 
   typeset l_db="${2}" 
   typeset l_user="${3}" 
   typeset l_pw="${4}" 
   typeset l_col_sep="${5}" 
   typeset l_sql="${6}" 
   typeset l_output="" 
   typeset l_auth="" 
   typeset -i l_ret=0

   # Check if PW is blank; if so, use hdbuserstore (-U) instead.
   if [[ -n "${l_pw}" && "${l_pw}" != " " ]] ; then
      l_auth="-u ${l_user} -p ${l_pw}"
   else
      l_auth="-U ${l_user}"
   fi

   l_output="$(hdbsql -quiet -x -z -a -C -j -i ${l_inst} ${l_auth} -d ${l_db} -F "${l_col_sep}"<<-EOF1 2>>/tmp/some-script.log 
		${l_sql}; 
		quit 
EOF1
   )"
   
   l_ret=$?

   # For HANA 1.0 we need to trim the first 6 lines of output, because it doesn't understand "-quiet". 
   #if [[ "$(check_major_version)" -lt 2 ]] ; then 
      # l_output="$(print "${l_output}"| tail -n +7)" 
   #fi

   print "${l_output}" 
   return $l_ret 

}

To call the above function, we then just do (in the same script):

l_result="$(run_sql "10" "SystemDB" "SYSTEM" "SYSTEMPW" " " "ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('persistence','log_mode')='overwrite' WITH RECONFIGURE")"

We are passing in the HANA instance number 10, you can use whatever your instance number is.

We can check the function return code (did the function return cleanly) like so:

if [[ $? -ne 0 ]] ; then 
   print "FAILED" 
   exit 1; 
fi

Here’s what we’re passing in our call to hdbsql (you can find this output by calling “hdbsql --help”):

-i instance number of the database engine
-d name of the database to connect
-U use credentials from user store
-u user name to connect
-p password to connect
-x run quietly (no messages, only query output)
-quiet Do not print the welcome screen
-F <sep> use <sep> as the field separator (default: ',')
-C suppress escape output format
-j switch the page by page scroll output off
-Q show each column on a separate line
-a do not print any header for SELECT commands

If you wanted to return a value, then the “l_result” variable would contain the output.

Ideally, the function we wrote would be put into a chunk of modular code that could be referenced time and again from other Korn shell scripts.

You would also be looking to create some sets of standard functions for logging messages, to help with debugging; a minimal example follows. You can make it as complex as you wish.
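As a starting point, a minimal logging helper might be as simple as this (the log file name is an assumption):

# Print a timestamped message to the screen and append it to a log file.
function log_msg {
   print "$(date '+%Y-%m-%d %H:%M:%S') ${1}" | tee -a /tmp/some-script.log
}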

In the call to “run_sql” we pass a column separator.
I usually like to use a “|” (pipe), then parse the returned values using the “awk” utility like so:

l_result="$(run_sql "10" "SystemDB" "SYSTEM" "SYSTEMPW" "|" "SELECT file_name,layer_name,section,key, value FROM SYS.M_INIFILE_CONTENTS WHERE layer_name='SYSTEM'")"

echo "${l_result}" | /bin/awk -F'|' '{ print $2" "$3" "$4 }'

When we execute the script we get the first 3 columns like so:

daemon.ini SYSTEM daemon 
diserver.ini SYSTEM communication 
global.ini SYSTEM auditing 
configuration global.ini SYSTEM 
backup global.ini SYSTEM
...

Obviously we don’t really embed the password in the script; it gets passed in.
You can either pass it in using the command line parameter method (./myscript.ksh someparam) or via the Linux environment variables (export myparam=something; ./myscript.ksh).
If you want you can even pipe it in (echo “myparam”| ./myscript.ksh) and “read” it into a variable.
You can also take a look at the “expect” utility to automate command line input.
Also, take a look at the “hdbuserstore” utility to store credentials for automation scripts (remember to set appropriately secure privileges on these database users).
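As a sketch of the hdbuserstore route (the key name, host and port here are assumptions; 3<instance>13 is typically the SystemDB SQL port), you store the credentials once and then pass a blank password so that run_sql switches to “-U”:

hdbuserstore SET ADMINKEY myhost:31013 SYSTEM MyPassword
l_result="$(run_sql "10" "SystemDB" "ADMINKEY" "" "|" "SELECT * FROM DUMMY")"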

That’s all there is to it for you to get started using Korn shell to call HANA.