This blog contains experience gained over the years of implementing (and de-implementing) large scale IT applications/software.

Create a SAML Key & Cert in PowerShell for PowerBI to HANA SSO

You’ve seen the Microsoft documentation, and you want to create a private key and a certificate that can be used with a SAML assertion for single sign-on between the Microsoft PowerBI Gateway and an SAP HANA 2.0 database.

Where’s the problem?

The problem is that the Microsoft documentation simply says to use “openssl”. Not only that, it suggests that you first create a CA (Certificate Authority) key and certificate, then create a separate key and certificate for the actual IdP (which in this setup is the PowerBI Gateway) and sign it with the new CA certificate, creating a chain.

This is just not needed in SAML. In a SAML assertion, the certificate used for the signature doesn’t need to be signed/issued by any particular CA, because the certificate chain is not validated. The lifetime of the certificate can be loooong, because there is no chain to maintain. In the case of a compromised key, the IdP key and certificate would need to be re-created and the new certificate distributed to the target systems (service providers like the SAP HANA database).

In this post I’m actually addressing three issues with the Microsoft documentation (I’ve raised one on the GitHub page, but alas it has not been closed yet).

  1. No CA is needed. A self-signed IdP certificate works perfectly fine.
  2. OpenSSL is not needed. We can do it all in PowerShell (with an evil laugh added in for extra evilness).
  3. We don’t need to export the key and certificate into a PFX file, then import it into the Windows certificate store.

Show me the Code

After fiddling around with various iterations, I found that the following PowerShell code produces a key and certificate that are accepted by the PowerBI Gateway (the key uses a valid provider).
The certificate is also accepted by SAP HANA once you’ve imported it into the new Identity Provider that you create in the HANA Cockpit (or HANA Studio if you must – tut tut tut).

Here is the code to create a private key plus a certificate valid for 10 years from today, and store them in the Windows “LocalMachine” certificate store in the “Personal” folder on the PowerBI Gateway:

$params = @{
    Subject           = "CN=PowerBI HANA IdP,OU=MyDept,O=MyCompany,L=MyTown,ST=MyShire,C=GB"
    KeyAlgorithm      = 'RSA'
    KeyLength         = 2048
    NotAfter          = $(Get-Date).AddMonths(120)   # 120 months = 10 years from today
    FriendlyName      = "PowerBI HANA IdP Cert"
    KeyFriendlyName   = "PowerBI HANA IdP Cert"
    HashAlgorithm     = 'SHA256'
    KeyUsage          = 'None'
    SuppressOid       = '2.5.29.37'                  # don't add an Enhanced Key Usage extension
    CertStoreLocation = 'Cert:\LocalMachine\My'      # LocalMachine "Personal" store
    Provider          = 'Microsoft Enhanced RSA and AES Cryptographic Provider'
    KeySpec           = 'KeyExchange'
    TextExtension     = @("2.5.29.19={text}CA=true") # Basic Constraints: CA=true
}

$cert = New-SelfSignedCertificate @params

The code above generates an X.509 certificate that contains “CA=true” in the Basic Constraints extension.
This is the only part that may not actually be needed. I was really tired after all the investigation. Think of the time I have just saved you here!
If you don’t do this part and it still works, let me know.

Without the “Provider” parameter specifically set to “Microsoft Enhanced RSA and AES Cryptographic Provider”, the PowerBI Gateway would not select the key during the SSO connection test.
Instead it would throw an error like this in the GatewayErrors log:

InnerToString=<ccon>System.Security.Cryptography.CryptographicException: Invalid provider type specified.

I had to use the “SuppressOid” parameter to prevent the Enhanced Key Usage extension (OID 2.5.29.37) from being added, which happened even when I specifically set “KeyUsage = ‘None’”.
This is really just cosmetic, because I wanted the certificate to look as identical as possible to the openssl-generated certificate. If you don’t do this part and it still works, let me know.
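
If you want to double-check which extensions actually ended up in the certificate (for example, that the Enhanced Key Usage really was suppressed and Basic Constraints is present), you can list them straight from the $cert object returned above. A minimal sketch:

# List the friendly name and OID of every extension in the new certificate
$cert.Extensions | ForEach-Object { "{0} ({1})" -f $_.Oid.FriendlyName, $_.Oid.Value }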

I believe that the OpenSSL equivalent command lines are:

openssl req -x509 -sha256 -days 3650 -newkey rsa:2048 -keyout IdP_Key.pem -out IdP_Cert.pem -subj "/C=GB/ST=MyShire/L=MyTown/O=MyCompany/OU=MyDept/CN=PowerBI HANA IdP"
[enter my password]

openssl pkcs12 -export -out IdP_PFX.pfx -in IdP_Cert.pem -inkey IdP_Key.pem -passin pass:my-password -passout pass:my-password

By doing all the work in PowerShell, there is no need to enter passwords, or to export and transfer files (useful if you don’t have openssl.exe on the PowerBI Gateway).
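
One thing you do still need is the certificate’s public part for the Identity Provider you create in the HANA Cockpit. You can produce a Base64 (PEM) copy directly from the $cert object without touching the private key. A minimal sketch, assuming an example output path of C:\temp:

# Export the public certificate in Base64/PEM format for import into HANA
$base64 = [Convert]::ToBase64String($cert.RawData, [Base64FormattingOptions]::InsertLineBreaks)
Set-Content -Path 'C:\temp\IdP_Cert.cer' -Value ("-----BEGIN CERTIFICATE-----`n" + $base64 + "`n-----END CERTIFICATE-----")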

HowTo: Calculate SAP ASE HADR Replication Agent Buffer Pool Maximum Size

Scenario: In a SAP ASE HADR system, the primary ASE database error log contains the following warning:
“RepAgent(X): Autoscaling: Warning: The maximum configured buffer pool size:8 is less than the allocated buffer pool size:200. The maximum is set to 200”.

You would like to know: what is a good value for “buffer pool maximum size”, and how do you know whether enough memory is assigned to the Rep Agent?

If we reference the SAP documentation here: https://help.sap.com/viewer/075940003f1549159206fcc89d020515/16.0.3.8/en-US/fe0b1842bd1c1014858c88846330cc94.html
we can see that the “buffer pool” number is actually the number of buffers, which is also known as the “number of packages”.

If your current parameter values are as follows:
“buffer pool maximum size” = 200
“stream buffer size” = 1572864 bytes (1.5MB)
“rep agent memory size” = 500MB

We can calculate the amount of memory needed for the Rep Agent like so:
max 200 packages x 1.5MB per package = 300MB of Rep Agent memory.
Since 300MB fits comfortably within the configured “rep agent memory size” of 500MB, we know the Rep Agent has enough memory.

Now that we know how to work forwards in the calculation, we can work backwards, starting with the Rep Agent memory we want to allow:
300MB of Rep Agent memory / 1.5MB per package = 200 packages (the value for “buffer pool maximum size”).
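
For convenience, here is the same arithmetic as a small Python sketch (the variable names are just labels for the parameters above, not real ASE configuration keys):

# Rep Agent buffer pool sizing arithmetic
stream_buffer_size_mb = 1572864 / (1024.0 * 1024)  # "stream buffer size" = 1.5MB
buffer_pool_max = 200                              # "buffer pool maximum size" (packages)

# Forwards: memory needed by the configured number of packages
needed_mb = buffer_pool_max * stream_buffer_size_mb
print("Rep Agent memory needed: %sMB" % needed_mb)  # 300.0MB, fits within 500MB

# Backwards: how many packages fit in a given Rep Agent memory budget
budget_mb = 300
packages = int(budget_mb / stream_buffer_size_mb)
print("Buffer pool maximum size: %s" % packages)    # 200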

New SAP ASE Audit Logging Destination in 16.0.4

Let’s face it, auditing in SAP ASE 16 is difficult to configure, due to the requirement to produce your own stored procedure and to correctly size the rotating table setup, with multiple database segments for each of the multiple audit tables. Once configured, you then had the realisation that to obtain the records you needed to extract them from the database somehow, plus the problems of who does this task, what privileges they need, whether they themselves should be audited, etc.

Good news! With the introduction of ASE 16.0 SP04, configuring auditing in the SAP ASE database just got dramatically easier and way more usable!

Introducing “audit trail type”

In previous versions of the SAP ASE database, database audit records were stored only in the sybsecurity database, in a specific set of tables that you had to size, create and rotate yourself (or with a stored procedure).

Once the records were in the database, it was then up to you to define how and when those records would be analysed.
How complex the extraction process was depended on whether or not your SIEM tool supported direct ODBC/JDBC access to SAP ASE.

In SP04 a new parameter was introduced called “audit trail type”, which allows you to set the audit store to be “syslog”.
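
Based on my reading of the parameter documentation linked below, switching the store should be a single sp_configure call from isql; treat this as a sketch and verify the syntax against your exact patch level:

sp_configure "audit trail type", 0, "syslog"
go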

When setting the store to be “syslog”, the audit records are pushed out to the Linux syslogd daemon (or rsyslogd or syslog-ng) and written to the O/S defined location according to the configuration of syslogd.

Each audit record gets a tag/program name of “SAP_ASE_AUDIT”, which means you can define a custom syslogd log file to hold the records, and then also specify a custom rotation should you wish (see the example rule below).
Your syslogd logs may already be pulled into your SIEM tools, in which case you will simply need to classify and store those records for analysis.
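
As an illustration, with rsyslogd this could be a single property-based filter rule (the target file path here is just an example):

# /etc/rsyslog.d/sap_ase_audit.conf
# Route anything tagged SAP_ASE_AUDIT to its own file, then stop processing it
:programname, isequal, "SAP_ASE_AUDIT"    /var/log/sap_ase_audit.log
& stop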

With the new parameter set to “syslog” and the audit records being stored as file(s) on the file system, you will need to ensure that the file system has adequate space, and establish a comfortable file retention (logrotate) configuration so that audit records cannot fill the file system (which would prevent persistence of additional audit records).
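
For the retention piece, a minimal logrotate sketch for the custom file used in the example above (the retention values are purely illustrative):

# /etc/logrotate.d/sap_ase_audit
/var/log/sap_ase_audit.log {
    weekly
    rotate 8
    compress
    missingok
    notifempty
}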

Of course, should you enjoy torture, you can always go ahead and continue to use the database to store the audit records. Simply setting the new parameter “audit trail type” to “table” will store the audit records in the database, just like in previous versions of ASE.

Useful Links

What’s new in ASE 16.0.4

Parameter: audit trail type

HowTo: Extract SAP PI/PO Message Payload from SAP ASE DB

Sometimes you may need to directly extract the SAP PO message payload from the underlying database tables, such as BC_MSG_LOG in a SAP ASE 16.0 database.
This could also potentially be called: extracting hex-encoded ASCII data from an ASE image column, because the SAP PO tables use the ASE image data type to store the payload as an off-row LOB.

There are plenty of examples for doing this extraction in Oracle, but in ASE it is not so easy, because the message size could be larger than that supported by the page size of ASE (usually 16KB in an ASE for Business Suite).
This means you won’t be able to store it in a T-SQL variable and use the ASE functions.

Instead, we can use the below simple approach to extract the raw hex, and then use Python 2 to convert it to ASCII:

1. Execute the selection SQL using isql at the Linux command prompt on the database server:

isql -USAPSR3DB -S<SID> -w999 -X

select MSG_BYTES
from SAPSR3DB.BC_MSG_LOG
where MSG_ID='<YOUR MSG ID>'
and DIRECTION='OUTBOUND'
and LOG_LOCATION='MS'

go

The output will consist of hexadecimal text, which starts with “0x” and should look something like this:

0x2d2d5341505f6

2. Copy & paste the output into a text file on the Linux server (use your favourite text editor) and call the file data.txt.

3. Edit the data.txt file: remove the leading “0x” characters from the data, and remove all newlines and carriage returns in the file.

4. Now create a simple Python script (this is Python 2 syntax) to read the data from our file data.txt, translate it from hex to ASCII, and print it to the screen:

# Read the raw hex string from the file and decode it to ASCII (Python 2)
with open('data.txt', 'r') as file:
    data = file.read()
print data.decode('hex')

5. Run the Python script:

python ./myscript.py

The output should contain a header and a footer which start with: “--SAP_”.
If you get an error from the Python script, then it could be because there are additional newlines or carriage returns in the data.txt file.
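
If the server only has Python 3, str.decode('hex') no longer exists; a minimal equivalent sketch (assuming the payload decodes cleanly as UTF-8/ASCII) is:

# Python 3: bytes.fromhex() replaces the old str.decode('hex')
with open('data.txt', 'r') as f:
    data = f.read().strip()
print(bytes.fromhex(data).decode('utf-8', errors='replace'))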

SAP ASE HADR Overview – Part 5

In this multi-part post, I’m explaining the basics behind how SAP Replication Server works when replicating from a SAP ASE database to a SAP ASE database as part of an HADR (ASE always-on) setup for a SAP Business Suite system.
The post will be based on SAP ASE (Adaptive Server Enterprise) 16.0 HADR with SAP Replication Server (SRS) 16.0.

In Part 1 we started with:

  • What is SRS.
  • The basic premise of HADR with SRS.
  • What a transaction is.

In Part 2 we went on to discuss:

  • What is the ASE transaction log.
  • Which databases are replicated.
  • How do transactions move to the SRS.

In Part 3 we covered:

  • What is the Active SRS.
  • What are the Key Parts of SRS.
  • Replication Synchronisation Modes.
  • Open Transaction Handling.
  • What are the Implications of Replication Delay.

In Part 4 we stepped through the replication of Bob’s data change and saw how the transactional data was replicated first to the SRS and eventually to the companion (secondary) database.

This is the last part of my ASE 16.0 HADR mini-series, and in this final part I will discuss possible issues that can impact an ASE 16.0 HADR system, which might be useful when planning operational acceptance testing.

Companion ASE Unavailable

When the companion (secondary) database is unavailable for whatever reason (undergoing maintenance, broken, or otherwise out of action), the replicated transactions will still continue to move through the SRS until they get to the outbound queue (OBQ).
The assumption here is that the active SRS (on the same server as the companion DB) is still up and working.

In the OBQ the transactions will wait until the companion is available again.
As soon as the companion is available, the transactions will move from the OBQ into the companion.
The primary database will be unaffected during this time and transactions will continue through the SRS until the OBQ becomes full.

If the OBQ fills up, then transactions will start to accumulate in the inbound queue (IBQ).

If the companion database is down for a long period of time, you may need to make a decision:

  • Increase the stable queue partition space to hold more transactions.
    With the hope that the companion can be brought back online.
  • Disable replication, removing the Secondary Truncation Point (STP) from the primary database and acknowledging that the companion will need re-materialisation to bring it back in-sync with primary.

Inbound Queue Full

When the inbound queue becomes full, the transactions will start to back up into the simple persistent queue (SPQ).
You should note that the IBQ, by default, is only allowed to use up to 70% of the stable queue partition size; the rest is reserved for the OBQ. So “full” is not actually 100% of the queue space.

There can be two common reasons for the IBQ to fill:

  1. The OBQ is also full due to an issue with the connection to the companion ASE database, or the companion is unavailable.
    or
  2. There is an open transaction in the IBQ and the SRS is waiting for the “commit” or “rollback” command to come through from the SPQ for the open transaction.

To resolve the full IBQ, you are going to need to establish which of the two issues is occurring.
An easy way to do this is to check the OBQ fill level (see the commands below).
If transactions are moving from the OBQ to the companion, then the issue is an open transaction.
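
From memory, you can check the stable queue usage by connecting to the active SRS with isql and running the admin commands below; treat the exact output as version-dependent and check the SRS reference guide:

admin disk_space
go
admin who, sqm
go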

If an open transaction has caused the IBQ to fill, then the “commit” or “rollback” record could now be stuck in the SPQ. Since there is no space in the IBQ, the SRS is also unable to process the SPQ records, which leaves the IBQ open transaction in a stalemate situation.
You will need to make a decision:

  • Add more space to the stable queues to increase the IBQ size.
    or
  • Increase the proportion of stable queue size that the IBQ can use (if OBQ is empty).
    or
  • Zap (remove) the open transaction from the IBQ (this will mean data loss on the companion, so a rematerialise may be needed).

Normally, you can just add more space by adding another partition to the stable queues, hopefully resolve the issue, then remove the extra space again. How much is needed? There is no way to know in advance.
However, if you have to zap the open transaction, then make sure you dump the queue contents out first, so you can see what DML was open; you can then make a decision on how the missing transaction will affect the companion database integrity (it could negate the need for rematerialisation).

During this problematic period, the SPQ has remained functional, which has meant that the primary database has been able to continue sending transactions to the active SRS, allowing the primary to keep committing data in a timely manner. The primary database will have no issues.

Simple Persistent Queue Full

This is probably the most serious of the scenarios.
Once the SPQ becomes full, it immediately impacts the Replication Agent running in the primary ASE database.

Transactions are unable to move from the primary database transaction log to the active SRS.
You will start to get urgent warnings in the primary ASE database error log, informing you that the SPQ is full and that the buffers in the primary Replication Agent are full.

You will also see that the Replication Agent will be producing error messages and informing you that it has switched from synchronous to asynchronous replication mode.

The integrity of your production ASE database is now at risk!

It is possible you can add more space to the SPQ if you think you can resolve the problem in the IBQ and have the time to wait for the IBQ to empty!

You should note that this scenario has the same symptoms as an active SRS that is not working at all. If the SPQ is not available, then you need to troubleshoot using the SRS error log. It’s also possible you may have issues with the Replication Agent in the primary ASE.

Primary Transaction Log Full

If your active SRS has now filled up all the way back to the SPQ, your primary ASE database is at serious risk.
With the Replication Agent unable to move the Secondary Truncation Point (STP), the transaction log of the primary ASE will start to fill up.
You will not be able to release the log segments, even with a transaction log dump, because they are still needed by the Replication Agent.

You have the following options available:

  • Add more space to the transaction log by adding a new device (recommended, instead of expanding the existing device).
    This will give you more time to maybe fix the SRS.
    or
  • Disable replication, which removes the STP and allows the transaction log to be dumped. A rematerialise of the companion will be needed at a later date.
    or
  • If the transaction log is already full, then even trying to disable replication will not work (no transactions will be permitted).
    You will have to use DBCC to remove (ignore) the “ltm” (Log Transfer Manager) truncation point, which has a similar outcome to disabling replication (see the command below).
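
For reference, the command in question is dbcc settrunc, run in the primary database; a sketch (double-check it against the ASE documentation for your version before relying on it):

-- Ignore the secondary (LTM) truncation point so the log can be truncated.
-- The companion will need re-materialising afterwards.
dbcc settrunc(ltm, ignore)
go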

At this point, if all else fails and your transaction log is full, DO NOT RESTART THE PRIMARY ASE!
If you restart the primary ASE with a full transaction log, then you will be in a world of pain trying to get it to start up again.
You need to stop all processing (it won’t be working anyway), then try and resolve the issue.

Summary

This concludes the five-part mini-series on SAP ASE 16.0 HADR.
I hope it has given you enough of an overview to be able to explain a typical setup, the way replication occurs and the most common issues that put your production ASE database at risk.

Feel free to contact me with any questions and feedback.