This blog contains experience gained over the years of implementing (and de-implementing) large scale IT applications/software.

Power BI Desktop Single-Sign-On to SAP HANA

This post is all about Single-Sign-On (SSO) from Microsoft Power BI Desktop to an SAP HANA 2.0 database.
When you initially get the task to design and/or set this up, there are a few questions that need to be asked before you start setting up SSO.
In this post I will compare the two methods for single-sign-on, Kerberos and SAML, plus the use of the Power BI Gateway (also known as the “On-Premises Data Gateway”).

Questions to Ask

Before setting up SSO from Power BI Desktop to SAP HANA, you should ask these questions:

  1. Define what the Power BI Desktop end-user will be doing:
    Are the end users creating reports or are they consuming already published reports?

    End users that are creating new reports will need a direct Power BI Desktop to HANA connection with SSO (a.k.a SSO2DB). This will need to use Kerberos because SAML is not supported.
    End users that are consuming already published reports can use the On-Premises Data Gateway with SSO to access and execute the reports from Power BI Desktop. The On-Premises Data Gateway can use Kerberos or SAML.

  2. Define where Power BI Desktop will be running:
    Do end-users all have Windows accounts in the same domain?

    For direct to HANA connections with SSO, Kerberos is used and requires the end-user to be signed into Windows with a Windows account on the machine where Power BI Desktop is running.
    If the end-user does not have a Windows account (or they sign into Windows with a different, un-trusted domain) they can enter Windows credentials into the login box inside Power BI Desktop (this is not quite so seamless), but they will still need an AD account, and one that is federated with the domain to which SAP HANA has been joined (SAP HANA gets its own service account).

  3. If using On-Premises Data Gateway, define how many HANA systems will be connected to it:
    Is the On-Prem Data Gateway needing to connect to multiple HANA systems?

    When connecting the On-Premises Data Gateway to HANA using SAML for SSO, there is a one-to-many relationship for the SAML key and certificate generated for the On-Premises Data Gateway. The gateway can only use one certificate, and this one certificate has to be deployed and trusted on all the HANA systems it will connect to. Therefore, you really need an On-Premises Data Gateway for each HANA environment (or at least one for Production and one for Non-Production) to allow proper testing and better security for production.
  4. If planning to use Kerberos for SSO, identify corporate security policies & settings for Active Directory (AD) service accounts:
    Do AD service accounts require AES256-bit encryption?
    What are the server names and domains of the required domain Key Distribution Centre (KDC) server(s)?
    What will be the full UPN of the user account when using Kerberos?

    When AD service accounts require AES256-bit encryption, it changes the process for creating the keytab file that is placed onto the SAP HANA server.
    The KDC and domain information will be needed for the configuration of the HANA server’s krb5_hdb.conf file.
    The AD administrators should be asked for the above information.
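    As an illustration, this information feeds into the krb5_hdb.conf on the HANA server, which ends up looking something like the sketch below. The realm, KDC host and domain here are placeholders, and the enctype lines are only needed when your AD service accounts enforce AES256:

```
[libdefaults]
   default_realm = CORP.NET
   # only needed when the AD service account enforces AES256:
   default_tkt_enctypes = aes256-cts-hmac-sha1-96
   default_tgs_enctypes = aes256-cts-hmac-sha1-96

[realms]
   CORP.NET = {
      kdc = kdc01.corp.net
      admin_server = kdc01.corp.net
   }

[domain_realm]
   .corp.net = CORP.NET
```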

On-Premises Data Gateway or no On-Premises Data Gateway

You can use the On-Premises Data Gateway (Power BI Gateway) for accessing the data in “on-premise” systems. This includes HANA databases. The gateway acts as a kind of reverse proxy because it connects out to Microsoft from inside the customer’s network (where it is hosted).

The Gateway provides a distribution (publishing) framework where reports can be curated and published for access by many users.
End-users can connect from their Power BI Desktop (installed on their local computer) to the On-Premises Data Gateway *over the internet*.

Without the On-Premises Data Gateway, each Power BI Desktop end-user will need a direct connection to the SAP HANA database. It is recommended that this is performed over a VPN connection, or for the end-user to be physically in a corporate office on the LAN/WAN. In the future the Azure “v-Net” connection option may support SAP HANA connections if you happen to host your SAP HANA in Microsoft Azure.
NOTE: In the below, we could be using Azure AD or classic Active Directory Domain Services.

HANA Integration Point

Before we continue, we need to highlight that Power BI Desktop and the On-Premises Data Gateway connect to the SAP HANA database indexserver for SSO, via both Kerberos and SAML.
Changes are not required to HANA XSA (Extended Application Services) for these integrations. This is not the same integration point that you may read about in other guides (especially guides related to HANA’s analytical capabilities).

Kerberos or SAML

Whether to use Kerberos or SAML is really up to your organisation’s preferences and capabilities.
Microsoft and SAP recommend SAML as a modern approach to Single-Sign-On.

SAML de-couples the SAP HANA system from the identity provider and is simpler to use, with potentially fewer firewall changes.
Be aware, the On-Premises Data Gateway can only use one certificate for SAML for all the HANA databases it talks to.
When using SAML, the On-Premises Data Gateway connection to HANA needs securing with TLS, otherwise the SAML assertion (kind of like a certificate) would be sent unencrypted.

On the other hand, Kerberos provides a centralised identity management approach and is much more rigid in design, with a few more steps involved in the setup. It is also a much older protocol with its own set of vulnerabilities, but it comes without the requirement to set up TLS (although TLS is still recommended).

If you need to have Power BI Desktop connecting directly to SAP HANA (a.k.a SSO2DB), then as of writing this can only use Kerberos for single-sign-on. Kerberos delegation is not needed in this scenario.
For connection from Power BI Desktop via the On-Premises Data Gateway to SAP HANA, then both Kerberos or SAML can be used.
When using the On-Premises Data Gateway with SAML, the On-Premises Data Gateway becomes the SAML identity provider (IdP).
When using the On-Premises Data Gateway with Kerberos, the On-Premises Data Gateway will use Kerberos delegation on behalf of the end-user.

Power BI Direct to HANA via Kerberos (SSO2DB)

The first thing to note about connecting Power BI directly to SAP HANA using Kerberos for single-sign-on, is that your BASIS team will need to work with the Microsoft Active Directory (AD) team.
It is possible for the AD team to delegate a proportion of the work to the BASIS team by creating a separate (dedicated) organisational unit (OU) and applying permissions that allow the BASIS team to use their Windows accounts to manage the AD entities created in this new OU.

Here is how the architecture will look for a direct connection from Power BI to SAP HANA via Kerberos:

Process Flow:

  1. User opens Power BI (or Excel).
  2. User connects to SAP HANA database using a Windows authentication account (authenticates via Azure AD in this example).
  3. Kerberos authentication token (ticket) is forwarded to SAP HANA during the HANA logon process.
  4. HANA decrypts the token using the keytab file, which contains the key for the stored service principal name (SPN), and maps the decrypted Windows account name (UPN) to the HANA DB account.

There is no requirement for Kerberos delegation in this setup.
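You can sanity-check the client side before involving HANA: on a domain-joined Windows machine, the standard klist tool shows whether a service ticket was issued for the HANA SPN (the SPN shown below is a placeholder):

```
# On the end-user's Windows machine (domain-joined session):
klist
# After a successful HANA logon, the ticket cache should contain a service
# ticket for the HANA SPN, e.g. hdb/my-virtual-db-hostname.corp.net @ CORP.NET
```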

For the above setup to work, there are some required steps and some optional steps:

  • Required: Install SAP HANA client
    The main requirement is that the SAP HANA client is to be installed onto the end-user’s computer (where Power BI desktop is running). For SAP administrators, you will note that this HANA client will also need to be included in your frequent patching & maintenance routines to ensure it is aligned with the version of SAP HANA in use.
  • Recommended: Install SAPCRYPTOLIB
    As well as the requirement for the SAP HANA client, it is recommended that you secure the connection to SAP HANA using TLS.
    For this, you will need the SAPCRYPTOLIB also installing into the HANA client location on the end-user’s machine.
    This set of libraries allow TLS to be used to encrypt the connection which is part of your “data-in-transit” security posture.
    You will also therefore need a SAP Personal Security Environment (PSE) file placing onto the end-user’s machine along with the SAPCRYPTOLIB.
    These libraries will also need to be included in your frequent patching & maintenance routines to ensure it is aligned with the version of SAPCRYPTOLIB in use on the SAP HANA server.
  • Required: Define Env Variable SECUDIR
    So that the HANA client knows where the SAPCRYPTOLIB libraries (DLLs) have been deployed (if they are being deployed), you should set a SYSTEM environment variable called “SECUDIR” to point to the location of the SAPCRYPTOLIB files.
  • Optional: Enable “Server Validation”
    An optional step is to enable “Server Validation” on the connection properties. It is recommended to enable this, because without server validation, it is not possible to know that the target SAP HANA server that has been connected to, is to be trusted with the Kerberos ticket that will be sent during logon.
    This also serves as a method of helping to restrict who can connect to which servers, by un-trusting specific servers (maybe old sandbox ones).
    For “Server Validation” to work, the PSE file which is located in the HANA client directory on the end-user’s computer, will need to be populated with the public TLS certificate(s) of the SAP HANA system(s) the end-user will be connecting to and these certificates will need to contain the FQDN that has been used to initiate the connection (e.g. my-virtual-db-hostname.corp.net).
  • Required: Configure Kerberos on HANA server
    The krb5_hdb.conf is configured on the HANA server, according to your AD domain setup and whether AES256 is needed for the AD service account.
    Once krb5_hdb.conf is configured, the AD service account can be tested at the Linux level using the required kinit and ktutil tools.
    The Kerberos keytab can only be created once the AD service account has been created and the required SPN(s) mapped. The method of creating this changes depending on whether AES256 encryption is needed on the service account.
    When using AES256 bit encryption, you cannot simply rotate the key in the keytab, you will need to take an interruption to SSO connectivity while you update the password in AD, then generate a new keytab key and update the keytab on the HANA system.
    The SAP documentation speaks of not needing to restart HANA, but this was not the case on all systems, for whatever reason. Be prepared for HANA restarts, or place the files into the /etc folder (changing names and permissions accordingly) until a restart can be done.
    An important point is host name resolution. When you set up the Kerberos keytab, the SPNs you are told to create are prefixed with “hdb/server-host”. When authentication tracing is enabled on HANA at “debug” level, you can see the hostname detection in the trace files: HANA finds its own hostname, then every canonical name it can find in DNS, then looks for matching entries in the keytab file. There is an order to this, but from what I have seen it can match on any canonical name, even if the DNS entry is uppercase and the keytab entry is lowercase.
  • Required: Map HANA User to UPN
    In the HANA system, the database user account(s) need their “External ID” set to the UPN that is passed in the Kerberos ticket. The UPN may not be what you imagine; you may expect “user.name@corp.net”, but in actual fact it may use the realm name, “user.name@REALM”. Testing and tracing in the HANA system with the authentication trace turned on will reveal the UPN to you.
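As a sketch, the mapping can be applied with SQL (via hdbsql or any SQL console). The user name, host/port and UPN here are placeholders; take the exact UPN from the authentication trace:

```
# Map the UPN from the Kerberos ticket to an existing HANA DB user
hdbsql -n hana-host:30015 -u SYSTEM \
  "ALTER USER POWERBI_USER1 ADD IDENTITY 'user.name@CORP.NET' FOR KERBEROS"
```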

All of the above software and files can be packaged up and distributed to the end-user’s computer using orchestration tools such as SCCM.
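For reference, the server-side keytab steps described above can be sketched as follows, assuming MIT Kerberos tools on the HANA host. The service account, SPN, realm, key version number and file path are all placeholders:

```
# Verify krb5_hdb.conf and KDC reachability by authenticating as the AD service account
kinit SVC_HANA_PRD@CORP.NET

# Build a keytab holding the AES256 key for the HANA SPN
# (the -k value must match the key version number held in AD)
ktutil
  addent -password -p hdb/hana-host.corp.net@CORP.NET -k 2 -e aes256-cts-hmac-sha1-96
  wkt /usr/sap/HDB/home/hdb_keytab
  quit
```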

Power BI via On-Premise Data Gateway to HANA using Kerberos

Connecting Power BI via the On-Premise Data Gateway to SAP HANA using Kerberos for single-sign-on will need to use something called Kerberos delegation. This delegation technique allows the On-Premise Data Gateway to impersonate the source user account when accessing the target SAP HANA system. It is similar to you lending your credit card to your partner (not your pin, but just your card, allowing them to make contact-less payments up-to a predefined value).
Again, the AD team will need to be involved in a similar way to the “direct to HANA via Kerberos” method.
In this setup, the On-Premise Data Gateway must be running as a domain service user (for delegation to be allowed).

As well as the AD team, you will also need to involve the Power BI administrators (or someone to configure the On-Premise Data Gateway) as some specific changes will need to be made on the gateway machine.

Here is how the architecture will look for a connection from Power BI via the On-Premise Data Gateway to SAP HANA using Kerberos for SSO:

Process Flow:

  1. User logs in to Power BI Desktop.
  2. Authentication via Azure AD (in this example).
  3. User accesses query/connection for SAP HANA configured and published from the On-prem Data Gateway.
  4. On-prem Data Gateway receives UPN and switches context to impersonate the end-user (account delegation), getting the token from AD and sending on to the HANA system.
  5. HANA decrypts token using keytab file which contains the key for the stored SPN and maps the decrypted Windows account name (UPN) to the HANA DB account.
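On the AD side, the SPN registration part of this can be sketched with the standard setspn tool (account names, hosts and domain below are placeholders); the delegation itself is then granted on the gateway service account (the “Delegation” tab, constrained to the HANA SPN):

```
# Register the HANA SPN against the HANA AD service account
setspn -S hdb/hana-host.corp.net CORP\SVC_HANA_PRD

# Register an SPN for the gateway's domain service account so that
# constrained delegation to the HANA SPN can be configured against it
setspn -S gateway/gw-host.corp.net CORP\SVC_PBI_GATEWAY
```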

For the above setup to work, there are some required steps and some optional steps:

  • Required: Install SAP HANA client
    The main requirement is that the SAP HANA client is to be installed onto the On-Premise Data Gateway machine. For SAP administrators, you will note that this client will also need to be included in your frequent patching & maintenance routines to ensure it is aligned with the version of SAP HANA in use.
  • Recommended: Install SAPCRYPTOLIB
    As well as the requirement for the SAP HANA client, it is recommended that you secure the connection to SAP HANA using TLS.
    For this, you will need the SAPCRYPTOLIB also installing into the HANA client location on the On-Premise Data Gateway machine.
    This set of libraries allow TLS to be used to encrypt the connection which is part of your “data-in-transit” security posture.
    You will also therefore need a SAP Personal Security Environment (PSE) file placing onto the On-Premise Data Gateway machine along with the SAPCRYPTOLIB.
    These libraries will also need to be included in your frequent patching & maintenance routines to ensure it is aligned with the version of SAPCRYPTOLIB in use on the SAP HANA server.
  • Required: Define Env Variable SECUDIR
    So that the HANA client knows where the SAPCRYPTOLIB libraries (DLLs) have been deployed (if they are being deployed), you should set a SYSTEM environment variable called “SECUDIR” to point to the location of the SAPCRYPTOLIB files.
  • Optional: Enable “Server Validation”
    An optional step is to enable “Server Validation” on the connection properties. It is recommended to enable this, because without server validation, it is not possible to know that the target SAP HANA server that has been connected to is to be trusted with the Kerberos ticket that will be sent during logon.
    For “Server Validation” to work, the PSE file which is located in the HANA client directory on the On-Premise Data Gateway machine will need to be populated with the public TLS certificate(s) of the SAP HANA system(s) being connected to, and these certificates will need to contain the FQDN that has been used to initiate the connection (e.g. my-virtual-db-hostname.corp.net).
    Although this is optional, I suspect there is a bug in the On-Premise Data Gateway software, since it does not seem possible to use the “test connection” facility without enabling “Server Validation”.
  • Required: Configure Kerberos on HANA server
    The krb5_hdb.conf is configured according to your AD domain setup and whether AES256 is needed for the AD service account.
    Once krb5_hdb.conf is configured, the AD service account can be tested at the Linux level using the kinit and ktutil tools.
    The Kerberos keytab can only be created once the AD service account has been created and the required SPN(s) mapped. The method of creating it changes depending on whether AES256 encryption is needed on the service account.
  • Required: Map HANA User to UPN
    In the HANA system, the database user account(s) need their “External ID” set to the UPN that is passed in the Kerberos ticket. The UPN may not be what you imagine; you may expect “user.name@corp.net”, but in actual fact it may use the realm name, “user.name@REALM”. Testing and tracing in the HANA system with the authentication trace turned on will reveal the UPN to you.

In a High Availability cluster with 2 nodes for the On-Premise Data Gateway, both nodes will need the same files and config.

Power BI via On-Premise Data Gateway to HANA using SAML

Connecting Power BI via the On-Premise Data Gateway to SAP HANA using SAML is the simplest setup, because it de-couples the HANA system from Azure AD, with the On-Premise Data Gateway becoming the Identity Provider in this scenario.
There is no need to make changes to AD and in this setup, the On-Premise Data Gateway service can be running as a local computer account.
You will need to involve the Power BI administrators (or someone to configure the On-Premise Data Gateway) as some specific changes will need to be made on the gateway machine.

One important point to note about this setup: The On-Premise Data Gateway can only use one certificate to connect to HANA. This means if you have more than one HANA system, they will need to all trust the same On-Premise Data Gateway certificate.
This is a limitation in the configuration file of the On-Premise Data Gateway.

Here is how the architecture will look for a connection from Power BI via the On-Premise Data Gateway to SAP HANA using SAML for SSO:

Process Flow:

  1. User logs in to Power BI Desktop.
  2. Authentication via Azure AD.
  3. User accesses query/connection for SAP HANA configured and published from the On-prem Data Gateway.
  4. On-prem Data Gateway receives UPN and generates a SAML assertion.
  5. Gateway signs the SAML assertion, including the target user account details, using the IdP key, and sends it to the HANA DB server over TLS. HANA DB validates the signature using the IdP public key, then maps the target user to a DB user ID and performs the query work.

For the above setup to work, there are some required steps and some optional steps:

  • Required: Install SAP HANA client
    The main requirement is that the SAP HANA client is to be installed onto the On-Premise Data Gateway machine. For SAP administrators, you will note that this client will also need to be included in your frequent patching & maintenance routines to ensure it is aligned with the version of SAP HANA in use.
  • Recommended: Install SAPCRYPTOLIB
    As well as the requirement for the SAP HANA client, it is recommended that you secure the connection to SAP HANA using TLS.
    For this, you will need the SAPCRYPTOLIB also installing into the HANA client location on the On-Premise Data Gateway machine.
    This set of libraries allow TLS to be used to encrypt the connection which is part of your “data-in-transit” security posture.
    You will also therefore need a SAP Personal Security Environment (PSE) file placing onto the On-Premise Data Gateway machine along with the SAPCRYPTOLIB.
    These libraries will also need to be included in your frequent patching & maintenance routines to ensure it is aligned with the version of SAPCRYPTOLIB in use on the SAP HANA server.
  • Required: Define Env Variable SECUDIR
    So that the HANA client knows where the SAPCRYPTOLIB libraries (DLLs) have been deployed (if they are being deployed), you should set a SYSTEM environment variable called “SECUDIR” to point to the location of the SAPCRYPTOLIB files.
  • Optional: Enable “Server Validation”
    An optional step is to enable “Server Validation” on the connection properties. It is recommended to enable this, because without server validation, it is not possible to know that the target SAP HANA server that has been connected to is to be trusted with the SAML assertion that will be sent during logon.
    For “Server Validation” to work, the PSE file which is located in the HANA client directory on the On-Premise Data Gateway machine will need to be populated with the public TLS certificate(s) of the SAP HANA system(s) being connected to, and these certificates will need to contain the FQDN that has been used to initiate the connection (e.g. my-virtual-db-hostname.corp.net).
    Although this is optional, I suspect there is a bug in the On-Premise Data Gateway software, since it does not seem possible to use the “test connection” facility without enabling “Server Validation”.
  • Required: Create HANA SAML Provider
    In the HANA system a new SAML provider needs creating and assigning the IdP certificate that is to be trusted.
  • Required: Map HANA User to UPN
    In the HANA system, the database user account(s) need SAML authentication enabling and mapping to an allowed IdP provider.
    The UPN may not be the same as the UPN used for any Kerberos setup. You may imagine it to be “user.name@REALM”, but in actual fact it may be the actual domain name, “user.name@corp.net”. Testing and tracing in the HANA system with the authentication trace turned on will reveal the UPN to you.
    Once a provider is mapped, the user’s HANA account needs updating with their external ID (the UPN).
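A sketch of the HANA-side SQL for the provider and user mapping (the provider name, subject/issuer, user, host/port and UPN are placeholders; the subject and issuer strings must match the IdP certificate exactly):

```
# Create the SAML provider that trusts the gateway's IdP certificate
hdbsql -n hana-host:30015 -u SYSTEM \
  "CREATE SAML PROVIDER PBI_GATEWAY WITH SUBJECT 'CN = powerbi-gateway-idp' ISSUER 'CN = powerbi-gateway-idp'"

# Enable SAML for the DB user and map their UPN for this provider
hdbsql -n hana-host:30015 -u SYSTEM \
  "ALTER USER POWERBI_USER1 ENABLE SAML"
hdbsql -n hana-host:30015 -u SYSTEM \
  "ALTER USER POWERBI_USER1 ADD IDENTITY 'user.name@corp.net' FOR SAML PROVIDER PBI_GATEWAY"
```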

In a High Availability cluster with 2 nodes for the On-Premise Data Gateway, both nodes will need the same files.

A new private key and certificate will need to be generated for the On-Premise Data Gateway. Whilst the Microsoft documentation for SAML setup shows using OpenSSL to create the certificate, it is entirely possible to do this in PowerShell (see my other post here which will save you much hassle 😉 ).

Another step that the Microsoft documentation has, is to create a Certificate Authority key, then create a signing request for a new non-CA key. This is just not required with SAML. A certificate chain is not needed and HANA does not verify the chain.
Instead just create a CA key and certificate (again see my other post here). If you use my linked PowerShell method you don’t even need to manually transfer keys around, just create and import into the Microsoft Certificate Store (for local computer).
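If you do stay with OpenSSL, a single self-signed key/certificate pair is enough (no CA chain, as noted above). The file names and the CN below are my own placeholders, so adjust to your standards:

```shell
# Create a self-signed key + certificate for the gateway's IdP role
# (2048-bit RSA, roughly 2 years' validity)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout gateway-idp.key -out gateway-idp.crt \
  -days 730 -subj "/CN=powerbi-gateway-idp"

# Print the SHA1 thumbprint needed later for the gateway configuration file
openssl x509 -in gateway-idp.crt -noout -fingerprint -sha1
```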

In the Microsoft documentation there are a couple of additional errors/omissions that may catch you out:

  • The On-Premise Data Gateway configuration file is prefixed with “Microsoft.”. This was missing in the documentation.
    It should be: Microsoft.PowerBI.DataMovement.Pipeline.GatewayCore.dll.config
  • The thumbprint of the certificate should be in lowercase. It is not known if this is actually required, but ad-hoc Google searching revealed some customers were not able to get it to work with an uppercase certificate thumbprint.
  • When adding the certificate thumbprint to the Gateway config file, the file is XML format.
    This means you need to change the closing tag of the “setting” element and add a child “value” element.
    Overall it should look like this for the thumbprint:

    <setting name="SapHanaSAMLCertThumbprint" serializeAs="String">
      <value>the-thumbprint-here</value>
    </setting>
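To be safe, I normalise the thumbprint before pasting it into the config file; a trivial sketch (the example thumbprint is made up):

```shell
# Strip spaces and lowercase a thumbprint as copied from certutil or the
# Windows certificate MMC (which display uppercase, sometimes spaced, hex)
normalize_thumbprint() {
  echo "$1" | tr -d ' ' | tr 'A-F' 'a-f'
}

normalize_thumbprint "AB 01 CD 23 EF 45 67 89"
```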

Troubleshooting

During the setup process, do not expect it to be straightforward.
From experience, the following areas will cause issues:

  • Knowledge of the Active Directory KDC servers.
    Setting up the Kerberos configuration on the HANA server will need answers from the AD administrators.
  • Knowledge of the AD domain federation.
    Again, this part of the Kerberos configuration on the HANA server will need answers from the AD administrators.
  • Knowledge of public key cryptography.
    Creation of the IdP SAML keys is tricky and the documentation shows a convoluted method with added confusion.
  • Lack of accurate documentation.
    Some of the Microsoft documentation is not correct or accurate enough.
  • On-Premise Data Gateway trace log files
    These are difficult to get at as they have to be downloaded and un-zipped each time.
  • HANA system fails to find the Kerberos config and keytab, with the only resolution being to place them in /etc or perform a full HANA system restart.

My best advice is:

  1. Test the Kerberos setup on the HANA server using kinit and the other tools. If using Kerberos, this must work, and will report “valid” during the testing of the kvno (key version number).
  2. Use the On-Premise Data Gateway trace logs (if using On-Premise Data Gateway).
    Once you are sure that it has selected the IdP certificate and is trying to talk to HANA, then switch to the HANA traces.
  3. Use the HANA authentication trace with the “debug” setting, then check the traces.
    This is useful once you know that the On-Premise Data Gateway is actually trying to talk to HANA (if using the On-Premise Data Gateway); if you are using SSO2DB, use these traces straight away.
    These traces will tell you the decoded UPN and whether HANA has found an appropriate user account mapping (or SAML provider if using SAML).
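For point 3, the trace level can be raised and lowered with SQL; a sketch (host/port are placeholders, and the ini file layer may differ on your system):

```
# Raise the authentication trace to "debug" for the indexserver
hdbsql -n hana-host:30015 -u SYSTEM \
  "ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini','SYSTEM') SET ('trace','authentication') = 'debug' WITH RECONFIGURE"

# ... reproduce the failing logon, then inspect the indexserver trace file ...

# Set it back when finished
hdbsql -n hana-host:30015 -u SYSTEM \
  "ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini','SYSTEM') UNSET ('trace','authentication') WITH RECONFIGURE"
```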

Thanks for reading and good luck!

References:

https://learn.microsoft.com/en-us/data-integration/gateway/service-gateway-onprem-indepth

https://learn.microsoft.com/en-us/power-bi/guidance/whitepaper-powerbi-security#vnet-connectivity-preview—coming-soon

SAP Note 2093286 – Migration from OpenSSL to CommonCryptoLib

SAP Note 2303807 – SAP HANA Smart Data Access: SSO with Kerberos and Microsoft Windows Active Directory

SAP Note 1837331 – HowTo configure Kerberos SSO to SAP HANA DB using Microsoft Windows Active Directory

https://learn.microsoft.com/en-us/power-bi/connect-data/service-gateway-sso-kerberos-sap-hana

https://learn.microsoft.com/en-us/power-bi/connect-data/service-gateway-sso-saml

https://en.wikipedia.org/wiki/Kerberos_(protocol)

https://en.wikipedia.org/wiki/Security_Assertion_Markup_Language

https://help.sap.com/docs/SAP_HANA_PLATFORM/b3ee5778bc2e4a089d3299b82ec762a7/1885fad82df943c2a1974f5da0eed66d.html?version=2.0.03&locale=1885fad82df943c2a1974f5da0eed66d.html

https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/c786f2cfd976101493dfdf14cf9bcfb1.html?version=2.0.03

https://help.sap.com/docs/SAP_HANA_PLATFORM/b3ee5778bc2e4a089d3299b82ec762a7/db6db355bb571014b56eb25057daec5f.html?version=2.0.03&locale=1885fad82df943c2a1974f5da0eed66d.html

https://social.technet.microsoft.com/wiki/contents/articles/36470.active-directory-using-kerberos-keytabs-to-integrate-non-windows-systems.aspx

Using Single Sign-on with the Power BI Gateway

https://blogs.sap.com/2020/03/22/sap-bi-platform-saml-sso-to-hana-database/

PowerShell Encrypt / Decrypt OpenSSL AES256 CBC

A few months back I had a Korn shell script which used OpenSSL to encrypt some text using AES 256 CBC.
I managed, through the power of stackoverflow.com and various other blogs, to write a Java routine to perform the exact same encrypt/decrypt.
This allowed me to encrypt in Korn on Linux and decrypt in Java which was running inside a SAP Netweaver application server, or the other way around (encrypt in Java and decrypt in Korn using OpenSSL).

About 2 months after that, I needed the same set of routines to be written in PowerShell, allowing the same encrypted text to be encrypted on Linux with OpenSSL and decrypted on Windows in PowerShell (no need for OpenSSL).

I forked the PowerShell code which did the initial encryption and wrote the decryption routine which I’ve published as a Github gist here:

https://gist.github.com/Darryl-G/d1039c2407262cb6d735c3e7a730ee86

Hardening SAP Hostagent SSL Connections

You may have recently had a penetration test, and in the report you find that the SSL port for the SAP Hostagent (saphostexec) is listed as allowing weak encryption cipher strengths and older SSL protocols.
You want to know how you can remedy this.

In this post I will show how we can appease the Cyber Security penetration testing team, by hardening the SSL ciphers and protocols used for connections to the Hostagent.

What Are Weak Ciphers?

Ciphers like Triple-DES and Blowfish use 64-bit block sizes (the cipher text is split up into blocks of 64 bits in length), which makes such a block cipher more vulnerable to compromise than a cipher that uses a larger 128-bit block size.

The cipher is agreed upon during the setup of the SSL connection between the client and the server (SAP Hostagent in our scenario).
If a server advertises that it supports weaker ciphers, and a client elected to use one of the supported weaker ciphers during the SSL connection negotiation, then the connection could be vulnerable to decryption.

What Are Older SSL Protocols?

Through time the SSL protocol has been improved and strengthened.
The SSL protocol versions go from SSL v1.0 to SSL v3.0, after which the protocol was re-named to TLS and the versions incremented again through TLS 1.0, TLS 1.1 and TLS 1.2, to the most recent TLS 1.3 (in 2018).

The old SSL versions of the protocol are deprecated and should not be used. The slightly newer TLS versions 1.0 and 1.1 are also now widely deprecated (do not confuse “deprecated” with “unused”).

It is therefore recommended, generally, to use TLS 1.2 and above.

Why Harden the Hostagent SSL Service?

Now we have an appreciation of our older ciphers and protocols, let’s look at the Hostagent.
Usually the PEN test report will highlight the SSL TCP port 1129, and the report will state two things:

  • The SSL ciphers accepted by the Hostagent include weaker ciphers (such as RC4).
  • The SSL protocols accepted by the Hostagent include TLS 1.0 and 1.1.

The above issues present opportunities for hackers that may allow them to more easily compromise a SAP Hostagent on a SAP server.
Whilst this may not sound too bad (it is just the Hostagent), when we realise that the Hostagent runs as the Linux root user (or Windows SYSTEM user), and that there are known vulnerabilities that allow remote exploitation, we can see that the Hostagent could be a window into the SAP system as the highest privileged user on the server!
It is therefore quite important to try and protect the Hostagent as much as possible.

How Can We Harden the Hostagent SSL Service?

To ensure that weak ciphers are not used, the server needs to be configured to not use them. In the context of SAP Hostagents, they are the SSL servers and they need to be configured to only use stronger ciphers.

The SAP Hostagent is really the same as the SAP Instance Agent in disguise.
Because of this, it is possible to find documented parameters that allow us to harden the SSL service of the Hostagent in the same way.

By following SAP note 510007, we can see two SAP recommended parameters and settings that can be used to harden the SSL ciphers used:

  • ssl/ciphersuites = 135:PFS:HIGH::EC_P256:EC_HIGH
  • ssl/client_ciphersuites = 150:PFS:HIGH::EC_P256:EC_HIGH
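On Linux, these go into the Hostagent’s profile file and take effect after an agent restart; a sketch assuming the default installation path (verify the path on your own system):

```
# Append to /usr/sap/hostctrl/exe/host_profile
ssl/ciphersuites = 135:PFS:HIGH::EC_P256:EC_HIGH
ssl/client_ciphersuites = 150:PFS:HIGH::EC_P256:EC_HIGH

# Then restart the Hostagent to pick up the change:
# /usr/sap/hostctrl/exe/saphostexec -restart
```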

The SAP note 510007 includes an extremely good description of the SAP cryptographic library’s capabilities, the role of SSL and even some commentary on the probability of an older protocol being abused.
I feel that the note has been written by someone with a lot of experience.

The values of these two parameters begin with a numeric value that selects an appropriate strength of cryptographic ciphers for server and client connectivity respectively.
With the Hostagent we are more concerned with the server side, but the Hostagent also makes outbound client calls, so we apply both parameters in unison.

The SAP note describes the assigned values as good, while still allowing flexibility for backwards compatibility with older SAP and non-SAP software. Again, the note stresses the balance between compatibility (keeping things working) and security.

What is the Impact of the Parameters?

To be able to see the impact to the Hostagent, we first need to see what the Hostagent supports out-of-the-box.

Thanks to a great post here: www.ise.io/using-openssl-determine-ciphers-enabled-server
we can use a super simple shell script (on Unix/Linux) that calls the OpenSSL executable, makes a connection to the target server (the Hostagent) and checks which ciphers and protocols are advertised.
The code from the above site is here:

for v in ssl2 ssl3 tls1 tls1_1 tls1_2; do
   for c in $(openssl ciphers 'ALL:eNULL' | tr ':' ' '); do
      # attempt a handshake using this protocol version and cipher;
      # print only the combinations the server accepts
      openssl s_client -connect localhost:1129 -cipher "$c" -"$v" < /dev/null > /dev/null 2>&1 && echo -e "$v:\t$c"
   done
done

You can see that I have placed “localhost” and “1129” in the code.
This is because I am running the script on a Linux host with a SAP Hostagent installed, and the SSL port is 1129 (default).

The output is something like this (depending on your version of the Hostagent):

tls1: ECDHE-RSA-AES256-SHA 
tls1: AES256-SHA 
tls1: ECDHE-RSA-AES128-SHA 
tls1: AES128-SHA 
tls1: RC4-SHA 
tls1: RC4-MD5 
tls1: DES-CBC3-SHA 
tls1_1: ECDHE-RSA-AES256-SHA 
tls1_1: AES256-SHA 
tls1_1: ECDHE-RSA-AES128-SHA 
tls1_1: AES128-SHA 
tls1_1: RC4-SHA 
tls1_1: RC4-MD5 
tls1_1: DES-CBC3-SHA 
tls1_2: ECDHE-RSA-AES256-GCM-SHA384 
tls1_2: ECDHE-RSA-AES256-SHA384 
tls1_2: ECDHE-RSA-AES256-SHA 
tls1_2: AES256-GCM-SHA384 
tls1_2: AES256-SHA 
tls1_2: ECDHE-RSA-AES128-GCM-SHA256 
tls1_2: ECDHE-RSA-AES128-SHA 
tls1_2: AES128-GCM-SHA256 
tls1_2: AES128-SHA 
tls1_2: RC4-SHA 
tls1_2: RC4-MD5 
tls1_2: DES-CBC3-SHA

You can see that we have some RC4 and some DES ciphers listed in the TLS 1.0, TLS 1.1 and TLS 1.2 sections.
We now use SAP note 510007 to decide that we want to use the more secure settings that remove these weaker ciphers.

In the case of SAP Host Agents, we adjust the profile file /usr/sap/hostctrl/exe/host_profile (as root), and add our two SAP recommended parameters (mentioned previously):
ssl/ciphersuites = 135:PFS:HIGH::EC_P256:EC_HIGH
ssl/client_ciphersuites = 150:PFS:HIGH::EC_P256:EC_HIGH

NOTE: You should be running the latest SAP Hostagent; this is very important for the security of your system, as there are known vulnerabilities in older versions that allow remote compromise.

Once set, we need to restart the agent:

/usr/sap/hostctrl/exe/saphostexec -restart
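The edit-and-restart procedure above can be scripted. Here is a minimal sketch that stages the two parameters into a temporary working copy of the profile first, so you can review the result before copying it over the real file as root (the profile path is the standard Linux location; adjust for your install):

```shell
# Sketch: stage the two recommended parameters into a working copy of the
# Hostagent profile, appending them only if they are not already present.
PROFILE=${PROFILE:-/usr/sap/hostctrl/exe/host_profile}
WORK=$(mktemp)
cat "$PROFILE" 2>/dev/null > "$WORK" || true   # start from the current profile if readable

if ! grep -q '^ssl/ciphersuites' "$WORK"; then
  cat >> "$WORK" <<'EOF'
ssl/ciphersuites = 135:PFS:HIGH::EC_P256:EC_HIGH
ssl/client_ciphersuites = 150:PFS:HIGH::EC_P256:EC_HIGH
EOF
fi
grep '^ssl/' "$WORK"   # show the staged parameters

# To apply for real (as root): back up the profile, copy the working file
# over it, then restart the agent:
#   cp /usr/sap/hostctrl/exe/host_profile /usr/sap/hostctrl/exe/host_profile.bak
#   cp "$WORK" /usr/sap/hostctrl/exe/host_profile
#   /usr/sap/hostctrl/exe/saphostexec -restart
```

Working on a copy first is purely defensive: a syntax error in host_profile can stop the agent from starting.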

We can re-execute our check script to see that we have a more secure configuration:

tls1: ECDHE-RSA-AES256-SHA 
tls1: AES256-SHA 
tls1: ECDHE-RSA-AES128-SHA 
tls1: AES128-SHA 
tls1_1: ECDHE-RSA-AES256-SHA 
tls1_1: AES256-SHA 
tls1_1: ECDHE-RSA-AES128-SHA 
tls1_1: AES128-SHA 
tls1_2: ECDHE-RSA-AES256-GCM-SHA384 
tls1_2: ECDHE-RSA-AES256-SHA384 
tls1_2: ECDHE-RSA-AES256-SHA 
tls1_2: AES256-GCM-SHA384 
tls1_2: AES256-SHA 
tls1_2: ECDHE-RSA-AES128-GCM-SHA256 
tls1_2: ECDHE-RSA-AES128-SHA 
tls1_2: AES128-GCM-SHA256 
tls1_2: AES128-SHA

The more insecure ciphers are removed, but we still see those older protocols (TLS 1.0 and TLS 1.1) in the list.
We decide that we would like to further harden the setup by removing those protocols.

If we look at SAP note 2384290, we can see that an alternate set of parameter values are provided:

  • ssl/ciphersuites = 545:PFS:HIGH::EC_P256:EC_HIGH
  • ssl/client_ciphersuites = 560:PFS:HIGH::EC_P256:EC_HIGH

Let’s apply these and re-run the test for a final time.
We can see that we get a super refined list of protocols and ciphers:

tls1_2: ECDHE-RSA-AES256-GCM-SHA384 
tls1_2: ECDHE-RSA-AES256-SHA384 
tls1_2: ECDHE-RSA-AES256-SHA 
tls1_2: AES256-GCM-SHA384 
tls1_2: AES256-SHA 
tls1_2: ECDHE-RSA-AES128-GCM-SHA256 
tls1_2: ECDHE-RSA-AES128-SHA 
tls1_2: AES128-GCM-SHA256 
tls1_2: AES128-SHA

Our Hostagent SSL service is now as secure as it can reasonably be at this point in time. If we try to restrict the ciphers any further, we may end up breaking compatibility with other SAP systems in the landscape.

Summary

We’ve seen how applying two SAP standard parameters to the SAP Hostagent and restarting it can significantly strengthen the security posture of the Hostagent’s SSL service.

However, we need to be cautious of compatibility with other SAP and non-SAP software in the landscape, which may talk to the Hostagent only with older protocols.

As a final note, you may be wondering whether we can remove the plain HTTP service from the Hostagent entirely. At this point in time I have not found a SAP note indicating that this is possible or recommended. However, since the HTTP protocol is known to be insecure, just don’t use it; SSL, by contrast, should be secure, even if (as we have seen) it may not be as secure as it could be.

HowTo: Check Netweaver 7.02 Secure Store Keyphrase

For Netweaver 7.1 and above, SAP provide a Java class that you can use to check the Secure Store keyphrase.
See SAP note 1895736 “Check if secure store keyphrase is correct”.
However, in the older Netweaver 7.02, the Java check function does not exist.

In this post I provide a simple way to check the keyphrase without making any destructive changes in Netweaver AS Java 7.02.

Why Check the Keyphrase?

Being able to check the Netweaver AS Java Secure Store keyphrase is useful when setting up SAP ASE HADR. The Software Provisioning Manager requests the keyphrase when installing the companion database on the standby/DR server.

The Check Process

In NW 7.02, you can use the following method to check that you have the correct keyphrase for the Secure Store.
The method does not cause any outage or overwrite anything.
It is completely non-destructive, so you can run it as many times as you need.
I guess in a way it could also be used as a brute force method of guessing the keyphrase.

As the <sid>adm Linux user on the Java Central Instance, we first set up some useful variables:

setenv SLTOOLS /sapmnt/${SAPSYSTEMNAME}/global/sltools
setenv LIB ${SLTOOLS}/sharedlib
setenv IAIK ${SLTOOLS}/../security/lib/tools
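The setenv lines above are C-shell syntax. If your <sid>adm user runs bash instead, a hypothetical equivalent would be (“C11” below is just a placeholder SID; SAPSYSTEMNAME is normally already set in the <sid>adm environment):

```shell
# bash equivalents of the csh setenv lines above
export SAPSYSTEMNAME=${SAPSYSTEMNAME:-C11}   # placeholder SID for illustration
export SLTOOLS=/sapmnt/${SAPSYSTEMNAME}/global/sltools
export LIB=${SLTOOLS}/sharedlib
export IAIK=${SLTOOLS}/../security/lib/tools
echo "$LIB"
```

The paths resolve identically; only the shell built-in for setting environment variables differs.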

Now we can call the Java code that creates a temporary Secure Store using the keyphrase that we believe is the real Secure Store keyphrase:
NOTE: Change “thepw” to the keyphrase that you think is correct.

/usr/sap/${SAPSYSTEMNAME}/J*/exe/sapjvm_*/bin/java -classpath "${LIB}/tc_sec_secstorefs.jar:${LIB}/exception.jar:${IAIK}/iaik_jce.jar:${LIB}/logging.jar" com.sap.security.core.server.secstorefs.SecStoreFS create -s ${SAPSYSTEMNAME} -f /tmp/${SAPSYSTEMNAME}sec.properties -k /tmp/${SAPSYSTEMNAME}sec.key -enc -p "thepw"

The output of the command above is two files in the /tmp folder: <SID>sec.properties and <SID>sec.key.
We now compare the checksum of the new temporary key file with that of the current system Secure Store key file (in our case called SecStore.key):

cksum /sapmnt/${SAPSYSTEMNAME}/global/security/data/SecStore.key
cksum /tmp/${SAPSYSTEMNAME}sec.key


If both checksum values are the same, then you have the correct keyphrase.
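The comparison can be wrapped in a tiny helper so you get a clear yes/no answer. A sketch: the compare_keys function and the /tmp demo files below are hypothetical, so substitute your real key file paths:

```shell
# Hypothetical helper: compare two key files by checksum and size, and
# report whether the keyphrase used for the temporary store matches.
compare_keys() {
  a=$(cksum "$1" | awk '{print $1, $2}')   # CRC + byte count, filename ignored
  b=$(cksum "$2" | awk '{print $1, $2}')
  if [ "$a" = "$b" ]; then echo "keyphrase CORRECT"; else echo "keyphrase WRONG"; fi
}

# demo with two throwaway files containing identical content
printf 'samekey' > /tmp/k1
printf 'samekey' > /tmp/k2
compare_keys /tmp/k1 /tmp/k2   # prints: keyphrase CORRECT
```

Comparing the CRC and the size (rather than eyeballing the cksum output) avoids a false positive from glancing at only part of the number.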

Is my GCP hosted SLES 12 Linux VM Affected by the BootHole Vulnerability?

In an effort to really drag this topic out (it’s now a trilogy), I’ve taken my previous Azure specific post and the AWS specific post, and done some further research into whether the same holds true in Google Cloud Platform (a.k.a GCP).

Previously

(If I was writing this like a true screenwriter, it would get shorter and faster each recap).

In July 2020, a GRUB2 bootloader vulnerability was discovered which could allow attackers to replace the bootloader on a machine which has Secure Boot turned on.
The vulnerability is designated CVE-2020-10713 and is rated 8.2 HIGH on the CVSS (see here).

Let’s recap what this is (honestly, please see my Azure post for details, it’s quite technical), and how it impacts a GCP virtual machine running SUSE Enterprise Linux 12, which is commonly used to run SAP systems such as SAP HANA or other SAP products.

What is the Vulnerability?

Essentially, malicious input data can be placed into part of the GRUB2 configuration, which is parsed by the GRUB2 binaries without being properly checked/validated.
By carefully crafting the data that causes the overflow, it is possible to overwrite a specifically targeted memory area.

As described by Eclypsium here (the security company that detected this) “Attackers exploiting this vulnerability can install persistent and stealthy bootkits or malicious bootloaders that could give them near-total control over the victim device“.

Essentially, the vulnerability allows an attacker with root privileges to replace the bootloader with a malicious one.

What is GRUB2?

GRUB2 is v2 of the GRand Unified Bootloader (see here for the manual).
It can be used to load the main operating system of a computer.

What is Secure Boot?

There are commonly two boot methods: “Legacy Boot” (BIOS) and UEFI boot with “Secure Boot”.
Until Secure Boot was invented, the bootloader would sit in a designated location on the hard disk and would be executed by the computer’s BIOS to start the chain of processes for the computer start-up.

With Secure Boot, certificates are used to secure the boot process chain.
The BootHole vulnerability means the Secure Boot certificate and revocation data needs to be updated on every machine that uses Secure Boot!

But the Attackers Need Root?

Yes, the vulnerability is triggered via a GRUB2 configuration text file owned by the root user; additional text added to this file can cause the buffer overflow.
And anti-virus software can’t remove the malicious bootloader, because the bootloader boots first and can “adjust” the anti-virus.

NOTE: The flaw also exists if you use the network boot capability (PXE boot).

What is the Patch?

Due to the complexity of the problem (did you read the prior Eclypsium link?), more than one piece of software needs to be patched, in different layers of the boot chain.

The vulnerable GRUB2 software needs patching.
To be able to stop the vulnerable version of GRUB2 being re-installed and used, three things need to happen:

  1. The O/S vendor (SUSE) needs to adjust their code (known as the “shim”) so that it no longer trusts the vulnerable version of GRUB2. Again, this is a software patch from the O/S vendor (SUSE) which will need a reboot.
  2. Since someone with root could simply re-install O/S vendor code (the “shim”) that trusts the vulnerable version of GRUB2, the adjusted O/S vendor code will need signing and trusting by the certificates further up the chain.
  3. The revocation list of Secure Boot needs to be adjusted to prevent the vulnerable version of the O/S vendor code (“shim”) from being called during boot. (This is known as the “dbx” (exclusion database), which will need updating with a firmware update).
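On a Linux host you can peek at the Secure Boot databases yourself with the mokutil tool, including the dbx exclusion database mentioned in step 3 (assuming mokutil is installed and the host boots via UEFI; on a non-UEFI host the query simply returns nothing):

```shell
# Inspect the Secure Boot dbx (exclusion database), where the BootHole
# revocations eventually land. Falls back gracefully if mokutil is absent.
if command -v mokutil >/dev/null 2>&1; then
  DBX_INFO=$(mokutil --dbx 2>/dev/null | head -n 5)
else
  DBX_INFO="mokutil not available on this host"
fi
echo "$DBX_INFO"
```

An empty or very old dbx suggests the firmware-level part of the BootHole remediation has not yet reached the machine.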

What is SUSE doing about it?

There needs to be a multi-pronged patching process because SUSE also found some additional bugs during their analysis.

You can see the SUSE page on CVE-2020-10713 here, which includes the mention of the additional bugs.
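To see where a given SLES host stands, you can query the installed versions of the boot-chain packages (package names as referenced in the SUSE advisory; compare the versions reported against the advisory itself rather than any number quoted here):

```shell
# Query the boot-chain packages on a SLES host; falls back to a message on
# hosts without rpm (e.g. a Debian-based admin workstation).
if command -v rpm >/dev/null 2>&1; then
  PKG_VERSIONS=$(rpm -q grub2 shim mokutil 2>/dev/null)
else
  PKG_VERSIONS="rpm not available on this host"
fi
echo "$PKG_VERSIONS"
```

This gives you the raw data to decide whether the GRUB2 and shim patches from SUSE have actually landed on each machine.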

How does this impact GCP VMs?

In the previous paragraphs we found that a firmware update is needed to update the “dbx” exclusion database.
Since GCP virtual machines are hosted on a KVM-based hypervisor, the “firmware” is actually software.

Whilst looking for details on “Secure Boot” in GCP virtual machines, we come across the Google Compute Engine’s “Shielded VM” option.
You can read about it in detail here.
In brief, in GCP a Shielded VM is deployed using a pre-defined set of Google-specific guest operating system images.

The Shielded VM documentation specifically mentions that the “firmware” underpinning the virtual machine contains Google’s own Certificate Authority (CA) certificate as the root of the trust chain.
This is important because Eclypsium’s description of the vulnerability specifically cites a problem with the Microsoft CA.
This means that Google decide on the trust chain themselves and can probably adjust the firmware with a new CA certificate more rapidly.
To reiterate, this applies to the Google-curated VM images that you deploy as a Shielded VM.

Another point worth noting is that when creating a Shielded VM, you can enable the vTPM (virtual Trusted Platform Module), which allows integrity monitoring of the boot process. Any change to the boot process triggers a validation alert. Whilst this would not prevent compromise, it would at least alert an administrator.
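If you have the gcloud CLI to hand, you can check whether a given VM was deployed with the Shielded VM options enabled. The instance name and zone below are placeholders; the shieldedInstanceConfig block reports the Secure Boot, vTPM and integrity monitoring flags:

```shell
# Hypothetical check of a VM's Shielded VM settings via gcloud; instance
# name and zone are placeholders for your own values.
if command -v gcloud >/dev/null 2>&1; then
  SHIELD_CFG=$(gcloud compute instances describe my-sap-vm \
    --zone=europe-west2-a --format="yaml(shieldedInstanceConfig)" 2>/dev/null)
else
  SHIELD_CFG="gcloud CLI not installed"
fi
echo "$SHIELD_CFG"
```

If shieldedInstanceConfig shows enableSecureBoot as false (or is absent), the BootHole discussion in this post simply does not apply to that VM.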

Reading the Google infrastructure security document, we find that, just like AWS, Google have designed and are implementing their own security chip, called Titan, on the physical hosts. This is used to ensure that physical hosts boot securely, but it is not clear if this chip is used in any way for Shielded VMs booted on the physical host.

If we delve further into the GCP documentation, we find that we also have the option to create a custom image for deployment as a Shielded VM.
The documentation on how to create a custom Shielded VM image states that you can create your own Secure Boot capable VM image for deployment in GCP as a Shielded VM.
If we read further down that page under section “Default certificates“, we find a slight difference compared to the Google “curated” images:

This tells us that, by default, the standard Microsoft CA certificates are used for the Secure Boot setup of VMs created from a custom image (remember that the non-custom Secure Boot images use Google’s own root CA).
When it says “default values”, right now they are the only values, because of a small note further up the page stating that they cannot yet be changed.

OK, so you can only use the defaults for now. The same compromised defaults that will need fixing. 🤷‍♂️

What do we think needs to happen once Google create the ability to replace the certificates?
From reading those previously mentioned documents, I would guess that to rebuild the certificate database used during the creation of the custom Shielded VM image, you are going to need to re-create the VM image and then re-deploy a VM from that image!

The question remains: is SLES 12 supported as a Shielded VM guest OS on GCP?
According to the Shielded VM page, it is not supported by default, so you will need to create your own custom image.
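Creating such a custom Shielded-VM-capable image can be sketched with gcloud. The disk and image names below are placeholders, and the UEFI_COMPATIBLE guest OS feature is what marks the image as usable for Shielded VM deployment; see Google’s documentation for the certificate options as they become configurable:

```shell
# Sketch: create a custom Shielded-VM-capable image from an existing disk.
# All resource names are placeholders; requires an authenticated gcloud.
if command -v gcloud >/dev/null 2>&1; then
  RESULT=$(gcloud compute images create my-sles12-shielded \
    --source-disk=my-sles12-disk --source-disk-zone=europe-west2-a \
    --guest-os-features=UEFI_COMPATIBLE 2>&1 || echo "image create failed")
else
  RESULT="gcloud CLI not installed"
fi
echo "$RESULT"
```

A VM deployed from such an image can then have Secure Boot, vTPM and integrity monitoring enabled at instance creation time.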

Summary

The BootHole vulnerability is far reaching and will impact many, many devices (servers, laptops, IoT devices, TVs, fridges, cars?).
However, only those devices that actually *use* Secure Boot will truly be impacted, since the devices not using Secure Boot do not need to be patched (it’s fruitless).

If you run SLES 12 on GCP virtual machines deployed from the standard public images, then by default you will not be using Shielded VM instances, so there is no point patching to fix a vulnerability by which you are not affected.
You would only be introducing more risk by patching.

If, however, you do decide to patch (even if you don’t need to), then follow the advice from SUSE and patch to fix GRUB2, the “shim” and the other vulnerabilities that were found.

As a final point, you could be running a custom SLES image, built by your company and using Secure Boot, deployed in GCP as a Shielded VM. In that case, you would be wise to contact your cloud administrators to ensure that they are preparing for the image rebuild and subsequent patching required to keep Secure Boot secure.

Useful Links: