This blog contains experience gained over the years of implementing (and de-implementing) large scale IT applications/software.

HowTo: Show Current Role of a HA SAP Cloud Connector


If you have installed the SAP Cloud Connector, you will know that out-of-the-box it is capable of providing a High Availability feature at the application layer.

Essentially, you can install 2x SAP Cloud Connectors on 2x VMs and then they can be paired so that one acts as Master and one as “Shadow” (secondary).

The Shadow instance connects to the Master to replicate the required configuration.

If you decide to patch the Cloud Connector (everything needs patching right?!), then you can simply patch the Shadow instance, trigger a failover then patch the old Master.

There is one complication: it’s not “easy” to see which instance is acting in which role unless you log into the web administration console.

You can go through the log files to see which instance has taken over the role of Master at some point, but this is not easy and doesn’t lend itself to being scripted for automated detection of the current role.

Here’s a nice easy way to detect the current role, which could be used (for example) as part of a Custom Instance monitor script for SAP LaMa automation of the Cloud Connector:

awk '/<haRole>/ { match($1,/<haRole>(.*)<\/haRole>/,role); if (role[1] != "" ) { print role[1]; exit } }' /opt/sap/scc/scc_config/scc_config.ini

The output will be either “shadow” or “master”.

I use awk a lot for pattern group matching because I like its simplicity; it’s a powerful tool and well deserves its very long O’Reilly book.

Here’s what that single code line is doing:

  • awk - The call to the program binary.
  • ' - Starts the contents of the inline AWK script (prevents interpretation by the shell).
  • /<haRole>/ - Matches every line that contains the <haRole> tag.
  • { - On each matching line, executes this block of code (we close it with “}”).
  • match($1, - Runs the match against the 1st space-delimited field on the line.
  • /<haRole>(.*)<\/haRole>/, - Obtains any text “.*” between the <haRole> tags.
  • role) - Stores the match in a new array called “role”.
  • if (role[1] != "" ) - Checks that after the matching, the role array has 2 entries (the array is indexed from zero).
  • { print role[1]; exit } - If we do have 2 entries, prints the second one (the 1st, role[0], is the complete matched text string) and exits.
  • }' - Closes off the block and the inline AWK script.
  • /opt/sap/scc/scc_config/scc_config.ini - The name of the input file for AWK to scan.

It’s a nice simple way of checking the current role, and can be embedded into a shell script for easy execution.
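For example, here’s a minimal sketch of how it could be wrapped in a script (the script name, exit codes and hard-coded config path are my own assumptions; like the one-liner above, it needs GNU awk for the three-argument match()):

#!/bin/bash
# scc_role.sh - hypothetical wrapper to report the current SAP Cloud Connector HA role.
SCC_CONFIG=/opt/sap/scc/scc_config/scc_config.ini

role=$(awk '/<haRole>/ { match($1,/<haRole>(.*)<\/haRole>/,r); if (r[1] != "") { print r[1]; exit } }' "$SCC_CONFIG")

case "$role" in
  master) echo "master"; exit 0 ;;
  shadow) echo "shadow"; exit 1 ;;
  *)      echo "unknown"; exit 2 ;;
esac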

Java 8 SE – I Just Removed It


I have just removed Java 8 SE from my computer.
I wrote a blog post a while back about how Oracle was changing the way it licenses the Java virtual machine 8 Standard Edition (SE).
You can read it here: SAP JVM and the Oracle Java SE 8 Licensing Confusion

At the time of the post, it was not very clear how the license changes were going to impact the use of the Java 8 SE virtual machine.

What’s Changed?

Briefly, at the end of January 2019, Oracle essentially stopped free updates to Java 8 SE for non-personal users.
This means that you as a personal user can continue to update Java 8 SE, but as a corporate user you may only apply updates to Java 8 SE if you have purchased a subscription to receive the updates.

I’m a Corporate User

If you are a corporate user of Oracle Java 8 SE, unless you have a subscription, you can no longer update Java 8 SE.
If you do not want to purchase a subscription from Oracle, then to remain secure and remove the security risk from your computers, you should uninstall it.
If you do not uninstall Java 8 SE, but continue to update it, and you are audited by Oracle, then you may need to pay for a subscription.

Can Oracle Audit Me?

The Java 8 SE auto-update application now displays a prompt on machines that have the auto-update enabled and that have an internet connection.
If you choose “Install” (even though you already have it installed), then at that point Oracle deems that you have accepted the license agreement and can audit your company for the use of Oracle products.

How Do I Remove Java 8 SE?

For me it was easy.
My Java 8 SE installation had the auto-update function enabled, so it told me the license terms had changed and offered me a button to remove it. So I did.

You may need to uninstall it from the program uninstallation tool in the Windows Control Panel.
Your IT teams may have already started the removal process automatically.

What If I Need an Up-to-date Java 8 VM?

If you need a Java 8 JVM, you can move to an open source version of Java, such as OpenJDK, or a number of others.

For SAP customers wanting to run their SAP tools, you can actually use the SAPJVM with tools such as the SAP Software Download Manager, SAP HANA Studio, SAP ABAP tools in Eclipse and other Java-based tools.

How Do I Download SAPJVM?

Downloading the SAPJVM is simple.
Take a look at SAP note 1442124 “How to download a SAP JVM patch from the SMP”.


References:

https://www.it-implementor.co.uk/2019/01/sap-jvm-and-oracle-java-se-8-licensing.html
https://blogs.oracle.com/java-platform-group/extension-of-oracle-java-se-8-public-updates-and-java-web-start-support
https://upperedge.com/oracle/top-3-reasons-oracle-java-users-are-unknowingly-out-of-compliance/
https://www.oracle.com/downloads/licenses/binary-code-license.html
https://www.oracle.com/downloads/licenses/javase-license1.html
https://www.oracle.com/technetwork/java/javase/terms/oaa.html

Writing a Simple Backup Script


So you want a backup script to backup your databases to disk.
Sounds like a nice half-a-day scripting job, doesn’t it?
Let’s analyse this requirement a little more and you will start to get the idea that it’s never as simple as it sounds.

Requirements

Digging a little deeper we come up with the following requirements:

Backup Configuration

We need a standard backup location on all database servers.
When we backup a database we need a folder location on the server (unless you’re using a specific device/method like Tivoli, or backint for SAP).
If we are backing up to a local directory (let’s assume this) and you don’t have the same drive/folder structure, then you will need to create a list of locations for each database and have the backup script look at the list, or worst case, hardcode the config into the script.
The easy way to solve this is to move the configuration out of the backup script altogether.
Most databases support the use of a predefined backup configuration.
For example, in ASE we can use “dump configurations”.
In HANA we adjust the ini file of the respective indexserver or nameserver to set the backup path locations.
Having done the above, for each database we can then “simply” pull the config out of the target database, or include the name of the config profile to use during the backup command execution, which will dictate the target backup location.
We are going to ignore other issues, such as the size of the disk location, type of disk storage tier and other infrastructure related questions.
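As a hedged illustration of this kind of one-off configuration (the server name, SIDs, profile name, paths, ini layer and the SAPSA_PW variable are all my own example assumptions, and the exact sp_config_dump parameters can vary by ASE version):

# SAP ASE: create a named dump configuration (password supplied via your secure mechanism of choice)
isql -Usapsa -SMYASE -P"$SAPSA_PW" <<EOF
sp_config_dump @config_name = 'FULLBACKUP', @stripe_dir = '/backup/ase/MYASE'
go
EOF

# SAP HANA: set the data backup path (shown here in global.ini; adjust to your own layered config)
hdbsql -i 00 -u SYSTEM "ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') SET ('persistence','basepath_databackup') = '/backup/hana/HDB' WITH RECONFIGURE"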

Access to the Database

We need a way of pulling out the dump/backup path from the database.
As mentioned above, if we move the configuration of the backup location out of the databases, we need to access it from the script.
This is definitely possible, but if it’s stored in the database (let’s assume this), at this point we have no way of logging into the database.
We therefore need a way of logging into the database in a standard way across all databases.
Therefore, we should elect to use a harmonised username for our backups (not a harmonised password!).
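For example, on HANA a sketch of pulling the backup path back out might look like this (the hdbuserstore key name is a made-up example, covered in the security section below, and the flags that suppress extra output can vary between hdbsql versions):

# Read the configured data backup path back out of the database
hdbsql -U BACKUPKEY -a -x "SELECT VALUE FROM M_INIFILE_CONTENTS WHERE KEY = 'basepath_databackup' AND LAYER_NAME = 'SYSTEM'"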

Multiple Scripts

We need more than one script.
Pulling out the configuration from the database is never easy due to the differences in the way that the command line interpreters work.
For example, in HANA 1.0, the hdbsql command outputs a slightly different format and provides less capability to customise the output, compared to the HANA 2.0 hdbsql command line tool.
SAP ASE is completely different and again needs a specific script setup.
The same will be true for combinations of Linux and Windows systems.  You may need some PowerShell in there!

Modularised Code Packages

We need it to be packaged and easy to read.
Based on the above, we can see that we will need more than one script to cater for different DB vendors and architectures/platforms.
Therefore, one script for backing up HANA databases and one for ASE databases.
What if you need a SQL Server one?  Well again, the DB call is slightly different, so another script is needed.
It’s possible to modularise the scripts in such a way that the DB call itself is a separate script for each DB, leaving your core script the same.
However, we are now into the realms of script readability and simplicity versus complexity and re-use.
But it’s worth considering the options within the limitations of the operating system landscape that you have.
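A minimal sketch of what that modularisation could look like (the file layout and function names are my own assumptions):

#!/bin/bash
# backup_core.sh - core script, identical on every server
DB_TYPE="$1"               # e.g. hana | ase | mssql
. "./db_${DB_TYPE}.sh"     # each DB module implements the same functions, e.g. db_get_backup_path and db_run_backup
db_get_backup_path
db_run_backup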

Database Security

We need a secure method of storing the password for the harmonised backup user across all the databases.
Since we will need to log into the database to perform tasks such as getting the backup config settings and actually calling the backup commands, we need to think about how we will be storing the username and password.
In some cases, like HANA, we can just use the secure file system store (hdbuserstore) to add our required username and password.
There are some issues with certain versions of HANA (HANA 1.0 is different to HANA 2.0) in this area.
For ASE we may be able to use the sybxctrl binary to execute our script in an intelligent way, avoiding the need to pass passwords at all.
Recent versions of SAP ASE 16.0 SP03 include a new aseuserstore command, which I will highlight in another post.  It’s very similar to the HANA hdbuserstore, except it allows ASE (for Business Suite) and ASE Enterprise Edition to use the same method of password-less access.
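A minimal sketch of the HANA side (the key name, host, port, user and password are example values only):

# Create a secure store entry once, as the O/S user that will run the backup script
hdbuserstore SET BACKUPKEY myhanahost:30015 BACKUP_OPERATOR "SomeSecretPassword"

# The backup script can then connect without a password appearing anywhere in the script
hdbsql -U BACKUPKEY "SELECT * FROM DUMMY"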

Scheduling

We need a common backup scheduling capability across the servers.
You can use a central system (e.g. an enterprise scheduler), or something like cron or Windows Task Scheduler, but centrally controlled from a task controller script.
This will rely on either a shared storage ability (e.g. NFS/CIFS) or some method of the scripts talking centrally to a control plane.
Not only does the central location/control provide easy control of the schedule, but you will also find this useful for capturing error situations, where the backup script may have failed and needs to notify the operators.
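As a sketch, assuming a simple cron-based approach with a controller script held on shared storage (the user, paths and timing are example assumptions):

# /etc/cron.d/db_backup - run the controller script at 01:30 every day as the backup O/S user
30 1 * * *  dbbackup  /shared/backup-scripts/backup_controller.sh hana >> /var/log/db_backup/controller.log 2>&1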

Logging

We need the logging output of the scripts to be accessible.
When you are backing up over 100 databases, your administrators are not going to want to go onto each individual server to look at logs.
You will need a central location for logging and aggregation of that logging.
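A simple convention (example only) is to write each run to a shared location with the hostname and a timestamp in the file name:

# Assumes /shared/backup-logs is an NFS/CIFS mount present on every DB server
LOGDIR=/shared/backup-logs
LOGFILE="${LOGDIR}/$(hostname -s)_$(date +%Y%m%d_%H%M%S)_backup.log"
exec >>"$LOGFILE" 2>&1     # from here on, all script output goes to the central log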

O/S Users

We may need common O/S user accounts.
The execution environment of the script needs to include the capability to access the log areas.
The user account used to perform the execution of the script needs to have a common setup across all servers.
If you’re using CIFS or NFS for storing logs, you will have permissions issues if you use different users, unless you configure your NFS/SMB settings appropriately.
In Unix/Linux, it’s easy to create a specific user (could be linked into Windows Active Directory) with the same UID across many servers, or make the user a member of the same group.
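For example, on Linux (the UID, GID and names are arbitrary examples, to be kept identical on every server):

groupadd -g 2000 dbbackup
useradd -u 2001 -g dbbackup -m -c "DB backup script user" dbbackup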

Housekeeping

We need housekeeping for our logging capability.
During the execution of your scripts, the logs you generate will need to be kept according to your usual policies.
You may want to see a report of backups for the last month.
You may need to provide audit evidence that a backup has been performed.
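A sketch of a simple retention rule for the script logs (the path and the 31-day retention are arbitrary examples):

# Remove central backup script logs older than 31 days
find /shared/backup-logs -name '*_backup.log' -type f -mtime +31 -delete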

Disk Space

We need a defined amount of backup space.
If you are backing up to disk, you may need a way of calculating how much disk space you will need.
In HANA this is fairly easy as it provides views you can query to estimate the backup disk requirements prior to executing a backup.
You will need to call these and check against the target disk location, before your script starts the backup.
How will you account for additional space requirements over time?  Will you just fail or can you provide a warning?
How many backups or backup files will you retain on disk and over how many days?
Will these be removed by your script once they are no longer needed?
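For example, on HANA the script could compare the estimate with the free space on the target before starting (a hedged sketch; the M_BACKUP_SIZE_ESTIMATIONS view only exists on more recent HANA revisions, the entry type name may differ, and the paths and key are examples):

# Estimated size of a complete data backup, in bytes
hdbsql -U BACKUPKEY -a -x "SELECT SUM(ESTIMATED_SIZE) FROM M_BACKUP_SIZE_ESTIMATIONS WHERE ENTRY_TYPE_NAME = 'complete data backup'"

# Free space on the backup file system, in bytes
df --output=avail -B1 /backup/hana/HDB | tail -n 1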

Backup Strategies

We should consider different backup strategies.
What type of backup will your script need to handle?
For example, with ASE or SQL Server, you may need to run transaction log backups as well as normal full backups.
Will this be the same script?  Can they run together at the same time?
If you are dumping to disk, performing a full backup once a day, then will you need those transaction log dumps from the previous day?
As well as performing a backup of the databases, your script should also backup the recommended configuration files.
For example on ASE, it is recommended to include the configuration file and also the dumphist file.
On HANA it is recommended to include the ini files, the backup.log and useful to include the trace files.
Will your backup be encrypted and will you need to store the keys somewhere?
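As a hedged sketch, the non-database files mentioned above could simply be copied alongside the dumps (the SIDs, instance number and paths are my own example assumptions):

# ASE: keep the server config file and the dumphist file with the dumps
cp /sybase/MYASE/MYASE.cfg /sybase/MYASE/dumphist /backup/ase/MYASE/config/

# HANA: keep the customised ini files and the backup.log with the backups
cp /hana/shared/HDB/global/hdb/custom/config/*.ini /backup/hana/HDB/config/
cp /usr/sap/HDB/HDB00/$(hostname -s)/trace/backup.log /backup/hana/HDB/config/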

Validation

We may need backup validation.
Some databases provide post-backup validation, and some provide inline validation of the blocks.
Do you need to consider turning these checks on (most are turned off by default)?
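For example, HANA ships with an hdbbackupcheck tool that can validate a backup file after the event (a hedged sketch; the backup file name and path are example assumptions):

hdbbackupcheck /backup/hana/HDB/COMPLETE_DATA_BACKUP_databackup_0_1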

Authenticity

We should consider backup file authenticity.
Do you need to know if the backup files have been tampered with?
Or maybe just check that what was sent over the network to the target storage location is the exact same file that was originally created?
You may need to perform some sort of checksum on the original backup file to help establish and authenticate the backup files.
Ideally, this process should be the same for all databases.
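A simple, database-agnostic sketch using standard checksum tools (file names and paths are example assumptions):

# On the database server, right after the backup completes
cd /backup/hana/HDB && sha256sum COMPLETE_DATA_BACKUP_databackup_0_1 > backup.sha256

# On the target storage location, after both files have been copied over
cd /target/storage/HDB && sha256sum -c backup.sha256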

Pre-Execution Checks

We should perform checks of the environment.
Before your script starts to run the backup, you may wish to include a common set of pre-checks.
The reason is that common issues can be integrated into the pre-checks over time (a minimal sketch of a few of these checks follows the list below).

Examples Include:

  • Are you running as the correct O/S user?
  • Do you have execution access to all required sub-scripts/log directories?
  • Is the type of target database that your script supports, installed?
  • Is the target database running?
  • Are any other required processes running (e.g. ASE backupserver)?
  • Is a backup script already executing?
  • Is the version of the database supported by your script?
  • Is the target backup destination available?  (e.g. file/folder location).
  • Is there enough disk space for your backup to complete?
  • Was the last backup a success?  If not, can you remove the previous dump files?
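A minimal sketch of a few of these checks in bash (the user name, lock file, process name and paths are all example assumptions):

#!/bin/bash
# Example pre-checks to run before starting the actual backup
REQUIRED_USER=dbbackup
BACKUP_DIR=/backup/hana/HDB
LOCKFILE=/var/run/db_backup.lock

# 1. Are we running as the correct O/S user?
[ "$(id -un)" = "$REQUIRED_USER" ] || { echo "Must run as $REQUIRED_USER"; exit 1; }

# 2. Is a backup script already executing?
[ -e "$LOCKFILE" ] && { echo "Backup already running"; exit 1; }
echo $$ > "$LOCKFILE"

# 3. Is the target backup destination available and writable?
[ -d "$BACKUP_DIR" ] && [ -w "$BACKUP_DIR" ] || { echo "Backup target not available"; exit 1; }

# 4. Is the target database running? (example: look for a HANA indexserver process)
pgrep -f hdbindexserver >/dev/null || { echo "Database does not appear to be running"; exit 1; }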

Once you’ve got all the above decided, then it will be a simple task of writing the script.

Listing Azure VM DataDisks and Cache Settings Using Azure Portal JMESPATH & Bash

As part of a SAP HANA deployment, there are a set of recommendations around the Azure VM disk caching settings and the use of the Azure VM WriteAccelerator.
These features should be applied to the SAP HANA database data volume and log volume disks to ensure optimum performance of the database I/O operations.

This post is not about the cache settings, but about how it’s possible to gather the required information about the current settings across your landscape.

There are 3 main methods available to an infrastructure person to see the current Azure VM disk cache settings.
I will discuss these methods below.

1, Using the Azure Portal

You can use the Azure Portal to locate the VM you are interested in, then check its disks, looking at each disk in turn.
You can only see the disk cache settings under the VM view inside the Azure Portal.

While slightly counter-intuitive (you would expect to see the same under the “Disks” view), it’s because the disk cache feature is provided by the VM onto which the disks are bound, therefore it’s tied to the VM view.

2, Using the Azure CLI

You can use the Azure CLI (from bash or PowerShell) to find the disks and get the settings.

This is by far the most common approach for anyone managing a large estate. It uses the existing Azure API layers and the Azure CLI to query your Azure subscription, return the data in JSON format and parse it.
The actual query is written in JMESPATH (https://jmespath.org/) and is similar to XPath (for XML).

A couple of sample queries in BASH (my favourite shell):

List all VM names:

az vm list --query [].name -o table

List VM names, power state, O/S, VM size, resource group and data disk details (name, LUN, caching, size and WriteAccelerator setting):

az vm list --show-details --query '[].{name:name, state:powerState, OS:storageProfile.osDisk.osType, Type:hardwareProfile.vmSize, rg:resourceGroup, diskName:storageProfile.dataDisks.name, diskLUN:storageProfile.dataDisks.lun, diskCaching:storageProfile.dataDisks.caching, diskSizeG:storageProfile.dataDisks.diskSizeGb, WAEnabled:storageProfile.dataDisks.writeAcceleratorEnabled }' -o table

List all VMs with names ending d01 or d02 or d03, then pull out the data disk details and whether the WriteAccelerator is enabled:

az vm list --query "[?ends_with(name,'d01')||ends_with(name,'d02')||ends_with(name,'d03')]|[].storageProfile.dataDisks[].[lun,name,caching,diskSizeGb,writeAcceleratorEnabled]" -o tsv

To execute the above, simply launch the Cloud Shell in the Azure Portal and select “Bash”, then paste in the query and hit return.

3, A Most Obscure Method

Since SAP require you to have the “Enhanced Monitoring for Linux” (OEM) agent extension installed, you can obtain the disk details directly on each VM.

For Linux VMs, the OEM creates a special text file of performance counters, which is read by saposcol (remember that?) for use by SAP diagnostic agents, ABAP stacks and other tools.

Using a simple piece of awk scripting, we can pull out the disk cache settings from the file like so:

awk -F';' '/;disk;Caching;/ { sub(/\/dev\//,"",$4); printf "/dev/%s %s\n", tolower($4), tolower($6) }' /var/lib/AzureEnhancedMonitor/PerfCounters

There’s a lot more information in the text file (/var/lib/AzureEnhancedMonitor/PerfCounters), and in my later post Checking Azure Disk Cache Settings on a Linux VM in Shell, I show how you can pull out the complete mapping between Linux disk devices, disk volume groups, Azure disk names and the disk caching settings.


Korn Shell vs Powershell and the New AZ Module

Do you know Korn and are thinking about learning Powershell?

Look at this:

function What-am-I {
   echo "Korn or powershell?"
}

What-am-I
echo $?

Looks like Korn, but it also looks like Powershell.
In actual fact, it executes in both Korn shell and Powershell.

There’s a slight difference in the output from “$?” because Powershell will output “True” and Korn will output “0”.
Not much in it really. That is just another reason Linux people are feeling the Microsoft love right now.

Plus, as recently highlighted by a Microsoft blog post, “az”, the name of the Azure CLI which allows you to interact with Azure APIs and functions, will now also be the name of the new PowerShell module used to perform the same operations, replacing “AzureRM”.

It makes sense for Microsoft to harmonise the two names.
It could save them an awful lot of documentation because currently they have to write examples for both “az” CLI and Powershell cmdlets for each new Azure feature/function.