This blog contains experience gained over the years of implementing (and de-implementing) large scale IT applications/software.

SAP’s Deeper Partnership with Red Hat

An announcement back in February 2023 from Walldorf tells us of a “deepening” partnership between SAP and the Enterprise Linux Operating System vendor Red Hat.

They have a long history together already, with the SAP Linux Labs including the Red Hat tech team to ensure SAP on Red Hat Linux works and performs as it should.
 

Here are the lines of significance from the SAP news article: https://news.sap.com/2023/02/red-hat-and-sap-deepen-partnership/

…SAP is boosting support for the RISE with SAP solution using Red Hat Enterprise Linux as the preferred operating system for net new business for RISE with SAP solution deployments.

The platform builds on this trust by offering a consistent, reliable foundation for SAP software deployments, providing a standard Linux backbone to support SAP customers across hybrid and multi-cloud environments.

…building on Red Hat’s scalable, flexible, open hybrid cloud infrastructure.

…SAP’s internal IT environments and SAP Enterprise Cloud Services adopting Red Hat Enterprise Linux can gain greater flexibility to address modern and future technology requirements.

“…Red Hat Enterprise Linux offers enhanced performance capabilities to support RISE with SAP solution deployments across cloud environments…

There are a lot of points to cover and, as always, a little history is useful.
Grab a bagel (that’s what Americans eat, right?), put some Obatzda cheese on it (it’s German, I’m trying to equate eating with the subject of this article) and settle in for a read.

Who is Red Hat?

You can read all about Red Hat on Wikipedia here: https://en.wikipedia.org/wiki/Red_Hat , but suffice to say:

  • It has been owned by IBM since 2019.
  • It owns Ansible.
  • It owns Red Hat Enterprise Linux CoreOS (RHCOS), which is the production Linux Operating System beneath the container platform OpenShift.  RHCOS is built on the same Red Hat Enterprise Linux (RHEL) kernel.

What is RISE with SAP?

There are many views on why “RISE with SAP” came to fruition and who it benefits, but the official line is that RISE with SAP is a solution designed to support the needs of the customer’s business in any industry, with SAP responsible for the holistic service level agreement (SLA), cloud operations and technical support, and the partner (insert any Global SI) providing sales, consulting and application managed services (AMS).

…SAP is boosting support for the RISE with SAP solution using Red Hat Enterprise Linux as the preferred operating system for net new business for RISE with SAP solution deployments.

When the article talks about “net new” that just means any brand new RISE subscriptions.

Notice that one of the significant lines I pulled out of the article says:

…providing a standard Linux backbone to support SAP customers across hybrid and multi-cloud environments.

Since SAP are doing the hosting, the “multi-cloud” part is probably referring to SAP’s own hybrid and multi-cloud setup, i.e. SAP’s own datacentres and also the hyperscalers.

An enticing option that comes as part of the RISE deal (depending on the customer spend) is SAP Business Technology Platform (BTP).
SAP BTP is a PaaS solution under a subscription model, in which SAP customers can combine and deploy curated SAP services from SAP or third-parties, or use services to code their own solutions in a variety of languages including SAP’s proprietary ABAP language.

The SAP BTP environments are hybrid and multi-cloud: they are hosted in Cloud Foundry (the newest) or Neo (currently sun-setting), run from a combination of SAP’s own datacentres and/or the main hyperscalers (Cloud Foundry).  There are two other environments: Kyma, a micro-services runtime based on Kubernetes, and the ABAP environment, hosted in Cloud Foundry.

To conclude this section, I suggest that the described “net new business” is actually internal business inside SAP and not directly the hosting of customers’ S/4HANA systems.  In fact, S/4HANA is only very loosely mentioned in the article, which leads me to believe that this announcement is purely for BTP and other surround services.

SAP HANA and Compute Power

In one of the statements from SAP on this “deepening” partnership, we see:

“…Red Hat Enterprise Linux offers enhanced performance capabilities to support RISE with SAP solution deployments across cloud environments…

I can’t see anything specifically mentioned about how Red Hat’s Linux operating system is more performant than SUSE, other than an article from 2019 where a SAP Business Warehouse (BW) on HANA system (maybe, could be BW/4HANA, difficult to tell) holds a world record.

See here for more:  https://www.redhat.com/en/resources/red-hat-enterprise-linux-for-sap-solutions-datasheet   which links to here:  https://www.redhat.com/en/blog/red-hat-enterprise-linux-intels-newest-xeon-processors-posts-record-performance-results-across-wide-range-industry-benchmarks?source=blogchannel

The things to note about those claims are:

  • This was based on a 2nd Gen Intel Xeon (3rd Gen is already available).
  • The CPU used Intel Advanced Vector Extensions 512 (AVX-512) instruction set, which Intel says arrived in 3rd Gen chips (is the Red Hat article quoting the wrong chip generation?).
  • Generally we run HANA on hyperscalers on Intel Xeon Skylake or Cascade Lake CPUs.  Only HANA on bare metal may allow the very latest Xeon generations.
  • The Red Hat Enterprise Linux version used for the world record was 7.2, but 7.9 is the latest 7.x minor release and 9.0 is out now.  Also, 7.2 is now only supported for older versions of HANA 2.0 (up to SPS03).
  • Intel Optane DC persistent memory (Intel’s non-volatile memory technology) was used in the world record, but in 2022 Intel announced it was being discontinued (superseded by other initiatives).
  • 2019 was the year that the IBM acquisition of Red Hat concluded.  Coincidence?

My summary of this section is that I don’t believe performance is the reason for any switch by SAP from (mainly) SUSE to Red Hat.  The one article of relevance that I can find seems just too old and outdated.

What I think is that the announcement from SAP is referring to something other than the Linux Operating System alone.

Red Hat’s Scalable, flexible, open hybrid cloud infrastructure

We maybe need to look past the Red Hat Linux Operating System and at the infrastructure eco-system that the Operating System is part of.

…building on Red Hat’s scalable, flexible, open hybrid cloud infrastructure.

When the article talks about “open” we are inclined to think about Open Source, freely available or even open APIs (sometimes just having APIs can make something “open”).

In my mind, something that can run seamlessly almost anywhere on hybrid cloud would involve containers.  Containers provide scalability (scale-out) and flexibility (multiple environments offered).

Let me introduce you to OpenShift.  Yeah, it’s got “open” in the name.

See here for a wiki article:  https://en.wikipedia.org/wiki/OpenShift

As a summary of OpenShift, the Red Hat Enterprise Linux CoreOS (RHCOS) underpins the OpenShift hybrid cloud platform and RHCOS uses the same kernel as Red Hat Enterprise Linux.

The orchestration of OpenShift containers is done using Kubernetes and Red Hat is the second largest contributor to Kubernetes after Google (Red Hat is a platinum member: https://www.cncf.io/about/members/).

I think you might be able to see where we are heading in this section.

Could SAP be adopting OpenShift internally for its future container hosting platform strategy?

IBM Cloud deprecated support for Cloud Foundry in mid-2022.  As suspected, Red Hat OpenShift is one of the touted solutions to replace it: https://cloud.ibm.com/docs/cloud-foundry-public?topic=cloud-foundry-public-deprecation#dep_nextsteps

Need greater efficiency and revolutionary delivery? Red Hat OpenShift on IBM Cloud might be your solution.

The above quote on the IBM Cloud site does provide some hint that operating Cloud Foundry platform services at scale could be less efficient and less innovative compared to Red Hat OpenShift.


Maybe this is something that, internally, SAP have also concluded?

What Does SUSE Offer to Compete with Red Hat and its OpenShift Offering?

The SUSE Linux Enterprise Server (SLES) operating system has been a solid foundation for running SAP systems.

Similar to Red Hat, SUSE has a varied portfolio of products in the Linux container technology space.
One of those products is Rancher (acquired with Rancher Labs), an open source container management platform similar to Red Hat’s OpenShift, which makes Kubernetes easier to manage, especially as the number of clusters and containers grows.

SUSE is also a contributor to Kubernetes (it is a silver member).

The SUSE Rancher product is open-armed, in that it embraces many different operating systems and a number of license options, whereas Red Hat OpenShift supports only Red Hat CoreOS and requires a Red Hat subscription.

While being open is a good thing, it also adds complexity; Red Hat’s CoreOS, by contrast, is a purpose-built operating system with all the required features, and it appears to be simpler to deploy and maintain.

It’s possible that SAP’s announcement comes after some internal evaluation of the two products, with Red Hat’s being favoured.

Conclusions

We’ve looked at the article from the SAP site where the new “deeper” partnership with Red Hat was announced.

I think I ruled out performance as a reason for the Operating System change.  The article just didn’t have enough depth for my liking.

I have speculated on how this SAP and Red Hat partnership could be about the internal SAP hosting of PaaS and maybe SaaS related systems and not directly related to hosting of customer’s S/4HANA systems.

What we could be looking at, is the next generation of hosting platform for SAP BTP or possibly SAP S/4HANA Cloud public edition.
Red Hat’s OpenShift platform, underpinned by Red Hat CoreOS and accompanied by the Red Hat tools to monitor, automate and orchestrate, could combine to provide a solid platform for addressing SAP’s internal strategic needs.

It’s one of the platforms chosen by IBM Cloud (a no brainer for them really), with the justification that Cloud Foundry was no longer the strategic platform.

The announcement has no impact on the certification of SUSE for running S/4HANA and therefore should not affect any customer decisions during their RISE with SAP journey for their S/4HANA systems.

Resources:

https://news.sap.com/2023/02/red-hat-and-sap-deepen-partnership/
https://blogs.sap.com/2019/07/15/evolution-of-sap-cloud-platform-retirement-of-sap-managed-backing-services/
https://blogs.sap.com/2023/06/14/farewell-neo-sap-btp-multi-cloud-environment-the-deployment-environment-of-choice/
https://me.sap.com/notes/2235581
https://learn.microsoft.com/en-us/azure/virtual-machines/mv2-series
https://learn.microsoft.com/en-us/azure/virtual-machines/sizes-compute
https://www.intel.com/content/www/us/en/architecture-and-technology/avx-512-solution-brief.html
https://www.redhat.com/en/resources/red-hat-enterprise-linux-for-sap-solutions-datasheet
https://www.redhat.com/en/blog/red-hat-enterprise-linux-intels-newest-xeon-processors-posts-record-performance-results-across-wide-range-industry-benchmarks
https://docs.openshift.com/container-platform/4.8/architecture/architecture-rhcos.html#rhcos-key-features_architecture-rhcos
https://www.anandtech.com/show/14146/intel-xeon-scalable-cascade-lake-deep-dive-now-with-optane
https://www.sap.com/products/erp/s4hana.html
https://en.wikipedia.org/wiki/Red_Hat
https://en.wikipedia.org/wiki/Rancher_Labs
https://en.wikipedia.org/wiki/OpenStack
https://en.wikipedia.org/wiki/OpenShift
https://en.wikipedia.org/wiki/Cloud_Foundry
https://en.wikipedia.org/wiki/3D_XPoint
https://www.ibm.com/support/pages/sap-s4hana-red-hat-openshift-container-platform-business-perspective-cloud-hosting-provider
https://cloud.ibm.com/docs/cloud-foundry-public?topic=cloud-foundry-public-deprecation
https://www.cncf.io/about/members/

Preventing File System Corruption from Halting Boot Up of SLES in Azure

When you create a Linux VM in Azure, you don’t get to know the “root” user password.
By default, if a Linux VM detects journaled file system corruption at boot, it will go into recovery mode, requiring the root password to be able to fix it.
Without the root password, the only other way to recover is to copy the O/S disk, mount it on another VM and fix the issue there.
If you don’t have Azure Boot Diagnostics enabled, you might not even know what the problem is! The VM will just appear to not boot.

In this post I show a simple way to prevent a Linux VM (I use SLES) from failing to boot due to file system corruption. Our example is an XFS file system, just like in my previous post.
XFS is journaled and will check the integrity on mounting. If there are problems with the file system then Linux will fail to mount it, which will cause the O/S boot up process to stall.

In a production system, you can imagine the scenario where a simple restart of a VM causes an hour long downtime (or longer).

NOTE: In my scenario there is no Linux device encryption; if there were, it would make the job of repair even harder, making it all the more important to prevent boot failure.

Preventing Boot Failure

To prevent our corrupt XFS file system from halting boot, we just need to add a single option to the mount options in the file /etc/fstab.
We use the “nofail” option.

We could just go and write this straight out to the fstab file and expect it to work.
However, we can test it first to make sure that it is:

  • supported on your version/distribution of Linux.
  • supported for your file system type (mine is XFS).

We could use the “-f” (fake) option of the “mount” command, but in testing I could not get this to actually show an error when it is passed an invalid mount option.
Instead, let’s actually mount the file system to check if “nofail” is accepted.

As the root user (or with sudo) get the current mount options for your file system (the one you will be applying “nofail” to):

grep BIG /etc/fstab

/dev/volTMP/lvTMP1 /BIGSTRIPEDDISK xfs defaults 0 0

I can see that my /BIGSTRIPEDDISK is mounted from a volume group and has the “defaults” mount options. Yours may be different.
We can now create a new mount point location and temporarily mount the file system adding the “nofail” option to test it is accepted (adjust the mount options using your current mount point settings):

mkdir /mnt/tempmount
mount -o defaults,nofail /dev/volTMP/lvTMP1 /mnt/tempmount

If you got an error or warning, then the file system type or your Linux distribution does not support the use of “nofail”. Maybe check the man page for an equivalent option (“man mount”).

If you didn’t get an error, then you know that you can successfully apply the “nofail” option to the end of the options column (column number 4) in the fstab:

vi /etc/fstab
...
/dev/volTMP/lvTMP1 /BIGSTRIPEDDISK xfs defaults,nofail 0 0
...
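
Before the reboot test, we can also tidy up the temporary test mount and ask “mount” to process fstab again; a clean, error-free run of “mount -a” is a quick sanity check that the edited entry still parses (a minimal sketch, assuming the temporary mount point used above):

umount /mnt/tempmount
rmdir /mnt/tempmount
mount -a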

Once applied, it is recommended that you always verify boot-related changes by taking some downtime to restart the machine. There is nothing worse than applying a change and not testing it.

With “nofail” in place, the next time the O/S boots and the file system is mounted, if there are issues with the integrity or even if the device is missing, the O/S will move forward in the boot process and ignore the error.
There is obviously a small consequence of this: file systems may not be mounted after boot has completed.
It is possible to mitigate this problem with monitoring (scripts that monitor file system free space, for example) or other checks after boot.
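
As an example of such a check, here is a minimal post-boot sketch (assuming the util-linux “findmnt” tool, which ships with SLES) that warns about any fstab entry that did not get mounted:

#!/bin/bash
# Warn about fstab mount points that are not currently mounted.
awk '$1 !~ /^#/ && $2 ~ /^\// { print $2 }' /etc/fstab | while read -r mp; do
  if ! findmnt -rn "$mp" >/dev/null; then
    echo "WARNING: $mp is in /etc/fstab but not mounted" >&2
  fi
done
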
Of course, there is also a second option to all of this: set the root user password on new VMs and store it in your secure password location. You can use a 16-character random string like those generated by a password manager.
You will also need to ensure that you can use the Azure Serial Console to get to the VM command line, because in some configurations, security practices can indirectly prevent this.

Recovering From a Deleted Data Disk with XFS on LVM in Azure

It’s quite a hefty long title, and it still doesn’t quite convey what I want to write about in the post.
This post is about a specific situation that can occur with a Linux VM whose data disk is part of a Logical Volume Manager (LVM) managed file system: you may have accidentally deleted that data disk, or restored a VM that had “selective disk backup” enabled and is now missing the disk.

In this post I show how to recover the unbootable VM using a rescue VM, then repair the volume by adding a new data disk and eventually repairing the LVM volume group and the XFS file system.

The Setup

In our setup, we have a SLES 12 VM (the victim) with the following disk architecture:

I actually have 3 data disks, but we will only be working with 2 of them.
The 2 data disk LUNs map to Linux physical disks /dev/sdd and /dev/sde and are part of volume group volTMP, which contains a logical volume lvTMP1 striped over the two disks and on lvTMP1 is an XFS file system mounted as “/BIGSTRIPEDDISK”.

I actually created this setup as part of this post here, so you can follow the instructions on that post to get to the same state if you wish.

I also have, ready to start up, an Ubuntu VM created using a basic Azure VM type (it’s a B1s) and an Azure Ubuntu Server 18 LTS image.
This will be my rescue VM. It’s small, light and fast to boot up.
You don’t have to use an Ubuntu VM, but you will need another VM that is running Linux and is able to mount the file system type that you use for your root file system (mine is ext4).

We Do the Damage

In this scenario we are deleting one of the data disks of the SLES 12 VM, from inside the Azure Portal.
The same situation could occur if you restored a VM from backup, but the VM had “selective disk backup” enabled and was restored with missing data disks.

The first thing we do, with the VM already shutdown, is remove the data disk (LUN2) from the Portal:

NOTE: We are not actually deleting the disk here in our test setup. It just detaches it from the VM. But imagine that we did detach and delete it completely.

Save the change:

We then start the VM:

The VM May Not Boot

Depending on your file system mount options and your O/S (I’m using SLES 12), by default the Linux VM will refuse to boot fully.
It will actually get stuck trying to mount the file system /BIGSTRIPEDDISK because the data disk is now missing (we deleted it!).

NOTE: If you have “nofail” in the fstab mount options, then your VM may boot normally, with the file system missing. You’re lucky. Skip ahead to the section on adding a new data disk (after section “Swap O/S Disk”).

The Linux O/S will go into recovery mode. If you have Boot Diagnostics enabled, you can verify this in the “Serial Console” within Azure Portal on the VM resource details screen.
In recovery mode, you are prompted to enter the root password to give you access to a basic shell. However, when deploying from Azure images, you don’t get a root user password, so you won’t know it!

If you don’t have Boot Diagnostics enabled, then you will be waiting some minutes until the VM boot hits a timeout and Azure Portal informs you it failed to start:

In either of the above cases, you may end up at this same point. The VM will not boot due to the failed disk.

What we need to do to recover from this situation and allow our SLES 12 VM to boot, is to comment out the failed file system from the /etc/fstab file on the SLES 12 VM’s O/S disk.
This will involve the use of the handy “swap O/S disk” button in the Azure Portal.

Create an Image of the O/S Disk

We have to create a snapshot image of the existing SLES 12 VM O/S disk, because we cannot detach the O/S disk from the existing VM.

Locate the SLES 12 VM in the Portal and click its O/S disk:

Click the “Create Snapshot” button, then give the snapshot a useful name:

I used standard HDD (cheaper), but you can choose SSD if you wish:

Click to go to the snapshot once it has been created:

We now have an image of the O/S disk, which we can use to create a new O/S disk.

Create New Disk from Image

We will create a new managed disk from the image of the O/S disk.
This will allow us to mount it on our Ubuntu VM (the rescue VM).

From the Azure Portal create a brand new disk of the same size and specification as the original O/S disk.
NOTE: The Ubuntu VM is limited and may not support higher performing disk types like Ultra Disk, in which case you may need to create the new disk as a lower performance disk.

Select the image you created as the source and give the new disk a recognisable name:

Attach New Disk to Rescue VM

We now attach the new disk to the rescue VM (my Ubuntu VM) from the “disks” section of the Ubuntu VM resource:

It’s the first data disk, so is going on LUN 0:

Mounting the Disk on Rescue VM

Start the rescue VM (Ubuntu) if it is not already started, log onto the VM and either as root or using “sudo” check the disk devices present by running “lsblk”:

In my example the new disk is visible as /dev/sdc.
Because the disk is an O/S disk, it has partitions (it’s not a whole disk). For this reason, we have to mount the specific partition that the root (“/”) file system was mounted from.
In my case I can easily see that it is partition 4 (sdc4), because it is the largest partition on the /dev/sdc disk at 28.8G in size.

We have to create a location to mount the partition (“mkdir /mnt/suse_os_disk”) then mount partition 4 from sdc using the “mount” command:
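
For reference, the two commands look like this (adjust the partition number to match your own disk):

mkdir /mnt/suse_os_disk
mount /dev/sdc4 /mnt/suse_os_disk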

The mount command is intelligent enough to know what file system is on the disk.

Adjust Fstab File

With the new disk mounted on the rescue VM, we can use our favourite text editor to adjust the fstab file and comment out the affected file system, to prevent it from being mounted.

vi /mnt/suse_os_disk/etc/fstab

We comment out /BIGSTRIPEDDISK :

Save the file changes.

We can now safely unmount the disk and then disconnect it from the rescue VM:
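
The unmount itself is just (using the mount point from the previous step):

umount /mnt/suse_os_disk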

From the Azure Portal, we detach the new disk from the rescue VM:

Swap O/S Disk

In Azure Portal, go to the SLES 12 VM and in the disks view of the VM, click the “Swap OS disk” button:

Select the new disk that we have just unmounted from the rescue VM:

Start the SLES VM and it will boot off the new disk:

The VM will boot up successfully.
Great stuff. All that effort and so far we have a booting VM.
We still have the initial problem: we deleted one of our data disks. We need to create a new data disk.

Add New Data Disk

In Azure Portal on the SLES VM, create a new data disk to the same specification as it existed originally.
You can guess if you are not sure, but you have to remember that it should be the same tier and size as other disks in a striped LVM logical volume.

Save the change:

Repair Volume Group

With the new data disk added, we can now start the process of repairing the volume group.

We execute a pvscan to list physical volumes on the VM:
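
For reference, the command is run as root (or with sudo):

pvscan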

In the pvscan output we can see that LVM is reporting a missing physical volume. This is the one we deleted.

Using “lsblk” we can see the new device right at the end, it’s /dev/sde:

We can create the new physical volume and apply the previous UUID to the disk, to make LVM think this is the same disk, then we get LVM to write the configuration backup to the new disk.

First, let’s check what LVM configuration backups we have for our volume group:

ls -ltr /etc/lvm/archive/volTMP*

We choose the latest one available before we lost the disk:

We can now re-create the physical volume, applying the previously used UUID and LVM configuration (metadata):

pvcreate --uuid '<previous missing uuid from the pvscan output>' \
 --restorefile /etc/lvm/archive/volTMP_<latest>.vg /dev/sde

Now we tell LVM to restore itself into a working order using the configurations available on the disks:
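
The usual command for this step is vgcfgrestore, naming the volume group (a sketch; you can also point it at a specific archive file with “-f” if needed):

vgcfgrestore volTMP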

Let’s check the status of our logical volume that exists in the volTMP volume group:
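
This can be checked with “lvs”; the attributes column contains the flags referred to below (a sketch):

lvs -o lv_name,vg_name,lv_attr volTMP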

In the output we notice that the “a” (active) flag is not set; the logical volume is therefore not yet active.
Let’s activate it:

lvchange -ay /dev/volTMP/lvTMP1

You can see that it is now active. Great!
We have repaired LVM. We no longer get any warnings about missing disks when executing the LVM related commands like “pvs, lvs, vgs”.

Repair File System

If we were to try and mount the file system /BIGSTRIPEDDISK, it would show an XFS error, because our new disk does not yet have a file system on it.
The file system is in a strange status, because 50% of the blocks are on the disk that was not deleted, and 50% are non-existent, because they were on the disk that was deleted.
So we actually have to repair the file system.
Instead of repairing, we could have chosen to just apply a new file system with mkfs.xfs, but let’s do a repair and see what the process is.

xfs_repair -L /dev/mapper/volTMP-lvTMP1

We can now edit the fstab and uncomment our file system /BIGSTRIPEDDISK:
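
In my case the uncommented line looks like this (yours may carry different options, e.g. “nofail”):

vi /etc/fstab
...
/dev/volTMP/lvTMP1 /BIGSTRIPEDDISK xfs defaults 0 0
...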

Finally, we try and mount the file system:
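
Because the entry is back in fstab, we can mount it by its mount point alone:

mount /BIGSTRIPEDDISK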

It worked, and it was a clean mount. Nice.

Where Are My Files

With our repaired file system mounted, we dive in and look for files:
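
For example, listing the top level of the mount point:

ls -la /BIGSTRIPEDDISK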

Ah yes! It’s clean!
No files will exist because we lost the disk. The LVM striping that we use is for performance, not redundancy, which means when you have to re-create the disk and repair the file system, all files will be lost.

Summary:

  • Actually deleting data disks is not simple in the Azure Portal. Microsoft have done a good job to try and prevent you from doing it by mistake, but it is still possible to do it by accident and also through code.
  • Turn on boot diagnostics on your VMs, it helps to see what is going on during boot.
  • Add “nofail” to the mount options for the data disks. This will allow the VM to boot even with missing data disks.
  • When a data disk goes missing, that is actively mounted at Linux boot, the VM may not boot at all.
    You could reset all your root account passwords and securely store them, which would allow you to enter recovery mode, but this is not something that most companies do.
    Be prepared and have a rescue VM ready to start up. This is the best option and could help in a number of scenarios.
  • Once the VM is booting again, we can use LVM to simply restore the state of the volumes and file systems. We don’t need to re-create the LVM setup.
  • In a striped logical volume, we stripe for performance, not redundancy; you will lose data if you lose one of the data disks of a striped logical volume.
  • Using the “selective disk backup” feature saves backup vault space, but it means you will need to use this process to restore the volume groups for missing disks! Be wary and plan ahead!
  • Test backup & restore processes!

In another blog post, I will show how to automate the root disk snapshot and disk creation followed by attaching to another VM. We will have a single script that can be run to automate the whole process. This is useful to help fix other issues such as when you have enabled Linux HugePages with more memory than the VM has!

New SAP ASE Audit Logging Destination in 16.0.4

Let’s face it, auditing in SAP ASE 16 is difficult to configure, due to the requirement to produce your own stored procedure and to correctly size the rotating table setup, with multiple database segments for each of the multiple audit tables. Once configured, you then had the realisation that to obtain the records you needed to extract them from the database somehow, and then came the problem of who does this task, what privileges they need, whether they themselves should be audited, etc.

Good news! With the introduction of ASE 16.0 SP04, configuring auditing in the SAP ASE database just got dramatically easier and way more useable!

Introducing “audit trail type”

In previous versions of the SAP ASE database, database audit records were stored only in the sybsecurity database in a specific set of tables that you had to size and create and rotate yourself (or with a stored procedure).

Once the records were in the database, it was then up to you to define how and when those records would be analysed.
How complex the extraction process would be depended on whether or not your SIEM tool supports direct ODBC/JDBC access to SAP ASE.

In SP04 a new parameter called “audit trail type” was introduced, with which you can now set the audit store to be “syslog”.
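
As a rough sketch (an assumption based on the usual ASE pattern for character-valued parameters; check the SP04 documentation for the exact syntax on your version), the switch would be made from isql with something like:

sp_configure 'audit trail type', 0, 'syslog'
go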

When setting the store to be “syslog”, the audit records are pushed out to the Linux syslogd daemon (or rsyslogd or syslog-ng) and written to the O/S defined location according to the configuration of syslogd:

Each audit record gets a tag/program name of “SAP_ASE_AUDIT”, which means you can define a custom syslogd log file to hold the records, and also then specify a custom rotation should you wish.
Your syslogd logs may already be pulled into your SIEM tools, in which case you will simply need to classify and store those records for analysis.
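
For example, with rsyslogd a property-based filter keyed on the program name can route the audit records to their own file (a sketch only; the drop-in file name and log path are assumptions, and classic syslogd or syslog-ng use their own syntax):

cat <<'EOF' > /etc/rsyslog.d/sap_ase_audit.conf
# Send SAP ASE audit records (tagged SAP_ASE_AUDIT) to a dedicated file.
:programname, isequal, "SAP_ASE_AUDIT"    /var/log/sap_ase_audit.log
& stop
EOF
systemctl restart rsyslog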

With the new parameter set to “syslog” and the audit records being stored as file(s) on the file system, you will need to ensure that the file system has adequate space and establish a comfortable file retention (logrotate) configuration, so that audit records do not fill the file system (which would prevent further audit records from being persisted).

Of course, should you enjoy torture, you can always go ahead and continue to use the database to store the audit records. Simply setting the new parameter “audit trail type” to “table” will store the audit records in the database, just like in previous versions of ASE.

Useful Links

What’s new in ASE 16.0.4

Parameter: audit trail type

HowTo: Install Azure Enhanced Monitoring for Linux for SAP

One SAP support prerequisite for running SAP on Azure is that you must have Azure Enhanced Monitoring for Linux installed onto the Azure Linux VMs where your SAP application runs (including DB servers). Details are in SAP note 2015553.

In this brief post I show how to check if it is already installed, then how to install it, without needing to install the Powershell Azure Cmdlets.

What is Azure Enhanced Monitoring for Linux?

Azure Enhanced Monitoring for Linux (AEM) is an Azure VM extension installed onto the target Linux VM.
The extension uses the Azure Instance Agent to pull additional telemetry information down onto the local VM, and places it into a file on the Linux file system called /var/lib/AzureEnhancedMonitor/PerfCounters.

This special file is pure ASCII text with data inside that is semi-colon separated.
You can use Linux command line utilities to query information from the file (it’s readable by any user).
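
For example (a sketch only; the number and meaning of the fields vary by extension version, so treat the field positions as assumptions):

# Make the semi-colon separated records easier to read:
tr ';' '\t' < /var/lib/AzureEnhancedMonitor/PerfCounters | less

# Or print just the first few fields of each record:
awk -F';' '{ print $1, $2, $3, $4 }' /var/lib/AzureEnhancedMonitor/PerfCounters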

The file is parsed by the SAP Host Agent (also installed on every SAP VM) and made available in the monitoring memory segment used by the Netweaver ABAP stack, with the data being visible in transaction ST06 (OS06).

How to Check if AEM Is Installed

There are a number of ways to check if Azure Enhanced Monitoring for Linux is installed on a VM:

  • Inside the VM in Linux we can check for the existence of file: “/var/lib/AzureEnhancedMonitor/PerfCounters”
  • Inside the VM in Linux we can check the extension home dir exists: “/var/lib/waagent/Microsoft.OSTCExtensions.AzureEnhancedMonitorForLinux-*”
  • In the Azure Portal, we can check the status of the extension on the VM resource:
  • In the Azure Cloud Shell, we can either Test or Get the AEM Extension to see if it is installed:
Get-AzVMAEMExtension -ResourceGroupName <RG-NAME> -VMName <VM-Name>
Test-AzVMAEMExtension -ResourceGroupName <RG-NAME> -VMName <VM-Name>

Installing AEM

There are two ways to install the Azure Enhanced Monitoring for Linux extension into a VM:

  • Using local PowerShell (on your computer) with the Azure Cmdlets installed.
    You will need to have the rights on the local machine to perform the install of the Azure Cmdlets.
    I will not cover this method as it is quite tedious to set up and the chances are that your PowerShell is locked down by your company and will not allow you to install the required Cmdlets.
  • Using Powershell in the Azure Portal Cloud Shell.
    This has all the required Cmdlets already installed, but to set up the Cloud Shell you will need rights in Azure to be able to create a Storage Account to use for your shell home location.

Out of the two options, I usually opt for the Cloud Shell. Once you have it setup, you will find you can use it for many other things and access it from anywhere!
In this post I will be using Cloud Shell to do the installation.

To install the AEM extension, we use Powershell commands to do the following sequence of tasks:

  • Obtain our subscription context.
  • Deploy the extension to the specific VM in the subscription.

Let’s start the Cloud Shell (NOTE: You will need a Storage Account for the Cloud Shell to work).
Go to the Azure Portal and click the Cloud Shell button on the button bar:

Make sure that you are in a PowerShell Shell:

We may need to switch to a specific subscription.
We can list all subscriptions by calling Get-AzSubscription and selecting just the Id property:

Get-AzSubscription | Select-Object Id

We can then set the context of our Cloud Shell to the specific subscription Id as follows:

$context = Get-AzSubscription -SubscriptionId '<SubscriptionID>'
Set-AzContext -SubscriptionObject $context

Once the code has executed, we can check if the AEM extension is already installed:

Get-AzVMAEMExtension -ResourceGroupName <RG-NAME> -VMName <VM-Name>

If the AEM extension is already installed, then we will see output being returned from the Get command:

ResourceGroupName       : UK-West
VMName                  : vm01
Name                    : AzureEnhancedMonitorForLinux
Location                : ukwest
Etag                    : null
Publisher               : Microsoft.OSTCExtensions
ExtensionType           : AzureEnhancedMonitorForLinux
TypeHandlerVersion      : 3.0
Id                      : /subscriptions/mybigid/resourceGroups/UK-West/providers/Microsoft.Compute/virtualMachines
                          /vm01/extensions/AzureEnhancedMonitorForLinux
PublicSettings          : {
                            "cfg": [
                              {
                                "key": "vmsize",
                                "value": "Standard_D4s_v3"
                              },
                              {
                                "key": "vm.role",
                                "value": "IaaS"
                              },
                              {
                                "key": "vm.memory.isovercommitted",
                                "value": 0
                              },
                              {
                                "key": "vm.cpu.isovercommitted",
                                "value": 0
                              },
                              {
                                "key": "script.version",
                                "value": "3.0.0.0"
                              },
                              {
                                "key": "verbose",
                                "value": "0"
                              },
                              {
                                "key": "href",
                                "value": "http://aka.ms/sapaem"
                              },
                              {
                                "key": "vm.sla.throughput",
                                "value": 96
                              },
                              {
                                "key": "vm.sla.iops",
                                "value": 6400
                              },
                              {
                                "key": "wad.isenabled",
                                "value": 0
                              }
                            ]
                          }
ProtectedSettings       :
ProvisioningState       : Succeeded
Statuses                :
SubStatuses             :
AutoUpgradeMinorVersion : True
ForceUpdateTag          : 637516905202791108
EnableAutomaticUpgrade  :


If the AEM extension is not installed, no output will be seen from the “Get” command.
We can then install the AEM extension with the “Set-AzVMAEMExtension” command as follows:

Set-AzVMAEMExtension -ResourceGroupName <RG-NAME> -VMName <VM-Name>

The extension should be installed successfully.
If you need to remove it, you can use the “Remove-AzVMAEMExtension” command.

There is a “Test” command that you can call to test the AEM:

Test-AzVMAEMExtension -ResourceGroupName <RG-NAME> -VMName <VM-Name>

Finally, if you want to see the additional command line options, then use the standard “Get-Help” as follows:

Get-Help Set-AzVMAEMExtension -Full

Issues with AEM

There’s one known issue with Azure Enhanced Monitoring for Linux: the number of data disks reported in the PerfCounters file seems to be limited to 9.
This means that if you have more than 9 data disks, the performance data may not be visible in the file and therefore not visible in SAP.
It’s possible a fix is on the way.