This blog contains experience gained over the years of implementing (and de-implementing) large scale IT applications/software.

Making saptune Actually Work & Patching to v2

Having recently spent some time analysing the performance of a HANA database system, I got down to the depths of Linux device I/O performance on an Azure hosted VM.

There was no reason to suspect any issue, because during the implementation of the VM image build process, we had followed all the relevant SAP notes.
In our case, on SUSE Linux Enterprise Server for SAP Applications 12, we were explicitly following SAP Note 1275776 “Linux: Preparing SLES for SAP environments”.
Inside that SAP note, you go through the process of understanding the difference between sapconf and saptune, and then actually configure saptune (since it comes automatically with the “for SAP” versions of SLES 12).

Once configured, saptune should apply all the best practices that are encompassed in a number of SAP notes including SAP Note 2205917 “SAP HANA DB: Recommended OS settings for SLES 12 / SLES for SAP Applications 12”, which is itself needed during the HANA DB installation preparation work.
If you follow the note, there are a number of O/S adjustments required for HANA, which can be applied either manually or (as recommended) automatically via saptune, provided the correct saptune profile is selected.

As part of our configuration, we had applied saptune solution profile S4HANA-DBSERVER (also noted in the SUSE documentation for SAP HANA).
This is applied using the standard:

saptune solution apply S4HANA-DBSERVER

You don’t get a lot of feedback from the saptune execution, but the absence of errors normally indicates that it has done what was requested.
You can check it has applied the profile by executing:

saptune solution list

The item that is starred in the returned list is the profile that has been applied.
That’s it.
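
The starred-entry check can also be scripted. This is a minimal sketch, assuming the list output marks the enabled solution with a leading asterisk as described above; the sample text below stands in for real `saptune solution list` output, so verify the exact format against your saptune version.

```shell
# Sample stand-in for `saptune solution list` output (format assumed):
sample_list='All solutions (* denotes enabled solution):
*	S4HANA-DBSERVER
	HANA
	NETWEAVER'

# Print the enabled (starred) solution name:
awk '/^\*/ { print $2 }' <<< "$sample_list"
```

On a live system you would pipe the real command into the same awk filter: `saptune solution list | awk '/^\*/ { print $2 }'`.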

As part of my troubleshooting I even took the trouble of running the publicly available script sapconf_saptune_check (see here: https://github.com/scmschmidt/sapconf_saptune_check/blob/master/sapconf_saptune_check ), which just confirmed that saptune was indeed active/enabled and had a valid profile configured.

Back to the task of checking out the performance issue, and you can probably see where this is going now.
On investigation of the actual saptune profile contents, it was possible to see that a large majority of the O/S changes had not been applied.
Specifically, we were not seeing the NOOP scheduler selected for the HANA disk devices.

By executing either of the following, you can check the currently selected scheduler:

grep '.*' /sys/block/s??/queue/scheduler

or

cat /sys/block/s??/queue/scheduler

The selected scheduler will be in square brackets.
In my case, I was seeing “[cfq]” for all devices. Not good and not the recommendation from SAP and SUSE.
This setting should be automatically adjusted by the tuned daemon.
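
To check this across many devices at once, the bracketed entry can be pulled out with a small helper. This is a sketch of my own (the `active_scheduler` function is not part of saptune or tuned); it simply extracts whatever sits between the square brackets in a scheduler file's contents.

```shell
# Extract the active scheduler (the bracketed entry) from a line such as
# "noop deadline [cfq]" found in /sys/block/<dev>/queue/scheduler:
active_scheduler() {
  sed -n 's/.*\[\(.*\)\].*/\1/p' <<< "$1"
}

# On a live system, loop over the real sysfs files:
# for f in /sys/block/s??/queue/scheduler; do
#   printf '%s: %s\n' "$f" "$(active_scheduler "$(cat "$f")")"
# done

active_scheduler "noop deadline [cfq]"   # prints: cfq
```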

Looking at my version of saptune, I could see it was version 1.1.7 (from the output of the execution of the sapconf_saptune_check script).

Reading some of the recent blog posts from Soeren Schmidt here: https://blogs.sap.com/2019/05/03/a-new-saptune-is-knocking-on-your-door/
I could see that version 2 of saptune was now released.

Downloading the newer version (not installing it directly!), reverting the old solution profile, installing the new saptune version and finally re-applying the same profile confirmed that saptune was the culprit.

The new saptune2 fixed the issue, immediately activating a number of critical O/S changes, including the NOOP scheduler setting on each device.

The moral of the story is that, as well as following the SAP processes, you still need to validate that the tooling has actually done what it says it has done.
The new saptune2 has been incorporated into our build process, plus the configuration check scripts will be specifically checking for it.
However, since the upgrade from saptune1 to saptune2 could cause issues if it just blindly re-applied the “new” profile settings, SAP have made saptune follow a backwards compatible upgrade process, whereby the O/S settings are retained as they were before the upgrade was executed.

Therefore, as per the SAP Note 2816790 “Differences between sapconf and saptune” links, the upgrade process for an already applied profile is to revert it prior to the saptune upgrade, then apply the upgrade, then re-apply the profile.
This could therefore not just be rolled out via our standard SLES patching routine. We had to develop an automated script that would specifically pre-patch saptune to saptune2 using the correct procedure, before we embarked on the next SLES patching round.
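
The revert/upgrade/re-apply order is the important part, so our script followed roughly this shape. This is a sketch, not our production script: the zypper invocation and the `saptune` package name are assumptions for illustration, and `DRY_RUN=1` only prints the plan rather than executing it.

```shell
SOLUTION="S4HANA-DBSERVER"
DRY_RUN=1   # set to 0 to actually execute the commands

run() {
  if [ "$DRY_RUN" = 1 ]; then
    echo "WOULD RUN: $*"
  else
    "$@"
  fi
}

run saptune solution revert "$SOLUTION"      # 1. revert the applied solution
run zypper --non-interactive update saptune  # 2. upgrade the saptune package
run saptune solution apply "$SOLUTION"       # 3. re-apply the same solution
```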

As a post-note, you should make yourself familiar with the coming changes to the SLES scheduler settings, with the introduction of the NONE scheduler (see the links below).

Useful notes/links:
https://www.suse.com/c/sles-1112-os-tuning-optimisation-guide-part-1/
https://blogs.sap.com/2019/06/25/sapconf-versus-saptune-in-more-detail/
https://blogs.sap.com/2019/05/03/a-new-saptune-is-knocking-on-your-door/
https://www.suse.com/c/noop-now-named-none/

Locking HANA Database Users During Maintenance

Running SAP S/4HANA means there are now more direct HANA DB accesses through a variety of Analytics tools, development tools and external reporting systems.
This can present a problem when it comes to patching and maintenance of the system, since you would not want to officially release the HANA database back to end-users until you had performed your preliminary checks to conclude the patching was successful at all levels of the application stack.

Most BASIS administrators are familiar with the usual “tp locksys” command to be able to lock everyone except SAP* and DDIC out of the SAP ABAP application layer.
But what can be done to stop connections direct into the HANA database?
SAP note 1986645 “Allow only administration users to work on HANA database”, provides an attached SQL file which delivers a few new stored procedures and some new database tables.

The stored procedures include:
– 1 for “locking” out non-system users.
– 1 for “unlocking” non-system users (the exact reverse operation against the exact same set of users that was initially locked).
– 1 for adding users to an exception list table.
– 1 for removing users from an exception list table.

The tables are used to store an exception list of users to be excluded from the locking operation.
You will need to add the “SAPABAP1” S/4HANA schema user, the XSA DB user and the cockpit user to the exception list!
Also add any backup operator accounts needed to perform backups, plus any specific test user accounts you need to leave enabled.
There is also a table used for storing the list of users on which the last “locking” operation was performed.

As well as “locking” (the HANA DB accounts are actually disabled) the user accounts, any active sessions for those user accounts are kicked off the database instantly.
This feature is useful in other ways (for example, emergency access to a severely overloaded/failing HANA database system).
Of course if you are running something other than S/4HANA on HANA (maybe Solman), then direct database access may not be a requirement, therefore this set of SQL stored procedures are not so relevant.

How do you implement the SQL?
– Download the SQL from the SAP note and save to a file.
– Execute the SQL in the TenantDB as the SYSTEM user, using HANA Studio, HANA Cockpit, or hdbsql in batch mode (hdbsql doesn’t like the code being pasted at the prompt).

How do you add users to the exception list:
– As SYSTEM in the TenantDB, simply execute the stored procedure:

CALL SESSION_ADMINS_ADD_TO_EXCEPTED_USER_LIST ('SAPABAP1');

How do you utilise the feature?
– As SYSTEM in the TenantDB, simply execute the stored procedure:

CALL START_SESSION_ADMINS_ONLY;

When you’ve finished and wish to “unlock” the previously locked accounts:

CALL STOP_SESSION_ADMINS_ONLY;
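
Since hdbsql batch mode is the most reliable delivery method, the lock sequence can be kept in a small SQL file and executed with `hdbsql -I`. A minimal sketch, assuming the stored procedures from SAP note 1986645 are already installed in the tenant DB; the hostname, port and credentials shown in the comment are placeholders.

```shell
# Write the lock sequence to a batch file for hdbsql -I:
cat > /tmp/lock_users.sql <<'EOF'
CALL SESSION_ADMINS_ADD_TO_EXCEPTED_USER_LIST ('SAPABAP1');
CALL START_SESSION_ADMINS_ONLY;
EOF

# On a real system (placeholder connection details):
#   hdbsql -n hanahost:30015 -d TENANTDB -u SYSTEM -p '<password>' -I /tmp/lock_users.sql

cat /tmp/lock_users.sql
```

A matching unlock file containing only `CALL STOP_SESSION_ADMINS_ONLY;` can be run the same way once maintenance checks are complete.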

HANA 2.0 – Calc View – SAP DBTech JDBC 2048 Column Store Error

Scenario: During DB access of a HANA 2.0 SPS3 Calculation View from S/4HANA ABAP stack (via ABAP) or even directly in HANA Studio (via “Raw Data”), an error is displayed in a short dump or on screen along the lines of “SAP DBTech JDBC (2048: column store error: search table error: (nnnn) Instantiation of calculation model failed: exception 30600. An Internal error occurred”.

After investigation you observe the following error inside the indexserver trace log: “Could not get template scenario <SID>::_SYS_BIC:_SYS_SS_CE_<nnnn>_vers2_lang6_type1_CS_2_2_TMP (t -1) of calculation index <SID>::_SYS_BIC:<PACKAGE>/<CALCVIEW> (t -1). reason: CalculationEngine read from metadata failed.; Condition ‘aScenarioHandle.is_valid()’ failed.”.

The error clearly references the name of your Calculation View (calculation index) but it also references another object with a name like “_SYS_SS_CE_*”.

SAP note 1646743 explains that objects with a naming convention of “_SYS_SS_CE_<guid>_TMPTBL” are temporary tables created during compilation of procedure objects. Whilst our objects naming convention is not an exact match, the assumption is that the object is temporary in nature and created during the compilation of our Calculation View.

To backup the above theory, SAP note 2717365 matches the initial error message in some respects and shows the method to recompile the temporary object.
The note explains that to reproduce its described situation you must “Create a script calculation view which is created based on a procedure.”.

With this in mind, after checking our erroring Calculation View, it is clearly possible to see that ours utilises a “Script” as part of its design.

Therefore, we can assume that the temporary object with naming convention “_SYS_SS_CE_<nnnn>_vers2_lang6_type1_CS_2_2_TMP” is the temporary representation of the script from within our Calculation View.

Following the SAP note, we can recompile the object via its source Calculation View as follows using HANA Studio SQL execution (or hdbsql command line):

(NOTE: in our case the object is owned by user SAPABAP1, so we login/connect as that user in order to execute)

ALTER PROCEDURE "_SYS_BIC"."<PACKAGE>/<CALCVIEW>/proc" RECOMPILE;

The execution succeeds.
However on retrying to access the data within the view, we still get an error.
What happened? Well, looking again at our Calculation View, it appears that it references another Calculation View!
So we must recompile all referencing Calculation Views also.

To cut a long story short, it turned out that we had over 4 levels of Calculation Views before I decided to just recompile all procedures (if existing) of all known Calculation Views. Some of the views were even in different schemas.

How do we obtain a list of all Calculation Views that use a script and would have temporary procedures?

We can use this SQL string to create the required list of “type 6” objects:

SELECT 'ALTER PROCEDURE "'||schema||'"."'||name||'" RECOMPILE;' FROM sys.p_objects_ WHERE type=6 AND name LIKE '%/proc'

How did I find this? All (or most) HANA objects are represented in the SYS.P_OBJECTS table.

Even temporary SQL objects need to be accounted for in the general administrative operations of the database; they need to be listed somewhere and have a corresponding object ID.
By executing the SQL as the SAPABAP1 user, we get output in a similar fashion as to that shown below, with the first line being a column header:

'ALTER PROCEDURE "'||SCHEMA||'"."'||NAME||'" RECOMPILE;'

ALTER PROCEDURE "_SYS_BIC"."sap.erp.sfin.co.pl/FCO_C_ACCOUNT_ASSIGNMENT/proc" RECOMPILE;

ALTER PROCEDURE "_SYS_BIC"."sap.erp.sfin.rtc/RTC_C_FISCMAPA/proc" RECOMPILE;

We can then simply execute the output SQL lines for each object to be recompiled.
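
The execute-the-output step can also be scripted: filter the hdbsql result down to the executable statements (dropping the header line) so it can be fed straight back into hdbsql. A sketch only; the `sample_output` variable stands in for real output of the SELECT above.

```shell
# Stand-in for hdbsql output: a header line followed by generated statements.
sample_output='"ALTER PROCEDURE ""||SCHEMA||..."-header
ALTER PROCEDURE "_SYS_BIC"."pkg.a/VIEW1/proc" RECOMPILE;
ALTER PROCEDURE "_SYS_BIC"."pkg.b/VIEW2/proc" RECOMPILE;'

# Keep only lines that are executable ALTER statements:
grep '^ALTER PROCEDURE' <<< "$sample_output"
```

On a live system the filtered lines could then be piped into a second hdbsql batch invocation.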
On attempting access to the Calculation View, it now correctly returns data (or no data), and does not show an error message.

The next question is, why did we get this problem?

Looking back at SAP note 2717365 it says “This error happens because the temporary created objects were not cleared up properly when this happened with an error.”.
Remember that this is not an exact match for our error, but I think the explanation is good enough.

An error occurred during the creation of the temporary procedures that underpin our scripted Calculation Views.

We don’t know what the error or issue was, but subsequently recompiling those Calculation View temporary procedures fixes the issue.

Forcefully Prevent HANA Tenant DB From Starting

Scenario: Your HANA system is having some serious memory problems.
In the memory you have remaining you can’t get the SYSTEM database to run, so you can’t stop a particular tenant database.

Here’s how you can use the hdbnsutil utility to prevent hdbdaemon from trying to start a tenant database.
As the <sid>adm user, kill off all remaining HANA system processes:

> hdbdaemon -quit

Export the topology:

> hdbnsutil -exportTopology /tmp/topology_export.txt

Edit the file and change the “status” of the tenant database to be “no”:

> vi /tmp/topology_export.txt

n3>name
v>HT1
n3>status
v>no
n3>timestamp

Save the file and quit vi.
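
The same edit can be scripted instead of done interactively in vi. A sketch, assuming the export keeps the n3>key / v>value layout shown above; the sample fragment below stands in for the real /tmp/topology_export.txt. Note that this naive version flips every “status” key it finds, so with multiple tenants you would need to match on the database name first.

```shell
# Sample fragment standing in for the real topology export:
printf 'n3>name\nv>HT1\nn3>status\nv>yes\nn3>timestamp\n' > /tmp/topo_sample.txt

# Print the key line, replace the following value line with "v>no":
awk '/^n3>status/ { print; getline; print "v>no"; next } { print }' \
  /tmp/topo_sample.txt
```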
Reload the topology using hdbnsutil:

> hdbnsutil -importTopology /tmp/topology_export.txt

Now restart hdbdaemon:

> hdbdaemon -start

HowTo: Install SAP HANA 2.0 in a VM in less than 30minutes – Part #3

This is the third part of my (quite large) post on how to install an SAP HANA 2.0 database into a SUSE Linux for SAP 12 SP3 virtual machine.

See Part #1 of the post here.
See Part #2 of the post here.

We continue from where we left off in part 2, just after we created a new 50GB disk volume for our new HANA install.
Check the new partition:

# df -h /hana
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/volHANA-lvHANA1   50G  33M   50G   1% /hana

Unmount the CDROM and install VMWare tools (I need it for access to my VMWare shared folder):

# umount /mnt/dvd

Select the option to re-install VMWare tools:


Mount the CD and extract the TAR file:

# mount /dev/sr0 -t iso9660 /mnt/dvd
# cd /tmp
# tar -xvf /mnt/dvd/VMwareTools-10.1.15-6627299.tar.gz
# cd vmware-tools-distrib
# ./vmware-install.pl

Choose YES to ALL prompts (especially to the ones to replace existing files).
Disable some SUSE Linux services that are more than likely not needed (in this specific case) and just consume precious memory:
Disable VMware thin printing:

# chkconfig vmware-tools-thinprint off

Disable Linux printing:

# chkconfig cups off

Disable Linux auditing:

# chkconfig auditd off

Disable Linux eMail SMTP daemon:

# chkconfig postfix off

Disable sound:

# chkconfig alsasound off

Disable NFS (you might need it…):

# chkconfig nfs off

Disable the Machine Check Events Logging capture:

# chkconfig mcelog off
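
The individual chkconfig calls above can be collapsed into one loop. Printed here rather than executed; pipe the output to sh (as root) to apply it, and trim the list to match what is actually installed on your VM.

```shell
# Emit one "chkconfig <service> off" command per unneeded service:
for svc in vmware-tools-thinprint cups auditd postfix alsasound nfs mcelog; do
  echo "chkconfig $svc off"
done
```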

Double check the IP address of your VM:

# ifconfig | grep inet


Your IP address should be listed (you can see mine is 192.168.174.129).
If you don’t have one, then your VM is not quite set up correctly in the VMWare properties, or your networking configuration is not correct, or you don’t have a DHCP server on your local network, or your network security is preventing your VM from registering its MAC address.  It’s complex.
Assuming that you have an IP address, check that you can connect to the SSH server in your VM using PuTTY:

Enter the IP address of your VM server:

Log into the server as root:

Now we’ve got access to the VM and disk space to create our HANA database and put the software.
To perform the HANA install, I extracted the HANA patch/install media SAR file on my PC using SAPCAR.exe, then shared the directory location through VMware Shared Folders to the guest O/S.
Since I’ve used VMWare shared folders, I need to mount my folder (it’s shared via the VMWare Tools):

# cd /mnt/hgfs/Downloads      [my VMware share name is "Downloads"]
# cd SAP_HANA_DATABASE      [this is my extracted SAR file]
# ./hdbinst --ignore=check_diskspace,check_min_mem

You will be prompted for certain pieces of information.  Below is what was entered:
Local Host Name: hana01
Installation Path:   /hana/shared
System ID:             H10
Instance Number: 10
Worker Group: default
System Usage: 3 – development
System Administrator Password:  hanahana
System Administrator Home Dir:  /usr/sap/H10/home
System Administrator User ID:  10001
System Administrator Login Shell:  /bin/sh
ID of User Group (sapsys): [I selected any]
Location of Data Volumes:  /hana/shared/H10/global/hdb/data
Location of Log Volumes:   /hana/shared/H10/global/hdb/log
Restrict maximum memory allocation? N
Database SYSTEM user password:   Hanahana1
Restart instance after reboot:  N

Summary before execution:
   Installation Path: /hana/shared
   SAP HANA System ID: H10
   Instance Number: 10
   Database Isolation: low
   System Usage: development
   System Administrator Home Directory: /usr/sap/H10/home
   System Administrator Login Shell: /bin/sh
   System Administrator User ID: 1001
   ID of User Group (sapsys): 79
   Location of Data Volumes: /hana/shared/H10/global/hdb/data
   Location of Log Volumes: /hana/shared/H10/global/hdb/log
   Local Host Name: hana01
   Worker Group: default

Installation will begin:



Installation & instance startup time was around 45 minutes due to the memory swapping.

That’s it for now.
We have a basic SYSTEM database (SYSTEMDB).

Some things to note at this point:
– SYSTEM database data and log files reside in /usr/sap/H10/SYS/global/data and /usr/sap/H10/SYS/global/log directories (linked to /hana/shared/H10/global).
– Initial usage of disk is around 4GB for data and 1 GB for logs.
– Used memory is around 6GB.
– The HANA Cockpit URL would be (if it was installed) https://192.168.80.2:4310/sap/hana/admin/cockpit   or port 8010 for non SSL.
– The above two URLs are served from the xsengine via the webdispatcher.
– You cannot permanently stop the webdispatcher or xsengine (but I can…).
– SAP note 2517761 tells you how to connect via HANA Studio to the system DB.
– You will need to add the h10adm username and password into HANA Studio to allow you to stop/start the system.
– You may need to add the hana01 and it’s FQDN to your PC’s hosts file to be able to successfully stop/start the system from HANA Studio.
******  OPTIONAL ********

We can slightly reduce the memory requirements of the statisticsserver (now embedded into the indexserver process) by following SAP note 2147247 to disable the inifile_checker service in the global.ini:
Switch to h10adm Linux user:

# su – h10adm
> hdbsql -i 10 -u SYSTEM -p Hanahana1 -d SYSTEMDB
hdbsql SYSTEMDB=> ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'system') SET ('inifile_checker', 'enable')='false' WITH RECONFIGURE;
hdbsql SYSTEMDB=> quit

******  OPTIONAL ********
We can also slightly reduce the system global allocation limit to 12GB, so that we can consequently reduce the VM memory from 24GB to 18GB:
NOTE: When you do this, you will not be able to run a Tenant Database because the Tenant DB indexserver process will need at least 8GB of memory to start.

> hdbsql -i 10 -u SYSTEM -p Hanahana1 -d SYSTEMDB
hdbsql SYSTEMDB=> ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'system') SET ('memorymanager', 'global_allocationlimit') = '12288' WITH RECONFIGURE;
hdbsql SYSTEMDB=> quit

Restart the HANA system:

> sapcontrol -nr 10 -function Stop

Wait for it to be stopped:

> watch sapcontrol -nr 10 -function GetProcessList

Press CTRL+C once everything is shutdown (apart from the HDB Daemon).
Exit back to root:

> exit

Shutdown the server:

# shutdown -h now

Adjust the VM memory to be 18GB:

Power on the VM:

Log in as h10adm and start the HANA system:

> sapcontrol -nr 10 -function Start

******  OPTIONAL ********
We can create a new tenant database as follows (we would need at least 24GB of memory for SUSE in order to create and run the SYSTEM DB and the Tenant DB):

# su – h10adm
> hdbsql -i 10 -u SYSTEM -p Hanahana1 -d SYSTEMDB
hdbsql SYSTEMDB=> CREATE DATABASE HT1 SYSTEM USER PASSWORD Hanahana1;
hdbsql SYSTEMDB=> quit

If you wish to stop the Tenant database from starting, you can use SQL as per help.sap.com; or, if your SYSTEM DB will not start either, you can use the temporary method (probably not recommended by SAP) of exporting the topology using hdbnsutil, adjusting the export file to set the Tenant DB “status” to “no”, and then re-importing the file using hdbnsutil.
Should you need to quickly (and nastily) kill off the SYSTEM DB and Tenant DB processes, you can use the hdbdaemon command: “hdbdaemon -quit”.