This blog contains experience gained over the years of implementing (and de-implementing) large scale IT applications/software.

Listing & Tagging Orphaned Azure Disks using Azure CLI from Bash Cloud Shell

Nobody likes wastage, but it happens.
Finding orphaned Azure disks is super simple using the Cloud Shell.

In this post I show how you can use the CLI, with a little Bash scripting in the Cloud Shell, to find unattached disks and also tag them for future removal.
There are plenty of PowerShell examples in the PowerShell Runbook gallery, but I want to show a CLI example.
I then go on to show how this could be scheduled, but it’s not as simple as first thought.

Using Bash Cloud Shell & the Az Command (CLI)

We can use the az command (CLI) in a bash Cloud Shell to list the disks, then use JMESPath to filter and find the ones that are not attached (“Unattached”) to any VM:

az disk list --query "[?diskState=='Unattached'].{name:name,state:diskState,size:diskSizeGb,sku:sku.name,tag:to_string(tags)}" -o tsv

ase01_datadisk_2        Unattached      256     Standard_LRS    null
ase01_OsDisk_1_8e31587ee6604463ada5167f91e6345f Unattached      30      StandardSSD_LRS null

Notice I have also included the “tags” column with some processing.
If you wanted to search for disks that are not attached, and filter by tag, then you can do the following.
First I apply a tag; I’m going to create a tag called “testTag”, with a value “testTagValue”:

az disk update --name ase01_datadisk_2 --resource-group UK-West --set tags.testTag=testTagValue

Now I have set a value for “testTag”, let’s query based on that specific tag, for “Unattached” disks:

az disk list --query "[?diskState=='Unattached'&&tags.testTag=='testTagValue'].{name:name,state:diskState,size:diskSizeGb,sku:sku.name,tag:to_string(tags)}" -o tsv

ase01_datadisk_2        Unattached      256     Standard_LRS    {"testTag":"testTagValue"}

You can search for a specific value, for a value that contains part of a string, or alternatively for a value that is “null”.
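
For example (the tag name and values here are just illustrative), you can match on part of a tag value with the JMESPath contains() function, or find unattached disks where the tag has never been set:

az disk list --query "[?diskState=='Unattached'&&contains(to_string(tags.testTag),'test')].{name:name,tag:to_string(tags)}" -o tsv

az disk list --query "[?diskState=='Unattached'&&tags.testTag==null].{name:name,tag:to_string(tags)}" -o tsv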

Adjusting the Disks

Now we have a list of disks output from our query, we could just delete them.
But it’s probably better to tag them for deletion, allowing a deletion at some point after a more detailed review. I would look to produce a report that could go to a CAB review, ready for deletion.

Here’s how we can use the power of the Cloud Shell, the Azure CLI and Bash scripting to tag those disks that are Unattached:

updated=0
failed=0
tabchar="$(printf '\t')"
deldate="$(date -d "+30 days" "+%Y%m%d")"
while read -r line
do
   diskname="${line%%${tabchar}*}"
   rgname="${line##*${tabchar}}"
   echo -n "Updating: $diskname ... "
   az disk update --name "${diskname}" --resource-group "${rgname}" --set tags.deleteDate="${deldate}" >/dev/null
   if [[ $? -eq 0 ]] ; then
      echo "[SUCCESS]"
      (( updated++ ))
   else
      echo "[FAILED]"
      (( failed++ ))
   fi
done < <(az disk list --query "[?diskState=='Unattached'].{name:name,rg:resourceGroup}" -o tsv)
printf "### SUMMARY ###\n Updated: %s\n Failed: %s\n" "$updated" "$failed"

Let’s look at the above script in detail below:

  1. We define the variable to hold our updated count.
  2. We define the variable to hold our failed count.
  3. We capture the character that represents a TAB, which is the field separator in the tsv output.
  4. We calculate the deletion date: today + 30 days, in the format yyyymmdd.
  5. The start of the “while” loop, which reads the CLI query output (fed in at the bottom of the loop) a line at a time.
  6. The start of the loop body.
  7. We get the disk name out of the line by splitting the line by TAB from the left.
  8. We get the disk resource group out of the line by splitting the line by TAB from the right.
  9. A little output text to say which disk is being worked on.
  10. The call to the CLI to update the disk, applying the tag “deleteDate” with the date calculated in step 4.
  11. We detect the successful (or not) execution of the CLI command.
  12. Output “success” for a successful execution.
  13. Update the success variable count.
  14. Alternatively, if we failed.
  15. Output “failed” for a failed execution.
  16. Update the failed variable count.
  17. Closure of the “if” statement.
  18. Closure of the loop; the CLI query that lists the unattached disks and their resource groups is fed in here via process substitution (rather than a pipe), so the counters updated inside the loop are still visible afterwards.
  19. Output a summary count of success vs failed.

Here is the execution sample:

Reporting on the deleteDate Tag

Now we have our disks tagged ready for deletion, we might want to find disks that have a deleteDate older than a specific date. These disks would then be ripe for deletion.
Here’s how we can do that:

az disk list --query "[?tags.deleteDate<'20210402'].{name:name,rg:resourceGroup,tag:to_string(tags)}" -o tsv

We can even go as far as using Bash to inject the current date into the query, thereby allowing us to look for any disks due for deletion from today without adjusting the query each time:

az disk list --query "[?tags.deleteDate<'$(date "+%Y%m%d")'].{name:name,rg:resourceGroup,tag:to_string(tags)}" -o tsv

How About Scheduled Execution?

Well, using the CLI and the Bash shell inside an Azure Automation account is not possible right now.
Instead, we need to convert our scripted CLI example to PowerShell to make it Runbook compatible; we can then schedule it inside an Azure Automation account as a PowerShell Runbook.
The code needs to change slightly, because we use the automation “RunAs Account” feature to connect from our Runbook, but it still looks very much like our CLI code in Bash:

IMPORTANT: All the “simple” help guides on creating a PowerShell Runbook fail to mention that you need to import the PowerShell modules required by your code. In the Automation Account, look under “Modules”. For the code below you need two additional modules: Az.Accounts and Az.Compute, which you can import from the Gallery. Seems simple, but as I said, not obvious.

$Conn = Get-AutomationConnection -Name 'AzureRunAsConnection'
try{
    $auth = Connect-AzAccount -ServicePrincipal -Tenant $Conn.TenantID -ApplicationId $Conn.ApplicationID -CertificateThumbprint $Conn.CertificateThumbprint
} Catch {
    throw $_.Exception
}
$updated=0
$failed=0
$deldate = (Get-Date).AddDays(30).ToString("yyyyMMdd")
$updateConfig = New-AzDiskUpdateConfig -tag @{ "deleteDate" = "$deldate" }
Get-AzDisk |? { $_.DiskState -eq "Unattached" } |% { 
   write-output "Updating: $($_.Name) ..."
   Update-AzDisk -ResourceGroupName $_.ResourceGroupName -DiskName $_.Name -DiskUpdate $updateConfig >$null 2>&1
   if ( $? -eq $true ) { 
      echo "   [SUCCESS]"
      $updated++
    }
    else {
      echo "   [FAILED]"
      $failed++
   }
}
echo "### SUMMARY ### `n Updated: ${updated}`n Failed: ${failed}`n"

With the above code in a PowerShell Runbook, we can test it:

Obviously if you will be scheduling the code above, then you may wish to change it slightly so that it excludes any orphaned disks already with a deleteDate tag.
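
For example (a sketch only), the start of the pipeline could be extended so that disks already carrying a deleteDate tag are skipped:

# Only tag unattached disks that do not already have a deleteDate tag.
Get-AzDisk |? { $_.DiskState -eq "Unattached" -and $_.Tags.Keys -notcontains "deleteDate" } |% {
   # ... same loop body as before ...
}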

Summary

Using the CLI, Cloud Shell and some Bash scripting, we have a simple mechanism to tag unattached disks, then use that tag to report on disks due for deletion after a specific date (or we could use today’s date).
This is a great solution for those with Bash shell scripting skills and as shown it is reasonably simple.

We have also looked at the possibility of scheduling the code and found that it is not possible for CLI. Instead a PowerShell Runbook is a possible solution that allows the scheduling of PowerShell code.
PowerShell could be your pain point, but it really isn’t that far from shell.

Cleaning Up /tmp for SAP On SLES On Azure

In an Azure SLES 12 Linux VM the default installation image mounts the /tmp file system as a regular file system off the root (/). Historically, for many Unix/Linux environments, this is not “normal”.

In this post I will discuss what the impact is for this irregular setup of /tmp and what you can do to work around it, ensuring SAP continues to work as usual.

What is “Normal”?

In traditional Linux installations the /tmp file system is usually mounted as a temporary file system (tmpfs), which means it would be cleaned on O/S reboot.
This has been the case for many years. There’s a recent post here that highlights 1994 for Solaris.
Plus, you can find a detailed explanation of tmpfs here: https://www.kernel.org/doc/html/latest/filesystems/tmpfs.html
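
For illustration only (the size would need to be chosen for your VM, and you should check SAP’s own recommendations before changing anything), a tmpfs mount of /tmp in /etc/fstab looks something like this:

tmpfs   /tmp   tmpfs   defaults,size=2G,mode=1777   0 0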

With the default SLES setup, any files placed into /tmp will not automatically be cleaned up on reboot and, as per the previous links, there can be performance reasons to use a tmpfs.

From memory, there are differing standards on what /tmp should be used for (again, see here), and it is possible that the traditional setup no longer follows a newly agreed standard. I really am not certain why SLES does not mount /tmp as tmpfs.
All I know from over 20 years of working with different Unix/Linux products is that /tmp is generally accepted to be a dumping ground that gets cleaned on reboot, and from what I can see, SAP think the same.

What is the Impact of /tmp Not Being tmpfs?

When /tmp is not a tmpfs and is not cleaned on reboot, it can cause issues with software that expects some form of clean-up to be performed.
When I look at some of the SAP systems in Azure on SLES 12, I see a build-up of files in the /tmp directory, which results in the need for a scripted job to clean them up on a periodic cycle.

If some of the more prolific files are not cleaned up regularly, then they can build up into many thousands of files. While this shouldn’t impact the day-to-day running of the SAP system, it can impact some ad-hoc operations such as patching the SAP system or the database.
The reason is that sometimes the patching tools write out files to the /tmp area, then crudely perform a “ls” to list files or find files in that location. If there are many thousands of files, then those listing operations can fail or be delayed.
A perfect example is the patching of the SAP ASE database, which can be affected by thousands of files in the /tmp location.

Finally, with the /tmp directory mounted off the root disk, any filling of /tmp will fill your root disk and this will bring your VM to a halt pretty quickly! Be careful!

What Sort of Files Exist in /tmp?

In the list below, I am looking specifically at SAP related files and some files that are culprits for building up in the /tmp directory.

File Name Pattern | Description
.saphostagent_nnnnn | SAP Host Agent run files.
.sapicmnnn | SAP ICM run file.
.sapstartsrv##_sapstartsrv.log | SAP Instance Agent run file.
.sapstreamnnnn | SAP IPC files.
.theagentlives.tmp | Owned by the Sybase O/S user, related to the SAP ASE instance. Maybe the JS Agent.
ctisql_* | Temporary iSQL executions using sybctrl.
sap_jvm_nnnn_nnnnnn, sapjvm_profiling_server_nnnn_nnnnn, sap_jvm_monitoringboard_nnnn_nnnnn | SAP JVM execution.
sapinst_instdir | From an execution of SWPM (contains sapinst).
saplg* | Owned by sapadm; part of the SAP Instance Agent logon ticket generated from the Host Agent.
sb* | From an ASE installation.
tmp* | Owned by root, lots and lots; possibly Azure agent related, as they contain the text “Windows Azure CRP Certificate Generator” when passed through a base64 decoder.
tmp.* | Lots and lots; seem to be Kerberos related.

How Can We Clean Up these Files?

The most common way is to use a script.
Within the script will be a “find” statement, which finds the specific files and removes each one.
It needs to be done this way because, if there are too many files, “rm /tmp/tmp*” will expand the glob into more arguments than can be passed to the command (“Argument list too long”), so it will either error or remove nothing at all.

The script will need to be executed as root frequently (maybe weekly or even daily) to ensure that the file quantities are kept consistently low. This can be achieved using an enterprise scheduler or a crontab on each server.

Here’s an example of how to clean up the /tmp/tmp* files with a very specific criteria. The files are removed if they are:

  • located in the /tmp directory
  • with a name at least 7 characters long: ‘tmp’ followed by at least 4 alphanumeric characters.
  • last modified more than 7 days ago.
  • owned by root, with a group of root.

find /tmp -type 'f' -regextype posix-awk -regex '/tmp/tmp[A-Za-z0-9]{4,}' -mtime +7 -user root -group root -delete -print

The above will remove the files due to the “-delete”. To test it, just remove the “-delete”.
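
If you go down the crontab route mentioned earlier, a root crontab entry along these lines (the schedule and script path are just examples; the script would wrap the find command above) would run the clean-up every Sunday at 02:00:

0 2 * * 0   /usr/local/bin/clean_sap_tmp.sh >/dev/null 2>&1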

In summary, you should check how /tmp is setup in your VMs, and then check the files that are created in /tmp.

HowTo: Script ASE Configuration Check

Maybe you have lots of ASE database instances and you would like to ensure that they are all in alignment with regards to configuration.

In this post I show how to output a generic line of text for each configuration line in the ASE <SID>.cfg file.
The line includes the following items:

  • The filename (e.g. <SID>.cfg)
  • The section name of the file (e.g. [somesection])
  • The name of a parameter within the section.
  • The value of the parameter.

The output is in this format:
<filename>:<section>~<parameter> = <value>

This allows you to combine multiple ASE instance configuration files into a single log file, which can then be loaded into Excel for value comparison across the landscape, or queried with tools such as “grep” to compare individual parameters.

The Process

First you need to gather up all the configuration files from across your landscape and place them all in one directory:

<SID1>.cfg
<SID2>.cfg
etc

For the above, I can recommend a custom operation via HostAgent and either a website with upload capability, an FTP site, SCP script, shared NFS location or some other common shared area where you can upload the files from each server.
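
As a simple illustration (the hostnames and source path are made up), an SCP collection loop run from the central location could look like this:

for host in sapase01 sapase02 sapase03
do
   scp ${host}:/sybase/???/ASE-*/???.cfg /your_dir/
done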

Now we execute the code against the directory where the configuration files have been collected.

Switch to bash or ksh:

ksh

Run the code (change “your_dir” to your config files location). Note that the inner one-liner uses the GNU awk (gawk) three-argument match() extension, so it needs to run with gawk (the default awk on most Linux distributions):

ls -1 /your_dir/???.cfg | while read file 
do 
   awk '/^\[/ { print $0 }' $file | sort | while read line 
   do 
      awk -v input="$line" '{ if ( $0 == input ) { getline; while ( $0 !~ /^\[/ ) { if(match($0,/\t(.*)$/,aval)) { if (length(lval)>0){ lval=lval"\n" }; lval=lval""FILENAME":"input"~"aval[1] }; if (! getline) break; }}}END{if (lval) print lval}' $file 
   done 
done

This will list all output to the screen.
To make it go to a new file, append “ > newfile.log” to the very end like this:

...
   done 
done > newfile.log

Inside newfile.log you will see that you have a great file format for use with “grep”, and you can query the value of parameters like so:

NOTE: Replace “<section>” and “<parameter>” with your query values.

grep ':<section>~<parameter> ='  newfile.log

The above will list all matching parameters in a specific section, for each of the ASE instances, for comparison on the screen.
You can use this information to align parameters across your landscape or just as a configuration check.

Capture HTTP POST Using Simple Python Script

In this post I show a simple and quick way to capture a basic HTTP POST using Python, by providing a basic HTTP web server with CGI capability in just a few lines of code; in most cases it is executable on almost any Python-capable server.

Why I Used This Code

I used this successfully to test an interface where it wasn’t particularly clear exactly what data it was going to POST to a target web server.
Usually a developer could use proper developer tools to do this analysis; however, the server doing the POST was POSTing to another server, and both servers sat behind firewalls. It was far quicker to create something simple that could be executed directly on the server.

What Does the Code Do?

The code simply creates an HTTP web server on the server on which it is executed.
The web server serves the content that exists in the directory structures from the current working directory and below.
The web server is also able to execute CGI scripts written in Python and stored in the cgi-bin subdirectory.

The Code

#!/usr/bin/python 
# 
import sys, os, cgi, cgitb

# Enable easy error reporting output. 
cgitb.enable()

# Create a custom log output to print to stdout and a log file. 
class CustomLogger(object): 
   def __init__(self): 
      self.terminal = sys.stdout 
      self.log = open("logfile.log", "a")

   def write(self, message): 
      self.terminal.write(message) 
      self.log.write(message)

   def flush(self): 
      pass

# Swap stdout for our custom log class. 
sys.stdout = CustomLogger() 
sys.stderr = sys.stdout

# Call the standard CGI test. 
cgi.test()

### END OF SCRIPT ###

Deploying the Code

To install the code, we need to create a new temporary directory on our host server. I used ssh to do this:

mkdir -p /tmp/dmg/cgi-bin

Put the code into the file called form.py in the cgi-bin directory:

cd /tmp/dmg/cgi-bin

vi form.py
[insert code then press shift-ZZ]

chmod 755 form.py

Still on an ssh session, switch back to the dmg directory and execute the Python CGI handler to listen on port 8080:

cd /tmp/dmg

python -m CGIHTTPServer 8080
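
If the server only has Python 3 available, the equivalent should be the CGI mode of the http.server module (still present in most Python 3 releases, although deprecated in the newest ones):

python3 -m http.server --cgi 8080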

Call your HTTP tool to POST to the address:
http://<your-server>:8080/cgi-bin/form.py
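
If you don’t have a specific tool to hand, a quick test POST with curl from another session could look like this (the field name and value are just examples):

curl -d "parameter=test" http://<your-server>:8080/cgi-bin/form.py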

If the tool returns output, then you will see the output on your ssh session screen.
The output response from the CGI script is also stored in the /tmp/dmg/logfile.log file.

To quit/end the HTTP web server, simply press CTRL+C multiple times until you are returned to the command prompt.

The output will look like:

Content-type: text/html

Current Working Directory:
/tmp/dmg

Command Line Arguments:
['/tmp/dmg/cgi-bin/form.py', '']

Form Contents:
parameter: <type 'instance'>
MiniFieldStorage('parameter', 'test')

Shell Environment:
...

You will see the POST content in the “Form Contents” section of the output.
The values of fields are prefixed with “MiniFieldStorage”.

Also included in the output is the shell environment, which contains the CGI-related variables and their respective values, such as REQUEST_METHOD.

A Test Form

You can also deploy a simple form in order to test the CGI capability manually from a web browser (although this was not required in my case).
The form is simple HTML that POSTs two text input fields to our form.py CGI script:

<html><body>
<div style="text-align: center;">
<h1>Test Form</h1>
<form action="/cgi-bin/form.py" method="POST">
f1 : <input style="text-align: center;" name="f1" type="text" />
f2 : <input style="text-align: center;" name="f2" type="text" />
<input type="submit" value="Submit" />
</form>
</div>
</body></html>

The form should be saved to a new file called index.html in the /tmp/dmg directory.
You can then manually access the test web server using http://<your-server>:8080 and you will see the form.
Enter two values into the form and click submit, to see the output from your CGI script.

Korn Shell Calling SAP HANA – Hello Hello!

So you’ve automated SAP HANA stuff huh?
What tools do you use?
Python? Chef? Puppet? Ansible? DSC/Powershell?

No. I use Korn shell. ¯\_(ツ)_/¯

Me, Trying to Support Korn…

I find Korn shell is a lowest common denominator across many Linux/Unix systems, and also extremely simple to support.
It does exactly what I need.

For me it’s readable, fairly self-explanatory, easily editable and dependable.

…and Failing?

But I do know what you’re saying:

  • there’s no built in version control
  • it’s not easy to debug
  • it’s definitely not very cool 🤓
  • you can’t easily do offline development
  • my Grandad told me about something called Korn shell

Have you Considered

If you have no budget for tools, then you can start automating by using what you already have. Why not.
Don’t wait for the right tool, start somewhere and only then will you understand what works for you and doesn’t work for you.

Sometimes it’s not about budget. There are companies out there that do not allow certain sets of utilities and tools to be installed on servers, because they can provide too much help to hackers. Guess what they do allow? Yep, Korn shell is allowed.

Let’s Get Started

Here’s a sample function to run some SQL in HANA by calling the hdbsql (delivered with the HANA client) and return the output:

#!/bin/ksh
function run_sql {
   typeset -i l_inst="${1}" 
   typeset l_db="${2}" 
   typeset l_user="${3}" 
   typeset l_pw="${4}" 
   typeset l_col_sep="${5}" 
   typeset l_sql="${6}" 
   typeset l_output="" 
   typeset l_auth="" 
   typeset -i l_ret=0

   # Check if PW is blank, then use hdbuserstore (-U). 
   if [[ -n "${l_pw}" && "${l_pw}" != " " ]] ; then 
      l_auth="-u ${l_user} -p ${l_pw}" 
    else l_auth="-U ${l_user}" 
   fi

   l_output="$(hdbsql -quiet -x -z -a -C -j -i ${l_inst} ${l_auth} -d ${l_db} -F "${l_col_sep}"<<-EOF1 2>>/tmp/some-script.log 
		${l_sql}; 
		quit 
EOF1 )"
   
   l_ret=$?

   # For HANA 1.0 we need to trim the first 6 lines of output, because it doesn't understand "-quiet". 
   #if [[ "$(check_major_version)" -lt 2 ]] ; then 
      # l_output="$(print "${l_output}"| tail -n +7)" 
   #fi

   print "${l_output}" 
   return $l_ret 

}

To call the above function, we then just do (in the same script):

l_result="$(run_sql "10" "SystemDB" "SYSTEM" "SYSTEMPW" " " "ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('persistence','log_mode')='overwrite' WITH RECONFIGURE")"

We are passing in HANA instance number 10; use whatever your instance number is.

We can check the function return code (did the function return cleanly) like so:

if [[ $? -ne 0 ]] ; then 
   print "FAILED" 
   exit 1; 
fi

Here’s what we’re passing in our call to hdbsql (you can find this output by calling “hdbsql --help”):

-i instance number of the database engine
-d name of the database to connect
-U use credentials from user store
-u user name to connect
-p password to connect
-x run quietly (no messages, only query output)
-quiet Do not print the welcome screen
-F use as the field separator (default: ‘,’)
-C suppress escape output format
-j switch the page by page scroll output off
-Q show each column on a separate line
-a do not print any header for SELECT commands

If you wanted to return a value, then the “l_result” variable would contain the output.

Ideally, the function we wrote would be put into a chunk of modular code that could be referenced time and again from other Korn shell scripts.

You would also be looking to create some sets of standard functions for logging of messages to help with debugging. You can make it as complex as you wish.
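
As a sketch of what I mean (the function name and log file location are just examples), a minimal logging helper could look like this:

function log_msg {
   # Write a timestamped message to the screen and append it to a log file.
   typeset l_msg="${1}"
   print "$(date '+%Y-%m-%d %H:%M:%S') ${l_msg}" | tee -a /tmp/some-script.log
}

log_msg "Starting SQL execution..."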

In the call to “run_sql” we pass a column separator.
I usually like to use a “|” (pipe), then parse the returned values using the “awk” utility like so:

l_result="$(run_sql "10" "SystemDB" "SYSTEM" "SYSTEMPW" "|" "SELECT file_name,layer_name,section,key, value FROM SYS.M_INIFILE_CONTENTS WHERE layer_name='SYSTEM'")"

echo "${l_result}" | /bin/awk -F'|' '{ print $2" "$3" "$4 }'

When we execute the script we get the first 3 columns like so:

daemon.ini SYSTEM daemon 
diserver.ini SYSTEM communication 
global.ini SYSTEM auditing 
configuration global.ini SYSTEM 
backup global.ini SYSTEM
...

Obviously we don’t really embed the password in the script; it gets passed in.
You can either pass it in using the command line parameter method (./myscript.ksh someparam) or via the Linux environment variables (export myparam=something; ./myscript.ksh).
If you want you can even pipe it in (echo “myparam”| ./myscript.ksh) and “read” it into a variable.
You can also take a look at the “expect” utility to automate command line input.
Also, take a look at the “hdbuserstore” utility to store credentials for automation scripts (remember to set appropriately secure privs on these database users).
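
As an illustration (the key name, host and SQL port are made up; use your own), you store the credentials once as the O/S user that runs the script, then call run_sql with a blank password so that it falls back to the “-U” key:

hdbuserstore SET MYKEY myhanahost:31013 MYUSER MyS3cretPw

l_result="$(run_sql "10" "SystemDB" "MYKEY" "" "|" "SELECT * FROM DUMMY")"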

That’s all there is to it for you to get started using Korn shell to call HANA.