This blog contains experience gained over the years of implementing (and de-implementing) large scale IT applications/software.

SAP Netweaver ICM Fast Channel Architecture

SAP Netweaver has been around for many, many years now. In fact we have had very nearly 20 years of Netweaver.
Back in March 2001, SAP acquired TopTier and went on to use TopTier’s application as the underpinning to the SAP Netweaver application server (WebAS).
Now, this would not have been the Netweaver Java stack; that was to come later in the form of WebAS 6.30.
My point is, you would imagine that by now Netweaver would be known inside and out by most BASIS professionals, but this is just not the case. It’s a complex and very capable application server, and there are things that we merely know of and things that we know in detail.
One of the things that seems to be little known is the FCA and its role within the ICM of the Netweaver Java stack.

In this post I want to explain the function of the SAP Netweaver Internet Communication Manager (ICM) Fast Channel Architecture (FCA) and how this is responsible for routing the HTTP communications to your Netweaver Java stack.

As usual, a little context will help set the scene.

A History of Netweaver Java

Before Netweaver 7.1, the Java stack did not have an Internet Communication Manager (ICM). This was reserved only for the Netweaver ABAP stack.
Instead, these old Netweaver Java versions had additional Java nodes (JVMs) called dispatcher nodes (in addition to the server0 node).

The dispatcher node was responsible for receiving and dispatching the inbound HTTP requests to the server nodes of the instance.

The ICM Was Added

Since Netweaver 7.1, the Java stack has had the ICM, which runs from the compiled Kernel binaries rather than inside a JVM (replacing the old Java dispatcher node).


The benefits of this change were:

  • Faster startup and response time (Kernel is C++ compiled binary code).
  • Smaller memory requirements.
  • Same ICM in Netweaver ABAP and Netweaver Java (same database-independent Kernel part).
  • Use of profile files for configuration (for SSL, security, memory params) instead of the ConfigTool (see the example below).
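
As a small illustration (these parameter values are illustrative assumptions, although the HTTP/HTTPS ports happen to match the dev_icm trace shown later in this post), the Java stack ICM is now configured with instance profile lines such as:

icm/server_port_0 = PROT=HTTP,PORT=50000
icm/server_port_1 = PROT=HTTPS,PORT=50001
mpi/total_size_MB = 1250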

Identifying the FCA

We know the ICM is visible as a separate binary executable process at the operating system level.
In Windows we see “icman.exe” and in Unix/Linux we see “icman”.
At execution, the icman program reads the instance profile to determine its configuration.

The Fast Channel Architecture (FCA) is a specific, dedicated set of memory pipes (MPIs) in a shared memory region that is accessible by both the ICM and the Java server nodes, used as a method of super fast inter-process communication between the two.
In Linux, the shared memory segments are visible using the “ipcs -m” command; in Windows they are memory mapped files, which you cannot see so easily (you would need a 3rd party tool).

By using shared memory and the concept of memory pipes, the data in an HTTP request/response does not need to be sent from the ICM to the Java server node. Instead of sending the actual data, a simple memory pointer can be sent (smaller and consistent in size), telling the Java server node where in memory to find the data.
Effectively, this means that the shared memory area for the MPIs sits logically between the ICM and the Java server nodes.

According to the Netweaver AS Java documentation, the FCA is itself just another MPI, which acts as a FIFO queue.
HTTP requests coming into the ICM via a TCP port travel through a regular (anonymous) MPI, before the ICM dispatches each request into a specific FCA queue.
If you have two server nodes on your Java stack (server0 and server1), the ICM will query the server nodes to determine the back-end load, then push the request into the FCA queue of the target server node that has capacity to handle it.
Therefore, if you have two server nodes, you will have a dedicated FCA queue for each.
It is the responsibility of the Java server node to create the FCA queue in the ICM shared memory during start-up.

Once the HTTP request (or rather, the memory pointer to the request) hits the FCA, it becomes the responsibility of the Java server node to pull the request off the queue into a thread for processing.
Inside the Java Server node, these threads are known as the FCA threads or HTTP Worker Threads.
If you run a SAP PI/PO system, then you may already be familiar with these threads and their configuration.
You may have seen these threads when running thread dumps for SAP support incidents.

There are two methods to actually see the FCA Queues:

  • Within the SAP ICM Web Administration page.
  • Using the “icmon” command line tool.

We can call the icmon tool as follows:

icmon pf=<path-to-instance-profile>

then from the menu select "m"
then from the menu select "y"

Once the MPI list is dumped (option “y”), the FCA queues are visible at the end of the output:

...
MPI<174>: 4d50494d 'ANON' 11 50 0 0 0 0(4996) 1(30001) 1(30001)
MPI<173>: 4d50494d 'ANON' 10 50 0 0 0 0(4996) 1(30001) 1(30001)
MPI<60>: 4d50494d 'TS1_00_1234650_HTTP_WAIT' 5 -1 20 0 0 0(4996) 1(10002) 0(-1)
MPI<5f>: 4d50494d 'TS1_00_1234650_HTTP' 4 -1 20 0 0 0(4996) 1(10002) 1(30001)
MPI<58>: 4d50494d 'TS1_00_1234651_HTTP_WAIT' 2 -1 20 0 4406 0(4996) 1(10003) 0(-1)
MPI<57>: 4d50494d 'TS1_00_1234651_HTTP' 7 -1 20 0 0 0(4996) 1(10003) 1(30001)
MPI<52>: 4d50494d 'TS1_00_1234650_P4' 6 -1 20 0 0 0(4996) 1(10002) 1(30001)
MPI<4d>: 4d50494d 'TS1_00_1234651_P4' 3 -1 20 0 0 0(4996) 1(10003) 1(30001)
MPI<4>: 4d50494d 'ANON' 1 1 0 0 0 0(4996) 1(30001) 1(30001)
MPI<2>: 4d50494d 'ANON' 0 1 0 0 0 0(4996) 1(30001) 1(30001)
 
    q - quit
    m - menue 

NOTE: For those interested, the 4d 50 49 4d at the beginning of each line, translates from HEX to ASCII as “MPIM”.
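
If you want to check that for yourself, here is a quick Python snippet (nothing system-specific, just a hex-to-ASCII conversion):

# Convert the MPI magic bytes from hex to ASCII
print(bytes.fromhex('4d50494d').decode('ascii'))   # prints: MPIM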

In my example, you can see I have 2 Java server nodes registered at this ICM: 1234650 and 1234651.
You will notice that there are 3 queues for each Java server node.
The P4 queue is self explanatory: it is used to talk to the Java server node on its P4 port (a SAP proprietary protocol) and is probably used to acquire capacity/load information from the server node.
Of the other 2 queues, one is the “WAIT” queue, which (I think) is where the inbound requests (destined for the Java server node) are held, before they enter the other queue, the request queue, on which (I think) the Java server node waits to process the requests.
(There is not a great deal of documentation on the above, but I have seen instances where the WAIT queue fills, which makes me believe it’s a holding area).

In the dev_icm trace we can also see the joining of the server nodes to the ICM for the HTTP protocol (other protocols are supported, such as Telnet, P4):

[Thr 140608759801600] Wed Mar 17 22:59:32:934 2021
[Thr 140608759801600] JNCMIHttpCallLBListener: node 1234650, service Http joins load balancing
[Thr 140608759801600] HttpJ2EELbPut: server 1234650 started protocol HTTP, attached to request queue TS1_00_1234650_HTTP
[Thr 140608759801600] JNCMIHttpMsPutLogon: set http logon port (port:50000) (lbcount: 2)
[Thr 140608759801600] JNCMIHttpMsPutLogon: set https logon port (port:50001) (lbcount: 2)

In the Java server node developer trace files (e.g. dev_server0 and dev_server1), we can see the name of the node (JNODE_10002 for server0) which is also visible in the dev_icm trace output in column 10:

F [Thr 139637668607872] Wed Mar 17 22:53:49 2021
F [Thr 139637668607872] JSFSetLocalAddr: using NI defaults for bind()
I [Thr 139637668607872] MtxInit: JNODE_10002 0 2

The relevant dev_icm output:

MPI<60>: 4d50494d 'TS1_00_1234650_HTTP_WAIT' 5 -1 20 0 0 0(4996) 1(10002) 0(-1)
MPI<5f>: 4d50494d 'TS1_00_1234650_HTTP' 4 -1 20 0 0 0(4996) 1(10002) 1(30001)

Sizing the FCA

The size of the FCA is not directly configurable.
Instead, we configure the size of the total shared memory area for all the MPIs using parameter “mpi/total_size_MB”; from this total size, the maximum possible size of any individual MPI is fixed at 25% of the total area size.

In later Netweaver versions (7.40+), it is not recommended to adjust “mpi/total_size_MB” directly; instead, adjust the “icm/max_conn” parameter, which is then used to calculate “mpi/total_size_MB”.
The internal formula is described as:
mpi/total_size_MB = min(0.06 * $(icm/max_conn) + 50, 2000)
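
As a worked example (the “icm/max_conn” value of 20,000 below is an illustrative assumption, not taken from a real profile), the formula can be checked with a quick Python sketch:

# NW 7.40+ derivation of mpi/total_size_MB from icm/max_conn
def mpi_total_size_mb(icm_max_conn):
    return min(0.06 * icm_max_conn + 50, 2000)

print(mpi_total_size_mb(20000))   # 1250.0, matching the "total size MB=1250" trace line further below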

There is another undocumented (apart from SAP notes) parameter which can allow you to increase the max size of an MPI. However, it means any one MPI can consume more of the total area than the default 25%, so adjusting it is not advised.

We can see the value of the parameter “mpi/total_size_MB” in the ICM developer trace file (dev_icm) during its start-up. This is useful as it shows us the result of the calculation based on the formula mentioned above.
We are looking for “total size MB” right at the end of the line:

[Thr 140610607359872] MPI init, created: pipes=40010 buffers=19985 reserved=5995 quota=10%, buffer size=65536, total size MB=1250

Common FCA Errors

There is a dedicated set of SAP notes for FCA errors, such as 1867119.
Based on the architecture, we can see that they describe issues with throughput (through the FCA queue), and issues in the Java server node threads that cause the FCA queues to fill.
They also show issues with sizing of the MPIs, and the number of the worker threads (for high throughput scenarios).

In my experience the following types of FCA errors can be seen in the Java server developer traces “dev_server<n>” files:

  • “-3” error: The Java server node is unable to put a response back onto the FCA Queue, probably because the MPI area is full from a full FCA Queue. This can happen if one of the Java server node HTTP Worker threads has become stuck (waiting) for resources or for the database.
    As you will see from my previous diagram, a full MPI area will then start to affect HTTP access to both Java server nodes as they share the ICM (it’s a single point of failure).
  • “-7” error: This affects one individual Java server node and prevents it from pulling requests off the FCA queue in a timely manner. This specific issue is usually a timeout mismatch between the HTTP provider and the ICM.

Both of the above errors look similar, but one is a lack of resources in the Java stack and the other is a full FCA Queue (in shared memory) due to inaction (stuck threads) in the Java stack.
The “-7” error can therefore present itself as an issue in the ICM or in the Java stack, but it is usually a problem in the Java stack that causes it to close the connection early.

Summary

There you have it, the simple FCA queue that serves HTTP requests to your Java Server nodes.
We learned:

  • Netweaver Java was given the ICM from 7.1 onwards.
  • The ICM in the Netweaver Java and ABAP stacks is the same binary.
  • The ICM uses shared memory for the MPIs.
  • The size of the shared memory area is set via the “mpi/total_size_MB” parameter, whose value is derived from “icm/max_conn” (in NW 7.40+).
  • The FCA queues are MPIs.
  • Only memory pointers are passed through the FCA Queues.
  • The Java server nodes are responsible for creating the FCA queues in the ICM shared memory.
  • There are 2 FCA queues for each server node.
  • The developer traces store information about the size of the ICM shared memory and the registration of the Java Server nodes to a queue.
  • There are a known set of errors that can occur and are documented in SAP notes.

Useful SAP References
  • SAP Note 1867119 – No more memory for FCA
  • SAP Note 2417488 – Resource leak for MPI buffers in FCA communication
  • SAP Note 1945745 – How to increase HTTP Worker (FCA) threads in PI
  • SAP Note 2579836 – AS Java system has performance problem – FCAException – Best practices and tuning recommendations
  • SAP Note 2997765 – AS Java system has performance problem – FCAException – Best practices for analysis
  • SAP Note 2276273 – AS Java – How to identify the largest MPI buffer consumer by MPI dump

SAP PI/PO Performance Statistics Data Extract and Analysis using Excel & Power Query

End-to-End performance analysis of a specific SAP PO/PI interface is a tricky business using the out-of-the-box tools. For example, you might be thinking that SAP Solution Manager can provide this detail, but it is not able to correlate inbound SAP Web Dispatcher HTTP logs or Netweaver Java ICM logs to the PI/PO interface.
Instead you need to implement your own monitoring to bring these components together, unless you implement a complete tool like AppDynamics.

In this post I will show how to extract the SAP PI/PO performance data and import into Excel, using a Power Query to transform it from the XML format into something that we can report on.
After seeing my example, you can take the same design away and, using PowerBI, construct a dashboard for reporting. You can also choose to incorporate the HTTP logs, to give you an end-to-end performance analysis by interface.

This post is similar to another set of blog posts I published, showing how to pull data about your virtual machines directly from Azure, see: List Your Azure VMs in Excel. That other post used Power Query to transform JSON.

Data Sources

Before we go and start Excel, we need to understand where our data will be coming from.
There are 4 locations that we can report on in a SAP Netweaver AS Java stack running SAP PI/PO:

  • SAP Web Dispatcher (load balancer across multiple App instances).
  • SAP ICM (load balancer across multiple NW java server nodes).
  • SAP NW HTTP Provider.
  • SAP PI/PO performance data servlet.

The last item is the key one, which will require some Power Query goodness to transform the data.
We use the 2014 blog post from Vadim Klimov to see how to pull this data direct from PI/PO using the servlet “PerformanceDataQueryServlet”.

For the Web Dispatcher, the ICM and the HTTP Provider, we really need to choose just one of those points to collect the log information.
Since our source system is handing over processing to “us” at the Web Dispatcher, then that would be the logical location to collect the HTTP logs.
However, some companies use a load balancing appliance like an F5 at the entry-point, in which case you would be better off gathering the HTTP logs from each of the ICM server processes.

The reason for using the HTTP logs from the front-end of the architecture stack is that you want to capture any HTTP 50x messages caused by unavailability of the back-end parts.
For example, if SAP Netweaver is down, then the Web Disp logs would show a HTTP 503 (service unavailable).
If the PO application is not started inside SAP Netweaver, then the ICM logs would show a HTTP 503.
You want to collect the logs from the closest point of the handover between your source system and the Netweaver stack.

For the HTTP log sources, we have a little bit of an issue.
In most cases, logging is not enabled in Web Dispatcher and ICM configurations. To enable the logging we need to look at the parameter “icm/HTTP/logging_<xx>”.
Ideally, what we need for the log format is: %h %u %t “%r” %s %H %L
This will give:

  • %h = Source IP.
  • %u = BASIC Auth username or cert common name.
  • %t = Date/time.
  • %r = The request with query string !IMPORTANT!
  • %s = The HTTP response code e.g. 200 or 500 etc.
  • %H = The name of the server host.
  • %L = Response time in milliseconds.

The log file should be switched DAILY (SWITCHTF=day) to prevent it from growing too large.
We will need to transform the data in the log, but we can do this in Power Query or in a script at source (a sketch of which follows).
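
As a minimal sketch of that transformation (the sample log line is invented, and the regular expression assumes the exact log format above, with a bracketed CLF-style timestamp for %t), a small Python script could split each line into named fields:

import re

# Matches the log format: %h %u %t "%r" %s %H %L
LINE_RE = re.compile(
    r'^(?P<src_ip>\S+) (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d+) (?P<host>\S+) (?P<resp_ms>\d+)$'
)

def parse_line(line):
    # Return a dict of named fields, or None if the line does not match
    match = LINE_RE.match(line.strip())
    return match.groupdict() if match else None

# Illustrative log line (not from a real system)
sample = ('10.0.0.1 jbloggs [17/Mar/2021:22:59:32 +0000] '
          '"POST /XISOAPAdapter/MessageServlet?channel=x HTTP/1.1" '
          '200 sapts1app1 524')
print(parse_line(sample))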

Data Transfer

We now need to understand how we will transfer the data from the data source to Excel (or PowerBI).
Natively, Excel (and PowerBI) can query a HTTP target to obtain data in a variety of formats.
Since XML is supported with no problems, this enables us to call the PI/PO PerformanceDataQueryServlet directly from Excel.

For those feeling adventurous, the HTTP logs can actually be picked up by Azure Log Analytics. You may need to adjust the date/time format with a script, but then it will be possible to have them stored in your workspace for pulling into PowerBI.

Alternatively, you will need to place the HTTP logs into a storage location on a regular interval, somewhere accessible from Excel/PowerBI. This could be Sharepoint or an Azure Storage Account.
Another option is to have them placed into a location that serves HTTP, such as the SAP instance agent, like I have shown before. For the Web Dispatcher you may have 2 logs (in an active/active setup); for the ICM you will have a log file for each Application Server instance.
By naming the log files in an intelligent manner, you can ensure that your Power Query can always find the files (e.g. don’t include the date/time in the file name).

Data Acquisition

With your data accessible via HTTP, you can use Excel or PowerBI to process it.
In this example, we will go through the processing for the PerformanceDataQueryServlet, since that is the hardest to process in its raw XML format, with multiple nested tables. The nested tables are the reason we use Power Query to transform it.

Open Excel and create a new workbook, then select the “Data” tab:

Click “Get Data” and select “From Other Sources”, then click “Blank Query”:

Click “Advanced Editor”:

Remove any existing text from the query box:

At this point, we can paste in the code necessary to obtain our data, but first we need to understand the URL composition correctly.
For this we can refer to the 2014 blog post from Vadim Klimov to understand the required parameters.

Here’s my sample HTTP call:
https://sapts1app1:50001/mdt/performancedataqueryservlet?component=af.ts1.sapts1db01&begin=2021-01-10T00:00:00.000Z&end=2021-01-11T00:00:00.000Z

This can be broken down as follows:

Entry                              | Value
https://sapts1app1:50001           | Host address of the PO App server (this can be any one of the Java instances).
/mdt/performancedataqueryservlet   | URI for the PerformanceDataQueryServlet.
component=af.ts1.sapts1db01        | The name of our AAEX.
begin=2021-01-10T00:00:00.000Z     | The begin time of our data selection period.
end=2021-01-11T00:00:00.000Z       | The end time of our data selection period.

Something you will notice about our URL is that we are using the HOURLY data selection period, listing data for a 24 hour period aggregated by hour.
We don’t really have much choice with the PerformanceDataQueryServlet, as we can only choose from MINUTE, HOURLY or DAILY, with aggregation levels of 15 minutes, 1 hour or 1 day respectively.

If we decide not to pull the data via HTTP, then we could pre-save the data returned from the HTTP call to a flat file and point the query at that instead.
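
As a minimal sketch of that pre-saving step (the URL, credentials and file name are illustrative assumptions; the file name matches the one used in the Power Query code later), Python’s requests library can fetch and store the XML:

import requests

# Illustrative values: adjust the host, component and period to suit your system
url = ('https://sapts1app1:50001/mdt/performancedataqueryservlet'
       '?component=af.ts1.sapts1db01'
       '&begin=2021-01-10T00:00:00.000Z&end=2021-01-11T00:00:00.000Z')

# BASIC authentication; verify=False only if the SSL CA is not trusted locally
response = requests.get(url, auth=('monitor_user', 'secret'), verify=False)
response.raise_for_status()

with open('performancedataqueryservlet-1.xml', 'wb') as f:
    f.write(response.content)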
Here’s my sample data returned from the call to the PerformanceDataQueryServlet:

<?xml version="1.0" encoding="UTF-8" ?>
<PerformanceDataQueryResults xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://sapts1app1:50001/mdt/monitor/PerformanceDataQuery.xsd">
<Result>
<Code>OK</Code>
<Details>OK</Details>
<Text>Successfully&#x20;read&#x20;performance&#x20;data.</Text>
<Component>af.ts1.sapts1db01</Component>
<PeriodType>DAILY</PeriodType>
<BeginTime timezone="UTC">2021-01-10&#x20;00&#x3a;00&#x3a;00.0</BeginTime>
<EndTime timezone="UTC">2021-01-11&#x20;00&#x3a;00&#x3a;00.0</EndTime>
</Result>
<Data>
<ColumnNames>
<Column>INBOUND_CHANNEL</Column>
<Column>OUTBOUND_CHANNEL</Column>
<Column>DIRECTION</Column>
<Column>DELIVERY_SEMANTICS</Column>
<Column>SERVER_NODE</Column>
<Column>FROM_PARTY_NAME</Column>
<Column>FROM_SERVICE_NAME</Column>
<Column>TO_PARTY_NAME</Column>
<Column>TO_SERVICE_NAME</Column>
<Column>ACTION_NAME</Column>
<Column>ACTION_TYPE</Column>
<Column>SCENARIO_IDENTIFIER</Column>
<Column>MESSAGE_COUNTER</Column>
<Column>MAX_MESSAGE_SIZE</Column>
<Column>MIN_MESSAGE_SIZE</Column>
<Column>AVG_MESSAGE_SIZE</Column>
<Column>MAX_RETRY_COUNTER</Column>
<Column>MIN_RETRY_COUNTER</Column>
<Column>AVG_RETRY_COUNTER</Column>
<Column>AVG_PROCESSING_TIME</Column>
<Column>TOTAL_PROCESSING_TIME</Column>
<Column>MEASURING_POINTS</Column>
</ColumnNames>
<DataRows>
<Row>
<Entry>HTTPSender_TS1</Entry>
<Entry>SOAPReceiver_Proxy</Entry>
<Entry>OUTBOUND</Entry>
<Entry>BE</Entry>
<Entry>1234567</Entry>
<Entry></Entry>
<Entry>BSR_SAP</Entry>
<Entry></Entry>
<Entry>TS_SERVER</Entry>
<Entry>ProcessingIn</Entry>
<Entry>urn&#x3a;in2db.com&#x3a;ERP&#x3a;Test</Entry>
<Entry>&#x7c;BSR_SAP&#x7c;ProcessingOut&#x7c;&#x7c;</Entry>
<Entry>3</Entry>
<Entry>2396</Entry>
<Entry>1391</Entry>
<Entry>1817</Entry>
<Entry>0</Entry>
<Entry>0</Entry>
<Entry>0.00</Entry>
<Entry>524476</Entry>
<Entry>1573428</Entry>
<Entry> <MeasuringPoints><MP> <Name>MS&#x3a;module_in&#x3a;CallSapAdapter</Name> <Sequence>1</Sequence> <Max>394</Max> <Avg>349</Avg> <Min>261</Min></MP><MP> <Name>MS&#x3a;stage&#x3a;SI</Name> <Sequence>2</Sequence> <Max>12</Max> <Avg>9</Avg> <Min>8</Min></MP><MP> <Name>MS&#x3a;stage&#x3a;BI</Name> <Sequence>3</Sequence> <Max>73</Max> <Avg>60</Avg> <Min>52</Min></MP><MP> <Name>MS&#x3a;stage&#x3a;VI</Name> <Sequence>4</Sequence> <Max>12</Max> <Avg>8</Avg> <Min>7</Min></MP><MP> <Name>MS&#x3a;stage&#x3a;MS</Name> <Sequence>5</Sequence> <Max>1266</Max> <Avg>1050</Avg> <Min>771</Min></MP><MP> <Name>MS&#x3a;Message_Put_In_Store</Name> <Sequence>6</Sequence> <Max>155</Max> <Avg>112</Avg> <Min>90</Min></MP><MP> <Name>MS&#x3a;Message_Put_In_Disp_Queue</Name> <Sequence>7</Sequence> <Max>2328</Max> <Avg>836</Avg> <Min>82</Min></MP><MP> <Name>MS&#x3a;Message_Wait_In_Disp_Queue</Name> <Sequence>8</Sequence> <Max>1445</Max> <Avg>630</Avg> <Min>203</Min></MP><MP> <Name>MS&#x3a;Message_Put_In_Queue</Name> <Sequence>9</Sequence> <Max>44</Max> <Avg>42</Avg> <Min>42</Min></MP><MP> <Name>MS&#x3a;Message_Wait_In_Queue</Name> <Sequence>10</Sequence> <Max>323</Max> <Avg>263</Avg> <Min>195</Min></MP><MP> <Name>MS&#x3a;Message_Update_Status</Name> <Sequence>11</Sequence> <Max>233</Max> <Avg>166</Avg> <Min>128</Min></MP><MP> <Name>MS&#x3a;stage&#x3a;AM</Name> <Sequence>12</Sequence> <Max>114891</Max> <Avg>41811</Avg> <Min>2755</Min></MP><MP> <Name>MS&#x3a;stage&#x3a;SO</Name> <Sequence>13</Sequence> <Max>59</Max> <Avg>40</Avg> <Min>24</Min></MP><MP> <Name>MS&#x3a;stage&#x3a;VO</Name> <Sequence>14</Sequence> <Max>44</Max> <Avg>33</Avg> <Min>25</Min></MP><MP> <Name>MS&#x3a;stage&#x3a;AT</Name> <Sequence>15</Sequence> <Max>468</Max> <Avg>364</Avg> <Min>304</Min></MP><MP> <Name>MS&#x3a;module_out&#x3a;sap.com&#x2f;com.sap.aii.af.soapadapter&#x2f;XISOAPAdapterBean</Name> <Sequence>16</Sequence> <Max>1008279</Max> <Avg>478000</Avg> <Min>131434</Min></MP><MP> <Name>MS&#x3a;Resp&#x3a;stage&#x3a;BI</Name> <Sequence>17</Sequence> <Max>575</Max> <Avg>481</Avg> <Min>395</Min></MP><MP> <Name>MS&#x3a;Resp&#x3a;Message_Put_In_Store</Name> <Sequence>18</Sequence> <Max>157</Max> <Avg>136</Avg> <Min>121</Min></MP><MP> <Name>MS&#x3a;Resp&#x3a;Message_Update_Status</Name> <Sequence>19</Sequence> <Max>89</Max> <Avg>86</Avg> <Min>81</Min></MP> </MeasuringPoints></Entry>
</Row>
<Row>
<Entry>SOAP_Sender</Entry>
<Entry>SOAPReceiver_Proxy</Entry>
<Entry>OUTBOUND</Entry>
<Entry>EO</Entry>
<Entry>1234567</Entry>
<Entry></Entry>
<Entry>BSR_SAP</Entry>
<Entry></Entry>
<Entry>TS_SERVER</Entry>
<Entry>Confirmation_In</Entry>
<Entry>http&#x3a;&#x2f;&#x2f;sap.com&#x2f;xi&#x2f;IS-U&#x2f;Global2</Entry>
<Entry>&#x7c;BSR_SAP&#x7c;Confirmation_Out&#x7c;&#x7c;</Entry>
<Entry>48</Entry>
<Entry>672</Entry>
<Entry>672</Entry>
<Entry>672</Entry>
<Entry>0</Entry>
<Entry>0</Entry>
<Entry>0.00</Entry>
<Entry>89338</Entry>
<Entry>4288227</Entry>
<Entry> <MeasuringPoints><MP> <Name>MS&#x3a;SOAPHandler.processSOAPtoXMB</Name> <Sequence>1</Sequence> <Max>488</Max> <Avg>296</Avg> <Min>190</Min></MP><MP> <Name>MS&#x3a;module_in&#x3a;CallSapAdapter</Name> <Sequence>2</Sequence> <Max>521</Max> <Avg>211</Avg> <Min>144</Min></MP><MP> <Name>MS&#x3a;stage&#x3a;SI</Name> <Sequence>3</Sequence> <Max>55</Max> <Avg>6</Avg> <Min>5</Min></MP><MP> <Name>MS&#x3a;stage&#x3a;BI</Name> <Sequence>4</Sequence> <Max>195</Max> <Avg>37</Avg> <Min>26</Min></MP><MP> <Name>MS&#x3a;stage&#x3a;VI</Name> <Sequence>5</Sequence> <Max>28</Max> <Avg>5</Avg> <Min>4</Min></MP><MP> <Name>MS&#x3a;stage&#x3a;MS</Name> <Sequence>6</Sequence> <Max>7495</Max> <Avg>2675</Avg> <Min>1340</Min></MP><MP> <Name>MS&#x3a;Message_Put_In_Store</Name> <Sequence>7</Sequence> <Max>28648</Max> <Avg>8891</Avg> <Min>6457</Min></MP><MP> <Name>MS&#x3a;Message_Put_In_Disp_Queue</Name> <Sequence>8</Sequence> <Max>12290</Max> <Avg>6102</Avg> <Min>3558</Min></MP><MP> <Name>MS&#x3a;Message_Put_In_Queue</Name> <Sequence>9</Sequence> <Max>191</Max> <Avg>46</Avg> <Min>21</Min></MP><MP> <Name>MS&#x3a;Message_Wait_In_Queue</Name> <Sequence>10</Sequence> <Max>401</Max> <Avg>229</Avg> <Min>153</Min></MP><MP> <Name>MS&#x3a;Message_Wait_In_Disp_Queue</Name> <Sequence>11</Sequence> <Max>18855</Max> <Avg>5289</Avg> <Min>8</Min></MP><MP> <Name>MS&#x3a;Message_Update_Status</Name> <Sequence>12</Sequence> <Max>25237</Max> <Avg>9398</Avg> <Min>5056</Min></MP><MP> <Name>MS&#x3a;stage&#x3a;AM</Name> <Sequence>13</Sequence> <Max>390</Max> <Avg>183</Avg> <Min>124</Min></MP><MP> <Name>MS&#x3a;stage&#x3a;SO</Name> <Sequence>14</Sequence> <Max>102</Max> <Avg>17</Avg> <Min>16</Min></MP><MP> <Name>MS&#x3a;stage&#x3a;VO</Name> <Sequence>15</Sequence> <Max>155</Max> <Avg>22</Avg> <Min>17</Min></MP><MP> <Name>MS&#x3a;stage&#x3a;AT</Name> <Sequence>16</Sequence> <Max>1813</Max> <Avg>332</Avg> <Min>205</Min></MP><MP> <Name>MS&#x3a;module_out&#x3a;sap.com&#x2f;com.sap.aii.af.soapadapter&#x2f;XISOAPAdapterBean</Name> <Sequence>17</Sequence> <Max>91602</Max> <Avg>55588</Avg> <Min>46038</Min></MP> </MeasuringPoints></Entry>
</Row>
</DataRows>
</Data>

</PerformanceDataQueryResults>

The XML data is complex and contains nested tables for the “MeasuringPoint” elements. This is not something that is possible to extract using the Excel data import GUI alone. You will need to use my code 😉
In the code there are two steps that do the required pre-processing: they transpose, fill up and then remove some data parts, returning the data in the required format so that you can report on it with all the “MeasuringPoints” if you need them.
Could the above be done in another tool? Probably. But everyone has Excel.

Let’s put my Power Query code into the Excel query editor:

let
    // Uncomment to use a URL source,
    // Source = Xml.Tables(Web.Contents("https://sapts1app1:50001/mdt/performancedataqueryservlet?component=af.ts1.sapts1db01&begin=2021-01-10T00:00:00.000Z&end=2021-01-11T00:00:00.000Z")), 
    Source = Xml.Tables(File.Contents("C:\Users\darryl\Documents\Projects\po-perf-metrics\performancedataqueryservlet-1.xml")),
    Data = Source{1}[Table],
    DataRows = Data{1}[Table],
    Row = DataRows{0}[Table],
    #"Expanded Entry" = Table.TransformColumns(Row, {"Entry", each Table.RemoveLastN(Table.FillUp(Table.Transpose(_), {"Column22"}),1)}),
    #"Expanded Entry1" = Table.ExpandTableColumn(#"Expanded Entry", "Entry", {"Column1", "Column2", "Column3", "Column4", "Column5", "Column6", "Column7", "Column8", "Column9", "Column10", "Column11", "Column12", "Column13", "Column14", "Column15", "Column16", "Column17", "Column18", "Column19", "Column20", "Column21", "Column22"}, {"Entry.Column1", "Entry.Column2", "Entry.Column3", "Entry.Column4", "Entry.Column5", "Entry.Column6", "Entry.Column7", "Entry.Column8", "Entry.Column9", "Entry.Column10", "Entry.Column11", "Entry.Column12", "Entry.Column13", "Entry.Column14", "Entry.Column15", "Entry.Column16", "Entry.Column17", "Entry.Column18", "Entry.Column19", "Entry.Column20", "Entry.Column21", "Entry.Column22"}),
    #"Renamed Columns" = Table.RenameColumns(#"Expanded Entry1",{{"Entry.Column1", "INBOUND_CHANNEL"}, {"Entry.Column2", "OUTBOUND_CHANNEL"}, {"Entry.Column3", "DIRECTION"}, {"Entry.Column4", "DELIVERY_SEMANTICS"}, {"Entry.Column5", "SERVER_NODE"}, {"Entry.Column6", "FROM_PARTY_NAME"}, {"Entry.Column7", "FROM_SERVICE_NAME"}, {"Entry.Column8", "TO_PARTY_NAME"}, {"Entry.Column9", "TO_SERVICE_NAME"}, {"Entry.Column10", "ACTION_NAME"}, {"Entry.Column11", "ACTION_TYPE"}, {"Entry.Column12", "SCENARIO_IDENTIFIER"}, {"Entry.Column13", "MESSAGE_COUNTER"}, {"Entry.Column14", "MAX_MESSAGE_SIZE"}, {"Entry.Column15", "MIN_MESSAGE_SIZE"}, {"Entry.Column16", "AVG_MESSAGE_SIZE"}, {"Entry.Column17", "MAX_RETRY_COUNTER"}, {"Entry.Column18", "MIN_RETRY_COUNTER"}, {"Entry.Column19", "AVG_RETRY_COUNTER"}, {"Entry.Column20", "AVG_PROCESSING_TIME"}, {"Entry.Column21", "TOTAL_PROCESSING_TIME"}}),
    #"Expanded MP" = Table.ExpandTableColumn(#"Renamed Columns", "Entry.Column22", {"MP"}, {"Entry.Column22.MP"})
in
    #"Expanded MP"

In the code above, you will notice the “Source=” is using a local file. You can uncomment the “Web” source and comment out the “File” source if you are pulling the data direct via HTTP.

With the Power Query code entered into the editor, check there are no syntax errors and click “Done”:

When querying the data directly over HTTP you will need to edit the credentials at this point.
In the credentials screen, enter the “Basic” username and password to use.
The data will be displayed.
In my sample I see two rows of data:

At the initial top-level, you will see we have the Average Processing Time (in milliseconds) for each interface:

We also have an additional column at the end which contains embedded tables of additional metric data for each specific stage of processing within PI/PO:

By clicking the double arrows at the top of the header for “Entry.Column22.MP”, we can expand the embedded table (should you wish to), and you will see that it presents the following additional columns of data:

When you click “OK” it adds those columns to the main list, but it will create additional rows of data for each of those additional columns that have been expanded:

With the above data expanded, we can really produce some nice graphs.
Here’s an example showing the breakdown of average response time for each of those processing stages.
First I put them into a pivot table and apply an average to the “Avg” column for each of the “Name” column values:

Then I create a pie chart for the data and we can report on which processing stage inside PI/PO is consuming the most time:

By applying additional graphs and filters we could report on individual interfaces’ overall response times, then allow drill-down into the specifics.

Any Potential Issues?

There is one big caveat with the above process of pulling the data from the servlet.
The servlet is extracting data from a memory cache area inside PI/PO.
This cache is an LRU cache, meaning it has a limited size and gets overwritten when it becomes full.
The quantity of data is therefore limited.

It is possible that you could switch on the database persistence (logging) of successful messages in PI/PO, but this has a detrimental impact to message throughput performance and is not recommended by SAP for production systems.

To try and get around the cache limitations, my advice would be to extract the data using the smallest granular frequency that the servlet allows (MINUTE), and save it to a disk file in a location accessible from Excel.
Another method could be to use Microsoft Power Automate (previously MS Flow) to pull the file into Sharepoint or other storage.
By extracting the data frequently, you are trying to ensure that you have it stored before the cache is cleared, but also you are building a time-series from which you could potentially include into your reporting tool.
A time-series would allow you to scroll through data in windows of at least 15 mins in size. Not bad.
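
A minimal sketch of that frequent extraction (re-using the illustrative URL and credentials from earlier; the 15 minute window matches the smallest aggregation level) could look like this:

import datetime as dt
import time

import requests

BASE_URL = 'https://sapts1app1:50001/mdt/performancedataqueryservlet'  # illustrative

def extract_window(component, begin, end):
    # Pull one window of data and save it with a timestamped name,
    # building up the time-series archive on disk
    params = {
        'component': component,
        'begin': begin.strftime('%Y-%m-%dT%H:%M:%S.000Z'),
        'end': end.strftime('%Y-%m-%dT%H:%M:%S.000Z'),
    }
    response = requests.get(BASE_URL, params=params,
                            auth=('monitor_user', 'secret'), verify=False)
    response.raise_for_status()
    with open('po-perf-%s.xml' % begin.strftime('%Y%m%d-%H%M'), 'wb') as f:
        f.write(response.content)

while True:
    now = dt.datetime.utcnow()
    extract_window('af.ts1.sapts1db01', now - dt.timedelta(minutes=15), now)
    time.sleep(15 * 60)   # one extraction per 15 minute aggregation window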

Summary

We identified the important areas of data collection in PI/PO (and Netweaver in general) to allow response times to be seen.
We also noted that HTTP response codes such as 503 should be obtained from the outermost point of the Netweaver stack (if possible).

We saw an example of using the “PerformanceDataQueryServlet” to pull data from the PI/PO memory cache and transformed it using Power Query to allow detailed reporting on the response times.
I created a demonstration graph from a pivot table using my sample data, which showed a possible drill-down in the response time of individual processing stages within SAP PI/PO.

Hopefully I have given you some ideas for how you can solve your PI/PO performance reporting requirement.

HowTo: Extract SAP PI/PO Message Payload from SAP ASE DB

Sometimes you may need to directly extract the SAP PO message payload from the underlying database tables, such as BC_MSG_LOG, in a SAP ASE 16.0 database.
This could also potentially be called: extracting hex encoded ASCII data from an ASE image column, because the SAP PO tables use an ASE image data type to store the payload as an off-row LOB.

There are plenty of examples of doing this extraction in Oracle, but in ASE it is not so easy, because the message size could be larger than that supported by the page size of ASE (usually 16KB in an ASE database for Business Suite).
This means you won’t be able to store it in a T-SQL variable and use the ASE functions.

Instead, we can use the below simple approach to extract the raw hex, and then use Python 2 to convert it to ASCII:

1. Execute the selection SQL using isql at the Linux command prompt on the database server:

isql -USAPSR3DB -S<SID> -w999 -X

select MSG_BYTES
from SAPSR3DB.BC_MSG_LOG
where MSG_ID='<YOUR MSG ID>'
and DIRECTION='OUTBOUND'
and LOG_LOCATION='MS'

go

The output will consist of hexadecimal data, which starts with “0x” and should look something like this (truncated):

0x2d2d5341505f6

Copy & paste into a text file on the Linux server (use your favourite text editor) and call the file data.txt.

Edit the data.txt file and remove the first “0x” characters from the data.
Remove all newlines and carriage returns in the file.

Now create a simple Python script to read the data from our file data.txt and translate from hex to ASCII then print to the screen:

# Read the hex string saved earlier (the leading "0x" and all
# newlines must already have been removed)
with open('data.txt', 'r') as file:
    data = file.read().strip()
# Python 2 only: decode the hex string back to ASCII and print it
print data.decode('hex')

Run the Python script:

python ./myscript.py

The output should contain a header and a footer, which each start with “--SAP_”.
If you get an error from the Python script, then it could be because there are additional newlines or carriage returns in the data.txt file.

Simple in-Cloud SAP LaMa DR Setup

When running the SAP Landscape Management tool (LaMa) in the cloud, you need to be aware of the tool’s importance in your SAP landscape in the context of disaster recovery (DR).

In this post I will highlight the DR strategies for hosting SAP LaMa with your favourite cloud provider.

What is SAP LaMa?

For those not yet accustomed to SAP LaMa, it is SAP’s complete SAP/non-SAP landscape management and orchestration tool for both on-premise and cloud.

SAP LaMa comes in two guises:

  • Standard Edition
  • Enterprise Edition

The Enterprise edition comes with many additional features, but crucially, it includes the “Cloud Connectors” for all the mainstream cloud vendors.
A “Cloud Connector” allows seamless start/stop/provisioning of cloud hosted VMs.

Using SAP LaMa to execute a pre-configured, ordered startup of VMs and the applications on those VMs can be a huge time saving during a disaster.

What Installation Patterns Can We Use with SAP LaMa?

SAP LaMa is a software component installed inside a standard SAP Netweaver Java stack. Therefore, you may use the standard Netweaver Java installation patterns such as single-system or distributed system.
SAP LaMa will work in either pattern.

What is a Normal Installation Pattern in the Cloud?

In the cloud (e.g. Azure, GCP, AWS etc), when installing SAP Netweaver, you would usually want to use the distributed system architecture pattern, to prevent a single VM outage from disrupting the SAP Netweaver application too much. The distributed system pattern is preferred because you have slightly less control over the patching of the physical host systems, so it affords you that little bit of extra up-time.

This usually means having: a Web Dispatcher tier, at least 2 application servers in the application tier, the Central Services (SCS) instance having failover and using Enqueue Replication Server (ERS), plus database replication technology on the database tier.


How is DR catered for in SAP LaMa?

For large organisations with business critical SAP systems like SAP S/4HANA, SAP ECC etc, you would usually have a “hot” DR database server (i.e. running and actively replicating from the primary database) in your designated DR cloud region.
This means there is minimal data-loss, as the DR database is mere minutes behind the primary database in transactional consistency.
The application tier and Web Dispatcher tier would use the cloud provider’s VM replication technology (e.g in Azure this is called Azure Site Recovery), ensuring that the application patching and config is also replicated.

I would designate the above pattern as a “hot” DR architecture pattern.

For SAP LaMa the situation is slightly more flexible because:

  1. It is not business critical, only operations critical.
  2. The database is only a repository for configuration and monitoring data. Therefore, transactional data loss is not critical.
    In fact, the configuration data in SAP LaMa can be exported into a single XML file and re-imported into another LaMa system.

Due to the above, we have some different options that we can explore for Disaster Recovery.
Excluding the “hot” DR architecture pattern, we could classify the DR architecture pattern options for SAP LaMa as “restore”, “cold”, “cool” and finally “warm”. (These are my own designators, you can call them what you like really).

What is a “restore” DR pattern for SAP LaMa?

A “restore” DR setup for SAP LaMa is when you have no pre-existing VM in your cloud DR region. Instead, you are replicating your VM level backups into a geo-replicated storage service (in Azure this is the Recovery Services vault).

In this setup, during a DR scenario, the VM backups from your primary region would need to be accessible to restore to a newly built VM in the DR region.

This is the most cost friendly option, but there is a significant disadvantage here. Your system administrators will not have the benefit of LaMa to see the current state of the landscape and they will not be able to make use of the start/stop technology.

Instead they will need a detailed DR runbook with start/stop commands and system/VM startup priority, to be able to start your critical systems in a DR scenario. You are also placing your trust in the VM backup and restore capability to get LaMa back online.

The VM backup timing could actually be an issue depending on the state of the running database at the time of backup. Therefore, you may need to also replicate and restore the database backup itself.

During a DR scenario, the pressure will be immense and time will be short.

Cost: $
Effort: !!!! (mainly all during DR)
Bonus: 0

What is a “cold” DR pattern for SAP LaMa?

A “cold” DR setup for SAP LaMa is when you have a duplicate SAP LaMa system installed in the DR cloud region, but the duplicate system is completely shut down, including the VM(s).

In this setup, during a DR scenario, the VM would need to be started using the cloud provider tools (or other method) and then the SAP LaMa system would be started.

Once running, the latest backup of the LaMa configuration would need restoring (it’s an XML file) and the cloud connectors would need connecting to the cloud provider. After connecting to the cloud provider, LaMa can then be used to start/provision the other software components of the SAP landscape into the DR cloud region.

Compared to the “restore” pattern, we can have our DR LaMa system up and running and start using it to start the VMs and applications in a pre-defined DR operation template (like a runbook).
However, we need a process in place to export the configuration from the primary LaMa system and back up that export, so that the configuration file is available during a DR scenario.

In Azure, for example, we would store the configuration file export on a geo-replicated file storage service that was accessible from multiple-regions. We also have the associated hosting costs and the required patching/maintenance of the DR VM and LaMa system. As an added bonus, this pattern allows us to apply patches first to the DR LaMa system, which could remove the need for a Development LaMa system.

Cost: $$
Effort: !!! (some during DR, patching)
Bonus: +

What is a “cool” DR pattern for SAP LaMa?

A “cool” DR setup for SAP LaMa is when you have a duplicate SAP LaMa system installed in the DR cloud region, and the duplicate system is frequently started (maybe daily) and the configuration synchronised with the primary SAP LaMa system.

The synchronisation could be using the in-built configuration synchronisation of the LaMa software layer, or it could be a simple automated configuration file import from a shared file location where the configuration file has previously been exported from the primary LaMa system.

In this setup, during a DR scenario, the VM *may* need to be started (depending on when the failure happens) using the cloud provider tools (or other method), and then the SAP LaMa system *may* need to be started. Once running, the latest backup of the LaMa configuration would probably not need restoring (it’s an XML file), because your business critical systems would already exist and be configured as a result of the frequent synchronisation. The cloud connectors would need connecting to the cloud provider.
After connecting to the cloud provider, LaMa can then be used to start/provision the other software components of the SAP landscape, into the DR cloud region.

Compared to the “cold” pattern, we save a little time by having the frequent configuration file synchronisation already done. We can also choose to keep a process in place to export the configuration from the primary LaMa system and back up that export, should we want to use that configuration file as well.
There is an obvious cost to the frequent starting of the VM, since you pay for the VM while it is running.

As an added bonus, this pattern allows us to apply patches first to the DR LaMa system, which could remove the need for a Development LaMa system.

Cost: $$$
Effort: !! (a little during DR, patching)
Bonus: +

What is a “warm” DR pattern for SAP LaMa?

A “warm” DR setup for SAP LaMa is when you have a duplicate SAP LaMa system installed in the DR cloud region, and the duplicate system is constantly running, with frequent (could be hourly) synchronisation with the primary SAP LaMa system.
The synchronisation could be using the in-built configuration synchronisation in the LaMa software component, or it could be a simple automated file import from a shared file location where the configuration file has been exported from the primary LaMa system.

In this setup, during a DR scenario, the cloud connectors would need connecting to the cloud provider. After connecting to the cloud provider, LaMa can then be used to start/provision the other software components of the SAP landscape, into the DR cloud region.

Like the “cool” pattern, we get an added bonus that this pattern allows us to apply patches first to the DR LaMa system, which could remove the need for a Development LaMa system.

Compared to the other patterns, we gain the immediate advantage of being able to start/stop VMs and SAP systems in the DR region. However, there is a constant cost for the VM to be running (if using a PAYG VM pricing model).

Cost: $$$$
Effort: ! (hardly any during DR, patching)
Bonus: +

Summary

Depending on your strategy, you may choose to stick to your existing architecture patterns.

You could choose to use a “hot” DR pattern, and ensure that your DR LaMa system is in sync with the primary.
However, for the most risk averse, I would be inclined to calculate the costs/benefits for the “warm” pattern.
A “warm” pattern also means you could forgo the distributed system installation pattern for the DR system, choosing the more cost-effective single-system pattern and removing the extra complexity of database level replication.

For SMEs, I would favour the “cool” pattern. This could remove the need for a Development system, allowing testing of patching on the DR system instead. I feel it represents the middle ground between the benefits of the technology and the cost.

Patching SAP LaMa 3.0 to SP17

On the 9th of November 2020 SAP released support package 17 of SAP Landscape Management 3.0.
If you already run SAP LaMa 3.0 SP11 and above, then you can quite easily patch to SP17 by installing the 3 SCA files into your existing Netweaver 7.5 Java stack.

However, things are never so easy, as I will show.

Required Netweaver Stack

Before you can patch SAP LaMa you must always read the support package release note.
For SP17, it is SAP note 2908399 in component “BC-VCM-LVM”.

In the SAP note, it states that a minimum of Netweaver 7.5 SP15 is required for LaMa 3.0 SP17, with a recommendation of Netweaver 7.5 SP17.

That’s good for me: I have Netweaver 7.5 SP16, so I should be good to patch with no issues. Right?
No. After applying the 3 SCA files for LaMa 3.0 SP17, the Netweaver stack starts and stops successfully, but when I try to log into LaMa I see the on-screen message “SAP Landscape Management is loading, please wait…” and it does not progress any further.
When accessing Netweaver Administrator, it works perfectly.

The Error

For the sake of clarity, I also took a look at the Java stack log viewer and I could see an error:

“error binding ExecuteCustom/RMI …”, which didn’t mean a lot to me and produced no results in SAP notes.

The error record details mentioned: “com.sap.tc.vcm.engine.operation.handler.customop.CustomRMIOperationHandler.CustomRMIOperationProvider” in application: “sap.com/tc~vcm~engine~app“.

None of the above produced any SAP notes that looked vaguely related.

Let Groundhog Day Commence

I’ve been working with LVM and LaMa for a while now. When I actually looked at how long, I was surprised to see my knowledge went back to 2014.
I was sure at the back of my mind there was a slight recollection of this same issue.

I started searching the SAP notes and with this recollection of a problem in mind, I decided to search for the exact message that was staring at me on the LaMa post-login screen.
It was a direct hit on the SAP note search.
SAP note 2662354 “SAP Landscape Management is loading, please wait…” is an old SAP note for SAP LaMa 3.0 SP07, back in 2018.
The SAP note described the exact same symptoms, and the failure to progress into LaMa past the loading screen.

Inside SAP note 2662354, it referenced the support package release note 2542159 “SAP Landscape Management 3.0 SP07” which states: “Install at least SAP NetWeaver Application Server 7.5 for Java Support Package 11. If you use a lower Support Package, you have to update the SAPUI5 component“.

It was all coming back to me now. In the past, to apply LaMa 3.0 SP07, you needed to patch the NW stack, or an alternative was to simply apply a higher SAPUI5 software component (SCA).

SAP UI5 – Skipping Ahead

Once I understood the potential solution (apply a later SAPUI5 SCA), I needed to validate what I had already validated in the past.
Was it still supported to apply a later SAPUI5 software component to a SAP Netweaver 7.5 Java stack?

In SAP Note 2541677 “How to switch SAPUI5 versions in NW Java 7.50 SP07 and higher SPs“, it confirms that from SAPUI5 7.5 SP07, more than one version of the UI5 library will be included. The SCA effectively becomes cumulative as each SAPUI5 version is released.

More importantly the note says: “SAP recommends that you always implement the latest released Support Package. You are save to apply UI5 patches of higher SPs to your systems, as there as no direct dependencies.“.

That is exactly the confirmation I was looking for.

Patching SAPUI5

My Netweaver stack level is SP16. The recommended Netweaver stack level (based on that LaMa SP17 note) is SP17.
That left two options which could fix the problem:

  • The latest SCA patch level for SAPUI5 7.5 SP16
    or
  • The latest SCA patch level for SAPUI5 7.5 SP17 (taken from the NW 7.5 SP17 stack).

I decided that I would take the “slowly, slowly” approach and patch to the latest SAPUI5 7.5 SP16 patch first.
After patching and restarting the Netweaver stack, I still had the exact same problem.

Moving onto the second option, I applied the latest SAPUI5 7.5 SP17 patch level (UISAPUI5JAVA17P_18-80000708.SCA).
After patching and restarting the Netweaver stack, the issue was finally fixed!

As of Nov 11th, there is still no official documentation for this process.

Improving the Patching Efficiency

During the above problem resolution, I did not use SUM to apply the patches.
When patching SAP LaMa we are talking (usually) of only 3 software component archives.
For this reason, I prefer to patch using the Telnet deployment method.

As the Linux <sid>adm user on the JPAS host, log into the NW Telnet server port (where ## is the instance number):

telnet 127.0.0.1 5##08

[enter administrator user/pw]

deploy /<path-to>/<SCA-file>

The Telnet deployment order for SP17 is:

  • VCMCR4E17_0-70001063.SCA
  • VCM17_0-70001062.SCA
  • VCM_ENT17_0-70001064.SCA
  • [the SAPUI5 7.5 SP17 patch – if you are on NW SP16]

Once deployed, the NW stack needs a full restart.

Summary

  • Patching SAP LaMa should be simple, sometimes it has issues.
  • LaMa depends on the SAPUI5 component version.
  • You may need to patch SAPUI5 to make LaMa work.
  • SAPUI5 support packages include prior versions (after 7.5 SP07).
  • SAP permits you to use a higher SP of SAPUI5 compared to the NW stack SP level.
  • It is possible to use Telnet to deploy the patches, providing you follow the correct order of deployment.

Useful Links
  • SAP Note 2908399 “SAP Landscape Management 3.0 SP17” v7
  • SAP Note 2662354 “SAP Landscape Management is loading, please wait…” v1
  • SAP Note 2541677 “How to switch SAPUI5 versions in NW Java 7.50 SP07 and higher SPs” v7
  • SAP Note 2542159 “SAP Landscape Management 3.0 SP07” v6