This blog contains experience gained over the years of implementing (and de-implementing) large scale IT applications/software.

SAP User Groups

Apart from the roles, profiles and authorisation objects involved in SAP security controls, there is an additional level: user groups.
The user groups in an SAP system can be used to control access to certain authorisation objects (i.e. as a restriction in a profile), or as a way of tagging different types of users to permit certain types of administrative delegation.
For example, a designated set of administrators can be permitted to manage passwords only for the smaller sub-set of users assigned to a certain user group.

So how do you create user groups?  Use transaction SUGR to define the groups, then assign the users in SU01 or SU10.
Take a look at authorisation object S_USER_GRP.
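As a rough illustration of that delegation idea (the group name here is invented, and the exact activity values should be confirmed with an authorisation trace on your own release), a help-desk role could carry an authorisation along these lines:

Object: S_USER_GRP
CLASS:  HELPDESK    (the user group defined in SUGR and assigned in SU01/SU10)
ACTVT:  05          (lock/unlock and password reset)

Holders of that role can then administer passwords only for users tagged with the HELPDESK group.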

Basic Performance Tuning Guide – SAP NetWeaver 7.0 – Part IV – ABAP Runtime Analysis

This is the final part of my SAP NetWeaver 7.0, Basic Performance Tuning Guide.
You may wish to review Part I, Part II or Part III.
In this part I will focus on the use of ABAP Runtime Analysis to check performance of ABAP programs/reports.  This will help you locate areas of ABAP that are eating up valuable response time.

Execute transaction SE30.
Enter the transaction (or program or function module) that you wish to analyse:

Edit the “Measurement Restrictions” variant, either creating your own or simply adjusting the DEFAULT.
This allows additional information to be captured, which can be very useful when trying to determine performance problems.

Tick the “DB-level ops” option on the Statements tab:

On the Duration/Type tab, select “None” for Aggregation and tick “With memory use”:

Execute the measurement. The transaction or program you entered will be run, and you should see a message at the bottom of the screen telling you that the measurement process has started:

Perform the actions you need in the transaction/program.
Once you have finished, simply click Back (exit from the transaction) to the SE30 main screen:

The SE30 screen will display a prompt showing the analysis has finished:

The trace data is stored in a file on the SAP instance server.
This can be seen by clicking the “File Info…” button:

You can see the file name and path:

If you need to load the trace data again for analysis, you can simply click the “Other File…” button on the main screen.
It’s also possible to transfer the analysis files via RFC to another SAP system from the “Other File…” sub-screens.
This is very useful if you want to compare performance between two systems.
NOTE: The performance data files are only retained on the server for eight days.  I haven’t found a way of extending this.

Let’s analyse the stats.
Click the Evaluate button:

You will see a basic breakdown of the transaction runtime performance analysis in histogram form.
The three different areas are shown (ABAP, Database and System) against time (as a percentage):

More details on this screen can be found here: https://help.sap.com/saphelp_nw70/helpdata/en/c6/617d27e68c11d2b2ab080009b43351/content.htm

The values shown correlate to the response times measured in the single record statistics plus the time required for analysis.
Note that the DATABASE time above also includes the time the system takes to open and close cursors etc., not just the SQL runtime.
For reference, the STAD output for the same operation (notice the slightly lower DB req time):

SAP STAD output DB request time

The most interesting buttons at the top of the SE30 screen are “Hit List”, “Table Hit List” and “Call hierarchy”.
 Click the Hit List button:

SE30 hit list

The Hit List will be displayed in descending order (gross time):

SE30 hit list output

The columns are defined as follows:

“No.” – Shows the number of times a call was made.
If a “SELECT” is performed in a loop, then this will show the number of times the call was made in total in the loop.

“Gross” – Total execution time in microseconds (running bottom to top of the hit list).
“Net” – The time in microseconds for the specific call(s).
“Call” – The program component that was being executed.
“In program” – The main program of the component.
“Type” – The type of call (DB, System, Non-system).

SAP says here https://help.sap.com/saphelp_nw70/helpdata/en/c6/617d27e68c11d2b2ab080009b43351/content.htm:
Gross time
The total time required for a call. This includes the runtime of all modularization units and ABAP statements called by the subject.

Net time
The net time is the gross time less the time required for modularization units (MODULE, PERFORM, CALL FUNCTION, CALL SCREEN, CALL TRANSACTION, CALL METHOD, CALL DIALOG, SUBMIT), and for the ABAP statements listed separately. For statements such as APPEND, the gross time is the same as the net time.

In this view, you are looking for records in the list with the larger “Net” value.
This indicates that a large amount of overall time was spent in the call/program listed.
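To make gross versus net concrete, here is a tiny illustrative report (not taken from the traced transaction; the report name is made up):

REPORT zperf_demo.

DATA lt_vbak TYPE TABLE OF vbak.

* In the Hit List, the gross time of this PERFORM includes the SELECT inside
* the FORM routine; its net time covers only the FORM's own statements.
PERFORM read_orders.

FORM read_orders.
  " The SELECT appears as its own (DB) entry in the Hit List, so its runtime
  " is excluded from the FORM's net time.
  SELECT * FROM vbak INTO TABLE lt_vbak
    WHERE erdat = sy-datum.
ENDFORM.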

If the Gross and Net time fields do not hold the same value for a record, then double clicking the record will display the sub-components and switch to the hierarchy view:

SE30 hit list sorting

At any time, you can single click a record, then jump to the ABAP source by clicking the “Display Source Code” button:

SE30 hit list call point in ABAP


NOTE: I always recommend you enable the “New” ABAP editor.  It gives you visibility of line numbers and highlights the results of text “Find” operations so you can see what’s been found.

Back on the main analysis screen, select the “Table Hit List” button:

The tables used in the execution of the transaction/program are displayed in descending order of runtime:

SE30 hit list tables used

The “#Acces” column displays the number of times the table was accessed (read/write) and corresponds to any loops you may have seen in the No. column of the Hit List screen.

The Class column shows TRANSP (transparent table), VIEW or POOL (pooled table).

Double click a table to display the call points, from which you can jump to the ABAP source.

Finally, from the main analysis screen, click the “Call hierarchy” button:

The Hit List that you have already seen, will now be displayed in a hierarchy with sub-calls indented under the parent call:

SE30 hit list call hierarchy

The only real difference between this view and the Hit List table view, is the indenting.
You can still double click to display the execution time overview for that sub-call, or use the “Display Source Code” button to jump to the ABAP source.

In my example screen shots, you will have seen that the PMSDO fetch is taking a large amount of the response time.
I was able to find that this operation was inside the GET_DATA function and check the ABAP code.

The main thing you are looking for in poorly performing ABAP code is where the execution time gets eaten up. As a side note, you may also check the amount of memory used if you are looking to resolve TSV PAGE ALLOC errors.

Try sorting the Hit List by “Net Time” to find the individual components with the highest net execution time.
Check if the operation is a database related call or not.  You can then use Part III to trace this further with an SQL trace.
Check the number of loop iterations (The “No.” column) in the “Hit List”. Are you performing an expensive routine too many times in a loop?

Validate that the code is not doing too much work in internal tables (iTabs) when some of the work could be done in the database.
The database is the ultimate place to do sorting, grouping and record elimination.
Don’t pull back more records than you need only to loop through an iTab in ABAP and remove them; that is more expensive.
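A hedged before-and-after sketch of that advice (the table ZMYTAB and its STATUS/CREATED_ON fields are invented placeholders for a custom table):

REPORT zitab_vs_db.

DATA lt_rows TYPE TABLE OF zmytab.

* Wasteful: fetch everything, then filter and sort in ABAP.
SELECT * FROM zmytab INTO TABLE lt_rows.
DELETE lt_rows WHERE status <> 'OPEN'.
SORT lt_rows BY created_on.

* Better: let the database do the record elimination and the sorting.
SELECT * FROM zmytab INTO TABLE lt_rows
  WHERE status = 'OPEN'
  ORDER BY created_on.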

That’s it for now.  Thanks for reading.

Basic Performance Tuning Guide – SAP NetWeaver 7.0 – Part III – Using SQL Trace

This is Part III; you may like to see Part I and Part II (or skip to Part IV).

Once you have traced the poor performing program (see Part I of this guide), and you have identified that your problem is in the database response time (see Part II of this guide), then you can start digging deeper into the potential SQL problem by running an SQL trace using transaction ST05.
In ST05, click the “Activate Trace with Filter” button:

SAP ST05 Performance Analysis, activate trace with filter

NOTE: We are only concerned with an SQL statement that has directly accessed the database (https://help.sap.com/saphelp_nw04/helpdata/en/d1/801f89454211d189710000e8322d00/frameset.htm).
Unless you select the additional “Table Buffer Trace” tick box on the main ST05 screen (to show table buffer accesses), all the trace details in ST05 will be for direct database accesses (bypassing the table buffer).
Since direct database accesses will take longer than the buffer accesses, it is generally accepted that the longer runtimes will be associated with direct database access.

Enter your user id and the name of the report/transaction.  This is so that we can try to limit the records in the trace.  If you want to trace another user, simply enter their user id:

SAP ST05 trace by transaction name

Click Execute to start the trace:

ST05 trace activated

Now in a separate session (preferably) run the transaction/program with the performance problem.
Once you’re happy you have experienced the problem, you can go back into ST05 and click “Disable Trace”  (by default it will remember the trace details and disable the running trace, you won’t need to re-enter all the details):

ST05 trace deactivated

Still within ST05, click “Display Trace“:

ST05 trace display

The system remembers the last trace recorded and should automatically populate the details, but you can always enter the date/time, user id etc. yourself:

ST05 trace display for user

The SQL trace records are displayed.
Poorly performing SQL has its time (duration) in microseconds (millionths of a second) highlighted in red if it exceeds 10,000 microseconds (10 milliseconds, or 0.01 seconds):

(1 second = 1000 milliseconds = 1000000 microseconds)

SAP ST05 SQL Trace output

At the top of the window we can see the transaction name (ZW39), the work process responsible for executing the dialog step in which our statement was contained, the type of work process (DIA or BTC), the client, the user and the transaction ID.
The OP column shows that this session has reused an existing cursor in the database for this query.
This is because the first OP before the cursor “OPEN” is a “PREPARE” and not a “DECLARE” (see here https://help.sap.com/saphelp_nw04/helpdata/en/d1/801fa3454211d189710000e8322d00/content.htm).
If the session re-executed this query at another point in the trace, then it is likely (depending on the code and available database resources) that the query would have re-used the existing cursor and not even a “PREPARE” would have been visible, just OPEN and FETCH (https://help.sap.com/saphelp_nw04/helpdata/en/d8/a61d94e4b111d194cb00a0c94260a5/content.htm).
The yellow shading of the lines alternates between a darker and lighter colour when the table (obj name) changes in the list.

The RECS column shows the number of records retrieved in the “FETCH” operation.
The RC column shows the Oracle database return code 1403 (ORA-1403) which you get when you call “FETCH” in a loop and reach the end of the results (see SAP note 1207927).

The STATEMENT column shows the pre-parsed SQL including bind variables (:A1, :A2 etc).
The second line of the STATEMENT column shows the statement in the “OPEN” database call where all the bind variables have been normalised.
Double clicking on the STATEMENT column delves into the SQL statement a little more:

SAP ST05 SQL Trace output SQL statement with bind variables

Here you will see the SQL statement in full (above) and the bind variables, their data types (CH,3 = CHAR(3)) and respective values.
Back on the main trace screen, with the SQL statement selected (single click), you can click the “Explain” button:

SAP ST05 SQL Trace output SQL statement explain plan

Again, you will see the SQL statement with bind variables (remember you can get the values from the other screen) but more importantly, you will see the Oracle Explain Plan.
The Explain Plan shows how the Oracle optimizer *may* choose to execute the statement and the relative costs associated with performing the query (I’ll explain more on this in a later chapter of this guide).

SAP ST05 SQL Trace output SQL statement explain plan

In the above example, you can see that Oracle has chosen to perform an Index Range Scan on index “IHPA~Z1” before going to the table IHPA to get the matching data rows.
Single click the “Access Predicates” on the index and you can see what will be applied to the index scan:

SAP ST05 SQL Trace output SQL statement explain plan access predicates


Again, you will need to reference the very first SQL view screen to get the bind variable values.
Clicking the name of the index will show the statistics values for the index:

SAP ST05 SQL Trace output SQL statement explain plan index analysis


You can see the two columns that make up the index (objnr and parvw).
The Analyze button can be clicked to update the index’s database statistics in real time or as a background job.
WARNING: Be aware that SAP collects statistics in Oracle using its own set of predefined requirements as per SAP notes 588668 and 1020260, and that you *must* disable the standard Oracle stats gathering jobs as per note 974781. Therefore, the stats may be out of date for a good reason.

The same detail can be seen when clicking the table name.

Back on the main screen, for those who have more knowledge of reading Oracle Explain Plans, you can choose the “EXPLAIN: Display as Text” button, which will display a more detailed Explain Plan that can be copied to a text editor also (I find this very useful):



Plan hash value: 1636536768

----------------------------------------------------------------------------------------
| Id  | Operation                     | Name    | Rows | Bytes | Cost (%CPU) | Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |         |    2 |    82 |     2  (50) | 00:00:01 |
|   1 |  INLIST ITERATOR              |         |      |       |             |          |
|*  2 |   TABLE ACCESS BY INDEX ROWID | IHPA    |    2 |    82 |     1   (0) | 00:00:01 |
|*  3 |    INDEX RANGE SCAN           | IHPA~Z1 |    2 |       |     1   (0) | 00:00:01 |
----------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

1 - SEL$1
2 - SEL$1 / IHPA@SEL$1
3 - SEL$1 / IHPA@SEL$1

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("MANDT"=:A0)
3 - access(("OBJNR"=:A1 OR "OBJNR"=:A2 OR "OBJNR"=:A3) AND "PARVW"=:A4)

Column Projection Information (identified by operation id):
-----------------------------------------------------------

1 - "OBJNR"[VARCHAR2,66], "PARVW"[VARCHAR2,6], "PARNR"[VARCHAR2,36],
"ERDAT"[VARCHAR2,24]
2 - "OBJNR"[VARCHAR2,66], "PARVW"[VARCHAR2,6], "PARNR"[VARCHAR2,36],
"ERDAT"[VARCHAR2,24]
3 - "IHPA".ROWID[ROWID,10], "OBJNR"[VARCHAR2,66], "PARVW"[VARCHAR2,6]

Finally, it’s possible to see the point in the ABAP program where the SQL statement was run.
Single click the statement column, then click the “Display Call Positions in ABAP Program” button or press F5:

The ABAP code will be displayed, and if you are using the new ABAP editor (I think you should), you will be positioned on the statement:

SAP ST05 SQL Trace output SQL statement call in ABAP

Looking at the ABAP statement, you can see how it has been converted from an OPEN SQL statement into an Oracle SQL statement.
The use of the “IN” keyword has been translated into multiple “OR” statements at the Oracle level.

Once at the ABAP statement level, it is sometimes necessary to know both Oracle SQL and ANSI SQL, since Oracle generally uses its own dialect and SAP’s OPEN SQL follows ANSI SQL.
Therefore, you may see the original statement in a somewhat different form, especially when a table join is required on two or more tables.
Lastly, you will notice that the MANDT field is not present in the original ABAP, but it gets added to the predicate list by the ABAP OPEN SQL processor unless you use the “CLIENT SPECIFIED” keyword.
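As a hedged reconstruction (the table and columns come from the trace above, while the variable names and the 'ZM' partner function are invented for illustration), the OPEN SQL may have looked something like the first SELECT below; the CLIENT SPECIFIED variant shows where MANDT has to be coded by hand:

REPORT zst05_demo.

DATA lt_ihpa  TYPE TABLE OF ihpa.
DATA lr_objnr TYPE RANGE OF ihpa-objnr.

* Standard OPEN SQL: "MANDT" = :A0 is added implicitly, and the IN range
* becomes OBJNR = :A1 OR OBJNR = :A2 ... at the Oracle level.
SELECT * FROM ihpa INTO TABLE lt_ihpa
  WHERE objnr IN lr_objnr
    AND parvw = 'ZM'.

* With CLIENT SPECIFIED the client column must be handled explicitly.
SELECT * FROM ihpa CLIENT SPECIFIED INTO TABLE lt_ihpa
  WHERE mandt = sy-mandt
    AND objnr IN lr_objnr
    AND parvw = 'ZM'.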

Armed with the SQL performance facts, the knowledge of how to perform an SQL trace and how to link this back to the piece of ABAP code; you should be able to use your database level investigative skills to work out why the statement is causing poor performance.

Make adjustments in the test system and see how the changes affect the performance.
I like to modify the Oracle SQL captured in the SQL trace, then enter it into ST05 (Execute SQL option) with the Explain button, to see if the explain plan is any better.

Some pointers:
– How much data is in the database tables being referenced?
– Is the Explain Plan showing that the query is making best use of available indexes?
– Can you replicate the problem in the test system?
– If this is a custom table, do you need to add additional indexes?
– Could the ABAP OPEN SQL statement be re-written to be more effective (use one query instead of two)?
– Are you SELECTing more fields than you actually need?

You may wish to read the master class in reading Explain Plans from Oracle.
I have blogged before about cardinality in Explain Plans here.

Find part IV of this blog post, here.

SAP BI – ORA-04031 – Oracle 10.2.0.5 – Bug 9737897 – End Of.


If you use SAP BI 7.0 on an Oracle 10.2.0.5 64bit database you may experience this issue.
Our BI system performs the delta loads for FI data overnight.
These are large loads of records into the cubes.

The problem is Oracle error 04031:
----------------------------------------------------------------------------------
Mon Jul 11 00:46:26 GMT Daylight Time 2011
Errors in file j:\oracle\bwp\saptrace\usertrace\bwp_ora_3716.trc:
ORA-00603: ORACLE server session terminated by fatal error
ORA-00604: error occurred at recursive SQL level 1
ORA-04031: unable to allocate 4120 bytes of shared memory ("shared pool","select name,online$,contents...","Typecheck","kgghteInit")

ORA-00604: error occurred at recursive SQL level 1
ORA-04031: unable to allocate 4120 bytes of shared memory ("shared pool","select name,online$,contents...","Typecheck","kgghteInit")

ORA-00604: error occurred at recursive SQL level 1
ORA-04031: unable to allocate 4120 bytes of shared memory ("shared pool","select name,online$,contents...","Typecheck","kgghteInit")

Mon Jul 11 00:46:26 GMT Daylight Time 2011
Errors in file j:\oracle\bwp\saptrace\background\bwp_smon_8756.trc:
ORA-00604: error occurred at recursive SQL level 1
ORA-04031: unable to allocate 4120 bytes of shared memory ("shared pool","select file#, block#, ts# fr...","Typecheck","seg:kggfaAllocSeg")

Mon Jul 11 00:46:27 GMT Daylight Time 2011
Errors in file j:\oracle\bwp\saptrace\background\bwp_smon_8756.trc:
ORA-00604: error occurred at recursive SQL level 1
ORA-04031: unable to allocate 4120 bytes of shared memory ("shared pool","select file#, block#, ts# fr...","Typecheck","seg:kggfaAllocSeg")

----------------------------------------------------------------------------------


Once it hits this problem, it causes the rest of the chain to fail on the index rebuilds.
Sometimes the user queries also fail in the BEx Web front-end. The user sees a similar error to those shown above.

Using V$SGASTAT I was able to take snapshots of the Shared Pool usage via a Windows scheduled task.
I traced the problem to the memory allocation of “obj stat memo” in the Shared Pool of the database.
It’s constantly growing, and although the “free memory” reduces slightly, it doesn’t seem to correlate to the increase in “obj stat memo” usage.
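For reference, a minimal form of the kind of snapshot query involved (run from sqlplus or similar on a schedule; which rows you log is up to you) looks like this:

select pool, name, bytes
  from v$sgastat
 where pool = 'shared pool'
   and name in ('obj stat memo', 'free memory');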

So next step was looking on Oracle Support.
I found bug 9737897, which is marked as valid *up to* 10.2.0.4, but it does not state that it has actually been fixed in 10.2.0.5.
The bug description says: “An SGA memory leak is possible for “obj stat memo” and “obj htab chun”
memory if a segment is dropped and recreated many times.”

The bug note recommends querying x$ksolsfts (see here about this table https://yong321.freeshell.org/computer/x$table.html).
The following SQL is recommended by Oracle:

select distinct fts_objd, fts_objn from x$ksolsfts x
where not exists (select 1 from obj$ o
where o.dataobj#=x.fts_objd and o.obj#=x.fts_objn);

The note says that a high number of records returned could indicate that you are seeing the symptoms of the bug.
I was seeing over 160,000 records in a database that had been restarted 24 hours prior.

I produced a lovely Excel chart showing the snapshot times against the memory use of the “obj stat memo” component.
The chart showed significant increases (step ups) of approximately 10MB at specific points in time.
I was then able to search SM37 using the “extended” option, to show background jobs active during the memory “step ups” identified by the chart.
Generally I saw 10-20 jobs active at the time, but the correlation showed that more often than not, the “step ups” were at near enough the same time as a BI_PROCESS_INDEX job.

Looking in the background job log, I was able to identify the process chain and the type of index/load that was being performed.
The indexes in question were FI cube related. They were quite large indexes and they were bitmap indexes (“CREATE BITMAP INDEX…”).
Each night these indexes got re-built after a load.
The re-build command was visible in the background job log and included “COMPUTE STATISTICS”.

So I had successfully identified segments that were being dropped and re-created regularly, which matched the Oracle bug description.

At the bottom of the bug note, it states:

“Note:
If you suffer “obj stat memo” memory leak problem in 10.2.0.5, you can try
_object_statistics = false as a workaround, or following init parameter to see
if it solves problem.
_disable_objstat_del_broadcast=false”

So what does SAP say about this?
Well the following SAP notes were/are available:
1120481 – ORA-4031 – memory allocation errors for object-level stat
1070102 – High memory allocation by ‘obj stat memo’
869006 – Composite SAP note: ORA-04031

The only note of real relevance is 1120481, which has a short reference to Oracle 10.2.0.4.
The other notes tend to point the reader at just blindly increasing the Shared Pool. I tried this; it didn’t work when 900MB was being used by “obj stat memo”!

So, stuck with only one possible solution (other than upgrading to 11g), I can confirm that setting “_object_statistics” = false (alter system set “_object_statistics”=false scope=spfile;) and bouncing the database solved the problem. It completely removes the use of the “obj stat memo” memory area (it is no longer visible in V$SGASTAT).

Analysing SAP Java Memory Usage in Windows

Running SAP systems on a Windows operating system provided me with some difficulties when it came to understanding memory usage. I was used to Unix based systems and using terms like “swap file”, “virtual memory” etc.
Hopefully this article will help you diagnose SAP Java system memory issues on Windows, especially “Out Of Memory” (OOM) error situations in the SAP Java stack.
It has been written based on Windows Server 2003 64bit with SAP Netweaver 7.0 (2004s) and Oracle 10g (10.2.x).

REFERENCES:

Understanding memory usage in Windows 2000.
SAP Note 723909  v33 – Java VM Settings For J2EE 6.40/7.0
SAP Note 879377 – Adjusting SDM Java Memory Usage
SAP Note 716604  v56 – Sun J2SE Recommended Options
SAP Note 313347 – Windows Editions Memory Support
Netweaver 7.0 SR2 Java Windows Oracle Installation Guide.

WINDOWS MEMORY TERMINOLOGY

Before we go into the memory details of an SAP system on Windows, it is worth understanding the memory terminology of the Windows operating system, especially if you come from a UNIX background.

RAM – Random Access Memory, physical installed memory in the server.

Page File – An area of hard disk (or other storage medium) used for temporary storage of memory blocks. Generally this should be manually set according to the installation guide for the SAP system.

Virtual Memory – The logical memory accessed by user-space programs (Calculator or any other app); it is roughly the sum of physical RAM plus the page file size.

Kernel Memory – This is the area of virtual memory and RAM used by the operating system for device driver memory (non-user space programs).

Page Faults – When a process tries to access a portion of virtual memory that has been swapped to disk, a page fault occurs and the swapped portion is read back into RAM from disk (potentially displacing an existing memory page).

Page – A block of memory usually around 4K in size.

DETERMINE OS MEMORY

We need to first determine how much memory is actively being used at the operating system level and how much memory is available to us.
On the Windows server, open Windows Task Manager and click on the “Performance” tab:

This will show the current memory usage of the entire server.
Ignore the line graphs and histograms as the bottom section of the pane is more important.
The following sections are described:

Physical Memory – Total: Total amount of RAM installed in the server.
Physical Memory – Available: RAM available for processes.
Physical Memory – System Cache: RAM being used by the file cache.
Commit Charge – Total: Size of virtual memory in use.
Commit Charge – Limit: Maximum amount of virtual memory.
Commit Charge – Peak: Virtual memory usage high-water mark.
Kernel Memory – Total: Paged and non-paged memory used by the operating system’s kernel, e.g. device drivers etc.
Kernel Memory – Paged: Virtual memory set aside for the kernel.
Kernel Memory – Nonpaged: Physical RAM set aside for the kernel.

The most important to watch when measuring memory usage of an SAP system are the following:

Commit Charge – Peak: This is the high water mark for memory usage on the server. If this is close to the maximum available virtual memory (Commit Charge – Limit), then you could be suffering performance issues caused by excessive paging to/from hard disk.

Physical Memory – System Cache: This is the file system cache that stores data accessed from the hard disk in RAM for more efficient data retrieval.
This value is maintained by the operating system but can be influenced by setting the performance options in “My Computer -> Properties -> Advanced -> Settings (Performance) -> Advanced” shown below:

This is also related to (or rather overridden by) the “Maximize data throughput for network applications” setting in the “File and Printer Sharing for Microsoft Networks” properties of the local network connections in Control Panel:

MANAGE OS MEMORY

Before you try and tune the memory in the SAP system, you should first explore all avenues of optimising the amount of RAM available in the operating system.
We can see that there are three factors that affect the amount of RAM available for use with programs:

1, Other programs running.
2, The file system cache.
3, Device drivers loaded in the system (kernel).

Reducing the number of running programs and services is a good way of freeing up memory.
The amount of memory used by a specific program or service can be seen in Task Manager on the “Processes” tab:

Only the applications needed for the SAP system and its management should be running.
The svchost.exe processes host services that are visible in the Services applet under Administrative Tools in Windows Control Panel.
Disabling redundant services will free up RAM.

You should seek to reduce the file system cache as much as possible.
Since the SAP system is not a file server, any file system level buffering of data is only wasting resources. This is especially true for the Oracle database which buffers read data in memory anyway, which is then potentially buffered again in SAP.

You can reduce the system cache by selecting “Maximize data throughput for network applications” as discussed in the previous section. This should be done on all servers within the SAP landscape (apart from specific cache servers, subject to SAP recommendations).
This is generally recommended by SAP in the installation guide for the Netweaver system.

Lastly, disable any redundant devices from the device manager.
These devices require kernel memory, some of which is taken directly from RAM.
You can see the memory ranges allocated to devices in the device manager accessed from the “Computer Management -> Device Manager” option from the Administrative Tools applet in Windows Control Panel.
Simply select “View -> Resources by Connection” in the Computer Management panel:

Expand the “Memory” item from the tree on the right and the devices will be listed with their memory address ranges:

The address ranges are shown in hexadecimal, e.g. A0000000 – AFFFFFFF, and represent bytes. The size of a range is the difference between the two values plus one: A0000000 to AFFFFFFF covers 0x10000000 bytes, which is 268,435,456 bytes in decimal. Divide by 1024 repeatedly to convert to kilobytes, megabytes or gigabytes (256 MB in this example).

Devices which are not required can be disabled (right click and choose “Disable”).

DETERMINE SAP JAVA SYSTEM MEMORY

The memory allocated to an SAP system comprises three parts:

1, The Database instance memory.
If the database is on the same server as the SAP system, then some of the operating system memory will be taken up by the database itself.

2, The ABAP system memory.
If the SAP system is a dual stack system with an ABAP instance also on the same server then some memory will also be taken up by the ABAP instance.

3, The Java stack memory.
The Java stack mainly comprises three (sometimes four) running Java Runtime Environments (server node(s), dispatcher and SDM).

The main memory of the server must be divided between these three main components of the SAP system.

Since we are concentrating on the Java stack only, we will assume that SAP recommendations were followed correctly for the setup of the database and ABAP components.

Using Windows Task Manager it is possible to determine the amount of memory in use by the SAP Java system.
The jlaunch.exe processes represent the Java Runtime Environments of the Java stack and therefore the memory usage for each process equates to the current virtual memory usage for the SAP processes:

In the Windows Task Manager you must change the view to be able to see the process ID of a process:

Ensure that you select the following columns:

PID,
Memory Usage,
Peak Memory Usage,
Page Faults
Virtual Memory Size,
Paged Pool,
Non-paged Pool

With these columns selected, we can get a far better overview of the memory usage for the SAP components:

You can now see that the actual virtual memory usage (VM Size) of the PID 5144 (what we think is the SAP Java server process) is ~1.7GB in size (1,678,188 K).

The sum of the VM Size column should roughly add up to the “Commit Charge – Total” value displayed on the “Performance” tab.

Generally you can guess that the jlaunch processes with the highest memory usage will be the SAP Java server processes (since they have the largest memory settings out of the box), followed by the dispatcher and finally the SDM (with the smallest default settings).

For confirmation, it is possible to find the exact process id using the jcmon.exe program.
Use option 30 (Shared Memory Menu), then select option 2 (Display Process Data):

The running components will be displayed and we can confirm that PID 5144 is in fact the server0 process:

DETERMINE SET MEMORY VALUES

We have determined what virtual memory the SAP Java processes are actually using but we need to see the configuration settings of the Java processes so we know what to expect at startup and maximum use.

There are a number of different areas of memory for a Java Runtime Environment.
The following is a very brief overview:

Heap Size: The total amount of heap memory available to a Java Runtime Environment process (this excludes the resident memory for the binary itself). The minimum heap size is set using the “-Xms” parameter.

New Size: Also known as the “young generation”; the part of the heap in which newly created Java objects are allocated. The new size is set using the “-XX:NewSize” parameter.

Perm Size: The “permanent generation”; the area of memory used for loaded classes and other data that the JVM deems “permanent”. (This is distinct from the “old” or “tenured” generation, into which objects that survive a number of garbage collections are promoted.) The perm size is set using the “-XX:PermSize” parameter.

Any Java objects that are no longer referenced are cleared from memory during the garbage collection cycles, which are either time-triggered or based on a percentage of free heap size.

The sections of memory and the total heap size itself can be bounded using the “-XX:MaxNewSize” and “-XX:MaxPermSize” arguments and “-Xmx” (for the heap itself) passed to the JRE.
The JRE will not break these bounds.
If the amount of heap allocated is not enough for the running JRE, it will constantly try to garbage collect more aggressively. This can be seen in the developer trace files, e.g. dev_server0, dev_dispatcher and dev_sdm.

If the JRE is still unable to allocate enough memory, then a java.lang.OutOfMemoryError will be thrown. If this is visible in the dev_server0 trace file, then this is (more than likely) the JRE that threw the error and was suffering the problem.

If no maximum heap size is set using the -Xmx parameter, then the JRE will attempt to expand the heap dynamically when required (it may not shrink back).
If the Windows virtual memory is exhausted, then the JRE will eventually throw a java.lang.OutOfMemoryError and Windows may throw its own error.

Now that we understand roughly how JRE memory is utilised, we can look at the settings for the respective SAP JREs for the server, dispatcher and SDM.

The JRE server0 memory allocation can be seen either through the ConfigTool, by selecting “Cluster-Data -> Instance ID XXXXXX -> Server” and checking the parameters on the right hand side:

Or by looking in the jlaunch process trace file:

This concludes the analysis of memory usage for the Java stack.
Using this information you should now be able to determine if this meets the recommendations by SAP.

SAP JAVA STACK MEMORY RECOMMENDATIONS

During the writing of this article, a number of SAP notes became significant.
Generally SAP recommends certain settings during installation of the SAP software, and then at a later date releases a note that details changes to the installation settings.

The base install of SAP Netweaver 7.0 Java (SR3) seems to install with the following settings:

Server:
-Xms 1024m
-NewSize 171m
-MaxNewSize 171m
-PermSize 256m
-MaxPermSize 256m

Dispatcher:
-Xms 170m
-NewSize 57m
-MaxNewSize 57m

SDM:
-Xms 256m

The installation documentation specifically states that the sizing of the Windows page file is essential for correct operation of the SAP system.
The document states in the section “3.1.2 Requirements Checklist for a Central System”:

Minimum RAM:
1.5 GB
To check RAM, in the Windows Explorer choose Help About Windows.
Paging File Size: 1 times RAM plus 8 GB

This would imply that on a system with 10GB of RAM, the page file should be 18GB in size. This would give a total virtual memory size of 28GB.

According to note 723909, SAP’s latest recommendations for sizing the JRE memory are as follows:

For 32 bit platforms we recommend to start the server nodes with 1GB
whereby the initial and the maximal heap size should be equal: -Xmx1024m
-Xms1024m
Higher values may cause troubles, for example see notes 736462 and 664607
about Windows DLLs preventing big java heap.
We recommend using additional server nodes instead of huge heaps.

For 64 bit platforms (that means not only your OS but also your JDK
supports 64 bit) we recommend using 2GB: -Xmx2048m -Xms2048m
Take into account while planning your productive landscape: for NW 7.0
(04s) there is a general recommendation to use 64bit systems, see note
996600.
2.2 For dispatcher nodes it’s sufficient to set the heap size to 171m on 32
bit and to 256m on 64 bit platforms. There is no need to apply the
parameters mentioned under 4-9 below.
Default size of the young generation (one third of the heap) is ok for
dispatchers.
If the instance contains more than three server nodes we recommend to add
50MB of dispatcher heap for each additional server node.
2.3 The max heap size (plus max perm size) of all Java server nodes
must fit completely into the physical memory in order to avoid heavy
paging on OS level. You must also consider the space needed for OS
and other processes.
For each JavaVM on the server, all Java memory must fit completely
into physical memory (no paging)!
Therefore the rule of thumb for number of server nodes for SUN and HP
JavaVM with 1 GB Heap each will be
#ServerNodes = (AvailableMemory / 1.5 GB)
and factor 2.5 should be used instead of 1.5 for 64 bit with 2GB heap size.
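To put that rule of thumb into numbers (illustrative figures only): if roughly 12 GB of memory remains for the Java stack after the OS, database and any ABAP instance, the formula gives 12 / 1.5 = 8 server nodes with 1 GB heaps on 32-bit, or 12 / 2.5 ≈ 4 server nodes with 2 GB heaps on 64-bit.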

The additional note 716604 recommends some specific settings for the SUN JVM:

For 6.40/7.00:
For the heapsize and the size of the new generation please take care of the
recommendations of note 723909.
For Windows (ia64 and x86 (64Bit)) and Linux/ia64 (Itanium) platforms set
the stack size to at least 2MB for each serverID, the dispatcher and the
Java based tools:
-Xss2m
Additionally, increase the stack size of the JIT compiler threads at least
on the server nodes of the J2EE Engine to 4MB on these platforms with
following parameter (effective on Windows starting with 1.4.2_17 b12 resp.
1.4.2_18):
-XX:CompilerThreadStackSize=4096
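Pulling the quoted values together, the Java parameter list of a 64-bit server node would therefore include at least something like the line below (a sketch assembled only from the figures quoted above; the remaining options from notes 723909 and 716604 still apply as per the current note versions):

-Xmx2048m -Xms2048m -Xss2m -XX:CompilerThreadStackSize=4096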

Any modifications to the Java stack should be proven in the DEV and QA environments before application to production.