This blog contains experience gained over the years of implementing (and de-implementing) large scale IT applications/software.

Secure ReadyNAS Duo (v1) ADMIN Share

If you have a ReadyNAS Duo, you’re happy with your setup, and you’re now sharing your shares out through your router over the internet, you need to be aware that any old hacker can try to access your ADMIN share (e.g. https://<your-readynas>/admin).

I use mine in exactly that way but don’t want Mr A.Hacker trying out a myriad of passwords on my ADMIN share just because my public shares have “Netgear ReadyNAS” plastered all over the front page (a tip for another day I feel).

Instead, if you’re comfortable using SSH (there is also a way to do this using a FrontView config backup: edit the file it contains and restore it), you can edit your Apache httpd.conf configuration file so that access to the ADMIN share is restricted to a host or hosts on your local home network only.

Steps:

1, Log into your readynas via SSH as root.
2, Backup your old config file:

# cp -p /etc/frontview/apache/httpd.conf  /etc/frontview/apache/httpd.conf.bak
3, Use ‘vi’ to edit the httpd.conf:

# vi /etc/frontview/apache/httpd.conf
4, Change the sections as follows:

<Location /admin>
DirectoryIndex index.html
Options ExecCGI
AuthType Basic
AuthName "Control Panel"
require user admin

# block external admin.
Order Deny,Allow
Deny from all
Allow from 192.168 <<< INSERT YOUR LOCAL NETWORK IP ADDRESS SUBNET HERE
</Location>

and

<Location /get_handler>
SetHandler perl-script
PerlHandler get_handler
PerlSendHeader On
Options ExecCGI

# Order allow,deny
# Allow from all
AuthType Basic
AuthName "Control Panel"
require user admin

# block external admin.
Order Deny,Allow
Deny from all
Allow from 192.168 <<< INSERT YOUR LOCAL NETWORK IP ADDRESS SUBNET HERE
</Location>

plus

<Location /dir_list>
AuthType Basic
AuthName "Control Panel"
require user admin
Options ExecCGI
#Allow from all

Order Deny,Allow
Deny from all
Allow from 192.168 <<< INSERT YOUR LOCAL NETWORK IP ADDRESS SUBNET HERE
</Location>

5, Save the changes with:

ZZ (hold Shift and press Z twice), or type :wq and press Enter.

6, Restart your readynas:

# shutdown -r now
7, Test from your local network that you can access the ADMIN share:

https://<readynas IP>/admin

8, Test from the internet that you can’t access the ADMIN share:

https://<ISP IP>/admin

You should see an HTTP 403 FORBIDDEN error (a quick curl check is sketched below).
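For the curl check, something like the following should work from a machine outside your network (assuming curl is installed there; -k skips certificate validation since the ReadyNAS typically presents a self-signed certificate, and -s, -o and -w make curl print just the HTTP status code). Keep substituting your real public address for the <ISP IP> placeholder:

curl -k -s -o /dev/null -w "%{http_code}\n" "https://<ISP IP>/admin"
403

Run from inside your home network against the ReadyNAS IP instead, the same command should return 401 (the basic authentication challenge) rather than 403.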

That’s it.
If you made an error, you can restore your config from the backup file you took:

# cp -p /etc/frontview/apache/httpd.conf.bak /etc/frontview/apache/httpd.conf
and then restart your readynas.
Don’t forget to check the config after you make any changes to shares / firmware etc.
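A quick way to do that check (assuming the same file path used above) is to grep for the Allow lines after every firmware update or share change and confirm your subnet is still listed under the /admin, /get_handler and /dir_list sections:

# grep -n "Allow from" /etc/frontview/apache/httpd.conf

If your entries have gone (the file may be rewritten by FrontView), re-apply the edits or restore them from your backup copy.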

HANA Memory Allocation Limit Change With hdbcons

The SAP HANA command line utility hdbcons can be used to administer the HANA system directly from the Linux operating system command line.

It’s a very powerful utility and I’m sure that, as time goes by, it will provide invaluable information and functionality for HANA DB administrators to help fine-tune the database.

Log in as the <sid>adm Linux user and run the hdbcons command.
It will automatically connect to the indexserver for that instance (if it’s running).
You can then list the available server commands:

hana01:/usr/sap/H10/HDB10> hdbcons
SAP HANA DB Management Client Console (type '?' to get help for client commands)
Try to open connection to server process 'hdbindexserver' on system 'H10', instance '10'
SAP HANA DB Management Server Console (type 'help' to get help for server commands)
Executable: hdbindexserver (PID: 4092)
[OK]

>
> help

Available commands:
ae_checksize – Check and Recalculate size of columns within the Column Store
authentication – Authentication management.
bye – Exit console client
cd – ContainerDirectory management
checktopic – CheckTopic management
cnd – ContainerNameDirectory management
context – Execution context management (i.e., threads)
converter – Converter management
crash – Crash management
crypto – Cryptography management (SSL/SAML/X509).
deadlockdetector – Deadlock detector.
debug – Debug management
distribute – Handling distributed systems
dvol – DataVolume management
ELF – ELF symbol resolution management
encryption – Persistence encryption management
event – Event management
exit – Exit console client
flightrecorder – Flight Recorder
help – Display help for a command or command list
log – Show information about logger and manipulate logger
mm – Memory management
monitor – Monitor view command
mproxy – Malloc proxy management
mutex – Mutex management
output – Command for managing output from the hdbcons
pageaccess – PageAccess management
page – Page management
pcm – Performance Counter Monitor management
profiler – Profiler
quit – Exit console client
replication – Monitor data and log replication
resman – ResourceManager management
runtimedump – Generate a runtime dump.
savepoint – Savepoint management
snapshot – Snapshot management
statisticsservercontroller – StatisticsServer internals
statreg – Statistics registry command
stat – Statistics management
tablepreload – Manage and monitor table preload
tracetopic – TraceTopic management
trace – Trace management
version – Version management
vf – VirtualFile management

For each of the above commands, additional help can be obtained by re-issuing “help” plus the command name e.g.:

> help mm

Synopsis: mm <subcommand> [options]: Memory management
Available subcommands:
   – list|l [-spf] [-l <level>] [<allocator_name>]: List allocators recursively
      -C: work on composite allocators
      -s: Print some allocator statistics (use stat command for full stats)
      -p: Print also peak usage statistics
      -f: Print allocator flags
      -S: Sort output descending by total size
      -l <level>: Print at most <level> levels
   – flag|f [-C] <allocator_name> [-rsdt] <flag_set>: Set allocator flags
      -C: work on composite allocators
      -r: Set flags recursively
      -s: Set given flag(s), default
      -d: Delete given flag(s)
      -t: Toggle given flag(s)
      -a: Also apply changes to associated composite allocators (not allowed in context with ‘-C’)
   – info|i [-f] <address>: Show block information for a given address
      -f: Force block info, even if block not known
   – blocklist|bl [-rtsS] <allocator_name> [-l <count>]: List of allocated blocks in an allocator
      -C: work on composite allocators
      -r: Show blocks in sub-allocators recursively
      -s: Show also allocation stack traces, if known. Cannot be combined with option '-t'.
      -S: Show only blocks with known allocation stack traces. Cannot be combined with option '-t'.
      -t: Show top allocation locations sorted by used size (descending).
            Cannot be combined with options '-s' or '-S'.
            The default number of printed locations is 10; this can be changed by the '-l' option.
      -l <count>: Limit to <count> locations. Valid only if combined with option '-t'. Unlimited in case <count> = 0.
   – maplist|ml: List all mappings used by the allocator
   – mapcheck|mc [-sln]: Check all mappings used by the allocator
      -s: Also show all system mappings
      -l: Also show own mappings as in maplist
      -n: Suppress output of known alloc stack traces for unaccounted memory
   – mapexec|me [-u]: List all known executable module mappings
      -u: Update list of known executable modules
   – reserveallocator|reserveallocators: Print information about OOM Reserve Allocators
   – top [-C] [-sctr] [-l <count>] [-o <fn>] [<allocator_name>]: List top users of memory, <allocator_name> is needed except for option ‘-t’
      -C: work on composite allocators
      -s: Sort call stacks by used size (default)
      -c: Sort call stacks by call count
      -t: Throughput, sort nodes by sum of all allocations
      -r: work recursively also for all suballocators
      -l <count>: Only output top <count> stack traces (0=unlimited)
      -o <fn>: Specify output file name
   – callgraph|cg [-sctrk] [-l <count>] [-o <fn>] [<allocator_name>]: Generate call graph, <allocator_name> is needed except for option ‘-t’
      -C: work on composite allocators
      -s: Sort nodes by used size (default)
      -k: Output in kcachegrind format
      -c: Sort nodes by call count
      -t: Throughput, sort nodes by sum of all allocations
      -r: Work recursively also for all suballocators
      -d: Do full demangle (also parameters and template arguments)
      -l <count>: Only output top <count> functions (0=unlimited)
      -o <fn>: Specify output file name (for .dot graph)
   – resetusage|ru [-r] [<allocator_name>]: Reset call stack usage counters, <allocator_name> is needed except for option ‘-r’ or ‘-C’
      -C: work on composite allocators
      -r: Work recursively also for all suballocators
   – limit [-s <size>[K|M|G|T]]: Get current allocation limit in bytes
      -s <size>[K|M|G|T]: Set current allocation limit in bytes, KB, MB or GB
   – globallimit [-s <size>[K|M|G|T]]: get current global (if IPMM is active) allocation limit in bytes
      -s <size>[K|M|G|T]: Set current global (if IPMM is active) allocation limit in bytes, KB, MB or GB
   – garbagecollector|garbagecollection|gc [-f]: Return free segments/blocks to
     operating system
      -f: Also return free fragments in big blocks
   – ipmm: Print Inter Process Memory Management information
      -d: Print detailed information.
   – compactioninfo, ci: Print information about last compaction
   – virtual: Print information about virtual but not resident memory (linux only)
   – requested: Print information about requested allocations (reporting no overhead at all), iterates over all instances of ReportRequestedAllocators
   – blockedmemory [-s <size>[K|M|G|T]]: Get current blocked memory.
      -s <size>[K|M|G|T]: Set current blocked memory in bytes, KB, MB or GB and try to reserve this memory.
Common options and arguments:
   – <allocator_name>: Name of the allocator to address
   – <flag_set>: Comma-separated list of following flags: ffence (fence front, writes the pattern 0xaaccaacc in front of the allocated block),
     bfence (fence back, writes the pattern 0xaaccaacc behind the allocated block), astrace (stack trace at allocation),
     dstrace (stack trace at deallocation), areset (overwrite at allocate with pattern 0xd00ffeed),
     dreset (overwrite at deallocate with pattern 0xdeadbeef), all, none, default, !emptyok (allow
     non-empty destruction), preventcheck (prevent changing check flags)
     atrace (trace at allocation), dtrace (trace at deallocation),
      malf (malfunction) or their 2-letter shortcuts
[OK]

>

As an example, the current global allocation limit can be displayed:

> mm globallimit
Current global allocation limit=15032385536B.
[OK]

We can adjust the global allocation limit online by issuing the additional “-s” parameter:

> mm globallimit -s 16G
Current global (if IPMM active) allocation limit: 17179869184B
[OK]

>

Now re-check:

> mm globallimit
Current global allocation limit=17179869184B.
[OK]

>

What’s the current global allocation limit in global.ini, you might ask?

hana01:/usr/sap/H10/HDB10> grep '^global' /usr/sap/H10/SYS/global/hdb/custom/config/global.ini

global_allocation_limit = 14336

It hasn’t changed.

So we have confirmed that we can affect the configuration of the HANA system in real time using hdbcons, but the change is not written back to global.ini, so it would not survive a restart.
You can also check in HANA Studio on the landscape page.
Since each service (process) takes its memory allocation percentage from the global allocation, this will automatically change in real time too.
This means that for analysing “what-if” style scenarios or for operational acceptance testing, you can effectively present a set of configuration values and work through them almost automatically (a rough sketch follows below). Invaluable for the test automation guys in the field.
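Here is that sketch. It assumes hdbcons on your revision accepts the command string as a command line argument (verify this first), the limit values are just examples, and it should be run as the <sid>adm user:

#!/bin/bash
# Sketch: apply a series of candidate global allocation limits online via hdbcons.
# No restart is needed, but nothing is persisted to global.ini (see above).
for LIMIT in 14G 16G 18G; do
    echo "Applying global allocation limit ${LIMIT}"
    hdbcons "mm globallimit -s ${LIMIT}"
    hdbcons "mm globallimit"   # confirm the value that is now active
    # ... run the workload / acceptance tests for this value and record the results ...
done

Remember that to make any chosen value permanent you still need to set global_allocation_limit in global.ini.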

HowTo: Find Version of SAP BWA/BIA (Accelerator)

The SAP BWA (BW Accelerator) is based on the TRex search service and uses dedicated hardware to provide an additional in-memory index search capability for an existing SAP BW system. NOTE: This is not to be confused with the SAP HANA DB, which is also in-memory; HANA is a more advanced, fully rounded product and is not related to TRex.

Scenario: You may know there is a BWA connected to your BW system, but you don’t know where it is or what version it is. You may need this information in preparation for an upgrade.
The BWA details can be seen from the BW system via transaction TREXADMIN.
The “Summary” tab shows all the revision details and the make and model of the dedicated hardware:

[Screenshot: TREXADMIN “Summary” tab showing the revision details and hardware make/model]

Additional version information can be seen on the “Version” tab, where you can also see any additional load balancing nodes in the TRex landscape:

[Screenshot: TREXADMIN “Version” tab showing version details and the load balancing nodes in the TRex landscape]

Connectivity to TRex is performed either via an RFC server on the TRex server (BWA 700) or via the ICM (BWA 720+).
The TRex Web Service can be accessed via “https://<trex server>:3xx05/TREX”.
The “Connectivity” tab allows you to perform connectivity tests for RFC and HTTP to the BIA.
For RFC based connections, once registered at the gateway, you can see the detail in transaction SMGW (select “Goto -> Logged on Clients”):

[Screenshot: SMGW “Logged on Clients” list showing the registered TRex connections]

You can see the TRex connections based on the “TP Name” column:

[Screenshot: SMGW “TP Name” column identifying the TRex connections]

For ICM based connections, you will see the HTTP requests going out via the ICM in transaction SMICM.
For SAP notes searches, the component for the BWA is BC-TRX-BIA.
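For a quick connectivity check outside the SAP GUI, you can also hit the TRex Web Service URL mentioned above with curl from any host that can reach the BWA. This is just a sketch: keep the <trex server> and xx placeholders as given above and substitute your own values, and -k skips certificate validation in case a self-signed certificate is in use:

curl -k -s -o /dev/null -w "%{http_code}\n" "https://<trex server>:3xx05/TREX"

Any HTTP status code coming back at least confirms the service is reachable over the network; a connection error or timeout points to a network or ICM problem.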

SAP PI RSXMB_DELETE_MESSAGES – Copy job failed

Whilst implementing a process for the “switch” deletion of XML messages from the SAP PI persistence layer, you may at some point end up in a situation where the deletion job simply fails to run.

The contents of the job log state: “Repeat terminated copy job first”. No further information is displayed and there are no errors in the system log (SM21).

By executing the ABAP program RSXMB_DELETE_MESSAGES (the same program as the delete job step) in SE38, you can then double-click the error at the bottom of the screen:

Finally you will get an explanation and a process for resolution:

“Call transaction SXMB_MONI and choose Job Overview. Repeat the incomplete job. If the job cannot be completed due to database errors, first correct the database errors, and then repeat the job again.”

It seems that the jobs displayed using the “Job Overview” in SXMB_MONI are not simply a view of SM37 jobs, but a view of the underlying job control and enqueue function specific to the messaging framework.

SAP PI – Persistence Layer Deletion – Oracle Stats

When configuring persistence layer deletion in SAP PI 7.0, you should be aware that the very first time you enable the “switch” procedure and execute the delete jobs, the new tables will be created in the database.

These tables (SXMS*2) will NOT have any database statistics on them.

Therefore, once the switch procedure is completed, your PI system may run very slowly indeed until your stats gathering job has been executed in DB13!

Either schedule a one-off stats gathering run after the very first “switch” (a sketch is shown below), or schedule the stats gathering frequently (e.g. daily).
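The sketch below assumes an Oracle-based PI system and that you run it on the database server as an OS user with SYSDBA access; the owner LIKE 'SAP%' filter is also an assumption, so check your actual SAP schema owner first:

sqlplus -s / as sysdba <<'EOF'
-- Gather fresh optimizer statistics for the newly created SXMS*2 switch tables.
-- The owner filter is an assumption: adjust it to your actual SAP schema owner.
begin
  for t in (select owner, table_name
              from dba_tables
             where owner like 'SAP%'
               and table_name like 'SXMS%2') loop
    dbms_stats.gather_table_stats(ownname => t.owner,
                                  tabname => t.table_name,
                                  cascade => true);
  end loop;
end;
/
EOF

Alternatively, just make sure your regular DB13 stats gathering job runs shortly after the very first switch.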