Connections 2.5 – WebSphere Tips

WebSphere Tip : 1

When clustering Connections you may encounter issues when the wizard attempts to federate the node into your deployment manager. This is a known WAS issue: the JVM suffers out-of-memory errors (if you delve deep into the addNode log file / dmgr log you will find them).

There is a quick workaround that can solve this:

Increasing the WAS HEAP size
In order for the addNode command to work correctly when running the cluster wizard, please do the following:

Connections servers
On each of the Connections servers, browse to the WebSphere bin directory and edit the addNode file (.bat or .sh depending on your OS).

Insert the line set WAS_HEAP=-Xms256M -Xmx1024M at the top of the file to set a variable (for example, under the set CMD_NAME_ONLY line):

set CMD_NAME_ONLY=%~n0
set WAS_HEAP=-Xms256M -Xmx1024M

At the bottom of the file, find the "%JAVA_HOME%\bin\java" line and add the variable to it:

"%JAVA_HOME%\bin\java" -Dcmd.properties.file=%TMPJAVAPROPFILE% %WAS_HEAP% %WAS_DEBUG% %WAS_TRACE% %CONSOLE_ENCODING% "%CLIENTSOAP%" "%JAASSOAP%" "%CLIENTSAS%" "%CLIENTSSL%" %USER_INSTALL_PROP% "-Dwas.install.root=%WAS_HOME%" "-DWAS_HOME=%WAS_HOME%" "com.ibm.wsspi.bootstrap.WSPreLauncher" -nosplash -application "com.ibm.ws.bootstrap.WSLauncher" "com.ibm.ws.runtime.NodeFederationUtility" "%CONFIG_ROOT%" "%WAS_CELL%" "%WAS_NODE%" %*

Save the file.

Deployment Manager
On the Deployment Manager machine, open the Administrative Console.
Navigate to System Administration > Deployment Manager > Process Definition > Java Virtual Machine.
Specify 256 for Initial Heap Size and 1500 for Maximum Heap Size.

Save your changes and restart the Deployment Manager.

This should resolve the issue – you may need to increase the Dmgr maximum heap a little further, but I found 1000 was just not enough and 1500 did the trick.
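If you would rather script the Deployment Manager change than click through the console, a minimal wsadmin (Jython) sketch along these lines should do it – the dmgr node name is just a placeholder, so substitute your own, and restart the Deployment Manager afterwards as above:

# set_dmgr_heap.py - run with: wsadmin.bat (or .sh) -lang jython -f set_dmgr_heap.py
# Sketch only: 'myCellManager01' is a placeholder for your dmgr node name.
dmgr = AdminConfig.getid('/Node:myCellManager01/Server:dmgr/')
jvm = AdminConfig.list('JavaVirtualMachine', dmgr).splitlines()[0]

# 256 MB initial / 1500 MB maximum, matching the console values above
AdminConfig.modify(jvm, [['initialHeapSize', '256'], ['maximumHeapSize', '1500']])
AdminConfig.save()
print 'Maximum heap is now', AdminConfig.showAttribute(jvm, 'maximumHeapSize')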

When you run the cluster wizard now, it should run as expected 🙂

WebSphere Tip : 2

A handy tip to note if you are not a huge WebSphere guru.

To run commands from the command line without needing the -username and -password arguments, configure SOAP connector security.

Every WebSphere profile has a file called soap.client.props which holds SOAP connector client information. The path to the file is /profiles/<profile name>/properties under the WebSphere install root.

SOAP connector security is disabled by default.

When it is enabled with the correct information, you can run the standard WAS start, stop and status commands, for instance, just by running the .bat or .sh file without passing the extra credentials.

### EXAMPLE ###

###############################################################################
#
# JMX SOAP Connector Client Properties File
#
# This file contains properties that are used by the JMX SOAP Connector Client
# of the WebSphere Application Server product. SOAP Connector executes on WebSphere
# java servers and client systems with java applications that access WebSphere servers.
#
# ** Encoding Passwords in this File **
#
# The PropFilePasswordEncoder utility may be used to encode passwords in a
# properties file. To edit an encoded password, replace the whole password
# string (including the encoding tag {...}) with the new password and then
# encode the password with the PropFilePasswordEncoder utility. Refer to
# product documentation for additional information.
#
###############################################################################

#------------------------------------------------------------------------------
# SOAP Client Security Enablement
#
# - security enabled status ( false[default], true )
#------------------------------------------------------------------------------
com.ibm.SOAP.securityEnabled=true

com.ibm.SOAP.loginUserid=wasadminuser
com.ibm.SOAP.loginPassword=wasadminpassword

#------------------------------------------------------------------------------
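A quick way to check the settings are being picked up is to connect with wsadmin without passing any credentials – wsadmin reads the same soap.client.props. This is just a sketch (the script name is mine); if the properties are wrong the connection will simply fail with an authentication error:

# check_soap.py - run with: wsadmin.bat (or .sh) -lang jython -f check_soap.py
# If soap.client.props is being used, this connects without -username/-password.
print 'Connected to node:', AdminControl.getNode()
print 'Cell:', AdminControl.getCell()

Remember you can encode the loginPassword afterwards with the PropFilePasswordEncoder utility mentioned in the comments above.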

Connections 2.5 Clustering – how to avoid some pain

All was going exactly to plan when I installed my primary node – it federated correctly, worked as expected, and I even managed to change it fairly easily to point to a different DB and shared content store. I was a very happy bunny UNTIL I decided to add node2 – then it all went “pear shaped”.

So here is a quick overview of the issue and how I have got around it – but I really want to know how this happened and whether I can do anything to prevent it in the future. I have a PMR open and IBM are trying to recreate the issue now.

I created node1 using the Connections install wizard to create a primary node – I supplied the DB info (jdbc:oracle:thin:@< my Original DB server name >:1521:conn1) and file system info (//< my Original File server name >/LotusConnectionsData/< featureName >), it clustered successfully and node1 was fine.

I then moved the DB to another machine and also moved the file system. I edited the data source info at cluster and server level (jdbc:oracle:thin:@< my NEW DB server name >:1521:conn1) and also changed the file system (< my NEW File server name >/portal_collabdata$/< featureName >) in the WebSphere Variables section of the ISC, as per the instructions in the Info Center. Node 1 has always worked as expected, even after moving these.

When I added any subsequent node, it configured the server with the original file store information (//< my Original File server name >/LotusConnectionsData/< featureName >) and defaulted back to the original DB data source (jdbc:oracle:thin:@< my Original DB server name >:1521:conn1).

If I change these manually, resync and restart the servers, they work as expected. The datasource, although it is set at cluster level, is also set at server level, so I had to change the datasource EVERYWHERE to fix the issue (as I have 4 servers per machine and 4 machines, that is a lot of editing).
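For anyone facing the same slog, a wsadmin (Jython) sketch along these lines can at least find every copy of a given datasource, whatever scope it lives at, and repoint the Oracle URL. The datasource name and the new URL below are just example values, not the real Connections ones:

# fix_datasource_urls.py - sketch only; 'activities' and the URL are example values
newUrl = 'jdbc:oracle:thin:@mynewdbserver:1521:conn1'

for ds in AdminConfig.list('DataSource').splitlines():
    if AdminConfig.showAttribute(ds, 'name') == 'activities':
        print 'Found copy at:', ds   # the config ID shows the scope (cell/cluster/node/server)
        propSet = AdminConfig.showAttribute(ds, 'propertySet')
        for prop in AdminConfig.list('J2EEResourceProperty', propSet).splitlines():
            if AdminConfig.showAttribute(prop, 'name') == 'URL':
                AdminConfig.modify(prop, [['value', newUrl]])

AdminConfig.save()
# then do a full resynchronise of each node and restart the servers, as above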

This has prompted me to ask these questions of IBM:

The WebSphere Variables for the file stores are also picking up the original path – it appears that when node1 was federated and the config was created, some kind of *template* was made from which further nodes/servers are created. As I have changed the config, the template is not getting updated (if this is how it is doing it).

Am I doing anything wrong?
If so, what?
And if not, how do I prevent this from happening in the future?

== IBM’s Response ==
I received an email back from IBM regarding the issues that I experienced after changing some settings in my cluster. The bad news is that it is a limitation; the good news is that they are going to fix it:

The customer is right, this is a limitation in the LC 2.5 install and is being addressed for the next release.

In LC 2.5, variables/datasources/providers/etc are created at the server level, then this is used as a template for additional servers…
the problem is that server level settings like this override higher (node, cluster, cell) level settings, causing the difficulty updating the customer experienced.
ideally, these settings would be at cluster level.

Since the customer has this working, they do not have to change anything, but, if they wish to simplify future changes they can do the following:

1. create cluster level variables, datasources, providers, etc
2. [optional… for testing] create a new node — this node will have all the server level settings by default
3. only if you did 2… delete the server level settings for the items you created at cluster level in step 1
note: if you don’t delete the server level settings for this new node, it would continue to use the server level settings
4. only if you did 2… test that the applications deployed on the new node behave correctly (basically you are verifying the cluster level settings)
5. after verifying (or reviewing) the cluster level settings (variables, datasources, etc), you can delete the server level items corresponding to the new cluster level items
note: if you don’t delete the server level settings for this new node, it would continue to use the server level settings
6. now, when you make changes to the cluster level variables thru the deployment manager, you just need to save changes and synchronize nodes
all the nodes and servers that don’t have node or server level instances of the same variables will get the cluster level values

Again, the order of precedence for finding variables, datasources, etc is….
first, is it defined for the Server? If yes, the server level item is used
second, is it defined for the Node?
third, is it defined for the Cluster?
fourth, is it defined for the Cell?
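To put steps 1 and 5 of IBM's suggestion into wsadmin terms, a hedged Jython sketch might look like this. The cluster name, node, server, variable name and path are all examples rather than the exact Connections 2.5 values, so adjust them for your own cell:

# cluster_scope_variable.py - sketch of the cluster-level approach described above
varName = 'ACTIVITIES_CONTENT_DIR'   # example variable name
newValue = '//mynewfileserver/portal_collabdata$/activities'   # example path

# step 1: create (or reuse) a VariableMap at cluster scope and add the variable there
cluster = AdminConfig.getid('/ServerCluster:ActivitiesCluster/')
maps = AdminConfig.list('VariableMap', cluster).splitlines()
if maps:
    varMap = maps[0]
else:
    varMap = AdminConfig.create('VariableMap', cluster, [])
AdminConfig.create('VariableSubstitutionEntry', varMap,
                   [['symbolicName', varName], ['value', newValue]])

# step 5: delete the server-level copy so the cluster value is the one that wins
server = AdminConfig.getid('/Node:node2/Server:ActivitiesServer1/')
for vm in AdminConfig.list('VariableMap', server).splitlines():
    for entry in AdminConfig.list('VariableSubstitutionEntry', vm).splitlines():
        if AdminConfig.showAttribute(entry, 'symbolicName') == varName:
            AdminConfig.remove(entry)

AdminConfig.save()
# step 6: synchronize the nodes (System administration > Nodes > Full Resynchronize)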

Wow .. This looks awesome

Today, thanks to the lovely Mr Stuart McIntyre for the tip-off, I found that Mr David Hay, IBM consultant and general technical guru and geek, has posted a guide to Portal 6.1.5 (with screenshots) .. WOW IBM, you have made the Portal interface look as cool as the Connections one does 🙂

I am downloading it now and looking forward to a play.

hello world

After faffing about for god knows how long over whether I should host my own techie blog or wiki, I have been lazy and opted for the Blogger route like a lot of my fellow Connections/Portal buddies .. so hello world and welcome to my mind 🙂

I shall do a *Dave Hay* and dump the contents of my tiny little mind here – mainly to help me remember all of those little gems that I appear to come across .. and I am sick of writing things down in notebooks and on bits of paper only to lose them 🙂

So watch this space for mad ramblings of a technical nature 🙂

shaz