All was going exactly to plan when I installed my primary node: it federated correctly, worked as expected, and I even managed to repoint it fairly easily to a different DB and shared content store. I was a very happy bunny UNTIL I decided to add node2, when it all went “pear shaped”.
So here is a quick overview of the issue and how I have worked around it. I really want to know how this happened and whether I can do anything to prevent it in the future; I have a PMR open and IBM are trying to recreate the issue now.
I created node1 using the Connections install wizard to create a primary node. I supplied the DB info (jdbc:oracle:thin:@< my Original DB server name >:1521:conn1) and file system info (//< my Original File server name >/LotusConnectionsData/< featureName >), it clustered successfully, and node1 was fine.
I then moved the DB to another machine and also moved the file system. I edited the data source info at cluster and server level (jdbc:oracle:thin:@< my NEW DB server name >:1521:conn1) and also changed the file system (< my NEW File server name >/portal_collabdata$/< featureName >) in the WebSphere variables section of the ISC, as per the instructions in the info center. Node1 has always worked as expected, even after these moves.
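Incidentally, those console edits can also be scripted. Here is a minimal wsadmin (Jython) sketch of repointing a WebSphere variable at cluster scope; the cell, cluster, variable name, and path below are hypothetical placeholders, not the actual LC 2.5 names:

```
# wsadmin (Jython) sketch: repoint a WebSphere variable at cluster scope.
# 'ConnectionsCell', 'ConnectionsCluster', 'FILES_CONTENT_DIR' and the new
# path are hypothetical placeholders -- substitute your own names.
scope = AdminConfig.getid('/Cell:ConnectionsCell/ServerCluster:ConnectionsCluster/')
for entry in AdminConfig.list('VariableSubstitutionEntry', scope).splitlines():
    if AdminConfig.showAttribute(entry, 'symbolicName') == 'FILES_CONTENT_DIR':
        AdminConfig.modify(entry, [['value', '//newfileserver/portal_collabdata$/files']])
AdminConfig.save()
```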
When I added any subsequent node, it configured the server with the original file store information (//< my Original File server name >/LotusConnectionsData/< featureName >) and defaulted back to the original DB data source (jdbc:oracle:thin:@< my Original DB server name >:1521:conn1).
If I change these manually, resynch, and restart the servers, they work as expected. The datasource, although it is set at cluster level, is also set at server level, so I had to change the datasource EVERYWHERE to fix the issue (and as I have 4 servers per machine and 4 machines, that is a lot of editing).
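If anyone else hits this, the bulk edit can at least be scripted rather than clicked through sixteen times. A rough wsadmin (Jython) sketch, assuming Oracle datasources that keep the connection URL in the standard 'URL' resource property ('olddbhost' and 'newdbhost' are placeholders for the original and new DB servers):

```
# wsadmin (Jython) sketch: find every server-scoped datasource whose URL
# still points at the old DB host and repoint it at the new one.
oldHost = 'olddbhost'
newHost = 'newdbhost'
for server in AdminConfig.list('Server').splitlines():
    for ds in AdminConfig.list('DataSource', server).splitlines():
        propSet = AdminConfig.showAttribute(ds, 'propertySet')
        if not propSet:
            continue
        for prop in AdminConfig.list('J2EEResourceProperty', propSet).splitlines():
            if AdminConfig.showAttribute(prop, 'name') == 'URL':
                url = AdminConfig.showAttribute(prop, 'value')
                # use find() rather than 'in' for older Jython levels
                if url.find(oldHost) >= 0:
                    AdminConfig.modify(prop, [['value', url.replace(oldHost, newHost)]])
AdminConfig.save()
```

After saving, the nodes still need to be synchronized and the servers restarted, as above.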
The WebSphere variables for the file stores are also picking up the original path. It appears that when node1 was federated and the config was created, some kind of *template* was made, from which further nodes/servers are created. As I have changed the config since then, the template is not getting updated (if that is indeed how it works).
This has prompted me to ask IBM these questions:
Am I doing anything wrong?
If so, what?
And if not, how do I prevent this from happening in the future?
== IBM’s Response ==
I received an email back from IBM regarding the issues that I experienced after changing some settings in my cluster. The bad news is that it is a limitation; the good news is that they are going to fix it:
The customer is right, this is a limitation in the LC 2.5 install and is being addressed for the next release.
In LC 2.5, variables/datasources/providers/etc. are created at the server level, then this is used as a template for additional servers…
The problem is that server-level settings like this override higher (node, cluster, cell) level settings, causing the difficulty updating that the customer experienced.
Ideally, these settings would be at cluster level.
Since the customer has this working, they do not have to change anything, but if they wish to simplify future changes they can do the following [a scripted sketch of these steps follows the list]:
1. Create cluster-level variables, datasources, providers, etc.
2. [optional… for testing] Create a new node; this node will have all the server-level settings by default.
3. Only if you did step 2: delete the server-level settings for the items you created at cluster level in step 1.
note: if you don’t delete the server-level settings for this new node, it will continue to use the server-level settings
4. Only if you did step 2: test that the applications deployed on the new node behave correctly (basically you are verifying the cluster-level settings).
5. After verifying (or reviewing) the cluster-level settings (variables, datasources, etc.), you can delete the server-level items corresponding to the new cluster-level items.
6. Now, when you make changes to the cluster-level variables through the deployment manager, you just need to save changes and synchronize nodes.
All the nodes and servers that don’t have node- or server-level instances of the same variables will get the cluster-level values.
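For the record, steps 1, 5 and 6 could be scripted along these lines in wsadmin (Jython). The cell, cluster, variable name, and path below are hypothetical placeholders, and the same pattern would apply to the datasources and providers:

```
# wsadmin (Jython) sketch of IBM's steps 1, 5 and 6 for a single variable.
# Assumes the variable does not already exist at cluster scope.
name  = 'FILES_CONTENT_DIR'
value = '//newfileserver/portal_collabdata$/files'

# Step 1: create the variable at cluster scope.
cluster = AdminConfig.getid('/Cell:ConnectionsCell/ServerCluster:ConnectionsCluster/')
varMap = AdminConfig.list('VariableMap', cluster)
if not varMap:
    varMap = AdminConfig.create('VariableMap', cluster, [])
AdminConfig.create('VariableSubstitutionEntry', varMap,
                   [['symbolicName', name], ['value', value]])

# Step 5: remove the overriding server-level entries everywhere.
for server in AdminConfig.list('Server').splitlines():
    for entry in AdminConfig.list('VariableSubstitutionEntry', server).splitlines():
        if AdminConfig.showAttribute(entry, 'symbolicName') == name:
            AdminConfig.remove(entry)

# Step 6: save and synchronize every active node.
AdminConfig.save()
for node in AdminConfig.list('Node').splitlines():
    nodeName = AdminConfig.showAttribute(node, 'name')
    sync = AdminControl.completeObjectName('type=NodeSync,node=' + nodeName + ',*')
    if sync:  # the dmgr node has no NodeSync MBean; skip it
        AdminControl.invoke(sync, 'sync')
```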
Again, the order of precedence for finding variables, datasources, etc. is:
First, is it defined for the server? If yes, the server-level item is used.
Second, is it defined for the node? If yes, the node-level item is used.
Third, is it defined for the cluster? If yes, the cluster-level item is used.
Fourth, is it defined for the cell?
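To see which scope actually wins for a given server, that lookup order can be emulated in wsadmin (Jython). In this sketch the cell, node, server, cluster, and variable names are all hypothetical placeholders; it relies on the fact that variables at each scope live in that scope’s variables.xml, whose path appears in the configuration object ID:

```
# wsadmin (Jython) sketch: emulate WebSphere's variable lookup order for one
# server, to see which scope a value is actually coming from.
def findVariable(symbolicName, docPath):
    # Filter the cell-wide entry list down to the variables.xml of one scope.
    for e in AdminConfig.list('VariableSubstitutionEntry').splitlines():
        if e.find(docPath) >= 0 and \
           AdminConfig.showAttribute(e, 'symbolicName') == symbolicName:
            return AdminConfig.showAttribute(e, 'value')
    return None

# Most specific scope first; the first hit is the one the server uses.
scopes = [
    ('server',  'cells/ConnectionsCell/nodes/node2/servers/server1|variables.xml'),
    ('node',    'cells/ConnectionsCell/nodes/node2|variables.xml'),
    ('cluster', 'cells/ConnectionsCell/clusters/ConnectionsCluster|variables.xml'),
    ('cell',    'cells/ConnectionsCell|variables.xml'),
]
for label, docPath in scopes:
    value = findVariable('FILES_CONTENT_DIR', docPath)
    if value:
        print '%s scope wins: %s' % (label, value)
        break
```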