Nerd Geek Dork

According to the diagram I am a Geek – which is reassuring as I was worried I was a Dork ..

This nerd/dork/geek/dweeb Venn diagram should save you a lot of time and frustration in the future.

This comment summed it up perfectly:

The difference between Nerds and Geeks is that Nerds specialise and Geeks like diversity. If a Nerd has a favourite subject, they aim to make themselves the authority in it, whereas Geeks don't take it that seriously – sure, it's more serious than Average Person, but not Nerderious.

image from greatwhitesnark.com

JIRA 4.1 update SSL issues

After going nuts thinking that I had in fact lost my marbles and had been following the instructions wrong, I have discovered (thanks to Mr Andrew Frayling) that there is something missing from the JIRA server.xml file.

A config change happened between Tomcat 5.5 and 6 (JIRA 4.0 uses 5.5, JIRA 4.1 uses 6) which means that you must have SSLEnabled="true" on the secure connector port in the JIRA config – and this is what is missing. Add it in and SSL suddenly starts working!!
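
For reference, the HTTPS connector in JIRA's server.xml ends up looking roughly like this – just a sketch, as the port, keystore path and password below are placeholders for whatever your install uses; the SSLEnabled="true" attribute is the bit that Tomcat 6 needs:

<Connector port="8443" maxHttpHeaderSize="8192"
           maxThreads="150" enableLookups="false"
           acceptCount="100" disableUploadTimeout="true"
           scheme="https" secure="true" SSLEnabled="true"
           clientAuth="false" sslProtocol="TLS"
           keystoreFile="/path/to/keystore" keystorePass="changeit"/>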

http://jira.atlassian.com/browse/JRA-20963

It was driving me nuts all weekend!!

And now it works .. HURRAH 😉

Issues with Oracle on Solaris with Connections 2.5 – UPDATE

After some testing with the SPARC version of this fix – which actually did work – we were pleased to find out that Oracle had released a version for x86.

We applied this this morning, and I am sorry to say it doesn't work. If you try to delete a file from the DB directly or through the Connections interface, the DB still throws the mutating trigger error.

The plot thickens – time to go back to Oracle 🙂

Issues with Oracle on Solaris with Connections 2.5

There is an issue when running Connections with Oracle on Solaris.
Symptoms of the problem are that you cannot delete certain files and/or the Files widget from communities.

The error in the logs is – table FILES.MEDIA is mutating

[08/02/10 00:01:00:569 GMT] 0000005d Library E EJPVJ9166E: Unable to delete the library with id b855660b-d6bc-4b19-891f-2087aa3d9a0c. [UserImpl@26ce26ce id=64377ea3-e571-4323-922a-dc0723fead36 directoryId=2BE4B3FF-4AB4-48FF-9B83-73689537A16A]
java.sql.SQLException: ORA-04091: table FILES.MEDIA is mutating, trigger/function may not see it
ORA-06512: at "FILES.PKG_MED_DOWNLOAD_UPD", line 45
ORA-06512: at "FILES.MED_DOWNLOAD_UPD_S", line 2
ORA-04088: error during execution of trigger 'FILES.MED_DOWNLOAD_UPD_S'

at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112)

We have since discovered (thanks to Kieran Reid in Connections Support for doing the leg work) that this is an issue with Oracle 10.2.0.4 on Solaris – the triggers have an issue which is fixed in 10.2.0.5, which is a big no-no as far as Connections goes. There is, however, a fix that you can apply to 10.2.0.4 that will resolve the problem.
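
For anyone wondering what a "mutating table" error actually is: ORA-04091 is raised when a row-level trigger tries to read or modify the very table it is firing on. The snippet below is just a generic sketch to illustrate the pattern – it is NOT the real FILES schema or its triggers:

create table demo_media ( id number primary key, download_count number );
insert into demo_media values ( 1, 0 );

create or replace trigger demo_media_upd_s
after update on demo_media
for each row
begin
  -- touching the same table that the row-level trigger is defined on
  -- raises ORA-04091 when the trigger fires
  update demo_media
     set download_count = nvl(download_count, 0) + 1
   where id = :new.id;
end;
/

update demo_media set download_count = 0 where id = 1;
-- ORA-04091: table DEMO_MEDIA is mutating, trigger/function may not see it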

From support.oracle.com search the knowledge base for 4574851
You should get three results, select the third match
Click on the link for Patch.4574851
Select the 10.2.0.4 release for the Solaris platform
Download, install and test.

*NOTE* this fix is only available for SPARC, not x86

So far this appears to have fixed the issue on the backup of the Prod database (I have put a stand-alone LC25 in front of it to test, which involved all sorts of DB hacking to get it to work – not recommended unless you are desperate for a quick test). I am hoping to schedule moving our prod DB from x86 to SPARC, applying the patch, and then plugging my LC25 cluster into it.

Changing the title on the Connections 2.5 Homepage

Changing the title on the homepage is a bit of a pain .. The steps are as follows:

Make a backup copy of the COMPRESSED Homepage.ear file from the deployed application config:

ND – < was ROOT >\profiles\< profile >\config\cells\< cell name >\applications\Homepage.ear\Homepage.ear

StandAlone – < was ROOT >\profiles\< profile >\config\cells\< cell name >\applications\Homepage.ear\Homepage.ear

extract it to a temp folder, i.e. D:\temp\extracted\homepage

find the dboard.common.jar and extract that to a temporary folder, i.e. D:\temp\extracted\Dashboard

drill down into the extracted folder to com/ibm/lotus/connections/dashboard/nls/ and find the file jsp_resources.properties

change the jsp.homepage.title = < "your new title" >

change any instances of "IBM Lotus Connections Home Page" in this file to < "your new title" >

save and close the file

do the same for any additional languages that you are supporting

re-compress the dboard.common.jar and copy the newly edited compressed version into the extracted directory of the Homepage ear file.

re-compress the Homepage.ear file (see the example jar commands at the end of these steps)

stop all server instances that are running the Homepage application and replace the Homepage.ear file in the deployed application config with the newly edited and compressed version

you will also need to replace the newly edited dboard.common.jar in the installedApps folder on your primary / standalone server.

< was_root >\profiles\< profile name >\installedApps\< cell name >\Homepage.ear

Once the servers are restarted they will use the new title in the Homepage app.
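
As promised above, this is roughly what the extract / re-compress steps look like using the JDK jar tool – a sketch only; the temp folders are the examples from above and "< path to >" stands for wherever dboard.common.jar sits inside your extracted ear:

REM copy the backed-up Homepage.ear into D:\temp\extracted\homepage first, then extract it
cd D:\temp\extracted\homepage
jar -xf Homepage.ear

REM extract the dashboard jar to its own temp folder
cd D:\temp\extracted\Dashboard
jar -xf D:\temp\extracted\homepage\< path to >\dboard.common.jar

REM edit com\ibm\lotus\connections\dashboard\nls\jsp_resources.properties, then
REM update the edited file back into the jar
jar -uf D:\temp\extracted\homepage\< path to >\dboard.common.jar com/ibm/lotus/connections/dashboard/nls/jsp_resources.properties

REM finally update the edited jar back into the compressed ear
cd D:\temp\extracted\homepage
jar -uf Homepage.ear < path to >\dboard.common.jar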

Further to the Portal 6.1.5 set up

Further to the Portal 6.1.5 set up, these steps were required from an Oracle DB point of view to get the DB configured:

Grant create session to community
Create likeminds user
Grant create table to community
Grant create view to jcr
Grant unlimited tablespace to community

and also these

grant select on sys.dba_pending_transactions to public;
grant select on sys.pending_trans$ to public;
grant select on sys.dba_2pc_pending to public;
grant execute on sys.dbms_system to public;
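
For reference, the first set of steps above translates to SQL along these lines – a sketch only, and the likeminds password is a placeholder you will need to supply:

grant create session to community;
create user likeminds identified by < password >;
grant create table to community;
grant create view to jcr;
grant unlimited tablespace to community;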

if you don't do this then it just doesn't work 🙂

Portal won’t start ??

I have come across a horrible little *feature* that occurs sometimes with WebSphere Portal where the server fails to start and writes absolutely nothing to the log … just a tad annoying when you are trying to work out why it didn't start in the first place.

Sometimes this is due to a tranlog problem, which is pretty straightforward to resolve:
backup / delete the directory –
< profile root >/tranlog/< cell name >/< node name >/< server name >

normally this will do the trick and a restart works – if it doesn't, do the following:
rename the log folders (or delete them)

< profile root >/logs

to fix the issue that I saw I had to rename the ffdc, nodeagent and Portal server log directories

Restart the servers in question and as if by magic it starts 🙂
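
On a Unix-style box the whole thing boils down to something like this – just a sketch; I move things aside rather than deleting them so they can be put back, and WebSphere_Portal is only the default Portal server name, so adjust the names to match your cell:

cd < profile root >
# move the transaction log for the server that won't start
mv tranlog/< cell name >/< node name >/< server name > tranlog/< cell name >/< node name >/< server name >.bak
# and if that isn't enough, move the offending log folders aside too
mv logs/ffdc logs/ffdc.bak
mv logs/nodeagent logs/nodeagent.bak
mv logs/WebSphere_Portal logs/WebSphere_Portal.bak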

I have experienced this on Portal 6.1.5, but I have had reports that it appears to be a problem on 6.1.0.2 as well.

Portal 6.1.5 – do I love it or hate it ..??

Answers on a postcard … my first impression of the installer was wow, that was neat – it installed the vanilla *everything* install fairly quickly and was up and running on its Cloudscape DB and basic security very quickly; it looks funky, is fast, and the new Connections-style theming is very sexy …

Now I am trying to do something wild and crazy … point it at an Oracle DB … so far I am losing this battle. Although after a lot of swearing at Windows 2008 for being TOO secure, and at the Info Center for not making much sense, I feel we may be winning the battle and be well on the way to winning the war. Once the Oracle piece is sorted it should be straightforward – secure against Active Directory federated LDAP, cluster and go. SHOULD – it was straightforward in Portal 6.1.1, but then so was securing it against an Oracle DB …

Once I have it working I will document the steps, mainly for my own reference, but you never know – some other poor little geek may also have to use Portal with the Oracle / AD solution and I would hate people to go through pain if they don't have to 🙂

Watch this space kiddies, you never know, I may post something useful 🙂

Connections 2.5 – WebSphere Tips

WebSphere Tip : 1

When clustering Connections you may encounter issues when the wizard attempts to federate the node into your deployment manager. This is a known WAS issue, as the JVM suffers out of memory errors (if you delve deep into the addNode log file / dmgr log you will find them).

There is a quick work around that can solve this:

Increasing the WAS heap size
In order for the addNode command to work correctly when running the cluster wizard please do the following:

Connections servers
On each of the Connections servers browse to the WebSphere bin directory and edit the addNode file (.bat or .sh depending on your OS).

Insert the line set WAS_HEAP=-Xms256M -Xmx1500M near the top of the file to set a variable (for example, under the set CMD_NAME_ONLY line, as shown below):

set CMD_NAME_ONLY=%~n0
set WAS_HEAP=-Xms256M -Xmx1500M

at the bottom of the file find the "%JAVA_HOME%\bin\java" line and add the variable:

"%JAVA_HOME%\bin\java" -Dcmd.properties.file=%TMPJAVAPROPFILE% %WAS_HEAP% %WAS_DEBUG% %WAS_TRACE% %CONSOLE_ENCODING% "%CLIENTSOAP%" "%JAASSOAP%" "%CLIENTSAS%" "%CLIENTSSL%" %USER_INSTALL_PROP% "-Dwas.install.root=%WAS_HOME%" "-DWAS_HOME=%WAS_HOME%" "com.ibm.wsspi.bootstrap.WSPreLauncher" -nosplash -application "com.ibm.ws.bootstrap.WSLauncher" "com.ibm.ws.runtime.NodeFederationUtility" "%CONFIG_ROOT%" "%WAS_CELL%" "%WAS_NODE%" %*

save the file.
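
On Unix the same idea applies to addNode.sh – a sketch, as the exact java invocation line varies a little between WAS versions; just set the variable near the top of the script:

WAS_HEAP="-Xms256M -Xmx1500M"

and then add $WAS_HEAP to the "$JAVA_HOME"/bin/java invocation at the bottom of the script, in the same position as in the Windows example above.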

Deployment Manager
On the deployment manager machine.
Open the Administrative Console.
Open System Administration > Deployment Manager > Process Definition > Java Virtual Machine.
Specify 256 (MB) for the Initial Heap Size and 1500 (MB) for the Maximum Heap Size.

Save your changes and restart the Deployment Manager.

This should resolve the issue – you may need to increase the Dmgr maximum heap slightly more but I found 1000 was just not enough and 1500 did the trick.

When you run the cluster wizard now it should run as expected 🙂

WebSphere Tip : 2

A handy tip to note if you are not a huge WebSphere guru.

To enable commands to be run from the command line without the need for the -username and -password arguments, configure SOAP security.

Every WebSphere profile has a file called soap.client.props which holds SOAP connector client information. The path to the file is as follows: < was_root >/profiles/< profile name >/properties/soap.client.props

SOAP connector security is disabled by default.

When it is enabled with the correct information, it is possible to run the standard WAS start, stop and status commands by just running the .bat or .sh command without passing the extra credentials.
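
For example – the server name and credentials here are just made up for illustration – instead of having to run:

stopServer.bat server1 -username wasadminuser -password wasadminpassword

you can simply run:

stopServer.bat server1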

### EXAMPLE ###

###############################################################################
#
# JMX SOAP Connector Client Properties File
#
# This file contains properties that are used by the JMX SOAP Connector Client
# of the WebSphere Application Server product. SOAP Connector executes on WebSphere
# java servers and client systems with java applications that access WebSphere servers.
#
# ** Encoding Passwords in this File **
#
# The PropFilePasswordEncoder utility may be used to encode passwords in a
# properties file. To edit an encoded password, replace the whole password
# string (including the encoding tag {...}) with the new password and then
# encode the password with the PropFilePasswordEncoder utility. Refer to
# product documentation for additional information.
#
###############################################################################

#------------------------------------------------------------------------------
# SOAP Client Security Enablement
#
# - security enabled status ( false[default], true )
#------------------------------------------------------------------------------
com.ibm.SOAP.securityEnabled=true

com.ibm.SOAP.loginUserid=wasadminuser
com.ibm.SOAP.loginPassword=wasadminpassword

#------------------------------------------------------------------------------
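
One thing worth doing – as the header of the file itself suggests – is encoding the password rather than leaving it in clear text. Something along these lines should work on Windows (there is a matching .sh on Unix), with the paths as placeholders for your own install:

cd < was_root >\bin
PropFilePasswordEncoder.bat < was_root >\profiles\< profile name >\properties\soap.client.props com.ibm.SOAP.loginPassword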

Connections 2.5 Clustering – how to avoid some pain

All was going exactly to plan when I installed my primary node – it federated correctly, worked as expected, and I even managed to change it fairly easily to point to a different DB and shared content store. I was a very happy bunny UNTIL I decided to add node 2 – then it all went "pear shaped".

So here is a quick overview of the issue and how I have got around it – but I really want to know how this happened and if I can do anything to fix it for the future. I have a PMR open and IBM are trying to recreate the issue now.

I created node1 using the Connections install wizard to create a primary node – I supplied the DB (jdbc:oracle:thin:@< my Original DB server name >:1521:conn1) and file system info (//< my Original File server name >/LotusConnectionsData/< featureName >), and it clustered successfully and node 1 was fine.

I then moved the DB to another machine and also moved the file system. I edited the data source info at cluster and server level (jdbc:oracle:thin:@< my NEW DB server name >:1521:conn1) and also changed the file system (< my NEW File server name >/portal_collabdata$/< featureName >) in the WebSphere Variables section of the ISC as per the instructions in the Info Center. Node 1 has always worked as expected, even after moving these.

When I added any subsequent node it configured the server with the original file store information (//< my Original File server name >/LotusConnectionsData/< featureName >) and defaulted back to the original DB data source (jdbc:oracle:thin:@< my Original DB server name >:1521:conn1).

If I change these manually, resync and restart the servers, they work as expected. The datasource, although it is set at cluster level, is also set at server level, so I had to change the datasource EVERYWHERE to fix the issue (as I have 4 servers per machine and 4 machines, that is a lot of editing).

This has prompted me to ask these questions of IBM:

The WebSphere Variables for the file stores are also picking up the original path – it appears that when Node1 was federated and the config was created, some kind of *template* was made, from which further nodes/servers are created. As I have changed the config, the template is not getting updated (if this is how it is doing it).

Am I doing anything wrong?
If so, what?
And if not, how do I prevent this from happening in the future?

== IBM’s Response ==
I received an email back from IBM regarding the issues that I experienced after changing some settings in my cluster. The bad news is it is a limitation, the good news is they are going to fix it:

The customer is right, this is a limitation in the LC 2.5 install and is being addressed for the next release.

In LC 2.5, variables/datasources/providers/etc are created at the server level, then this is used as a template for additional servers…
the problem is that server level settings like this override higher (node, cluster, cell) level settings, causing the difficulty updating the customer experienced.
ideally, these settings would be at cluster level.

Since the customer has this working, they do not have to change anything, but, if they wish to simplify future changes they can do the following:

1. create cluster level variables, datasources, providers, etc
2. [optional… for testing] create a new node — this node will have all the server level settings by default
3. only if you did 2… delete the server level settings for the items you created at cluster level in step 1
note: if you don’t delete the server level settings for this new node, it would continue to use the server level settings
4. only if you did 2… test that the applications deployed on the new node behave correctly (basically you are verifying the cluster level settings)
5. after verifying (or reviewing) the cluster level settings (variables, datasources, etc), you can delete the server level items corresponding to the new cluster level items
note: if you don’t delete the server level settings for this new node, it would continue to use the server level settings
6. now, when you make changes to the cluster level variables thru the deployment manager, you just need to save changes and synchronize nodes
all the nodes and servers that don’t have node or server level instances of the same variables will get the cluster level values

Again, the order of precedence for finding variables, datasources, etc is….
first, is it defined for the Server? If yes, the server level item is used
second, is it defined for the Node?
third, is it defined for the Cluster?
fourth, is it defined for the Cell?