CA Clarity Tuesday Tip - Frequent reindexing problems

Discussion created by Jeanne_Gaskill_CA_Clarity_Support Employee on Oct 9, 2012
Latest reply on Oct 9, 2012 by Chris_Hackett
[size=5]How to fix frequent reindexing problems on clustered Clarity systems running Clarity 12.1.x or earlier[/size]

Some of our customers have had problems during periods of heavy document upload/modification that required frequent reindexing from the command line to repair search index corruption. When this problem occurs, search and/or document upload stop working, and frequent messages about exceeded lock timeouts appear in the app and bg logs.

The error message in the logs looks something like this:

ERROR 2012-06-01 04:32:05,471 [http-14001-33] ( LuceneSearchEngine.addFileToIndex: IO problem while adding to index^M Lock obtain timed out: Lock@/nfs/code/clarity/tomcat-app-deploy/temp/lucene-677d05ab71ce7be82f34978c4dc280e5-write.lock

The reported error message clearly shows that the Lucene write lock is NOT in the shared directory structure.

Because the search index write lock is not in the shared filesystem, only a single app instance has information about the exclusive lock on the index. Other app/bg instances dealing with the search index are unaware of the lock and can corrupt the index.

NOTE: This problem should not occur on Clarity 13.x and later. Starting with Clarity 13.0, we upgraded the version of the Lucene search engine that we use, and newer Lucene versions place their locks in the index directory by default.


To work around the problem:

1. Create a new directory named "locks" under the searchindex directory on the shared filesystem.
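On Unix, step 1 might look like the following sketch. The mount point used here is purely illustrative; substitute the actual path to your shared searchindex directory:

```shell
# Illustrative shared-filesystem location -- substitute your actual
# searchindex path on the shared mount.
SEARCHINDEX=/tmp/clarity/searchindex

# Create the "locks" subdirectory that Lucene will be pointed at.
mkdir -p "$SEARCHINDEX/locks"
```

On Windows, create the equivalent folder on the network share (e.g. via Explorer or mkdir on the UNC path).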

2. Add a Java argument to the app(s) and bg(s) on each server to instruct Lucene to place its write locks in this shared directory.

For Unix Servers:
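A sketch of what the added JVM argument might look like, assuming the shared filesystem is mounted under /nfs/code/clarity and that the Lucene version bundled with Clarity 12.1.x honors the org.apache.lucene.lockDir system property (older Lucene releases read the lock directory location from this property):

```
-Dorg.apache.lucene.lockDir=/nfs/code/clarity/searchindex/locks
```

The path must point at the "locks" directory created in step 1, and it must resolve to the same shared location from every app and bg server.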


For Windows Servers:
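A sketch of the same JVM argument on Windows, using a hypothetical UNC path (fileserver and share names are placeholders; substitute the actual share holding your searchindex directory):

```
-Dorg.apache.lucene.lockDir=\\fileserver\clarity\searchindex\locks
```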


NOTE: In most cases the filestore directories will be on a network share. In that case, use a UNC path to the correct location.