Clarity

  • 1.  why do I keep getting this message? discarded message from different cluster

    Posted May 27, 2015 05:52 PM

    Notice that it is the same server (SERVER494) reporting, from each side, that the message came from a cluster other than its own:

     

    WARN  2015-05-27 15:46:57,745 [Incoming-1,CLRTY-SA,SERVER494-41579] protocols.UDP (none:none:none:none) discarded message from different cluster "CLRTY" (our cluster is "CLRTY-SA"). Sender was SERVER494-12818

    WARN  2015-05-27 15:46:57,904 [OOB-41,CLRTY,SERVER494-12818] protocols.UDP (none:none:none:none) discarded message from different cluster "CLRTY-SA" (our cluster is "CLRTY"). Sender was SERVER494-42524



  • 2.  Re: why do I keep getting this message? discarded message from different cluster
    Best Answer

    Posted May 28, 2015 09:47 AM

    Cluster in this sense doesn't mean your Clarity implementation (e.g. the cluster for 'production'), but rather the named JGroups clusters CLRTY (for app/bg interactions, like users triggering process events) and CLRTY-SA (for beacon/CSA communications, broadly speaking), which act as JGroups subscriptions for multicast traffic.

     

    So it isn't unexpected that traffic for both 'user' and 'administration' communications would flow over the same multicast port, protocol, and (CSA) password, nor is it unexpected that the listeners in each of those services would ignore traffic they don't consider relevant (i.e. the app ignoring some NSA/beacon traffic, and the NSA ignoring some app/bg traffic). It is odd that you are getting log messages for it, though. What does your logging configuration page look like in the CSA?
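
    To illustrate the mechanism, here is a minimal sketch using the stock JGroups API (this is not Clarity's actual startup code; only the cluster names are taken from your log lines):

        import org.jgroups.JChannel;

        public class TwoClusters {
            public static void main(String[] args) throws Exception {
                // Two channels sharing the same multicast transport but
                // joined to different named clusters. Each one discards
                // traffic tagged with the other cluster's name, which is
                // exactly what the WARN lines above are reporting.
                JChannel app = new JChannel();      // default UDP stack
                app.connect("CLRTY");               // app/bg communications

                JChannel admin = new JChannel();
                admin.connect("CLRTY-SA");          // beacon/CSA communications
            }
        }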



  • 3.  Re: why do I keep getting this message? discarded message from different cluster

    Posted May 28, 2015 10:29 AM

    It's this:

    [attached screenshot: logger.png]



  • 4.  Re: why do I keep getting this message? discarded message from different cluster

    Posted May 28, 2015 10:33 AM

    Thanks - so it looks like you are receiving those messages because of changes to your logging configuration. Perhaps this was enabled in order to capture information for a support ticket, but once that ticket is closed, it is advisable to revert the configuration to how it was.

     

     

    I notice you also have the trace.* levels set to WARN. Please set those back to Fatal if you are not currently using them, as WARN isn't a useful level for those categories. Use Debug, or preferably Trace, when you want to enable logging for the trace.* entries, and Fatal when they are off.
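
    For reference, this is roughly what those changes amount to in log4j terms - a minimal sketch assuming the log4j 1.x API underneath Clarity's logging (in practice you make the change through the CSA logging page, not in code, and the 'trace' category name here is illustrative):

        import org.apache.log4j.Level;
        import org.apache.log4j.Logger;

        public class QuietCategories {
            public static void main(String[] args) {
                // Silence JGroups chatter: only fatal errors are logged.
                Logger.getLogger("org.jgroups").setLevel(Level.FATAL);

                // Keep the trace.* categories off (Fatal) unless actively
                // tracing, in which case Trace is the level to use.
                Logger.getLogger("trace").setLevel(Level.FATAL);
            }
        }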



  • 5.  Re: why do I keep getting this message? discarded message from different cluster

    Posted May 28, 2015 11:03 AM

    You are right.

     

    We have been facing a situation where the Maximum Connections Threads limit (200) is reached and Clarity stops responding after that. You responded to that issue: https://communities.ca.com/message/241795200?et=watches.email.thread#241795200

     

    Re: Change timeout settings on Tomcat. App not responding back to load balancer (mod_jk)

     

    I wanted to hear that from CA, but they are not advising that I set timeouts like that. I am not sure why all 200 threads get occupied when we have only a handful of users testing the system, or what the solution is. Is the change in deploy.xml going to help in this situation?



  • 6.  Re: why do I keep getting this message? discarded message from different cluster

    Posted May 28, 2015 11:28 AM

    The 200 threads are all HTTP worker threads, and probably unrelated to the multicast traffic flying around (at least, the multicast traffic is unlikely to reveal much, especially at WARN level).
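
    If you want to see what those worker threads are actually doing when the server stops responding, a full thread dump is the usual starting point - either jstack against the app server's JVM, or something like this minimal sketch of the standard JMX thread API (illustrative code, not part of Clarity):

        import java.lang.management.ManagementFactory;
        import java.lang.management.ThreadInfo;
        import java.lang.management.ThreadMXBean;

        public class WorkerSnapshot {
            public static void main(String[] args) {
                ThreadMXBean mx = ManagementFactory.getThreadMXBean();
                // Dump every live thread with its full stack trace; worker
                // threads all blocked on the same frame point at whatever
                // resource they are collectively waiting for.
                for (ThreadInfo info : mx.dumpAllThreads(true, true)) {
                    System.out.printf("%s [%s]%n",
                            info.getThreadName(), info.getThreadState());
                    for (StackTraceElement frame : info.getStackTrace()) {
                        System.out.println("    at " + frame);
                    }
                }
            }
        }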

     

    So, to avoid the extra log entries, and because it probably will not help your main issue anyway, I would set the org.jgroups level back to Fatal as per my screenshot.

     

    CA (support) won't be able to advise on setting timeouts by modifying the deploy.xml because, as I said in that other thread, changes to that file cannot be supported. So whether the change would help with the 200-thread issue is unknown - we just do not know - and the implementations and third-party vendors involved in setting up load balancers typically go beyond a scope that is manageable for us.

     

    Perhaps the load balancer would be better configured to route traffic to the app servers over the HTTP ports instead of the AJP ones, though. The AJP protocol isn't really being maintained (reference: Apache JServ Protocol - Wikipedia, the free encyclopedia), and you can still have Apache HTTPD acting as a reverse proxy (load balancer) to Tomcat without it. It is at least something I would want to try myself if I were in the same situation; if needed, though, we should continue that discussion via the other thread, or through your support ticket if you still have one open on the matter.
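
    For illustration only, a minimal sketch of that alternative in Apache HTTPD configuration (hostnames and ports are placeholders, not your actual topology; this swaps the mod_jk/AJP workers for plain HTTP proxying):

        # Requires mod_proxy, mod_proxy_http and mod_proxy_balancer
        # (plus mod_slotmem_shm on httpd 2.4).
        <Proxy balancer://clarity>
            # Placeholder app servers, listening on their Tomcat HTTP connectors
            BalancerMember http://app1.example.com:8080
            BalancerMember http://app2.example.com:8080
        </Proxy>

        ProxyPass        / balancer://clarity/
        ProxyPassReverse / balancer://clarity/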