How can I monitor for bad URLs, so that an alarm is triggered when a URL is found that should not be monitored for availability? If this is possible, how do I do it?
I would think this would be rather difficult to do.
To do this centrally, I would think you would need to monitor nas.log with the logmon probe, using watcher profiles to check for bad URLs and send an alarm when one matches.
The nas loglevel would probably need to be left at 3 or higher to get the required level of detail.
You would also need to make sure url_response, if that is what you are using to monitor the URLs, is including the URL in the alarm message.
How about using "curl" with the logmon probe? The logmon probe can run a command and parse the output.
curl -I http://example.org 2>/dev/null | head -n 1 | cut -d$' ' -f2
If you Google "curl" and "HTTP return code", you will find sample scripts people have written. The logmon probe could run the command, parse the output, then alarm if appropriate.
Sample output: 200 = working; anything else indicates a problem.
[root@LINUXHostname ~]# curl -I http://example.org 2>/dev/null | head -n 1 | cut -d$' ' -f2
200
[root@LINUXHostname ~]# curl -I http://example.org/doesNotExist 2>/dev/null | head -n 1 | cut -d$' ' -f2
404
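Building on that, here is a minimal sketch of a wrapper script logmon could run. The function names and the OK/ALARM line format are my own assumptions, not anything built into url_response or logmon; a logmon watcher would just need a match expression for lines starting with "ALARM".

```shell
#!/bin/sh
# Sketch: wrap curl so logmon gets one easily parsed line per URL.
# Assumes curl is installed; function names and line format are hypothetical.

# Turn an HTTP status code plus URL into a single parseable line.
classify() {
    code="$1"
    url="$2"
    if [ "$code" = "200" ]; then
        echo "OK $code $url"
    else
        echo "ALARM $code $url"
    fi
}

# Fetch only the status code: -s silences progress output, -o discards
# the body, -w prints the numeric HTTP status curl received.
check_url() {
    url="$1"
    code=$(curl -s -o /dev/null -w '%{http_code}' "$url")
    classify "$code" "$url"
}

# Example usage: check_url "http://example.org"
```

A logmon profile in command mode could then run this script on a schedule and alarm on any line matching `^ALARM`, which avoids having to parse raw curl header output.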