I have changed the log level to 5 and the log size to 5000.
What changes are required to keep one week of logs, so we can analyze them in case of any issue?
Hi. I would expect that to capture only about 2 minutes of activity.
Usually loglevel 5 is not needed to find the root cause of most support issues.
Usually 3 is high enough.
If you have a busy hub, I would not expect more than a few hours of usable history at best with that logsize.
We usually start seeing issues once logs grow beyond 100 MB.
I would set the loglevel to 3 and the logsize to 95000; that should give you the longest possible retention without causing issues with the probe.
Hope this helps.
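For reference, a sketch of what those keys might look like in the hub configuration file (hub.cfg). The section and key names here are assumptions based on the usual hub probe layout; verify against your own install before editing:

```
<hub>
   loglevel = 3
   logsize = 95000
</hub>
```

You can also set both values from the probe's Raw Configure view instead of editing the file by hand.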
You could also look at running a 'backup' script through the Windows Task Scheduler (or something similar).
You would need to write a script that will copy the existing logs and set them aside.
This would allow the logs to continue to write, but give you snapshots of data to review.
Use dirscan to run something like the following:
-- Rename the rotated _hub.log aside with a timestamp suffix
from_file_path = "C:\\Program Files (x86)\\Nimsoft\\hub\\"
file1 = "_hub.log"
to_file_path = from_file_path
ts = timestamp.now()
file.rename(from_file_path..file1, to_file_path..file1.."-"..ts..".sav")
-- Walk the saved snapshots and delete any that are older than 24 hours
for dir in io.popen([[dir "C:\Program Files (x86)\Nimsoft\hub\_hub*.sav" /b /a-d]]):lines() do
    fullfile = "C:\\Program Files (x86)\\Nimsoft\\hub\\" .. dir
    print(fullfile)
    foo = file.stat(fullfile)
    age = timestamp.diff(foo.mtime, "h", ts)
    print("Age in hours is " .. age)
    if (age >= 24) then
        print("Deleting " .. fullfile .. " due to its age")
        file.delete(fullfile)
    end
end
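If you would rather drive this directly from Windows Task Scheduler, the same rotate-and-prune logic can be sketched in plain Python. The paths, snapshot naming, and 24-hour retention below are assumptions mirroring the Lua script above; adjust them for your environment:

```python
import glob
import os
import time

# Assumed defaults mirroring the Lua script; adjust for your install.
LOG_DIR = r"C:\Program Files (x86)\Nimsoft\hub"
LOG_NAME = "_hub.log"
MAX_AGE_HOURS = 24

def snapshot_log(log_dir=LOG_DIR, log_name=LOG_NAME):
    """Set the rotated log aside under a timestamped .sav name."""
    src = os.path.join(log_dir, log_name)
    ts = time.strftime("%Y%m%d-%H%M%S")
    dst = os.path.join(log_dir, f"{log_name}-{ts}.sav")
    os.rename(src, dst)
    return dst

def prune_snapshots(log_dir=LOG_DIR, max_age_hours=MAX_AGE_HOURS):
    """Delete .sav snapshots older than max_age_hours; return what was removed."""
    removed = []
    now = time.time()
    for path in glob.glob(os.path.join(log_dir, "_hub*.sav")):
        age_hours = (now - os.path.getmtime(path)) / 3600.0
        if age_hours >= max_age_hours:
            os.remove(path)
            removed.append(path)
    return removed
```

Scheduling `snapshot_log()` at the same interval as the hub's own log rotation, followed by `prune_snapshots()`, gives the same snapshot-and-expire behavior without touching the live hub.log.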
That's a good idea! I was thinking that the only way to achieve this would be with a script, but messing with hub.log while the process has it open for writing is a bad idea. I like the approach of using _hub.log, since the process is done with that file right after it creates it (until it is time to rotate again). Way to go!