I think there's an opportunity here for some new function, either in logmon or as an additional piece of processing similar to the Time to Threshold QoS. Regardless, the logmon probe poses some issues for getting a count-per-time measure, in that it is really designed to run relatively infrequently and in batch mode. It would presumably be easy to create something that did a count per logmon cycle, but the measure would be based on that cycle period and subject to everything that affects it.
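To make that concrete (made-up numbers): if a cycle is nominally 300 seconds and you match 30 lines in it, your rate is 30 / 300 = 0.1 matches/sec. If that same cycle actually takes 330 seconds because the probe was busy, the identical 30 matches read as roughly 0.09/sec. The rate is only as trustworthy as the cycle timing.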
Consider the simple case of startup: what does logmon do the first time it touches the log file you are counting from? Do you count everything written since logmon was last stopped? Do you create a way to mark the file into chunks of time, similar to a format block, except that the format identifies the time period to count rather than a line? That quickly turns into a mess, given the many ways log files do (or don't) get timestamps.
Were I in your shoes, I'd script this - you could still use logmon to execute the script and capture the number, but put all your counting and averaging logic in a script you supply. Something like:
#!/bin/sh
# Hypothetical paths - point these at your real log and a writable state location.
LOG=/var/log/app.log
STATE=/tmp/logmon_rate.state

if [ ! -f "$STATE" ]; then
    echo 0 > "$STATE"                 # first run: start counting from the top of the file
    echo 0                            # nothing to report yet; logmon reads this
else
    period=$(( $(date +%s) - $(stat -c %Y "$STATE") ))   # seconds since last run (GNU stat; BSD: stat -f %m)
    [ "$period" -gt 0 ] || period=1                      # guard against a zero-second interval
    x=$(cat "$STATE")                                    # line number we had reached last time
    count=$(tail -n +"$((x + 1))" "$LOG" | grep -c "pattern")   # matches among new lines only
    wc -l < "$LOG" > "$STATE"         # record where we are now (rewriting also refreshes mtime)
    awk "BEGIN { print $count / $period }"               # matches per second, for logmon to pick up
fi
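You can sanity-check it by hand before wiring it into logmon (hypothetical path and output):

$ sh /opt/scripts/log_rate.sh       # first run: seeds the state file
0
$ sleep 60; sh /opt/scripts/log_rate.sh
0.05

logmon then just executes the script and matches on the number it prints, as described above.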
This has some really obvious issues, especially if lines go into your file rapidly (lines that arrive between the grep and the wc get recorded as seen but are never counted). But if you are just looking for something rough, where being off by a line or two in the result doesn't matter (and neither does execution cost), this is the approach I'd take.
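One of those issues worth calling out is log rotation: if the file gets rotated or truncated, the saved line count exceeds the file's length, tail starts past the end, and that cycle's matches are silently lost. A minimal guard, using the same hypothetical $LOG and $STATE names as above, inserted just after reading the state file:

x=$(cat "$STATE")
if [ "$(wc -l < "$LOG")" -lt "$x" ]; then
    x=0    # file shrank: assume rotation/truncation and start over from the top
fi

You still miss whatever went into the rotated-out file, but at least the counter recovers in the same cycle instead of reporting zero.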