This is something we've debated here as well.
We use file events to monitor our SFTP servers for files at 5-minute intervals, and they run indefinitely (as opposed to 'until first hit'). For that reason, we don't put them into our scheduler, since we'd wind up with multiple instances of each event running at once (and that usually doesn't work well, because they all trigger an !event when they see the same file at the same moment). Even if you set the max parallel setting to 1, you'd wind up with either a pile of events stuck in 'waiting for end of parallel task' status or a pile of canceled events, depending on whether you chose the 'Wait' or 'Abort' setting.
So at any rate, this leaves you executing your events the same way you execute your schedule -- i.e., fire and forget.
The problem is, what's the best way to ensure that all your events are running? That is, if you aren't kicking off your event via a schedule, and your event somehow becomes canceled, what's the best way to catch that?
I suppose you could maintain a variable holding a list of all the events that should always be active, then schedule a script to loop through that list, confirm each one is running, and alert (or just reactivate them) if any aren't -- something like the sketch below.
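Here's a minimal watchdog sketch along those lines in Python. Everything product-specific in it is an assumption: `scheduler-cli`, its `list-events`/`start-event` subcommands, and the names in `EXPECTED_EVENTS` are hypothetical placeholders for whatever status/start mechanism your scheduler actually exposes (a CLI, a REST API, or a database query).

```python
# Watchdog sketch: verify that every expected recurring event is still active.
# NOTE: the "scheduler-cli" command and its subcommands are hypothetical
# stand-ins for whatever your product exposes; swap in the real mechanism.
from __future__ import annotations

import subprocess
import sys

# The events that should be running at all times (hypothetical names).
EXPECTED_EVENTS = [
    "sftp_watch_vendor_a",
    "sftp_watch_vendor_b",
    "sftp_watch_internal",
]

def get_active_events() -> set[str]:
    """Hypothetical: return the names of events currently in an active state."""
    out = subprocess.run(
        ["scheduler-cli", "list-events", "--status", "active"],
        capture_output=True, text=True, check=True,
    )
    return {line.strip() for line in out.stdout.splitlines() if line.strip()}

def activate_event(name: str) -> None:
    """Hypothetical: (re)start a single event by name."""
    subprocess.run(["scheduler-cli", "start-event", name], check=True)

def main() -> int:
    # Anything expected but not currently active is either canceled or hung.
    missing = sorted(set(EXPECTED_EVENTS) - get_active_events())
    for name in missing:
        print(f"WARNING: event '{name}' is not active; restarting it")
        activate_event(name)  # or send an alert here instead of auto-restarting
    # Nonzero exit lets the scheduler's own failure handling flag the problem.
    return 1 if missing else 0

if __name__ == "__main__":
    sys.exit(main())
```

The nice part is that this script schedules just fine at whatever interval you like, since it runs to completion each time -- it's only the indefinitely-running events themselves that can't live in the schedule. Exiting nonzero when something was missing also lets the scheduler's normal failure alerting do the notification work for you.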
Have other people come up with a good approach to ensuring your recurring events stay active?