Thinking about it more, the length of the time frame in the slice is not likely to affect the rollover.
That is, the rollover is always one period, not dependent on the total number of periods.
What counts is how many slice requests you have (you could configure the slice requests you do not use not to roll over and keep minimal data) and how many object instances with blob data to be sliced you have.
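To make the rollover point concrete, here is a minimal sketch assuming a slice is a fixed-length window of periods (all names here are hypothetical, not the product's actual API): rolling over adds one new period and drops one old period, regardless of how many periods the slice spans in total.

```python
from collections import deque


def make_slice(num_periods):
    """A time slice modeled as a fixed-capacity window of periods."""
    return deque(maxlen=num_periods)


def roll_over(slice_, new_period_data):
    """Rolling over appends one new period; once the window is full,
    the deque drops exactly one oldest period. The work done per
    rollover is one period, no matter how long the slice is."""
    slice_.append(new_period_data)


short_slice = make_slice(3)   # a 3-period slice
long_slice = make_slice(12)   # a 12-period slice

for month in range(12):
    roll_over(short_slice, f"month-{month}")
    roll_over(long_slice, f"month-{month}")

# Each rollover touched one period in both cases; only the retained
# capacity differs between the two slices.
print(len(short_slice))  # 3
print(len(long_slice))   # 12
```

The point of the model: per-rollover cost is constant in the slice length; only the retained data volume scales with it.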
When you create new items, the initial slicing will take longer the longer your time frames are. The blobs will be the same size regardless of how you slice, or whether you slice at all.
Jobs that are incremental are not affected by the length of the time frame in the slices. Jobs that do a full data transfer, like the initial population of the datamart, are.
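The incremental-vs-full distinction can be sketched roughly like this (a toy model, not the product's actual job logic; the row structure and `last_seen_id` cursor are assumptions):

```python
def full_transfer(rows):
    """A full population copies every row, so its cost grows with the
    total time frame covered by the data."""
    return list(rows)


def incremental_transfer(rows, last_seen_id):
    """An incremental job copies only rows newer than the last one it
    processed, so its cost depends on the delta, not the total span."""
    return [r for r in rows if r["id"] > last_seen_id]


rows = [{"id": i, "value": i * 10} for i in range(1000)]

print(len(full_transfer(rows)))              # 1000
print(len(incremental_transfer(rows, 997)))  # 2
```

This is why only the full-transfer jobs feel the length of the time frame: they are the ones whose input scales with it.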
I was under the impression that the amount of audit data creates only dead DB overhead except when you open the audit trails, and I have a faint recollection of having asked that somewhere, too.
In mike2.2 you wrote that, at the time, you did not see an audit trail with that number of attributes and that amount of data making a difference in performance.