Hello Amit,
Following are the parameters currently available in the script to fine-tune it.
Parameters:
retention_days: Number of days to retain execution job records. The default is 90 days.
time_limit: Number of seconds after which no new command is executed. The procedure
            stops when the time limit is reached and the last command (started
            before the time limit) has completed. The default is no time limit (null).
time_delay: Number of seconds to delay between each delete command. The default is
            null (no delay).
delete_chunk_size: Maximum number of rows to delete per command. The default is
            10000 rows.

Example:
The following example purges jobs older than 120 days, with a time limit so that no
new command is executed after 3600 seconds (1 hour):
exec sp_purge_execution_jobs @retention_days=120, @time_limit=3600
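
For reference, here is a hedged example combining all four parameters (the values are illustrative, not recommendations): purge jobs older than 120 days, stop issuing new delete commands after one hour, pause 2 seconds between deletes, and delete at most 5000 rows per command.

exec sp_purge_execution_jobs @retention_days=120, @time_limit=3600, @time_delay=2, @delete_chunk_size=5000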
Note: When using this purge procedure on large databases, the number of days purged within the
time limit will vary depending on the volume of events. As an example, this stored procedure,
tested on a large customer database, purged 5 days of records in one hour.
However, I am not clear on how you plan to correlate execution time with the amount of data removed. In the example above, if you let the query run for 10 hours while the database is performing well, it may remove X rows; if you re-ran the same purge at peak time, it may remove Y rows, and I don't find the two comparable.
There are various ways to look at this. You can compare the size of the table before and after the purge, you can fine-tune delete_chunk_size (which controls how many rows are deleted per command), or you can derive an offset from the table marking the point up to which you want to remove records and delete up to that point.
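
For the pre/post comparison, a minimal sketch (assuming the job records live in a table named dbo.execution_jobs; substitute the actual table name):

-- Capture size and row count before the purge (sp_spaceused is a standard SQL Server procedure)
exec sp_spaceused 'dbo.execution_jobs'
exec sp_purge_execution_jobs @retention_days=120
-- Capture again after the purge; the difference is the data removed by this run
exec sp_spaceused 'dbo.execution_jobs'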
Let me know if the above helps. If not, are we looking for the number of rows removed per purge run as a metric?
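
If a row count is what you need, one hedged approach (again assuming a dbo.execution_jobs table with a creation-date column such as created_date; both are hypothetical names) is to count the rows that fall outside the retention window before running the purge:

-- Hypothetical table and column names; adjust to the actual schema
select count(*) as rows_to_purge
from dbo.execution_jobs
where created_date < dateadd(day, -120, getdate())

Running this immediately before a purge with @retention_days=120 gives the approximate number of rows that run should remove.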
I will check with our DBA for further assistance on this.
Regards,
Saurabh