Release Automation

  • 1.  summary of purge script

    Posted Sep 18, 2018 06:47 AM

    If I run the purge script to purge offline data, how do I get a summary of how many records were deleted, and what are the start and end dates of the last records deleted?

     

    For example, say I wish to retain a certain number of days of data in the offline tables and delete anything older than that, and I have specified that the script should only run for 10 hours. How do I know how many days' worth of data was deleted before the script stopped? That way I would know that in 10 hours I could delete, say, 30 days' worth of data, and I could plan how many more runs of the purge script I need.

     

    Best regards,

    Amit Joshi



  • 2.  Re: summary of purge script

    Broadcom Employee
    Posted Sep 19, 2018 12:30 AM

    Hello Amit,

    The following parameters are currently available with the script to fine-tune it:

    Parameters:
    retention_days: Number of days to retain execution job records. The default is 90 days.
    time_limit: Number of seconds after which no command is executed. The procedure will stop when the time limit is reached and the last command (started before the time limit) is complete. The default is no time limit (null).
    time_delay: Number of seconds to delay between each delete command. The default is null (no delay).
    delete_chunk_size: Maximum number of rows to delete per command. The default is 10000 rows.

    Example: The following purges jobs older than 120 days, with a time limit so that no command is executed after 3600 seconds (1 hour):

    exec sp_purge_execution_jobs @retention_days=120, @time_limit=3600

    Note: When using this purge procedure on large databases, the number of days purged within the time period will vary depending on the amount of events; as an example, this stored procedure, tested on a large customer database, purged 5 days in one hour.
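
    For completeness, here is a sketch of a call that combines all four parameters (assuming the other two parameters follow the same @name convention shown in the example above; the values themselves are only illustrative):

    -- Retain 90 days of records, stop issuing new delete commands after 10 hours (36000 seconds),
    -- pause 5 seconds between delete commands, and delete at most 5000 rows per command.
    exec sp_purge_execution_jobs @retention_days=90, @time_limit=36000, @time_delay=5, @delete_chunk_size=5000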

     

    However, I am not clear on how you would correlate execution time with the amount of data removed. In the example above, if you let the query run for 10 hours while the database is performing well, it may remove X rows; if you re-ran it at peak time, it may remove only Y rows, so the two runs are not really comparable.

     

    There are various ways to look at this: you can compare the size of the table before and after the purge; you can fine-tune delete_chunk_size, which controls how many rows are deleted per command; or you can determine an offset in the table up to which you want to remove data and delete up to that point.
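
    As a rough sketch of the first approach (the table name below is only a placeholder for whichever offline/execution table you are purging; substitute your actual table name):

    -- Row count before the purge (placeholder table name)
    select count(*) as rows_before from offline_execution_jobs;

    -- Run the purge with your chosen limits
    exec sp_purge_execution_jobs @retention_days=120, @time_limit=36000;

    -- Row count after the purge; the difference is the number of rows removed in this run
    select count(*) as rows_after from offline_execution_jobs;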

     

    Let me know if the above helps. If not, are you looking simply for the number of rows removed in a purge run?

     

    I will check with our DBA to assist on this.

     

    Regards,

    Saurabh



  • 3.  Re: summary of purge script

    Posted Sep 19, 2018 02:28 AM

    When the script starts, it indicates from what date onwards it is purging the records from a specific table. Is there a way to know the last day/date for which data was purged before the script halted because the time limit was reached?



  • 4.  Re: summary of purge script

    Broadcom Employee
    Posted Oct 03, 2018 02:04 AM

    Hi Saurabh (jaisa05),

     

    Can you answer Amit's question?

    If you have any update from DBA, please share that.

     

    Thanks

    Yas



  • 5.  Re: summary of purge script

    Broadcom Employee
    Posted Oct 03, 2018 02:46 AM

    Hello Amit,

    I have not yet received an update from our product DBA, as he is quite occupied with other priority work. However, I did get a chance to discuss this with other people with expertise in the area, and the consensus was not to build such a feature into the script, as it would be tricky; the information you need can be obtained via the reporting features that most databases come with, where you can select the purge process for reporting.

     

    For example: Purge Report (Oracle Work in Process) 

     

    I would request that you check the above to see if it provides the details you are looking for. If not, please open a ticket with this information request, as I suspect it may require some effort to modify the current design of the queries, and we will try to pursue it on the ticket.
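
    In the meantime, if all you need is the date the purge reached before stopping, a quick manual check may be enough (a sketch only; the table and column names below are placeholders for your actual offline tables):

    -- After the purge stops, the oldest remaining record shows roughly how far the purge got
    -- before the time limit was reached; everything older than this date has been removed.
    select min(creation_date) as oldest_remaining_record from offline_execution_jobs;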

     

    Regards,

    Saurabh