Automic Workload Automation

  • 1.  Previous JOBS dependency

    Posted Sep 26, 2018 07:06 PM

    Hi,

If you have two jobs in a workflow, JobA and JobB, with JobB dependent on JobA, then when you go into the properties for JobB you can set up the states in the dependency tab. Generally I would set it to "All states must match" and ANY_OK, else Block.

I have a requirement that JobB is dependent on JobA, but not on JobA finishing to completion. JobA runs all day. JobB only has to wait for JobA to be in a running state, then it can start. Since JobA is still running, even if I leave the dependency state of JobA blank, JobB will never start. My guess is that a previous job must finish (success or failure) before its status is evaluated?

    Is there a way around this?

     

    Cheers

    Ben



  • 2.  Re: Previous JOBS dependency

    Posted Sep 26, 2018 07:51 PM

    Hey Ben,

     

Yes, you are correct: if they are on the same path in the workflow, JobB will have to wait until JobA is finished.

     

I would look into making it a parallel task and using Task Properties -> Preconditions on JobB with a 'Check activities' condition. With that you can check whether JobA is in a running state.

     

Maybe something like this. You can use an alias as well so you know it's specific to this workflow:

     

     

    Regards,

     

    Michael



  • 3.  Re: Previous JOBS dependency

    Posted Sep 26, 2018 08:07 PM

There are definitely ways to handle this, although it might depend on what type of agent (Windows vs. unix) we're dealing with.

1. Update job A so that you're kicking off the process in the background.  In unix, this would mean appending an "&" to the end of the command.  It's a little trickier to do in Windows, but you can read up on the "start" command.
    2. Replace job A with a task that checks to see if that process is running.  (You'd have to take care of actually kicking off job A through some other workflow.)  In unix, you could maybe use some sort of 'ps -ef | grep (job A process)'-type command to see if it's running.  Again, it'd be a bit trickier in windows, but probably possible.
    3. Have a separate workflow that takes care of kicking off Job A at whatever interval is appropriate.  Right before kicking off Job A, create some temporary flag file that we can use to indicate that Job A is running (e.g., "job_a_is_running.txt").  Once Job A completes, delete that flag file.  Now, in your new workflow, replace Job A with a file event, configured for "Until first hit" and using whatever polling interval you like, that is looking for that flag file.
    4. Replace job A with a job (or script) that does an activate_uc_object on job A (or a workflow containing job A).  Once this "replacement" task finishes, it means job A has been kicked off.  (You may still want to add a delay before running job B, just to make sure job A actually had sufficient time to start.)  Note that this is not necessarily the best solution, because successfully activating a workflow or job is not the same thing as confirming that it actually ran properly.
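Option 3 can be sketched as a small unix wrapper script around Job A. This is just an illustration of the flag-file idea, not anything from the thread: the flag path and the placeholder command are assumptions, and on Windows the same pattern would work with a batch file.

```shell
#!/bin/sh
# Hypothetical wrapper for Job A (option 3 above): keep a flag file on
# disk for as long as Job A is running, so the other workflow's file
# event ("Until first hit") can detect that Job A has started.

FLAG="/tmp/job_a_is_running.txt"   # flag file the file event polls for

touch "$FLAG"                      # signal: Job A is now running

# The real long-running Job A command would go here; 'sleep 1' is only
# a stand-in so the sketch is runnable.
sleep 1

rm -f "$FLAG"                      # signal: Job A has finished
```

The file event in the new workflow then replaces Job A and simply waits for `/tmp/job_a_is_running.txt` to appear before releasing Job B.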


  • 4.  Re: Previous JOBS dependency

    Posted Oct 30, 2018 03:55 PM

    Hi BenSumner612127 

     

    Was the information provided by Michael or Daryl helpful in answering/resolving your question?  Did you end up picking a method?

     

If either of the comments provided by Michael or Daryl helped resolve your question, please use the "Mark Correct" button on their comment, or let us know.

     

    Thanks,
    Luu