Performance benchmarking

Discussion created by DanielGates628573 on May 17, 2018

Automic Web Interface 12.1.0.HF02-dev-feature-12.1.0-HF02-67568


We are currently developing and running workflows on a new environment.  Apart from this 


Is there anything else we can run/check to see how the environment performs under stress and if there is anything we can change to aid performance?


I have downloaded this as well.

Automic Download Center - Performance Index 


I have put together a workflow that executes 1,000 jobs in parallel (slight overkill, I think). I ran it in a loop to see whether the timings degrade, but they seemed fairly consistent across executions: approximately 3 minutes each.
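To put a number on "fairly consistent", one option is to record each loop iteration's runtime and check the spread and drift. A minimal sketch in Python; the sample runtimes below are illustrative placeholders around the 3-minute mark, not my actual measurements:

```python
import statistics

def runtime_spread(runtimes_s):
    """Summarise per-iteration workflow runtimes (in seconds)."""
    mean = statistics.mean(runtimes_s)
    stdev = statistics.stdev(runtimes_s)
    # Coefficient of variation: under ~10% suggests stable timings across runs.
    cv = stdev / mean
    # Simple degradation check: is the last run notably slower than the first?
    drift = (runtimes_s[-1] - runtimes_s[0]) / runtimes_s[0]
    return {"mean_s": round(mean, 1), "cv": round(cv, 3), "drift": round(drift, 3)}

# Illustrative runtimes (~3 minutes each) from looping the 1,000-job workflow:
print(runtime_spread([181.0, 178.5, 183.2, 180.4, 179.9]))
```

A low coefficient of variation and near-zero drift would back up the impression that repeated runs are not degrading.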


I then executed five of these workflows in parallel (5,000 jobs) and monitored the data in the EH table while they were running. From what I can see, the number of active jobs peaked at about 3,000, but the execution time for each job rose to about 3 minutes 30 seconds, and the overall runtime was about 13 minutes. I am guessing there is some sort of database contention? Is it better to put a limit on the number of active jobs (if that is possible)? Would that speed up overall performance, or is it best just to let Automic figure out how to deal with the job backlog?
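One way to read those numbers: although individual jobs slowed down, overall throughput still went up, just nowhere near five-fold, which is what makes me suspect contention somewhere. A quick back-of-envelope check using my rounded timings (so the figures are only indicative):

```python
def throughput_jobs_per_min(jobs, runtime_min):
    """Jobs completed per minute over the whole run."""
    return jobs / runtime_min

single = throughput_jobs_per_min(1000, 3)     # one workflow, ~3 min total
parallel = throughput_jobs_per_min(5000, 13)  # five workflows, ~13 min total

# ~333 vs ~385 jobs/min: roughly a 15% gain for 5x the offered load,
# consistent with a shared bottleneck (database, queue, etc.).
print(round(single), round(parallel))
```

If limiting active jobs keeps per-job latency near the uncontended 3-minute figure without dropping throughput below that ~385 jobs/min mark, the cap would be worth keeping.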


Again, I know this is rather generic, and like a lot of performance/tuning questions there will be many different answers.