IT Process Automation

PAM with Netscaler

  • 1.  PAM with Netscaler

    Posted Nov 17, 2016 02:09 PM

    Has anyone used NetScaler as the load balancer for a clustered PAM setup?

    Do we absolutely need a load balancer?

    Can a Domain Orchestrator distribute workload to other Orchestrators by itself?



  • 2.  Re: PAM with Netscaler
    Best Answer

    Broadcom Employee
    Posted Nov 17, 2016 02:46 PM

    Yes, a load balancer is currently required to set up a clustered Orchestrator.

    Currently we have tested and provide basic configuration examples for Apache, NGINX, and F5.

    I believe NetScaler has been set up and used to load balance Orchestrators, but I do not have any details on the required configuration.
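    For illustration only, here is a rough sketch of what a basic NetScaler configuration for three Orchestrator nodes might look like, in standard NetScaler CLI syntax. All names, IP addresses, and the port 8080 below are assumptions for the example, not values from a tested configuration:

        # Hypothetical sketch -- all names, IPs, and port 8080 are assumptions.
        # Define each PAM Orchestrator node as a backend server.
        add server pam_node1 10.0.0.11
        add server pam_node2 10.0.0.12
        add server pam_node3 10.0.0.13

        # Wrap each node in an HTTP service on the assumed Orchestrator port.
        add service svc_pam1 pam_node1 HTTP 8080
        add service svc_pam2 pam_node2 HTTP 8080
        add service svc_pam3 pam_node3 HTTP 8080

        # Virtual server fronting the cluster, with source-IP persistence,
        # since PAM requires sticky sessions (see the replies further down).
        add lb vserver vs_pam HTTP 10.0.0.100 80 -persistenceType SOURCEIP

        # Bind the node services to the virtual server.
        bind lb vserver vs_pam svc_pam1
        bind lb vserver vs_pam svc_pam2
        bind lb vserver vs_pam svc_pam3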



  • 3.  Re: PAM with Netscaler

    Posted Nov 17, 2016 03:20 PM

    Okay. Thank you.

    And can we configure a master Domain Orchestrator with slave Domain Orchestrators without a load-balancing solution?

    Would the master then distribute the workload to its slaves?



  • 4.  Re: PAM with Netscaler

    Broadcom Employee
    Posted Nov 17, 2016 03:24 PM

    All cluster configurations require a load balancer. 



  • 5.  Re: PAM with Netscaler

    Posted Nov 17, 2016 03:39 PM

    Okay okay.

    And what is the purpose of the slave and master Domain Orchestrators?



  • 6.  Re: PAM with Netscaler

    Broadcom Employee
    Posted Nov 17, 2016 03:47 PM

    These terms no longer really apply to Orchestrator clusters. In the past, the first Orchestrator you installed was the master and was the only node that could perform certain actions.

    After 4.2 SP2, 'master' simply refers to the first of your cluster nodes that is started. Whichever node in a cluster is started first will assume the 'master' role.



  • 7.  Re: PAM with Netscaler

    Posted Nov 17, 2016 04:00 PM

    Thank you, Michael.



  • 8.  Re: PAM with Netscaler

    Posted Nov 30, 2016 08:04 AM

    Our cluster environment is now working with NetScaler.

    I have a question, though.

    When you assign a pending action in Catalog with PAM, the process stays in a waiting state. When the pending action is completed, Catalog sends a message to PAM to continue.

    What will happen if the load balancer sends this message to an Orchestrator node that is not waiting for it?

    Thanks



  • 9.  Re: PAM with Netscaler

    Broadcom Employee
    Posted Nov 30, 2016 09:38 AM

    We do have a requirement for sticky or persistent sessions to be enabled at your load balancer, but if I understand the update from Catalog correctly, it is a generic update for 'ROID = 12345' or similar and should work regardless of which node receives it.



  • 10.  Re: PAM with Netscaler

    Posted Nov 30, 2016 09:40 AM

    Makes sense.

    Yes, we configured persistent sessions on the NetScaler. But if a pending action takes two weeks to complete, the NetScaler does not hold sessions for that long.
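    For reference, the persistence timeout on a NetScaler lb vserver is set in minutes, and on the builds I have seen it is capped well below two weeks (1440 minutes, i.e. 24 hours). A sketch using the hypothetical vserver name from the example above:

        # Persistence timeout is in minutes and capped at 1440 (24 hours),
        # so persistence alone cannot cover a pending action that waits for
        # weeks; the generic ROID-style update described above covers that case.
        set lb vserver vs_pam -persistenceType SOURCEIP -timeout 1440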

     

    I'll keep you updated.



  • 11.  Re: PAM with Netscaler

    Broadcom Employee
    Posted Oct 13, 2017 10:21 AM

    Pier,

    Can you share what you had to do on the NetScaler to make it work with the PAM nodes? Did you convert the iRules files to work with NetScaler?

    Mike



  • 12.  Re: PAM with Netscaler

    Posted Oct 16, 2017 01:21 PM

    I do not know what an iRules file is, but we did not do anything fancy to get it working.

    I asked our NetScaler team for an "entry" on the NetScaler, load balancing across the 3 PAM nodes, with sticky sessions and a health check.

    We associated the NetScaler entry's IP with a DNS name, pam.domain.local.

    We used this DNS name in the PAM setup as the load balancer name.

    We used Node1, Node2, and Node3 for the node names. The nodes were installed using the Install Orchestrator option in the GUI.

    Everything has been working fine since November 2016.
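    In NetScaler CLI terms, the health-check and DNS pieces of that setup might look roughly like the following. The monitor name and the /itpam probe path are assumptions for the example, not details from the description above:

        # Hypothetical health check for the three services from the earlier
        # sketch; /itpam is assumed to be the PAM web entry point.
        add lb monitor mon_pam HTTP -respCode 200 -httpRequest "GET /itpam"
        bind service svc_pam1 -monitorName mon_pam
        bind service svc_pam2 -monitorName mon_pam
        bind service svc_pam3 -monitorName mon_pam

        # pam.domain.local resolves in DNS to the vserver IP, and that same
        # DNS name is entered as the load balancer name during PAM setup.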

     

     

    Pier



  • 13.  Re: PAM with Netscaler

    Broadcom Employee
    Posted Oct 16, 2017 01:32 PM

    Thanks, Pier.