Layer7 API Management

CA API Gateway - Cluster merge 

May 23, 2016 06:47 AM

1          Executive Summary

The Customer is going to merge two production clusters. CA Services agreed to create a short summary of the steps needed to successfully accomplish the task. It is advised to have CA personnel oversee the actual merging process in case unforeseen circumstances occur.

 

 

2          Basic overview of current state and goals

Based on the short presentation of the CUSTOMER CA APIM solution and a phone call with the *** consultant YYY YYY, the following information is summarized, in short:

  • Current Prod configuration
  • Goals of cluster merging
  • Prerequisites
  • Notes

 

2.1             Current prod configuration

  • Cluster of 2 nodes (VM appliance -> cluster 1)
    • Node – primary DB (-> n1)
    • Node – secondary DB (-> n2)

 

  • Cluster of 2 nodes (hardware appliance -> cluster 2)
    • Node – primary DB (-> n3)
    • Node – secondary DB (-> n4)

 

2.2             Goal of the cluster merge

  • Cluster 2 is not actively “used”, so it is wise to put its capacity to work by merging it into cluster 1.
  • The primary node of the final cluster will be n1.
  • The secondary node of the final cluster will be n3.
  • The third and fourth nodes of the final cluster will be n2 and n4.

 

2.3             Prerequisites

  • The network environment must allow communication between nodes on all required ports (see the port-sweep sketch below).
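
To verify this prerequisite, you can run a simple port sweep from each node before starting; a minimal sketch, assuming nc (netcat) is available and using an illustrative port list that should be adjusted to your Gateway version (e.g. 3306 for MySQL replication, 2124 for inter-node communication, 8443/9443 for Policy Manager):

# run from each node; the hostnames and the port list are assumptions – substitute your own
for host in n1 n2 n3 n4; do
  for port in 3306 2124 8443 9443; do
    nc -z -w3 "$host" "$port" && echo "$host:$port open" || echo "$host:$port BLOCKED"
  done
done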

 

2.4             Notes

  • Cluster 2 is a “clone” of Cluster 1
  • Cluster 1 settings are preserved
  • There is no need to worry about policies, as they are identical -> no need for import
  • Cluster 2 has a few cluster-wide properties, which need to be added after the merge
    • Backend endpoints ->
      modification of policies after the merge
    • Certificates and private keys (if used) specific only to cluster 2 need to be imported into the final cluster (if they do not already exist)
    • Others (it is suggested to review whether there are other properties, JDBC connections etc. to be set/changed/configured)
  • Ensure the gateways are "identical" (assertions, users...)
  • For a quick-and-dirty cleanup of audits (without an audit purge script), you can use the following process:
  • -----------------------------------------------------------------

SET FOREIGN_KEY_CHECKS=0;
SET GLOBAL FOREIGN_KEY_CHECKS=0;
truncate table audit_main;
truncate table audit_admin;
truncate table audit_detail;
truncate table audit_detail_params;
truncate table audit_message;
truncate table audit_system;
SET FOREIGN_KEY_CHECKS=1;
SET GLOBAL FOREIGN_KEY_CHECKS=1;

  • -----------------------------------------------------------------
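
Before and after the truncation, the audit table sizes can be sanity-checked; a minimal sketch, assuming the Gateway database uses its default name ssg (verify on your appliance):

[root@host ~]# mysql -e "SELECT table_name, table_rows FROM information_schema.tables WHERE table_schema='ssg' AND table_name LIKE 'audit%';"

Note that table_rows is only an estimate for InnoDB tables, but it is enough to confirm that the truncation took effect.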

 

 

3          Cluster Merge steps

  • Stop replication on both clusters and each node

1) Back up the primary database. For safety, back up the secondary database as well. Run
   this command on both nodes:
   [root@host ~]# mysqldump --all-databases | gzip > ~/all.sql.gz
2) Stop slave replication on both nodes, both clusters.
   [root@host ~]# mysqladmin stop-slave
3) Reset the master/slave configuration on both nodes, both clusters.
   [root@host ~]# mysql -e "reset master"
   [root@host ~]# mysql -e "reset slave all"
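
Optionally, verify that the backup archive is readable and that the replication configuration is really gone (show slave status returns an empty result after reset slave all):

   [root@host ~]# gunzip -t ~/all.sql.gz && echo "backup archive OK"
   [root@host ~]# mysql -e "show slave status \G"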

 

  • Stop the gateway service on all nodes (via the ssg menu)
  • Drop nodes n2, n3 and n4 via the ssg menu
    • 2) Display configuration menu > 5) Delete the gateway
      • Delete the DB too
  • Configure the network on each node via the ssg menu (if necessary)
    • 1) Configure networking and system time settings
      • Nodes n1, n3, n2, n4
      • /etc/hosts files -> if necessary
  • Configure server IDs on nodes n1 and n3

The master (n1) must have the value server-id set to 1 (server-id=1) and the slave (n3) set to 2 in the /etc/my.cnf file (from the previous configuration, both are set to 1). On n3:

1) # service ssg stop

2) # vi /etc/my.cnf and set server-id=2

3) # service mysql restart

4) # service ssg start

5) re-run the configuration of replication – i.e. create_slave.sh (see the next steps)
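
Step 2 can also be scripted instead of editing the file by hand; a minimal sketch for n3, assuming server-id=1 currently sits on its own line in /etc/my.cnf:

# replace the existing server-id line, then verify the result
sed -i 's/^server-id=1$/server-id=2/' /etc/my.cnf
grep '^server-id' /etc/my.cnf   # should print: server-id=2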

 

 

  • Configure replication (on n3, answer YES to cloning the DB from n1 – otherwise errors will occur)
    • add_slave_user.sh on n1, n3
    • create_slave.sh on n1, n3
    • Check the replication status (show slave status \G)
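
A quick health check for the last step, run on the slave (n3); the grep only narrows the output of the documented command to the relevant fields:

[root@host ~]# mysql -e "show slave status \G" | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'

Both *_Running values should read Yes, and Seconds_Behind_Master should be 0 (or close to it).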

 

  • If any problem occurs:
    • Restart replication (so it starts from a new, clean point). Configure it on the primary node, to remove the possibility
      that, when running it on the 2nd node, you accidentally “delete” the primary DB by replicating the 2nd DB…
      • : /opt/SecureSpan/Appliance/bin/restart_replication.sh ([SET ME] = Master node – i.e. the “other” one)
    • or follow the help on:

 

 

  • Configure the MySQL DB on node n1
    (add the “secondary” node to the cluster)
    • ssg menu 2) > 3) Configure the L7 Gateway -> change the failover connection
    • Failover is going to be set to the n1+n3 pair
    • Check the configuration via the ssg menu
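
To double-check that the failover connection was written out, the node configuration file can be inspected directly; a sketch assuming the default appliance path (treat the exact property keys as illustrative – they vary between Gateway versions):

[root@host ~]# grep 'node.db' /opt/SecureSpan/Gateway/node/default/etc/conf/node.properties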

 

  • ADD a new processing node to the cluster (configure
    the MySQL DB on nodes n3, n2, n4 (NOTE: n1 is primary, n3 is failover))
    • ssg menu 2) > 3) >
      • For n3, during the node creation process, n1 is used as the primary
      • n2 + n4 are going to use the n1 DB as primary and n3 as failover
      • Check the configuration via the ssg menu

 

  • DO A FULL RESTART
    • Check MySQL replication
  • Check the cluster status
    • Open Policy Manager > View > Dashboard > Cluster Status
      • All 4 nodes should be visible
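
Besides the dashboard, cluster membership can also be checked directly in the database; a minimal sketch, assuming the default ssg schema and its cluster_info table (verify the table name on your version):

[root@host ~]# mysql ssg -e "SELECT * FROM cluster_info \G"

One record should be returned per node, i.e. all 4 nodes should be listed.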
