We're fairly new to the API Gateway and just now getting it rolling. We're having a heck of a time finding any best practices or guidance on running the gateway across multiple data centers. CA support seems to act like this isn't a good idea and that we should run only one active data center.
A single active data center doesn't scale well and gives users suboptimal performance. Our normal setup is, at minimum, one server location on the west coast and one on the east coast, run active-active with DNS resolving users to the nearest region for best performance.
I've looked through the community site, which had some references, but nothing very detailed. The closest was a mention of scripting it via GMU in the "Replicating policies across many gateway clusters" post linked below.
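For what it's worth, here's the rough shape of what I imagine the GMU scripting would look like: export policy from the source cluster, then import it into each other cluster. This is only a sketch in dry-run form; the hostnames are hypothetical and the exact GMU subcommand/flag names are assumptions to verify against your GMU version's help output.

```shell
#!/bin/sh
# Sketch: scripted policy replication across clusters via GMU (dry-run).
# Hostnames are hypothetical; flag names are assumptions -- check `GMU --help`.
GMU=/opt/SecureSpan/Gateway/GatewayMigrationUtility.sh
SOURCE=gw-west.example.com            # hypothetical source cluster
TARGETS="gw-east.example.com"         # hypothetical target cluster(s)
EXPORT=policy-export.xml

# DRY_RUN=1 (the default) just prints each command for review;
# set DRY_RUN=0 to actually execute them.
DRY_RUN=${DRY_RUN:-1}
run() {
  if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi
}

# Export policy from the source cluster...
run "$GMU" migrateOut --host "$SOURCE" --dest "$EXPORT"
# ...then import the bundle into every target cluster.
for t in $TARGETS; do
  run "$GMU" migrateIn --host "$t" --source "$EXPORT"
done
```

The appeal of this approach is that only policy flows between regions on a schedule you control, so there's no cross-region database link to worry about; the open question is runtime state (sessions, OTK tokens), which GMU doesn't replicate.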
Our setup minimum:
- Virtual appliance using built-in MySQL
- 2x Data Centers (possibly more in future)
- The OAuth Toolkit (OTK) will be installed as well
Initially we thought we'd run one active database in each region, staying within the two-node master-master replication limit. But the person at support didn't think this was a good idea due to inter-region latency (~100-200 ms). And of course there's the possibility of adding more than two data centers later.
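If anyone has tried the cross-region master-master route, I assume it would look something like the standard two-master MySQL setup below, with interleaved auto-increment ranges so the two writers can't hand out colliding IDs. This is just a my.cnf sketch for two hypothetical nodes, not something from the CA docs:

```ini
# Node A (west), /etc/my.cnf -- hypothetical values
[mysqld]
server-id                = 1
log-bin                  = mysql-bin
auto_increment_increment = 2   # total number of masters
auto_increment_offset    = 1   # unique per master: A gets 1,3,5,...

# Node B (east) would mirror this with:
#   server-id                = 2
#   auto_increment_offset    = 2   # B gets 2,4,6,...
```

Even with the IDs sorted out, my understanding is that replication is asynchronous, so the ~100-200 ms latency shows up as replication lag rather than blocking writes; the support concern is presumably about conflicting writes and stale reads during that window.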
Does anyone actually have this deployed in a multi-data-center setup? If it's scripted, would there be problems with replication delays for things like session information? Any gotchas, known problems, or specific tools we should be looking at?
I'm open to just about anything, even some helpful pointers to reference documentation (I can't seem to dig up many good official docs; CA just pushes us toward professional services).
Posts I've been referencing thus far:
- Replicating policies across many gateway clusters (closest I could find to what we need, but I'm not sure)
- Disaster recovery (the doc, of course, wants folks to pay for professional services)