ca.portal.admin

Re: Re: IDMS Data Replication Via TCP/IP

Discussion created by ca.portal.admin on Nov 9, 2008
This probably doesn't help your situation any, but I can tell you what
we did.

We are trying to get off the mainframe (and IDMS).
We need to keep many databases synchronized in real time throughout
the day while development continues to retire IDMS applications and
replace them with brand new non-IDMS systems.

We wrote our own message exchange system.
It handles both PUBLISH and SUBSCRIBE messages.
IDMS can be updated when a change originates on another database,
and another database can be updated when a change originates in IDMS.

On the PUBLISH end:
1. An update occurs in IDMS.
2. A database procedure gets called for the record and writes a queue
record to any one of over a hundred queues.
3. A DC-COBOL program fires up as the result of a queue trigger (threshold of 1).
4. The DC-COBOL program re-obtains the record (or, in the case of an ERASE,
captures the key from the queue record) and writes the appropriate fields
for the record to the message exchange database on IDMS.
5. A TCP/IP socket program idles, looking for PUBLISH records to send to the
server (a rough sketch follows this list).
6. The server routes the message to the appropriate system and applies the
update to the target database.
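
For illustration only, here is a rough sketch of what the socket idler in
step 5 does. It's Python with an invented endpoint, framing, and names; the
real programs are DC-COBOL working against the message exchange database, so
treat this as a picture of the flow, not our code.

    import json
    import socket
    import time

    SERVER = ("replication-server.example.com", 5050)   # hypothetical endpoint

    pending_publishes = []   # stand-in for PUBLISH records waiting to be sent

    def send_publish(message):
        # Ship one PUBLISH record to the server; newline-delimited JSON framing is assumed.
        with socket.create_connection(SERVER, timeout=10) as conn:
            conn.sendall((json.dumps(message) + "\n").encode("utf-8"))

    def idle_loop(poll_seconds=5):
        # Idle, waking up periodically to push any waiting PUBLISH records.
        while True:
            while pending_publishes:
                send_publish(pending_publishes.pop(0))   # the real program erases the record after a good send
            time.sleep(poll_seconds)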

On the SUBSCRIBE end:
1. A TCP/IP socket program sends a message from the open-systems side to the
message exchange database on IDMS.
2. Another non-socket idler program looks for SUBSCRIBE messages to
process.
3. The idler program reads the message tag to figure out what kind of message
it is.
4. The idler attaches the appropriate task to process the message and update
IDMS (see the sketch after this list).
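
Again purely for illustration (Python, with made-up tags and handler names),
steps 3 and 4 boil down to a tag-to-handler lookup; in our system the
"handler" is a task the idler attaches to apply the update to IDMS.

    def apply_customer_update(message):
        pass   # would apply the change to the customer records in IDMS

    def apply_order_update(message):
        pass   # would apply the change to the order records in IDMS

    # Tag-to-handler table; the tags here are invented for the sketch.
    HANDLERS = {
        "CUST-UPD": apply_customer_update,
        "ORDR-UPD": apply_order_update,
    }

    def process_subscribe(message):
        handler = HANDLERS.get(message["tag"])
        if handler is None:
            raise ValueError("unknown tag: " + message["tag"])   # a candidate dead message
        handler(message)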

The above is a simplification of what happens.
There are called programs that handle writing and reading messages, and a
lot of tables are involved so that we don't have to recompile / reassemble
every time there is a change or a new message.
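
To give a feel for the table-driven part (this is just one way to picture it,
not our actual table layout), the field layouts live in data rather than in
the programs, so a new or changed message means new table rows instead of a
recompile:

    import csv

    def load_message_table(path):
        # Assumed table columns: tag, field, length -- one row per field, in order.
        table = {}
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                table.setdefault(row["tag"], []).append((row["field"], int(row["length"])))
        return table

    def parse_payload(table, tag, payload):
        # Slice a fixed-length payload into named fields using the table entries.
        record, offset = {}, 0
        for name, length in table[tag]:
            record[name] = payload[offset:offset + length].rstrip()
            offset += length
        return record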

It took a while to develop (1 year +), and there was a lot of fine-tuning
effort along the way for the first couple of years.
Also, it is possible to end up with "Dead Messages" for any of a number
of reasons.
We have a process in place to handle those so we don't lose updates or let
the databases get out of sync.
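
The details of our dead-message process are specific to our shop, but the
general shape (sketched here with invented details) is: retry a failed
delivery a few times, then park the message with its error so nothing is
silently lost and we can reconcile later.

    import time

    dead_messages = []   # parked messages awaiting retry or manual review

    def deliver_with_retry(message, send, attempts=3):
        last_error = None
        for n in range(attempts):
            try:
                send(message)
                return True
            except OSError as err:
                last_error = err
                time.sleep(2 ** n)   # simple back-off between attempts
        dead_messages.append({"message": message, "error": str(last_error)})
        return False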

Thanks.
Jon Gocher

----- Original Message -----
From: "Govan, Hal (RET-DAY)" <Harold.Govan@REEDELSEVIER.COM>
To: <IDMS-L@LISTSERV.IUASSN.COM>
Sent: Thursday, November 06, 2008 11:35 AM
Subject: IDMS Data Replication Via TCP/IP


Hi Everyone:

I have a question for folks using IDMS TCP/IP under R16.0.

We are currently using IDMS TCP/IP to support web-based apps coming in
from WebSphere. We are contemplating doing selective data replication
to ORACLE and MS SQL Server databases using TCP/IP enhancements to
existing IDMS-DC applications.

Has anyone out there done this type of replication without the benefit
of a third-party product? If so, I would appreciate any information
you could share regarding your experiences.

TIA


Hal Govan
Senior Database Administrator
Reed Elsevier - Technology Services
harold.govan@reedelsevier.com
Phone: (937) 865-7820
Re: IDMS Data Replication Via TCP/IP
Now that the traffic on this topic has died down, and many folk are packing
for Vegas...

A few unsolicited thoughts on the matter of home-baked vs. store-bought
replication mechanisms.

First the disclaimer: Run Right, LLC is a consulting firm, and while we
don't sell software, we (from time to time) DO provide services to firms
that do. Be aware that we are not unbiased, and take what follows for what
it's worth. Also note that we highly value the technical qualities of the
IDMS product line, and believe that businesses have a huge investment of
great value in the business logic embedded in their legacy IDMS
applications.

That said, there are situations where replication (and even migration) makes
sense. What follows is a generic discussion, applicable to many products
available commercially. This is not a commercial for a specific product or
products (not going to discuss any), more of a Public Service Announcement,
intended to point out things you should consider in approaching any
replication project.

Jon and his team are to be congratulated for crafting a solution that works
for his shop. Another shop where I have toiled for many years (henceforth
referred to as "Company 'A'") did something similar, but used embedded logic
in the applications and stored information in a database rather than using DB
Procedures and Queues. At Company A there are hundreds of thousands of IDMS
record images sent down the pipe to Oracle-land daily (maybe millions).
This solution has been in place for at least 10 years; the plumbing has
changed slightly over time, but the basic architecture remains. Key to all
this discussion is "works for his (or your) shop". What are you trying to
do? I want to focus this discussion on "Offloading Reporting".

There are many objectives a company may have in seeking a replication
strategy; offloading reporting to a cheaper platform than the mainframe
being a common one. I personally think you're going to have problems
arguing AGAINST the economics of offloading reporting cycles from the
mainframe, but that's not where THIS PARTICULAR discussion is going.

Once management has made this decision, the question becomes what is the
most effective means of replicating to a distributed platform; and aside
from functionality, performance and supportability, the issues of
time-to-implement and net CPU savings need to be part of the evaluation.
There are two major advantages commercial replication mechanisms will tend
to have over home-grown: you can put it in quicker (thus starting to save
cycles earlier), and you can save MORE cycles.

Why is this?

Most commercial solutions tend to be driven by JRNL data; home-grown ones tend
to be constructed using triggers and re-extraction from the original database
(as in Jon's and Company A's solutions).

So what?

Well, using Jon's solution as an example (and Company A is the same), each
time an update happens that needs to be replicated, TWO MORE IDMS UPDATES
must happen to effect this change... a trigger (QUEUE record or other) is
STORED, then eventually must be OBTAINED and ERASED. This is additional
locking, updating, journaling... something that a solution based on JRNL
extracts (either through realtime exits or JRNL post-processing) doesn't have
to incur. You also won't have to concern yourself with lock contention if
you're in a high-volume situation. My guess is that locking posed some serious
implementation/design issues for Jon and his team; I know it did at Company A.

On top of the tripling of the update overhead (the original update plus two
for the triggering), you have to OBTAIN the trigger/QUEUE record and
re-OBTAIN the record(s) in question. That's two updates and two retrievals
added for every change you're trying to replicate; at Company A's volumes
(hundreds of thousands of changes a day), that is hundreds of thousands of
extra updates and extra retrievals every day. This added overhead is a cost
that a JRNL-based approach avoids.
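
To make the contrast concrete, here is a grossly simplified sketch of the
JRNL-driven idea, in Python with a completely invented journal-record shape
(real JRNL decoding is far messier, as the next paragraph notes): read
after-images from the journal stream and emit replication messages without
ever touching the source database again.

    def journal_to_messages(journal_records):
        # journal_records: an iterable of already-decoded entries shaped like
        #   {"verb": "STORE" | "MODIFY" | "ERASE", "record": "CUSTOMER", "after": {...}}
        # The shape is invented for this sketch; real IDMS journal blocks need real decoding.
        for jrec in journal_records:
            if jrec["verb"] in ("STORE", "MODIFY", "ERASE"):
                yield {
                    "tag": jrec["record"] + "-" + jrec["verb"],
                    "data": jrec.get("after"),
                }
        # Note: no extra STOREs, OBTAINs, or ERASEs against the source database happen here.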

The downside of trying to construct a JRNL-based homegrown solution is that
decoding JRNL data is hideously messy, and you have some real challenges
pasting things together in a meaningful manner. It will require an advanced
skill set to construct and maintain, and while you may have people with that
expertise available, is this how you want them spending their time?

So... the message here is that IF your objective is workload offloading, especially in a
high-volume environment, there are definite advantages to commercial
solutions. You want to make sure you understand your shop's specific
objectives, so you can map them in a prioritized fashion against the various
commercial offerings vs. a DIY approach. And don't forget to factor in the
time-to-implement dimension (rolling your own, as Jon notes below, can take
some time).

End of PSA.

Don Casey
Run Right, LLC

P.S. Hi Jon! Linda says "Hi" too.
