I would like to know if anyone is using the XOMT/CMMT product with z/OS 1.7,
or with any z/OS release above 1.4.
If so, which release of XOMT/CMMT is installed?
IDMS Public Discussion Forum
IDMS-L@LISTSERV.IUASSN.COM
SMTP
IDMS-L@LISTSERV.IUASSN.COM
IDMS-L@LISTSERV.IUASSN.COM
SMTP
Normal
Normal
Re: Question: Why Severe Performance Degradation after Database Enlarge
Hi Bill,
Is this process running using an area sweep?
Does the degradation occur with all processes, or only with some of them?
Is there any part of your database that all processes need to use, such as
a "customer contract", for example?
I believe you have a problem in some navigation strategy. Try to find a
program using an area sweep, perhaps with an incorrect use of FIND CURRENT
record within the area.
Also try to find a common subroutine that uses an area sweep. If such a
subroutine is called many times by a program, you will see this kind of
problem.
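The repeated-area-sweep pattern described above can be sketched outside IDMS. This is a hypothetical Python model (all names are mine; none of this is IDMS DML) showing how total page reads grow as calls × area size, so a larger area multiplies the cost of every sweep that misses:

```python
# Hypothetical sketch (not IDMS code): why an area sweep inside a
# frequently-called subroutine hurts. Each call re-reads pages in the
# area, so total page reads grow as calls * area size.

def area_sweep(pages, wanted_key):
    """Full scan of the area: touches every page until the key is found."""
    reads = 0
    for page in pages:
        reads += 1
        if page == wanted_key:
            break
    return reads

def run_program(pages, calls):
    """A program that calls the sweeping subroutine once per record processed."""
    total_reads = 0
    for key in calls:
        total_reads += area_sweep(pages, key)
    return total_reads

# A 3x larger area roughly triples the I/O of every sweep that misses.
small_area = list(range(15_000))        # stand-in for a 15,000-track area
large_area = list(range(45_000))        # stand-in for the enlarged area
misses = [-1] * 100                     # keys not present: full sweep each call
print(run_program(small_area, misses))  # -> 1500000
print(run_program(large_area, misses))  # -> 4500000
```

The point is that the sweep itself may look cheap in any one call; it is the multiplication by call count that produces the 3x to 5x elongation.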
LuisH>
Brasil
=======================================
-----Original Message-----
From: IDMS Public Discussion Forum [mailto:IDMS-L@LISTSERV.IUASSN.COM] On behalf of Vohs, Bill
Sent: Wednesday, March 15, 2006 4:32 PM
To: IDMS-L@LISTSERV.IUASSN.COM
Subject: Question: Why Severe Performance Degradation after Database Enlarge
From: Bill Vohs, SC Data Center, Monroe, Wi
Date: March 15, 2006
Back in February I submitted the following.
_____________________________________________
From: Vohs, Bill
Sent: Wednesday, February 01, 2006 2:09 PM
To: 'idms-l@listserv.iuassn.com'
Subject: Question: Why Severe Performance Degradation after
Database Enlarge
From: Bill Vohs, SC Data Center, Monroe, Wi
Date: February 1, 2006
Subject: Enlarge/Expand IDMS Database
We are running:
z/OS 1.4
IDMS Release 16 SP02
EMC DMX2000 DASD
We need to enlarge a number of our IDMS databases, and every time we do, we
experience severe performance degradation: run times elongate by a factor
of 3 to 5.
Our first attempt was back in November. The approach at that time was to
UNLOAD and RELOAD to an enlarged area; the databases were enlarged by a
factor of 3. One database went from 15,000 tracks on a 3390 to 45,000
tracks; another went from 7,000 tracks to 21,000 tracks.
Our run times were:
    Before UNLOAD/RELOAD   11/09/2005    74 minutes
                           11/10/2005    46 minutes
    After UNLOAD/RELOAD    11/16/2005   174 minutes
                           11/1/2005    198 minutes
Our second attempt was January 29, 2006. The approach this time was an
EXPAND PAGE to the enlarged area; the databases were enlarged by a factor
of 2. One database went from page size 5064 to 10796 and from 15,000
tracks to 30,000 tracks; another went from page size 5064 to 10796 and
from 7,000 tracks to 14,000 tracks.
The run times are:
    Before EXPAND PAGE     01/26/2006    64 minutes
    After EXPAND PAGE      01/31/2006   209 minutes
                           02/01/2006   212 minutes
We have had an IDMS consultant review our work, and he says everything
looks great. Our z/OS and IDMS statistics show nothing out of the
ordinary. We are at a bit of a loss.
Might anybody have a suggestion for us?
------------------------------------------------------------------------
We have been fighting this problem off and on since then; we would try
something, it would look good, and then it would go sour on us. Finally
the hardware people got involved, and our DASD vendor, EMC, took a look at
the problem. EMC installed another 16 GB of DASD cache the weekend of
03/04/2006 (we had 16 GB to start with).
The jobs in question went from hours to minutes in run time. For example,
PIC530 ran 209 minutes on 01/31/2006 and 212 minutes on 02/01/2006; since
the cache install it has been running between 30 and 50 minutes. All other
jobs are seeing the same results.
Bill Vohs
SC Data Center
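A back-of-envelope model is consistent with the cache fix: if enlarging the database pushed the working set past the original 16 GB cache, the hit ratio drops and average I/O time multiplies. The numbers below are assumptions chosen for illustration, not measurements from this system:

```python
# Back-of-envelope sketch (assumed service times and hit ratios): how a
# drop in DASD cache hit ratio after enlarging a database can multiply
# run times on I/O-bound work.

def avg_io_ms(hit_ratio, cache_ms=0.5, disk_ms=8.0):
    """Average I/O service time given a cache hit ratio."""
    return hit_ratio * cache_ms + (1.0 - hit_ratio) * disk_ms

# Before the enlarge: working set fits the 16 GB cache well.
before = avg_io_ms(0.95)              # 0.875 ms per I/O
# After the enlarge: working set overflows cache, hit ratio collapses.
after = avg_io_ms(0.40)               # 5.0 ms per I/O
print(round(after / before, 1))       # -> 5.7
```

A 5x to 6x slowdown per I/O is in the same range as the observed 64-minute to 209-minute elongation, which is why doubling the cache (restoring the hit ratio) brought the jobs back.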
S-010 Errors in CICS and TSO
Hello All:
I have a client running IDMS 15.0 SP7; they use CICS and TSO to access
IDMS. They are receiving a fair number of S-010 RE-SIGNON aborts.
They have only one UCFLINE, so TSO users (DBA staff only) and CICS users
share the same terminal pool.
When the S-010 abort occurs, we see in the IDMS log two users signing off
the same terminal one second apart; there are no other messages.
I can't figure out how two different users are getting the same terminal.
Here is a cut-and-paste of our CICSOPT macro and our #UCFCICS macro; we do
allow multiple sign-on, and we also use the HSL Menu Facility.
CICSOPTS TITLE 'CA-IDMS/CICS OPTIONS MODULE'
GBLC &MODNAME
&MODNAME SETC 'CICSOPT'
CICSOPT CWADISP=120, X
OPSYS=OS390, X
CVNUM=110, X
SVC=NO, X
SYSCTL=SYSCTL, X
CICSLVL=41, X
MACLVL=NO, X
DL1MAC=NO, X
TIMEOUT=IMMEDIATE, X
TRANSID=IC12, X
PLT=YES, X
TPNAME=, X
DBNAME=, X
NODENAM=, X
OPTIXIT=YES, X
XA=YES, X
DEBUG=NO, X
DSECT=NO, X
EIBTRN=YES, X
HLPI=NO, X
PRINT=ON, X
ESCTBL=ETOTTBL, X
ESCNTHR=15,ESCPRH=16, X
ESVSSUB=ESVS
END
UCFCIC12 #UCFCICS OS=OS, X
COLOR=YES, X
LASTOUT=TASKEND, X
RESETKB=TASKEND
UCFFET12 #UCFUFT SYSTEM=CICS,NTID=DB12,MODE=PCONV,PTID=DP12
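One way two users could land on the same terminal is a check-then-take race on the shared pool. The following is a hypothetical Python model of that interleaving (the terminal and user names are invented, and this is not how UCF is actually implemented), just to make the suspected sequence concrete:

```python
# Hypothetical race on a shared terminal pool: two sign-ons each see the
# same terminal free before either marks it in use. All names invented.

free = ["DB12001"]          # one free terminal in the shared pool
assigned = {}               # terminal -> list of users signed on to it

term_a = free[0]            # sign-on A (TSO) sees DB12001 free
term_b = free[0]            # sign-on B (CICS) sees it free a moment later
assigned.setdefault(term_a, []).append("TSOUSER")
assigned.setdefault(term_b, []).append("CICSUSER")
free.remove(term_a)         # terminal is marked busy only after both took it

print(assigned)             # -> {'DB12001': ['TSOUSER', 'CICSUSER']}
```

If something like this is happening, it would explain why the log shows two users on one terminal signing off one second apart with no other messages: each side believes it owns the terminal until the second sign-on forces the re-signon abort.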
Re: Scheduler Journal Submission
Hi Steve,
The decision to use the WTO exit feeding CA-7 was made before my time. In
the last seven years we have not had any problems with the scheduler being
down when a journal or log archive was auto-submitted. If a journal job
abends for some reason (say, an operator cancel due to tape drive issues),
I submit the restart job manually and we force-complete the abended
journal/log offload.
Bob Wiklund
Tiburon Technologies
wiklund@tiburontech.com
623 594-6022
Re: Scheduler Journal Submission
We use the KISS principle: our non-scheduler journal and log jobs use a
JCL COND code at the end of the job, so if the offload step fails, a WTO
message informs the DBAs urgently. We use 4 active journals. The only time
we have had problems is when sustained journaling is much faster than
offloading (it usually takes about a minute to offload a full journal).
This has happened a few times in batch in various production CVs: the CV
becomes stop-start as each active journal fills and then becomes available
again. We have also seen it happen when TUNE INDEX is run.
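The stop-start behavior above amounts to a rate race: once sustained journal writing outruns offload, the free-journal headroom drains. A rough Python sketch with assumed sizes and rates (500 MB journals, 100 MB/min write rate, and 80 MB/min offload rate are made-up numbers, not measurements):

```python
# Rough capacity sketch (assumed numbers): when the sustained journaling
# rate exceeds the offload rate, journal space drains at the net rate and
# the CV stalls each time the active journal fills with nothing free to
# switch to.

def minutes_until_stall(total_capacity_mb, write_mb_min, offload_mb_min):
    """Minutes until all journal space is consumed; None if offload keeps up."""
    net = write_mb_min - offload_mb_min
    if net <= 0:
        return None                 # offload drains at least as fast as writes
    return total_capacity_mb / net

# e.g. 4 journals of 500 MB, batch writing 100 MB/min, offload at 80 MB/min:
print(minutes_until_stall(4 * 500, 100, 80))   # -> 100.0
```

So the system only stalls under *sustained* overload, which matches the observation that it shows up in heavy batch and during TUNE INDEX runs rather than in normal operation.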
Some of our sites write archive journals to disk in GDGs, so we have a
scheduled job look at journal-archive pack space, GDG limits, etc., and
email an alert to the DBA at lower percentages, and warn Ops to contact
the DBA at higher percentages.
Sam
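The monitoring job described above can be sketched as a simple two-tier threshold check. Everything here (the threshold percentages, the check names) is an assumption, since the post doesn't give the actual values used:

```python
# Sketch of a two-tier journal-archive monitor (assumed thresholds):
# email the DBA at a warning level; have Ops contact the DBA at a
# critical level.

def classify(pct_used, warn=70, critical=90):
    """Map a usage percentage onto the two alert tiers described above."""
    if pct_used >= critical:
        return "CRITICAL"   # Ops should contact the DBA
    if pct_used >= warn:
        return "WARN"       # email alert to the DBA
    return "OK"

checks = {
    "archive pack space": 82,   # percent of the journal-archive disk used
    "GDG generations":    95,   # generations used vs. the GDG limit
}
for name, pct in checks.items():
    print(f"{name}: {classify(pct)}")
```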