
Re: Re: Efficiency

Discussion created by ca.portal.admin on Jun 17, 2008
Hi Allen:

The answer to your question depends on what you consider "efficiency".
There is probably some via overflow in the set if the area is 83% full. You
are also likely to be incurring the overhead of space management activity,
which (by default) is triggered by pages that reach 70% fullness.

Expanding the area page size (XPAG) won't relocate any records, so you
would still have to read the same number of pages to walk the set; that
alone doesn't create any 'efficiency'.
An unload/reload into a larger area will relocate the records, and "tend"
to reduce overflows if the new area is larger than the old one. Whether you
choose 55% or 65% 'fullness' will depend on your knowledge of the actual
set ratios to determine which one would provide the best data 'spread'.
Given that 65% is close to the 70% space management 'trigger', if you are
going to add many new records to the area, you'll probably want to choose
55% as the target.
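
A back-of-envelope sketch of the sizing arithmetic, in Python (the page
count below is an assumed figure for illustration; only the 83%, 55%, and
65% values come from this thread):

    import math

    def pages_needed(current_pages, current_full, target_full):
        # The data volume is fixed, so the page count scales with the
        # ratio of current fullness to target fullness.
        return math.ceil(current_pages * current_full / target_full)

    current_pages = 100_000  # assumed size of the existing area
    print(pages_needed(current_pages, 0.83, 0.55))  # 150910 pages
    print(pages_needed(current_pages, 0.83, 0.65))  # 127693 pages

The lower target leaves each page more free space for future VIA members
before the 70% space management trigger is reached.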

Please note also that the members are stored VIA the owner occurrence. If
you have some densely populated pages, any new records that are targeted
to be stored on those pages will likely become new 'overflows', depending
on how much new data is added to the area. If the area containing the
"owner" records contains owner overflows, you'll want to consider doing an
unload/reload of that area also for proper data balancing.
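
A minimal sketch of why dense target pages turn new VIA members into
overflows (the capacity and counts are made-up numbers, not from this
thread):

    PAGE_CAPACITY = 10       # assumed records that fit on one page
    target_page_used = 9     # a densely populated target page
    new_members = 5          # new member records aimed at that page

    stored_in_place = min(new_members, PAGE_CAPACITY - target_page_used)
    overflows = new_members - stored_in_place
    print(stored_in_place, overflows)  # 1 stored in place, 4 overflow

Each overflow costs an extra page read whenever the set is walked, which
is the 'efficiency' at stake here.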

Regards,

Tom Schoenborn

p.s. I extend my personal best wishes to all in Iowa (and other states) who
have suffered from all the weather-related ills...


----- Original Message -----
From: "Riddle, Allen [IDR]" <Allen.Riddle@IOWA.GOV>
To: <IDMS-L@LISTSERV.IUASSN.COM>
Sent: Tuesday, June 17, 2008 2:27 PM
Subject: Efficiency


All,



We have a large area that contains only one record type, and the records
are stored VIA. Currently, the area is 83% full and we are looking to
expand the area to bring that down. Will the area really become more
efficient if we drop it to, say, 55% as opposed to 65%? We are thinking
that the efficiency will stay the same because there are no CALC records
in the area.



Thank you in advance for any help you may be able to provide.



Thanks,



Allen Riddle

Department of Revenue

State of Iowa

(515) 281-3973
"
IDMS 3rd-party providers forum
IDMSVENDOR-L@LISTSERV.IUASSN.COM
SMTP
IDMSVENDOR-L@LISTSERV.IUASSN.COM
IDMSVENDOR-L@LISTSERV.IUASSN.COM
SMTP

Limits and Statistics
"Hello Listers,
At our site we are looking at ways to reduce the CPU usage of the IDMS
CV's. I remember seeing some statistics stating that having statistics
collection on added approximately 8 - 10% CPU overhead. Also, turning on
LIMITS increased the CPU usage roughly 7%. I have found some internal
documentation that states this, but I cannot find any 'official' CA
documentation that does.=20

Does anyone else on the list remember seeing similar statements about
the CPU overhead of LIMITS & STATS?

The reason for the question is that I ran some tests on one of our CVs.
Using a testing tool we ran almost 70,000 tasks through the CV with LIMITS
enabled and STATISTICS collection on. We then shut down the CV and
recorded the number of tasks, CPU used, etc. Next we changed the SYSGEN to
set LIMITS disabled and STATISTICS off. We also changed the #PMOPT macro
to stop writing SMF data. We reran the exact same set of scripts. The
results showed only a 2% reduction in CPU usage.
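
A sketch of the comparison arithmetic (the CPU totals are assumed values
for illustration; only the task count and the 2% result come from this
post):

    tasks = 70_000
    cpu_stats_on = 500.0    # assumed total CPU seconds, LIMITS/STATS on
    cpu_stats_off = 490.0   # assumed total CPU seconds, LIMITS/STATS off

    reduction = (cpu_stats_on - cpu_stats_off) / cpu_stats_on
    print(f"CPU per task, on:  {cpu_stats_on / tasks:.6f} s")
    print(f"CPU per task, off: {cpu_stats_off / tasks:.6f} s")
    print(f"reduction: {reduction:.1%}")  # 2.0%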

Does this figure surprise anyone else? Any ideas what I could be missing
to only see this small reduction?

Thanks for any help you can give.

Dan Hall
GE Capital Solutions
Database Administrator

T 513.217.5060
E dan.hall@ge.com
www.ge.com/capitalsolutions/

Middletown, OH 45042
General Electric Capital Corporation
"
IDMS 3rd-party providers forum
IDMSVENDOR-L@LISTSERV.IUASSN.COM
SMTP
IDMSVENDOR-L@LISTSERV.IUASSN.COM
IDMSVENDOR-L@LISTSERV.IUASSN.COM
SMTP

Re: Limits and Statistics
"We always have statistics off in production.
Several years ago I needed to run a full day with statistics
on in production. I took the opportunity to compare the
CPU per task between the day with stats on
and a comparable production day with stats off.

Stats off CPU seconds per task = .0240
Stats on CPU seconds per task = .0375

This was measured over approximately 1.3 million transactions, most of
which were ADS/O.
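
A quick check of the overhead Terry's per-task figures imply (my
arithmetic, not part of the original post):

    stats_off = 0.0240  # CPU seconds per task, statistics off
    stats_on = 0.0375   # CPU seconds per task, statistics on
    overhead = (stats_on - stats_off) / stats_off
    print(f"{overhead:.1%}")  # 56.2% more CPU per task with stats on

That is considerably higher than the 8-10% figure mentioned earlier in
the thread.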

Hope this helps.

Terry Schwartz
Perot Systems
