ca.portal.admin

Re: Memory cache

Discussion created by ca.portal.admin on May 15, 2006
Kay,

We've tested the new feature, but we are not using it currently. Based on
the results of our testing, here is what we learned. Since we haven't
implemented it in production, we haven't watched it in action under heavy
volume, so some of this is speculation. We had lots of questions that
weren't answered by the manuals, so we opened an issue with CA that ended
up being quite long. Some of that is below because it isn't in the
manuals. Also, take a look at docup QI53095.

1) Q: How is this better than a very large buffer?

I don't think it's better than a very large buffer. CA told us that if
you turn this on and then create a buffer the same size as the file, the
memory cache won't get used, because that would be redundant. Also, all
movement into and out of the memory cache goes through the buffers, so
you have extra overhead to use it.

One interesting thing is that the entire file must be held in memory:
each page is placed into a 'slot' that corresponds to its position in
the file. This makes a page easy to find and retrieve, just as it would
be in the physical database. If you cache a file, the memory for that
file is acquired at the first ready for update (or, if readied for
retrieval, at the first DML command issued against it) and stays
allocated even if not all of the slots are filled. This is done file by
file, not by area.
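A rough sketch of the slot idea described above (Python for illustration only, not actual IDMS internals; the class and function names are mine): because each page owns the slot matching its position in the file, a hit is a direct index rather than a search.

```python
# Illustrative sketch only -- not IDMS internals. Models a file cache
# where each page occupies the slot matching its position in the file.

class FileCache:
    def __init__(self, pages_in_file):
        # Space for every page is reserved up front, even before any
        # page is loaded, mirroring "memory acquired at first ready/DML
        # and held whether or not all slots are filled".
        self.slots = [None] * pages_in_file

    def get_page(self, page_no, read_from_disk):
        if self.slots[page_no] is None:        # miss: one real disk read
            self.slots[page_no] = read_from_disk(page_no)
        return self.slots[page_no]             # hit: direct slot lookup

disk_reads = []
def read_from_disk(page_no):
    disk_reads.append(page_no)                 # count real I/Os
    return f"page-{page_no}-data"

cache = FileCache(pages_in_file=4)
cache.get_page(2, read_from_disk)   # first access goes to disk
cache.get_page(2, read_from_disk)   # second access is a pure memory hit
```

The point of the direct mapping is that a hit costs one array index, with no hashing or chain search as in a conventional buffer-pool lookup.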

I guess caching would be better if the overhead of the buffer search for
the page exceeds the cost/overhead of the movement through the buffers.
I don't know if or when this would happen.

One definite use I could see is when buffer requirements exceed 31-bit
storage availability, or when I/O is a real problem for some large files
and you want to load them up into memory. The movement through the
buffers theoretically would still be much faster than disk I/O.

2) Q: If you use memory cache for a file, should you make its buffer
small?

My take (a guess) is that the buffers need to be large enough to handle
the anticipated concurrency of access. Since the movement goes through
the buffers, you want enough pages defined to handle the concurrent
movement, or you'll have contention.

3) Q: Is it a good idea to do something at startup to read the entire
file into the memory cache?

If you can, I would. Otherwise the buffers could become a bottleneck,
with movement going in both directions. If your CVs stay up a very long
time and you don't have a lot of people pounding the same data right
after a recycle, this may not be as important, but there is still the
issue of a user paying for the first I/O. Our CVs are shut down daily
for backup, so I would do this; otherwise there doesn't seem to be much
point in using the feature, since the overhead might negate the benefit.
This also depends entirely on how much of the cached data is used on a
regular basis.
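The startup pre-read amounts to one sequential pass that touches every page once. A hypothetical sketch (Python for illustration, not an IDMS utility; `prewarm` and `read_page` are names I invented):

```python
# Hypothetical pre-warm sketch: read every page of the file once at
# startup so the first user after a recycle never pays the initial I/O.

def prewarm(pages_in_file, read_page):
    slots = [None] * pages_in_file
    for page_no in range(pages_in_file):     # one sequential pass
        slots[page_no] = read_page(page_no)  # fills every slot
    return slots

startup_reads = []
def read_page(page_no):
    startup_reads.append(page_no)            # track when each I/O happens
    return f"page-{page_no}"

slots = prewarm(3, read_page)
```

After the pass, every slot is populated, so all the first-touch I/O has been paid during startup rather than by whichever user happens to arrive first.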

A couple of other things to keep in mind:
1) Your PHY READ count in the DCMT displays will be incremented every
time there is a cache hit, so it is no longer a true measure of physical
I/O.
2) There are no DCMT commands yet for 64-bit info.
3) Your MVS sysprog may have to enable this feature (64-bit storage
access), and in my opinion they also need to know how it will be used.
I'm not as familiar with the new hardware and OS as I was with the old
(circa 1995), but I would think there have to be ramifications for real
memory usage and availability if this feature is exploited.

I would love to hear other opinions, or actual experiences under load,
especially about optimizing buffers. What seems to make sense in theory
doesn't always work out the way you think it will.

Linda Campbell
Informatix, Inc.

IDMS Public Discussion Forum
IDMS-L@LISTSERV.IUASSN.COM
Re: BIUA event on May 5 2006 presentations.

These are good presentations. Thanks for sharing them.