When using automation to integrate two systems, it's sometimes necessary to record the request data that triggered the integration process, so it's easy to see what was sent, how it was transformed, and what ended up being loaded into the target system. Some requests may be very large because they can include attachments, verbose text, and so forth. With that in mind, it seems sensible to set up a process that is nothing more than a Start and a Stop Success operator: when the request completes, its data will be archived somewhere in PAM's database schema. Once it's archived, the request data could be copied from the archive table into a custom schema by means of a trigger and stored procedure. Wouldn't that take fewer resources in the back-end database than having a PAM process parse, escape, and load the data with a query that could become very complex given the potential size of the request data? I'm very interested in using a trigger and stored procedure to copy request data from the PAM archive table into a custom schema, but when I look at the request data (or what appears to be the request data), it's encoded in a format I don't recognize. It isn't plain base64/MIME text, and it always begins with H4sIAAAAAAAAAM.
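Update: after staring at the blob a bit longer, I noticed that H4sI happens to be the base64 encoding of the gzip magic bytes (0x1F 0x8B 0x08), which suggests the column holds base64-encoded, gzip-compressed text. I haven't been able to verify this against a live PAM schema yet, so treat the following as a sketch; decode_pam_blob is my own helper name, and the sample payload is made up, not real PAM data:

```python
import base64
import gzip

def decode_pam_blob(b64text: str) -> bytes:
    """Base64-decode the column value, then gunzip the result."""
    return gzip.decompress(base64.b64decode(b64text))

# Round-trip demo with an invented payload, since I don't have real PAM data handy.
# Any gzip stream base64-encodes to an 'H4sI' prefix, matching what I see in the table.
sample = base64.b64encode(gzip.compress(b'<request><id>42</id></request>')).decode()
print(sample[:4])                 # -> H4sI
print(decode_pam_blob(sample))    # -> b'<request><id>42</id></request>'
```

If this guess is right, the same two steps (base64 decode, then gunzip) could presumably be done inside a stored procedure instead, assuming the database exposes equivalent functions.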
Has anyone tried to query real-time or archived request data from the PAM database schema before, let alone implemented the kind of scenario I'm describing, where the initial copy of the data into a custom repository is driven by the database itself rather than a PAM process? If so, how did you do it? Do you have any details about the PAM database schema? I did find this article (https://support.ca.com/irj/portal/kbtech?docid=556200&searchID=TEC556200&fromKBResultsScreen=T). It touches on the schema a bit, but it doesn't describe how to actually read the request data from the tables.