
Hi Friends,

 

In our CA PPM journey, we end up writing plenty of GEL scripts for all those CA PPM workflows. In many of them, we xog out an object instance, modify or add some values in the xogged-out XML, and xog the modified XML document back in to effect a change. One crucial part of this exercise is locating the right node or element in the xogged-out XML document. Not being much of a Java expert, I found XPath predicates simple and handy for this purpose, so I am sharing the approach in this blog post. Let's take this ride!

 

The Story Begins

 

It all began when I had a new requirement to modify the monthly values in a financial plan via XOG. As I had never used XPath predicates before, the task seemed almost impossible to me at first. Let us look at a sample Benefit Plan xog out below. Imagine trans_class as a custom attribute linked to a transaction class lookup, which we created in the  "Benefit Plan Detail" object.

 

XML Sample
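
Since the screenshot may not render here, below is a simplified, hand-written sketch of what such a Benefit Plan XOG read output could look like. The element and attribute names follow the XPath expressions used later in this post; the plan code, dates and values are purely illustrative.

<BenefitPlan code="sample_benefit_plan">
  <Details>
    <Detail detailName="Detail_1">
      <Benefit>
        <segment start="2018-04-01T00:00:00" finish="2018-05-01T00:00:00" value="1000"/>
      </Benefit>
      <CustomInformation>
        <ColumnValue name="trans_class">trans_class_1</ColumnValue>
      </CustomInformation>
    </Detail>
    <Detail detailName="Detail_2">
      <Benefit>
        <segment start="2018-04-01T00:00:00" finish="2018-05-01T00:00:00" value="2000"/>
      </Benefit>
      <CustomInformation>
        <ColumnValue name="trans_class">trans_class_2</ColumnValue>
      </CustomInformation>
    </Detail>
  </Details>
</BenefitPlan>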

 

Now how will you navigate to the first of the three Detail sections you see here and extract one particular numerical value from the above picture? There are many ways to do this, such as picking the first entry from the node list by its position. But rather than relying on any order or list, I choose to use XPath predicates here. Please note that I have stored the XOG read output in the xogReadOutput variable.

 

Our Hero - XPath Predicate

 

If there were only one month segment under the Detail_1 section, and if you are OK with filtering on the detailName field, then something as simple as the line below will get you the monthly value.

<gel:set asString="true" select="$xogReadOutput//BenefitPlan/Details/Detail[@detailName='Detail_1']/Benefit/segment/@value" var="to_print"/>

 

The part of the expression within square brackets ([@detailName='Detail_1']) is called a predicate. It filters a node-set by evaluating the predicate expression against each node. As you see here, we filter on the detailName attribute, checking whether it has the value Detail_1.

 

Nested Predicates

 

But I don't want to filter on the detailName field, as it is just a name field. What if I would rather filter each Detail section above by the custom lookup attribute I created on the Benefit Plan Detail object, trans_class? Here we can make use of nested predicates.

<gel:set asString="true" select="$xogReadOutput//BenefitPlan/Details/Detail/Benefit/segment[../../CustomInformation/ColumnValue[@name='trans_class']/text()='trans_class_1']/@value" var="to_print"/>

 

As you see above, we have used a predicate inside a predicate here. The ../../ lets us jump from the segment element up to its grandparent Detail node and then go into the CustomInformation node. Unlike in our screenshot above, trans_class may not be the only custom attribute there, so we filter again with another predicate to reach the trans_class ColumnValue element, use text() to retrieve its value, and compare it against the value we want to filter on, i.e. trans_class_1.

 

Multiple Predicates

 

Now let us come to a real-life scenario where you have multiple month segments, as in the screenshot above. Here you will need more than one predicate joined together with AND logic.

<gel:set asString="true" select="$xogReadOutput//BenefitPlan/Details/Detail/Benefit/segment[@start='2018-04-01T00:00:00'][../../CustomInformation/ColumnValue[@name='trans_class']/text()='trans_class_1']/@value" var="to_print"/>

 

Multiple predicates written one after the other give us the effect of an AND condition. The only thing to remember is that each predicate expression must make sense relative to the node/element it is applied to. Here, the first predicate needs no path adjustment because start is an attribute of the same segment element, while the second predicate is prefixed with ../../ so that it can reach the CustomInformation node higher up in the hierarchy.

 

Predicates in GEL Scripts

 

Once you are clear up to this point, my blog post's header image should make sense to you. I am attaching the same image below again.

XPath Predicates in CA PPM GEL Script

As you see, I am fetching all the billing changes I need to make via a SQL query named "Billing", and storing each row of it in "Billing_Changes". In the first gel:set line, I check whether a billing value already exists for that particular transaction class and month in the xogged-out Benefit Plan.

 

The next core:choose block either adds the delta value to the existing Billing value and replaces it with the new total, or, if there is no existing Billing value for that month, inserts the delta value as-is for that month along with a full segment element placed under the Benefit node. Note how we use the same predicates again while inserting/replacing values in the XML, not just for extracting values from it.
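
If the screenshot does not render for you, the overall shape of that logic is roughly as in the sketch below. This is only a minimal sketch: the column names month_start, month_finish, delta and trans_class on the "Billing" query rows are my own assumptions for illustration, as is the use of ${...} substitution inside the select expressions; the actual script in the image may differ in its details.

<core:forEach items="${Billing.rows}" trim="true" var="Billing_Changes">

   <!-- Does a Billing value already exist for this month and transaction class? -->
   <gel:set asString="true" var="existing_value"
      select="$xogReadOutput//BenefitPlan/Details/Detail/Benefit/segment[@start='${Billing_Changes.month_start}'][../../CustomInformation/ColumnValue[@name='trans_class']/text()='${Billing_Changes.trans_class}']/@value"/>

   <core:choose>
      <core:when test="${existing_value != null and existing_value != ''}">
         <!-- Value found: convert it to a number, add the delta and replace it in the XML -->
         <core:new className="java.lang.Double" var="existing_num">
            <core:arg type="java.lang.String" value="${existing_value}"/>
         </core:new>
         <gel:set insert="true" value="${existing_num + Billing_Changes.delta}"
            select="$xogReadOutput//BenefitPlan/Details/Detail/Benefit/segment[@start='${Billing_Changes.month_start}'][../../CustomInformation/ColumnValue[@name='trans_class']/text()='${Billing_Changes.trans_class}']/@value"/>
      </core:when>
      <core:otherwise>
         <!-- No value for that month yet: build a whole segment element and insert it under the Benefit node -->
         <gel:parse var="new_segment">
            <segment start="${Billing_Changes.month_start}" finish="${Billing_Changes.month_finish}" value="${Billing_Changes.delta}"/>
         </gel:parse>
         <gel:set insert="true" value="${new_segment.rootElement}"
            select="$xogReadOutput//BenefitPlan/Details/Detail[CustomInformation/ColumnValue[@name='trans_class']/text()='${Billing_Changes.trans_class}']/Benefit"/>
      </core:otherwise>
   </core:choose>

</core:forEach>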

 

Some Basics for GEL Beginners

 

I am not sure if all of the explanation and code above will make complete sense to a GEL script beginner, so here are some more details and GEL script tips just for them. Experts, please ignore this section.

 

1. XPath search / and //

You might have noticed both / and // being used in the code above, and there is indeed a difference between the two. / is used when we know the parent element and are spelling out a continuous parent-child path. But, as in the example above, if you don't want to list every node on the way down and would rather jump straight to a particular node anywhere in the XML document, you use the // search directive.
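
For example, with our Benefit Plan sample, both of the lines below end up at the same value attribute. The first spells out the parent-child path from the BenefitPlan node downwards, while the second jumps straight to any segment element wherever it sits in the document:

<!-- Continuous parent-child path using / (after jumping to BenefitPlan with //) -->
<gel:set asString="true" select="$xogReadOutput//BenefitPlan/Details/Detail/Benefit/segment/@value" var="value_via_path"/>

<!-- Direct jump to the segment element using // -->
<gel:set asString="true" select="$xogReadOutput//segment/@value" var="value_via_search"/>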

 

2. Beware of the two types of gel:set statements

Syntax alert! We use the variant with asString="true" to extract a value from the XML document at the location specified in the select attribute and store it in the variable named in the var attribute. We use the other variant, with insert="true", to insert/replace information in the XML document at the location specified in the select attribute, using the value supplied in the value attribute.
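
For instance, reusing the expression from earlier (the replacement value 2500 is just an example):

<!-- Type 1: read a value out of the XML document into a GEL variable -->
<gel:set asString="true" select="$xogReadOutput//BenefitPlan/Details/Detail[@detailName='Detail_1']/Benefit/segment/@value" var="to_print"/>

<!-- Type 2: insert/replace the value at that same location in the XML document -->
<gel:set insert="true" value="2500" select="$xogReadOutput//BenefitPlan/Details/Detail[@detailName='Detail_1']/Benefit/segment/@value"/>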

 

3. When to use the @ construct and the text()

In the code above, you might have noticed both @ and text() being used while extracting data from the XML document. The rule is simple. When you want to extract data from an XML attribute (like the value attribute on the segment element in our example), use @attribute. On the other hand, if your data lies within an XML element (like the ColumnValue element in our example), use text().
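
For example, with our sample Benefit Plan structure:

<!-- Data stored in an XML attribute: use @ -->
<gel:set asString="true" select="$xogReadOutput//BenefitPlan/Details/Detail[@detailName='Detail_1']/Benefit/segment/@value" var="segment_value"/>

<!-- Data stored as the text of an XML element: use text() -->
<gel:set asString="true" select="$xogReadOutput//BenefitPlan/Details/Detail[@detailName='Detail_1']/CustomInformation/ColumnValue[@name='trans_class']/text()" var="trans_class_value"/>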

 

4. Double check for element existence

Whenever you modify information in an object instance via the xog-out/xog-in method, make sure the value you want to replace/modify actually exists in the xogged-out XML document. It can very well happen that the particular attribute on the object instance is blank/null, in which case the XOG read output of that object instance will not contain that XML element at all. Now imagine navigating to that XML element assuming the xog-out will have it and trying to set a value inside it; this will certainly result in an error. That's where the core:choose section in the screenshot above (and in the sketch earlier) helps: it first checks whether the XML element exists in the xog-out, performs the update if it does, and otherwise parses and creates an entire XML element to insert into the appropriate node.

 

5. Never trust state="SUCCESS" alone

This final tip is a word of caution. There are many GEL scripts around that check each of the failureRecords, insertedRecords, totalNumberOfRecords, updatedRecords etc. values in the XOGOutput section returned by a XOG read/write operation, but a much bigger number of people are lazy. Me too, sometimes! They tend to check only the state attribute of the Status element, and if it shows SUCCESS they gel:log it saying "hurray... we did it". But I have seen in many cases, especially while working on financial plans, that a SUCCESS state alone does not necessarily mean the new financial plan has been updated or created. There could be reasons like an exchange rate missing in the system. In all those cases, the ErrorInformation node in the XOG output tells us what went wrong. So it is best to add one more line to extract the ErrorInformation node and store it for display when an error happens. You could also check that it is empty along with the state="SUCCESS" check. Refer to the screenshot below.

ErrorInformation Check
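
If the screenshot is not visible, a minimal sketch of such a check is given below. It assumes the XOG write result document has been stored in a variable named xogWriteResult and that the error text sits in the Description element of ErrorInformation; adjust the names to match your own script.

<gel:set asString="true" select="$xogWriteResult//XOGOutput/Status/@state" var="xog_state"/>
<gel:set asString="true" select="$xogWriteResult//ErrorInformation/Description/text()" var="xog_error"/>

<core:choose>
   <core:when test="${xog_state == 'SUCCESS' and (xog_error == null or xog_error == '')}">
      <gel:log level="INFO">XOG write completed successfully</gel:log>
   </core:when>
   <core:otherwise>
      <gel:log level="ERROR">XOG write failed or completed with errors: ${xog_error}</gel:log>
   </core:otherwise>
</core:choose>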

 

That's it, guys; we have come to the end of this post. I will be back again if I find something tricky or useful. I know this one was lengthy, but I hope it will be helpful to someone in this community.

 

Regards,

Georgy

Hi Friends,

 

We all use <sql:query> tags in our GEL scripts to retrieve values from the database. We also use forEach to iterate through the SQL query result set. But this can be done in many different ways as listed below.

  

1. Using rowsByIndex

 

<gel:script
xmlns:core="jelly:core"
xmlns:gel="jelly:com.niku.union.gel.GELTagLibrary"
xmlns:sql="jelly:sql">

 

<gel:setDataSource dbId="Niku"/>

 

<sql:query escapeText="false" var="resource">
   select
   sr.first_name,
   sr.last_name,
   sr.email
   from
   srm_resources sr
   where id in (5807189,5807190,5807191,5807192)
</sql:query>

 

<core:forEach trim="true" items="${resource.rowsByIndex}" var="row">
   <core:set value="${row[0]}" var="Fname"/>
   <core:set value="${row[1]}" var="Lname"/>
   <core:set value="${row[2]}" var="EmailID"/>
   <core:set var="RowDetails">${Fname}---${Lname}---${EmailID}</core:set>
   <gel:log level="INFO">${RowDetails}</gel:log>
</core:forEach>
</gel:script>

 

2. Using rows

 

<gel:script
xmlns:core="jelly:core"
xmlns:gel="jelly:com.niku.union.gel.GELTagLibrary"
xmlns:sql="jelly:sql">

 

<gel:setDataSource dbId="Niku"/>

 

<sql:query escapeText="false" var="resource">
   select
   sr.first_name,
   sr.last_name,
   sr.email
   from
   srm_resources sr
   where id in (5807189,5807190,5807191,5807192)
</sql:query>

 

<core:forEach trim="true" items="${resource.rows}" var="row">
   <core:set value="${row.first_name}" var="Fname"/>
   <core:set value="${row.last_name}" var="Lname"/>
   <core:set value="${row.email}" var="EmailID"/>
   <core:set var="RowDetails">${Fname}---${Lname}---${EmailID}</core:set>
   <gel:log level="INFO">${RowDetails}</gel:log>
</core:forEach>
</gel:script>

 

3. Using rows and get

 

<gel:script
xmlns:core="jelly:core"
xmlns:gel="jelly:com.niku.union.gel.GELTagLibrary"
xmlns:sql="jelly:sql">

 

<gel:setDataSource dbId="Niku"/>

 

<sql:query escapeText="false" var="resource">
   select
   sr.first_name,
   sr.last_name,
   sr.email
   from
   srm_resources sr
   where id in (5807189,5807190,5807191,5807192)
</sql:query>

 

<core:choose>
<core:when test="${resource.rowCount != 0}">
<core:forEach trim="true" items="${resource.rows}" var="row">
   <core:set value="${row.get('first_name')}" var="Fname"/>
   <core:set value="${row.get('last_name')}" var="Lname"/>
   <core:set value="${row.get('email')}" var="EmailID"/>
   <core:set var="RowDetails">${Fname}---${Lname}---${EmailID}</core:set>
   <gel:log level="INFO">${RowDetails}</gel:log>
</core:forEach>
</core:when>
<core:otherwise>
   <gel:log level="ERROR">No Records Returned</gel:log>
</core:otherwise>
</core:choose>
</gel:script>

 

4. Using columnNames

 

<gel:script
xmlns:core="jelly:core"
xmlns:gel="jelly:com.niku.union.gel.GELTagLibrary"
xmlns:sql="jelly:sql">

 

<gel:setDataSource dbId="Niku"/>

 

<sql:query escapeText="false" var="resource">
   select
   sr.first_name,
   sr.last_name,
   sr.email
   from
   srm_resources sr
   where id in (5807189,5807190,5807191,5807192)
</sql:query>

 

<core:forEach trim="true" items="${resource.rowsByIndex}" var="row">
<core:forEach items="${resource.columnNames}" var="ColName" indexVar="i">
   <core:switch on="${ColName.toUpperCase()}">
   <core:case value="FIRST_NAME">
   <core:set value="${row[i]}" var="Fname"/>
   </core:case>
   <core:case value="LAST_NAME">
   <core:set value="${row[i]}" var="Lname"/>
   </core:case>
   <core:case value="EMAIL">
   <core:set value="${row[i]}" var="EmailID"/>
   </core:case>
   <core:default>
   <gel:log level="WARN">SQL Query returned ${ColName} column</gel:log>
   </core:default>
   </core:switch>
</core:forEach>
<core:set var="RowDetails">${Fname}---${Lname}---${EmailID}</core:set>
<gel:log level="INFO">${RowDetails}</gel:log>
</core:forEach>
</gel:script>

 

 

I hope this will be useful to beginners when they start retrieving information from the database via GEL scripts.

 

Regards,

Georgy

Hi Friends,

 

What do you miss the most in CA's online documentation for CA Clarity PPM? If you ask me, my answer would be column-level database details of the CA PPM process tables, i.e. the BPM tables. I am not sure why CA skipped this section in the Technical Reference Guide when it explains almost all the other OOTB table columns in detail.

 

Need for Database Schema Details of BPM Process Tables

 

Complete database schema details of the BPM process tables would help us in many different ways. For example, we could drill down to the process and process instance behind an error tagged in the PPM log files, which may only mention the process instance id or the thread id. We could also use the information for general global searches across all Clarity processes and their log messages, or for a search across all GEL scripts.

 

Given below is a set of Oracle SQL queries touching most of the BPM process tables. Even if it is not complete, it can be used as a quick reference for the table names and the foreign keys involved. Before running the queries, make sure you replace the placeholder values (such as <process code> and <process instance id>) with appropriate values from your CA PPM environment.

 

Query Connecting BPM_DEF Tables

 

select
caption.name process_name,
defn.process_code,
step.step_code,
action.action_code
from
BPM_DEF_PROCESSES defn
inner join BPM_DEF_PROCESS_VERSIONS ver on ver.process_id=defn.id
inner join BPM_DEF_STAGES stg on stg.process_version_id=ver.id
inner join BPM_DEF_STEPS step on step.stage_id=stg.id
left join BPM_DEF_STEP_ACTIONS action on action.step_id=step.id
left join BPM_DEF_STEP_ACTION_PARAMS parm on parm.step_action_id=action.id
left join BPM_DEF_STEP_AI_ACTIONS aiaction on aiaction.step_action_id=action.id
left join BPM_DEF_ASSIGNEES assignee on (assignee.table_name='BPM_DEF_STEP_ACTIONS' and assignee.pk_id=action.id)
left join BPM_DEF_STEP_CONDITIONS cnd on cnd.step_id=step.id
left join BPM_DEF_STEP_TRANSITIONS trnstn on trnstn.step_condition_id=cnd.id
left join CMN_CUSTOM_SCRIPTS scripts on scripts.id=action.script_id
left join CMN_CUSTOM_SCRIPT_PARAMS par on par.script_id=scripts.id
left join BPM_DEF_OBJECTS obj on (obj.pk_id=ver.id and obj.table_name='BPM_DEF_PROCESS_VERSIONS')
inner join CMN_CAPTIONS_NLS caption on (caption.table_name='BPM_DEF_PROCESSES' AND caption.language_code ='en' AND caption.pk_id=defn.id)
where
defn.process_code='<process code>'
group by caption.name, defn.process_code, step.step_code, action.action_code
order by process_name, step_code, action_code

NOTE: For a couple of tables like BPM_DEF_ASSIGNEES and BPM_DEF_OBJECTS, multiple join conditions are possible, depending on the table name stored in the TABLE_NAME column of these tables. Based on that, the values supplied for the TABLE_NAME and PK_ID columns in the join condition can be tweaked.

 

Query Connecting BPM_RUN Tables

 

select
caption.name process_name,
defn.process_code,
runs.id process_instance_id,
runs.status_code,
runs.start_date,
runs.end_date,
csu.first_name||' '||csu.last_name Run_By
from
BPM_RUN_PROCESSES runs
inner join BPM_RUN_PROCESS_ENGINES engine on engine.id=runs.process_engine_id
inner join BPM_RUN_STEPS steps on steps.process_instance_id=runs.id
inner join BPM_RUN_THREADS threads on (threads.process_instance_id=runs.id and steps.thread_id=threads.id)
inner join BPM_RUN_OBJECTS obj on (obj.pk_id=runs.id and obj.table_name='BPM_RUN_PROCESSES')
left join BPM_RUN_STEP_COND_RESULTS condres on condres.step_instance_id=steps.id
left join BPM_RUN_STEP_TRANSITIONS trnstn on trnstn.step_instance_id=steps.id
left join BPM_RUN_STEP_ACTION_RESULTS actres on actres.step_instance_id=steps.id
left join BPM_RUN_ASSIGNEES assignee on (assignee.pk_id=actres.id and assignee.table_name='BPM_RUN_STEP_ACTION_RESULTS')
left join BPM_RUN_ASSIGNEE_NOTES notes on notes.run_assignee_id=assignee.id
inner join BPM_DEF_PROCESS_VERSIONS ver on ver.id=runs.process_version_id
inner join BPM_DEF_PROCESSES defn on defn.id=ver.process_id
inner join CMN_CAPTIONS_NLS caption on (caption.table_name='BPM_DEF_PROCESSES' AND caption.language_code ='en' AND caption.pk_id=defn.id)
left join CMN_SEC_USERS csu on csu.id=runs.created_by
where
defn.process_code='<process code>'
and runs.id=<process instance id>
group by caption.name, defn.process_code, runs.id, runs.status_code, runs.start_date, runs.end_date, csu.first_name||' '||csu.last_name
order by process_name, process_instance_id

NOTE: The above note applies here also, to both the BPM_RUN_ASSIGNEES and BPM_RUN_OBJECTS tables.

 

Search All Process Log Messages

 

We usually print important information to the process log messages using the gel:log tag within GEL scripts. When the script is executed in a custom step of a Clarity process, this data is inserted into the BPM_ERRORS table, irrespective of the logging level used on the tag. The only thing to note is that, as these are process messages and not logger messages, DEBUG/INFO become INFO, WARN becomes WARNING, and ERROR/FATAL become ERROR in the BPM_ERRORS table.

 

As the system administrator or the process creator, if we already know the error/warn/info message text that can be expected from different processes, the search query below across all the process messages can quickly give us a list of process instances that need our attention.

 

select
caption.name process_name,
defn.process_code,
runs.id process_instance_id,
runs.status_code,
runs.start_date,
runs.end_date,
csu.first_name||' '||csu.last_name Run_By,
be.type_code warning_level,
be.exception_trace2 Log_Message,
arg.arg_value error_argument_value
from
BPM_ERRORS be
inner join BPM_RUN_PROCESSES runs on runs.id=be.process_instance_id
inner join BPM_DEF_PROCESS_VERSIONS ver on ver.id=runs.process_version_id
inner join BPM_DEF_PROCESSES defn on defn.id=ver.process_id
inner join CMN_CAPTIONS_NLS caption on (caption.table_name='BPM_DEF_PROCESSES' AND caption.language_code ='en' AND caption.pk_id=defn.id)
left join BPM_ERROR_ARGUMENTS arg on arg.error_id=be.id
left join CMN_SEC_USERS csu on csu.id=runs.created_by
where
UPPER(be.exception_trace2) like UPPER('%your search text%')
order by process_name, process_instance_id

 

Modify any of the above queries as needed for your use.

 

Happy BPM-Querying 

 

Regards,

Georgy

Hi Friends,

 

Have you ever received either of the two error messages shown in the banner image while trying to access a filestore document from the CA PPM application?

  • Error 500 - Internal Server Error. The server could not retrieve the document due to server-configuration or other technical problems. Contact your site administrator.
  • ERROR DMS-02104: The system cannot retrieve the requested document from the DMS. Contact your system administrator.

 

Check The CA PPM Log Files

 

With the default logger settings in CSA, I could not find anything helpful about these errors in the CA PPM log files. But if we add a new category com.niku.dms to the STDOUT appender with the priority set to Debug, we get detailed information on the root cause of this DMS error.

Logger Config in CSA

 

In my case, the file was completely missing from the server filestore location, as I could see from the app-ca.log file after making the above log configuration changes in CSA and retrying the file access from the application. This is what the app-ca.log file showed:

 

DEBUG 2016-12-22 07:45:26,300 [http-bio-***.XX.***.***-***-exec-XXXX] niku.dms (clarity:XXXXXXX:42979***__7BEAFXXX-DCXX-4DXX-BEXX-CEF838XXXXX:dms.viewFile) File "8753293" not found in path "E:\KnowledgeStore\filestore\clarity\Files\753\008"

 

How To Check The Entire DMS Filestore On The Server 

 

But what if this was not the only missing file? What if many more are somehow missing from our server filestore? Obviously we cannot manually check each and every file in the Knowledge Store and the other file locations inside CA PPM. This forced me to search online for any available scripts that could be used as-is with minimal edits. Although I found none, I stumbled across many code snippets that could be tweaked to help in this mission. So I thought I would share the resulting GEL script with the community here.

 

NOTE: I ran this gel script from the command prompt on the CA PPM server where the filestore folders were present.

 

Gel Script To Check For Missing Files in The DMS Filestore

 

You may need to modify a couple of values in the GEL script below (the filestore path and the output CSV path) as per your environment. Also note that it checks only the latest versions of the DMS files, as specified in the SQL code within the script.

 

<gel:script
xmlns:core="jelly:core"
xmlns:gel="jelly:com.niku.union.gel.GELTagLibrary"
xmlns:file="jelly:com.niku.union.gel.FileTagLibrary"
xmlns:sql="jelly:sql"
xmlns:xsd="http://www.w3.org/2001/XMLSchema"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

 

<gel:setDataSource dbId="niku"/>

 

<!--Write the results to a CSV file in D drive of the CA PPM Server-->
<file:writeFile delimiter="," fileName="D:\\Missing_DMS_Files.csv" >

 

<!--Specifying the number of columns in the CSV and their headers-->
<file:line>
<file:column value="File ID"/>
<file:column value="Server Path"/>
<file:column value="PPM File Name"/>
<file:column value="PPM File Type"/>
<file:column value="Created Date"/>
<file:column value="PPM Path"/>
<file:column value="File Status"/>
</file:line>

 

<!--Oracle SQL code to get DMS file metadata from PPM Database (latest file versions only)-->
<sql:query escapeText="false" var="DMS_Query">
<![CDATA[
SELECT
files.id File_ID,
'E:\KnowledgeStore\filestore\clarity\Files\'||substr(ver.id,2,3)||'\00'||substr(ver.id,1,1)||'\'||ver.id Server_Path,
files.name PPM_File_Name,
files.mime_type PPM_File_Type,
TO_CHAR(files.created_date, 'DD-MON-YY') Created_Date,
fold.path_name||'/'||fold.name PPM_Path
from
CLB_DMS_VERSIONS ver
inner join CLB_DMS_FILES files on ver.file_id=files.id
inner join CLB_DMS_FOLDERS fold on files.parent_folder_id=fold.id
where
ver.is_latest=1
order by Server_Path
]]>
</sql:query>

 

<core:forEach items="${DMS_Query.rowsByIndex}" trim="true" var="row">

<!--Storing the complete Server_Path of the file into a new variable-->
<core:set var="fileName" value="${row[1]}"/>

<core:new className="java.io.File" escapeText="false" var="fileTest" >
<core:arg type="java.lang.String" value="${fileName}" />
</core:new>

<!--Checks whether the file is empty or not available. Skips the record if the file is OK-->
<core:choose>
<core:when test="${fileTest.exists()}">
<core:if test="${fileTest.length() == 0}">
<core:set var="FileStatus" value="Empty File"/>
<!--Writes the record to the CSV file when the file is found but is empty-->
<file:line>
<file:column value="${row[0]}"/>
<file:column value="${row[1]}"/>
<file:column value="${row[2]}"/>
<file:column value="${row[3]}"/>
<file:column value="${row[4]}"/>
<file:column value="${row[5]}"/>
<file:column value="${FileStatus}"/>
</file:line>
</core:if>
</core:when>


<core:otherwise>
<core:set var="FileStatus" value="File Not Found"/>
<!--Writes the record to the CSV file when the file is not found in the Filestore-->
<file:line>
<file:column value="${row[0]}"/>
<file:column value="${row[1]}"/>
<file:column value="${row[2]}"/>
<file:column value="${row[3]}"/>
<file:column value="${row[4]}"/>
<file:column value="${row[5]}"/>
<file:column value="${FileStatus}"/>
</file:line>
</core:otherwise>


</core:choose>

</core:forEach>
</file:writeFile>

</gel:script>

 

Special thanks to gcubed as your inputs all across this forum helped me a lot 

Hope someone finds this script useful.

 

Regards,

Georgy

Hi Friends,

 

We use lookups in many places in CA PPM. Sometimes we attach a lookup to a new custom attribute on an object, or to an NSQL query attribute so that it becomes available in the grid portlets made from that NSQL query.

 

So if you want to modify a lookup, how do you know everywhere that lookup is currently in use? Or if you want to delete a custom lookup, how will you check which items are impacted? The following Oracle SQL queries will help you in this mission.

 

Object Attributes Using A Particular Lookup

 

SELECT
lookup_type "Lookup ID",
object_name "Object ID",
column_name "Attribute ID",
data_type "Data Type",
CASE is_active WHEN 1 THEN 'Yes' WHEN 0 THEN 'No' END "Active?"
FROM
ODF_CUSTOM_ATTRIBUTES
WHERE
lookup_type='Your Lookup ID'
ORDER BY object_name, column_name

 

Object Based Portlets Using A Particular Lookup

 

As you very well know, any portlet created from a Clarity Object can use any of the Object attributes. So the only additional information the below query will give you is the list of all those Object based portlets where the lookup attribute is currently available for use.

 

SELECT
oca.lookup_type "Lookup ID",
cp.portlet_code "Portlet ID",
oca.object_name "Source Object ID",
oca.column_name "Attribute ID",
oca.data_type "Data Type",
CASE oca.is_active WHEN 1 THEN 'Yes' WHEN 0 THEN 'No' END "Active?"
FROM
CMN_PORTLETS cp
INNER JOIN CMN_GRIDS cg ON (cg.portlet_id=cp.id AND cg.principal_type='SYSTEM')
INNER JOIN ODF_CUSTOM_ATTRIBUTES oca ON oca.object_name=cg.dal_code
WHERE
oca.lookup_type='Your Lookup ID'
ORDER BY cp.portlet_code, oca.object_name, oca.column_name

 

NSQL Query Based Portlets Using A Particular Lookup

 

SELECT
cnqf.lov "Lookup ID",
cnqf.filter_association "Attribute ID",
cgnq.query_code "Query ID",
cp.portlet_code "Portlet ID",
CASE cp.is_active WHEN 1 THEN 'Yes' WHEN 0 THEN 'No' END "Active?"
FROM
CMN_NSQL_QUERIES cnq
INNER JOIN CMN_GG_NSQL_QUERIES cgnq ON cgnq.cmn_nsql_queries_id=cnq.id
INNER JOIN CMN_NSQL_QUERY_FILTERS cnqf ON cnqf.cmn_nsql_queries_id=cnq.id
LEFT OUTER JOIN CMN_GRIDS cg ON (cg.dal_id=cnq.id AND cg.principal_type='SYSTEM')
LEFT OUTER JOIN CMN_PORTLETS cp ON cp.id=cg.portlet_id
WHERE
cnqf.lov='Your Lookup ID'
ORDER BY cgnq.query_code, cnqf.filter_association, cp.portlet_code

 

 

I hope the above SQL queries help you solve the where-is-this-lookup-used mystery in CA PPM. Feel free to modify them as per your needs.

 

Regards,

Georgy

Hi Friends,

 

Many of you might have come across scenarios where you are removing or modifying an attribute and want to know everywhere it is currently used, across any NSQL queries or GEL scripts in CA PPM. Or take my present scenario: it is year end and I want to know whether the previous year is hard-coded in any of the GEL scripts or NSQL (be it NSQL for a portlet or for a lookup). The following Oracle SQL queries will help you in such scenarios.

 

Search All NSQL Grid Portlets

 

SELECT
qry.portlet_code "Portlet ID",
CASE qry.is_active WHEN 1 THEN 'Yes' ELSE 'No' END "Active",
qry.created_date "Created Date",
qry.full_name "Created By",
qry.query_code "Query ID",
nsql.nsql_text "Query NSQL"
FROM cmn_nsql_queries nsql
INNER JOIN
(
SELECT cp.portlet_code,cp.is_active,cp.created_date,sr.full_name,cgnq.query_code,cgnq.cmn_nsql_queries_id
FROM cmn_grids cg
INNER JOIN cmn_portlets cp ON cp.id=cg.portlet_id
INNER JOIN cmn_nsql_queries cnq ON cg.dal_id = cnq.id
INNER JOIN cmn_gg_nsql_queries cgnq ON cnq.id = cgnq.cmn_nsql_queries_id
INNER JOIN srm_resources sr ON sr.user_id=cp.created_by
WHERE
UPPER(cnq.nsql_text) LIKE UPPER('%your search keyword%')
--cgnq.query_code='Query ID'
GROUP BY cp.portlet_code,cp.is_active,cp.created_date,sr.full_name,cgnq.query_code,cgnq.cmn_nsql_queries_id
)
qry ON qry.cmn_nsql_queries_id=nsql.id
order by qry.query_code, qry.portlet_code

 

NB: The same query can also be used if you want the list of grid portlets built from a particular NSQL query. In that case, just comment out the keyword search line and uncomment the following line of SQL, substituting the "Query ID" text with that particular NSQL query ID, as shown below.
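
In other words, the WHERE clause of the inner query would change to something like this (with the actual query code of your NSQL query in place of 'Query ID'):

WHERE
--UPPER(cnq.nsql_text) LIKE UPPER('%your search keyword%')
cgnq.query_code='Query ID'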

 

Search All NSQL Lookups

 

SELECT
clt.lookup_type "Lookup ID",
CASE clt.is_active WHEN 1 THEN 'Yes' ELSE 'No' END "Active",
cnq.created_date "Created Date",
sr.full_name "Created By",
cnq.nsql_text "Query NSQL"
FROM
cmn_list_of_values clov, cmn_lookup_types clt, cmn_nsql_queries cnq, srm_resources sr
WHERE clt.lookup_type=clov.lookup_type_code
and cnq.id=clov.sql_text_id
and sr.user_id=cnq.created_by
and UPPER(cnq.nsql_text) like UPPER('%your search keyword%')

 

Search All GEL Scripts

 

The SQL code below is a slight improvement over my earlier SQL code at Search All GEL Scripts, so that keyword searches work in any case, upper or lower.

 

SELECT ccnp.name Process_Name,
  bdp.process_code,
  ccns.name Step,
  ccna.name Action,
  ccs.script_text,
  bdpv.user_status_code Process_Status,
  bdp.created_date
FROM BPM_DEF_PROCESSES bdp,
     CMN_CAPTIONS_NLS ccnp,
     BPM_DEF_PROCESS_VERSIONS bdpv,
     BPM_DEF_STAGES bdstg,
     BPM_DEF_STEPS bds,
     CMN_CAPTIONS_NLS ccns,
     BPM_DEF_STEP_ACTIONS bdsa,
     CMN_CAPTIONS_NLS ccna,
     CMN_CUSTOM_SCRIPTS ccs
WHERE ccnp.table_name       ='BPM_DEF_PROCESSES'
AND ccnp.language_code      ='en'
AND ccnp.pk_id              =bdp.id
AND bdpv.process_id         =bdp.id
AND bdstg.process_version_id=bdpv.id
AND bds.stage_id            =bdstg.id
AND ccns.table_name         ='BPM_DEF_STEPS'
AND ccns.language_code      ='en'
AND ccns.pk_id              =bds.id
AND bdsa.step_id            =bds.id
AND ccna.table_name         ='BPM_DEF_STEP_ACTIONS'
AND ccna.language_code      ='en'
AND ccna.pk_id              =bdsa.id
AND ccs.id                  =bdsa.script_id
AND ccs.language_code       = 'gel'
AND UPPER(ccs.script_text) LIKE UPPER('%your search keyword%')
ORDER BY ccnp.name, ccns.name, ccna.name, bdp.created_date

 

 

Feel free to modify these queries to suit your needs.

I hope these will give you a high-level idea on the database tables involved.

 

Wish you all happy searching in CA PPM

 

Regards,

Georgy

Hi Friends,

 

Suppose one fine day you feel like checking the user login history of your Clarity application. Where would you go to get the list of users who have logged into Clarity, along with their login dates and the number of times each user logged in? CA PPM traditionally stores session information in two database tables, but each has its own disadvantages.

 

Traditional Session Tables

 

  • CMN_SESSIONS --- stores information about ACTIVE authenticated user sessions only. Moreover, the entries are automatically removed either when the user logs out or when the session expires and is cleared by the session cleanup job.
  • CMN_SESSION_AUDITS --- keeps a record of when authenticated user sessions begin and end. But there is no way to differentiate between a session created by a real user login and a session created by the Clarity background (BG) service on behalf of the user who runs or schedules a job/process.

 

LOG_SESSIONS: The New Session Table from CA PPM v14.1

 

This new LOG_SESSIONS table contains the session created date, the session token and the logged-in user id. The same Clarity 14.1 version also introduced the other LOG_XXXXX tables. We only need to set them up properly, which is described in detail under "Setting Up The Log Tables" in the article Most Viewed Portlets In Clarity PPM.

 

NOTE: The only additional step you need here is to make sure the 'Session token retention in days' parameter of the Delete Log Analysis Data job is set appropriately. As the parameter name suggests, the Clarity login history you get from the query below covers only the number of days you set for this parameter in the scheduled job. Set both parameters (Log retention in days and Session token retention in days) of the Delete Log Analysis Data job to a higher value if you want a longer login history. This ensures adequate data is present in the LOG_DETAILS and LOG_SESSIONS tables, which are used by the following SQL query.

 

Advantages of LOG_SESSIONS table

  • Entries remain in the table even after the user logs out of Clarity PPM
  • By coupling it with the LOG_DETAILS table, we can differentiate whether it's a BG session or a real user session

 

Clarity User Login History Query

 

SELECT
csu.user_name,
csu.first_name||' '||csu.last_name Full_Name,
TRUNC(ls.created_date) Date_Of_Login,
COUNT(*) Times_Logged_In
FROM
log_sessions ls,
cmn_sec_users csu
WHERE
csu.id = ls.user_id
AND ls.token IN (SELECT distinct session_cookie FROM log_details) ----- To exclude all the BG sessions
GROUP BY csu.user_name, csu.first_name||' '||csu.last_name, TRUNC(ls.created_date)
ORDER BY TRUNC(ls.created_date) DESC, COUNT(*) DESC

 

Feel free to edit the above Oracle query as per your needs. A grid portlet with the above result fields can be easily made by changing this SQL query to NSQL.

 

NB: The most important task here is to set up the log tables by properly scheduling both jobs, i.e. the Tomcat access log import/analyze job and the Delete Log Analysis Data job. Their schedules and parameter values determine how much login history is available to us.

 

Regards,

Georgy

Hi Friends,

 

Ever wondered how to get the list of top viewed or most commonly used Clarity portlets/pages in your organization? Until now, your only options were either to analyze all the app_access logs on the Clarity server or to use the resource-heavy Performance Tracking feature in CSA. But if your Clarity installation runs on a Tomcat application server and your Clarity version is v14.1 or higher, you can get your hands on this information quite easily. Interested? Read on...

 

Setting Up The Log Tables

 

This is possible due to the addition of two stock jobs from v14.1, which are:

  • Tomcat Access Log Import/Analyze Job

          --- imports and analyzes Clarity cluster Tomcat access log files

          --- inserts thus collected information in LOG_DETAILS, LOG_SUMMARY, LOG_FILES and LOG_REPORTDEFS tables

          --- Log Date parameter = date for access logs that are imported and analyzed.

          --- Set this parameter to Today using the Relative Date option and schedule this job to run daily.

  • Delete Log Analysis Data Job

          --- Purge Log Analysis data and session tracking data over N days old

          --- Log retention in days parameter = number of days data is retained in the above tables.

          --- This job parameter determines the duration on which we will be doing our analysis. So set it appropriately.

 

Once you have scheduled both of the above jobs to run daily, meaningful data will start accumulating in the LOG_XXXXX tables mentioned above. Then it is just a matter of running the SQL query given below to get the list of most commonly used or viewed portlets in your Clarity application. Feel free to edit the query if you only want to see the most viewed pages in Clarity PPM.

 

NOTE: The output of the query below won't be perfectly accurate. That is because there can be multiple portlets on the same tab and we only get the short version of the URL in the url_short column of the LOG_DETAILS table. So if you place two portlets on the same tab, one view will be credited to both portlets, as the page URL is the same for both.

 

Most Viewed Portlets Query

 

SELECT
ccn.name Portlet_Name,
cportlets.portlet_code Portlet_Code,
cportlets.portlet_type_code Portlet_Type,
cportlets.source Content_Source,
COUNT(*) "VIEWS"
FROM
cmn_page_portlets cpp,
cmn_portlets cportlets,
cmn_pages cpages,
cmn_captions_nls ccn,
log_details ld
WHERE
cpp.portlet_id = cportlets.id
AND cpp.page_id = cpages.id
AND cpp.principal_type = 'SYSTEM' ----- To exclude user added portlets to personalized pages/tabs like the Overview page
AND cportlets.portlet_type_code != 'SYSTEM' ----- To exclude System portlets like Action Items, Favorite Links, Favorite Photo etc
AND ccn.table_name = 'CMN_PORTLETS'
AND ccn.language_code = 'en'
AND ccn.pk_id = cportlets.id
AND cpages.page_code = ld.url_short
GROUP BY ccn.name, cportlets.portlet_code, cportlets.portlet_type_code, cportlets.source
ORDER BY COUNT(*) DESC, ccn.name

 

You can easily convert the above SQL code to an NSQL query and then create a grid portlet on it, so that privileged users can access this information from within Clarity itself. Don't forget to add a disclaimer that the user is only looking at data from the last N days (determined by the value you set for the Log retention in days parameter in your Delete Log Analysis Data job schedule).

 

Finally, a BIG THANKS to CA for adding this Apache Tomcat Log Analysis feature to CA PPM. We love you

 

Regards,

Georgy

Hi Friends,

 

As with my previous blog post on creating a filter portlet, this is another area where we never had a client requirement and which was never explored. This blog post is about creating those graph or chart portlets in Clarity, which now look kind of beautiful to me (no, these are not the interactive Xcelsius ones).

 

I have used a very basic NSQL query to provide data to these portlets. The query just gives ETC per resource in a project.

 

Click on the permalink at the bottom of this post to see the attached word document.

 

Pie Chart

Pie Chart.png

Donut Chart

Donut Chart.png

Funnel Chart

Funnel Chart.png

Bar Chart

Bar Chart.png

Line Chart

Line Chart.png

Column Chart

Column Chart.png

Area Chart

Area Chart.png

 

Happy claritying guys,

Georgy N Joseph

Hi Friends,

 

Many of you, even after working several years in Clarity PPM, might not have had a chance to build a filter portlet. Even if you disagree, that's my case for sure. While creating all those grid, HTML and Xcelsius portlets over the past 5 years, I always wondered when I would get the chance to make a filter portlet. The requirement never came from any of the clients, and I was too lazy to try it without a real need for it.

 

But then, at last, I couldn't hold back any longer, so I went ahead and created my first, very basic, crude filter portlet. I just wanted to see how it works on a page where you have multiple portlets with common filter fields, how the filtering is done in one go, and so on. Once I completed it, it looked amazing. If any of you have some spare time and have not tried making filter portlets yet, I urge you to try it. I assure you that you are going to like it and will love experimenting with it.

 

I have also put together a detailed document with screenshots on how to create a basic filter portlet, so that you understand all the simple, core steps in its creation. Take a look at the attached Word document with step-by-step instructions on building a filter portlet. Do let me know if you have any suggestions or queries regarding this content.

 

Click on the permalink below to see the document.

 

Happy claritying guys...

Georgy N Joseph