Clarity


Availability Rate

  • 1.  Availability Rate

    Posted Oct 26, 2015 04:10 AM

    Hi All,

     

    Availability Rate is a virtual attribute on the Resource object and is system-created. I would like to confirm that when it is added to the list and filter views of the Resource List, it does not work as desired: searching (filtering) and sorting do not work.



  • 2.  Re: Availability Rate

    Posted Oct 27, 2015 02:58 AM

    Any information on this?



  • 3.  Re: Availability Rate

    Posted Oct 27, 2015 04:44 AM

    I tried that on 14.2 in a single test (which is not statistically significant).

    I can add Availability Rate to the Resource list view and to the filter view.

    I can't sort the list. If I put my cursor on the label, it does not turn into a "hand" as it does on other fields that can be used for sorting.

    When I put Availability Rate into the filter, enter a value that I can see in the list, and click Filter, nothing happens: all records with all values are displayed.



  • 4.  Re: Availability Rate

    Posted Oct 27, 2015 05:26 AM

    Thanks. I just want to know if someone can explain this behavior. I know it's a virtual field and is system-defined.



  • 5.  Re: Availability Rate

    Posted Oct 27, 2015 07:53 AM

    It would take someone like nick_darlington to explain it.

    The way I see it, in order to use it for sorting or filtering you would have to retrieve or stage the data first, calculate the values, and then retrieve the data again with the selected sort or filter applied.

    That is not how CA PPM works, it retrieves the data only once.

    Large string fields work the same way.
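The two-pass idea described above can be sketched in application code. This is a minimal illustration in Python with invented data and an invented formula, not actual CA PPM logic: because the database never sees the computed value, every row must be fetched and the virtual value computed per row before any sorting or filtering can happen.

```python
# Sketch only: filtering/sorting on a computed ("virtual") field must
# happen in application code after ALL rows have been fetched, because
# the value is derived at read time and never stored in the database.

resources = [
    {"name": "Alice", "allocated_hours": 30, "capacity_hours": 40},
    {"name": "Bob",   "allocated_hours": 40, "capacity_hours": 40},
    {"name": "Carol", "allocated_hours": 10, "capacity_hours": 40},
]

def availability_rate(row):
    # The "virtual" attribute: computed on the fly, per row.
    return 1 - row["allocated_hours"] / row["capacity_hours"]

# Sorting/filtering requires evaluating the function for every row first --
# the extra pass that a single SQL query cannot do for us.
available = [r for r in resources if availability_rate(r) >= 0.25]
available.sort(key=availability_rate, reverse=True)

for r in available:
    print(r["name"], round(availability_rate(r), 2))
```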



  • 6.  Re: Availability Rate

    Posted Oct 27, 2015 11:05 AM

    That's correct.  In short, in order for the sorting/filtering to work, the data needs to be something that can be natively queried and operated on with SQL.

     

    I.e. the field type needs to support conditions in the WHERE clause such as '=' and 'LIKE' for the values, and/or it needs to support being included in the ORDER BY column list.

     

    Large strings are LOB (Large OBject) based in the database servers, as are any of our 'rate over time' and 'value over time' fields that are BLOB (Binary Large/Long OBject) based.

     

    The detailed reasoning is that in order to maximize the database server's capability to sort and filter the results it is fetching, you want as little non-database code as possible (for performance and resource-utilization reasons).  So whilst in theory functions and procedures could be derived that 'crack' the formats of these fields so they can be sorted and filtered to some extent, it is similar to what urmas describes and would take more than just complex SQL to resolve.

     

    Why databases store data this way is itself a matter of performance and efficiency; my attempt to describe that would be as follows: if a row can contain massive blocks of data and the row sizes aren't uniform, then the database cannot perform optimally (in traditional RDBMS terms).

     

    So instead of storing this data directly within the row as you logically perceive it, the database server virtualizes this information into another table and just contains a 'pointer' to it from the original row and column so it knows where to look.  This pointer is a memory address, and so if you tried to sort/filter on it directly you'd just be trying to sort values like 0x12345678 vs. 0x44332211 and not the underlying data you thought was there.  Instead of allowing that, the database just errors instead:

     

    Example with LOB:

    select pralloccurve from prteam order by pralloccurve asc
    /

     

    Results:

    ORA-00932: inconsistent datatypes: expected - got BLOB 

     

    Example with standard 'in-row' columns:

     

    select distinct prstatus from prteam order by prstatus asc
    /

     

    Results:

    PRSTATUS   
    -----------
    5          

    1 record(s) selected [Fetch MetaData: 0/ms] [Fetch Data: 0/ms]

    [Executed: 27/10/15 10:02:16 CDT ] [Execution: 131/ms]

     

    If you need that capability, it would need to be reported as an idea (or, if one already exists, up-voted), as this would require significant effort to accommodate.



  • 7.  Re: Availability Rate

    Posted Oct 27, 2015 12:47 PM

    Excellent explanation.

     

    Nick - why can't we have a 'blob-cracker' like in the good ol' days?  I've never had a good explanation on this, and I miss this ABT capability.  But I still get weird looks from users when trying to explain to them why they can't see their data because it's outside the slice window.

     

    Perhaps the DWH with Jaspersoft addresses this?  I see that the DWH has date-range settings to work with, but these might be set very wide, and once the data is populated on the reporting server, we wouldn't have the performance issues we'd have with huge timeslice windows on the app servers.  Is this the thought process, today?

     

    We just launched 14.2, moving from 13.1 - so this is our first experience with DWH and Jaspersoft - and we still have some issues to work out before it's running correctly.  Haven't had training on it yet (scheduled for some tomorrow, and at CA World 2015).



  • 8.  Re: Availability Rate

    Posted Oct 27, 2015 01:17 PM

    The live PRAPI/PRAPIX calls that could do blob cracking (e.g. prcurvesum(blobfield, start, finish) / 3600) were limited in their scalability - enough so that we then had to create the first 'time slicing' job (pre-Niku 6 / pre-Clarity) that was wrapped into an executable called 'NiCE' (Niku Curve Extractor) - and these all hit an upper bound in performance over large amounts of data.
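Conceptually, a call like prcurvesum(blobfield, start, finish) / 3600 summed the seconds' worth of a stored curve falling inside a window, then converted to hours. A rough Python sketch of that idea follows; the segment representation here is invented for illustration (the real curve format was proprietary):

```python
# Illustrative only: a "curve" here is a list of
# (seg_start, seg_end, rate_per_second) tuples, an invented stand-in
# for the proprietary blob format that prcurvesum operated on.

def prcurvesum(curve, start, finish):
    """Sum rate * overlapping seconds for each segment within [start, finish)."""
    total = 0.0
    for seg_start, seg_end, rate in curve:
        overlap = min(seg_end, finish) - max(seg_start, start)
        if overlap > 0:
            total += rate * overlap
    return total

# A curve worth 1 unit/second over its first 7200 seconds.
curve = [(0, 7200, 1.0)]
hours = prcurvesum(curve, 0, 7200) / 3600  # seconds -> hours
print(hours)  # 2.0
```

Running something like this per row, live, over a large dataset is what did not scale, hence the move to precomputed time slices.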

     

    I think, though, that technically there isn't a reason why that couldn't be reinvented, either with T-SQL/PL-SQL (and knowledge of the proprietary formats of the data) or with the ability to use things like embedded Java in the database.  But from a direction standpoint the changes have been towards moving away from blobs and cracking altogether; hence the new portfolio curves (introduced when portfolio management was recently redesigned), which use non-LOB storage that can utilize the DBMS capabilities with native, regular SQL.
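The non-LOB direction can be sketched with a tiny SQLite example (the table and column names are invented, not the PPM schema): storing one plain row per resource per period, instead of one opaque blob per resource, lets ordinary WHERE, GROUP BY, and ORDER BY work natively.

```python
import sqlite3

# Invented schema for illustration: curve data in plain relational form,
# one row per resource per period, rather than an opaque BLOB column.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE resource_alloc (
        resource   TEXT,
        period     TEXT,   -- e.g. '2015-10'
        allocation REAL
    )
""")
conn.executemany(
    "INSERT INTO resource_alloc VALUES (?, ?, ?)",
    [
        ("Alice", "2015-10", 0.50),
        ("Alice", "2015-11", 0.75),
        ("Bob",   "2015-10", 1.00),
    ],
)

# Native SQL filtering, aggregation, and sorting -- exactly what a BLOB
# column cannot offer (sorting one would only compare opaque bytes, or
# error outright as in the ORA-00932 example earlier in the thread).
rows = conn.execute(
    """
    SELECT resource, SUM(allocation) AS total
    FROM resource_alloc
    WHERE period >= '2015-10'
    GROUP BY resource
    ORDER BY total DESC
    """
).fetchall()
print(rows)
```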

     

    Personally, I'm not sure what the answers are on the DWH and Jaspersoft front, and I suspect it might remain dependent on the PPM schema contents (i.e. data that still needs to be blob-cracked / time-sliced in PPM may have limited slices or windows of time available in the DWH too, while data that has migrated to native SQL can be present in its entirety).  That is just a hunch, though, and I'd not want to be quoted on it.



  • 9.  Re: Availability Rate

    Posted Oct 27, 2015 01:31 PM

    Thanks, Nick,

     

    I now recall some talk about moving away from blobs.  Now that portfolio management is using this curve technique, perhaps there is hope that other OTB portlets/reports will soon be using "time" and other types of curves.

     

    I think your hunches on DWH/Jaspersoft are correct.  Until the other blobs are replaced with curves, my thought was to run sufficiently large timeslices and DWH windows to initially populate the DWH, then reduce the timeslices back down to reasonable performance settings.  I expect I'll be able to raise this question in training.

     

    Dale



  • 10.  Re: Availability Rate
    Best Answer

    Posted Oct 29, 2015 01:39 AM

    Thanks all for your inputs. This is an identified defect logged with CA, and they have decided not to fix it.



  • 11.  Re: Availability Rate

    Posted Oct 29, 2015 09:27 AM

    I had a look at those references, but to clarify something: the fix would not necessarily have been to make this area work, but to remove the attribute from places where it would not function (i.e. not letting it be added to the list of filter attributes if you can't filter on it).  So it is a defect, but it should not be implied that the attribute can be filtered or sorted.



  • 12.  Re: Availability Rate

    Posted Oct 30, 2015 01:20 AM

    I tend to agree with that, as CA PPM works based on the way the database works (you can't filter or sort an SQL query much differently), and thus the underlying cause is not in CA PPM but outside it.