
Clarity PPM

8 Posts authored by: Aurora_Gaimon

Do your days start like mine, with one challenge after another? Some of you may have to streamline how you get the kids ready for school, others how you get to work on time, and all of us have to find balance and carve out time for a social life. As project professionals, we may be ahead of the rest of the world in how we learn to manage these challenges, but CA PPM can present challenges, too. Just like in our personal lives, knowing how and when to streamline processes makes the difference between starting—and ending—the day with a smile, or with a big headache.

 

When you’re looking for ways to manage projects and portfolios more efficiently, an intriguing place to begin is to use CA PPM for extracting data from - and integrating data with - other systems. It’s a common practice—in fact, many CA PPM customers have integrated the solution with multiple systems. Here’s the challenge, though: for the integration to work well, an experienced, skilled professional must review the environment and take many variables into consideration.

 

For example, every time a new outbound integration or a large data extract is needed in Excel, CSV, XML or flat delimited files, here are just a few of the many questions that need answers before you can even begin:

  • How many process engines exist, and how many pipelines does each one have?
  • How many GEL scripts are running concurrently?
  • What JVM is assigned to the background service?
  • Do I need to split the background service? Do I need an additional one?
  • How much data needs to be extracted? In what format? Do I need to serialize the files?
  • How much CPU is used? How fast is the script?
  • How many days and hours of coding are needed? How much debugging?
  • And many, many more…

 

As you can see, a lot of factors need to be considered so that data is extracted with the right resources, and in a timely fashion, to avoid performance issues. If you don’t know exactly what you’re doing, your day is going to start and end with a big headache, and you’ll be asking yourself, “Why isn’t there a better way?”

 

Good news! There is. You can use a PPM component—the Data Extraction adaptor—to simplify this process, and it’s so awesome and user friendly that it saves thousands of lines of code, eliminates a lot of the challenges, and reduces the risk of performance issues. Curious? Click here to find out more about this packaged work product (PWP).

 

But before you do, I will answer what are likely to be your top three questions:

 

Is it compatible with and supported in SaaS environments?

Yes, it is. In fact, all PWP global delivery components are SaaS compatible and supported.

 

How user friendly is it?

It requires SQL skills to write queries for extracting data, but the rest is intuitive and very user friendly. Also, the manual is very helpful.

 

Can I download it from support.ca.com?

No, but you can get it quickly by contacting your CA sales representative or account manager for a quote.

 

Readers interested in more detail can check out DocOps. I encourage you to participate in the best-in-class CA Communities site, where you have access to your peers, events and support. You can also reach out to CA Services for information about CA PPM release 15.3 upgrades/implementations and individualized business outcome references and analysis.

 

What kinds of headaches has data extraction given you? Feel free to post in the comments section of this blog or contact me directly via email and Twitter @aurora_ppm.

 

The accepted wisdom is that large-scale enterprise software solutions like CA PPM require skilled functional and technical resources. In my experience, most customers aren’t interested in spending too much of their time on an enterprise solution’s technical details or too much of their money on administrative costs. Rather, they rightfully focus on strategy initiatives, business process enhancements and portfolio delivery.

 

Application Management Services to the Rescue

I had the privilege of working as an AMS architect in EMEA. In doing so, I learned firsthand that AMS provides an extension of customers’ PMO teams, including many advantages such as:

  • Proactive monitoring of infrastructures and applications
  • Onsite and remote resources
  • No need to worry about acquiring and securing the right skills
  • Access to CA’s Global Delivery packaged work products (PWPs)
  • More efficient/productive utilization of out-of-the-box modules
  • Timely development of custom reports
  • Creation of interfaces using a robust framework
  • Documentation of all issues, changes and enhancements
  • Detailed reports on SLAs
  • Predictable cost

 

How It Works

AMS provides customers with a full team of product experts (service manager, architect and consultant) for all issues and changes in their environments.

 

 

AMS architects (like me) are quickly able to develop a vision of a customer’s wants and needs as well as a view of all the “magic” that happens in the AMS technical background. Whenever possible, architects accommodate customer business requirements via smart usage of CA PPM’s out-of-the-box capabilities, applying best practices and developing less expensive customizations, especially since AMS customers have access to Global Delivery PWPs.

 

With AMS, all functional and technical details are managed by CA’s architect and consultants, so they are no longer a burden for the customer’s PMO. CA’s service manager supervises all approvals, SLAs, deliveries, escalations and reports. We also guide customers in building realistic roadmaps to the future.

 

But AMS is more than just having a team at your disposal—it’s a complete solution with 24/7 support that covers configuration and customization, too. Best of all, AMS offers many flexible working models, depending on the individual customer’s needs and requirements. It can be implemented for on-premise or SaaS solutions.

 

Just One Success Story of Many

In the first three years after a large multinational company adopted CA PPM, not much was deployed and the environment was heavily configured, which added complexity. The customer wanted to focus on business requirements, including several complex approval workflows and interfaces.

The customer had a very ambitious roadmap, but the company could not achieve it on its own. Once they adopted AMS, the roadmap was delivered, and unpredictable costs and lack of skilled resources weren’t issues any more.

 

For readers interested in more detail, contact your CA account manager or sales representative. I encourage you to participate in the best-in-class CA Communities site, where you have access to your peers, events and support. You can also reach out to CA Services for information about CA PPM Release 15.2 upgrades/implementations, and individualized business outcome references and analysis. Feel free to post in the comments section of this blog or contact me directly via email and Twitter @aurora_ppm.

One of the most feared issues a customer can face in an enterprise application is poor performance, which can be caused by environmental issues, configurations, misuse of the tool, etc. It can be a long journey of tedious tasks—troubleshooting, support tickets, log and evidence collection, steps to reproduce, etc., all in an attempt to discover the root cause.

 

At CA World ’17, CA PPM’s new out-of-the-box suite of portlets for monitoring and analyzing technical performance issues was demoed.

 

That’s right! It’s never been so easy to spot the causes of performance issues, thanks to CA PPM’s friendly and dynamic graphical representation.

 

I can just hear the crowd’s excitement. Read on for more details.

 

In what CA PPM version are the new log analysis portlets available?

They’re in CA PPM 15.2 and higher versions.

 

How do they work?

The portlets automate the process of parsing application server log files and storing the results in the database for analysis. They are meant to identify performance bottlenecks, frequently accessed features, response times, peak hours, and more.

 

The portlets are built using NSQL queries against the LOG_ tables, such as LOG_SUMMARY, which summarizes data collected in tables such as LOG_DETAILS and LOG_SESSIONS. The tables are populated each time the Tomcat access log import/analyze job completes.
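If you want to build your own views on top of these tables, here is a minimal NSQL sketch of a custom query portlet against LOG_SUMMARY. The column names (ID, LOG_DATE, PAGE_VIEWS, AVG_RESPONSE_TIME) are assumptions for illustration only; check the actual table definition in your release before using them.

SELECT @SELECT:DIM:USER_DEF:IMPLIED:LOGS:LS.ID:row_id@,
       @SELECT:DIM_PROP:USER_DEF:IMPLIED:LOGS:LS.LOG_DATE:log_date@,
       @SELECT:DIM_PROP:USER_DEF:IMPLIED:LOGS:LS.PAGE_VIEWS:page_views@,
       @SELECT:DIM_PROP:USER_DEF:IMPLIED:LOGS:LS.AVG_RESPONSE_TIME:avg_response_time@
FROM   LOG_SUMMARY LS
WHERE  @FILTER@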

 

What preliminary steps should I take?

Rights: Only administrators should be allowed access. Ensure that your administrators have the following rights:

  • Administration - Application Setup global access right
  • Administration - Access (to access the administrator menu)

Administrators must also be added to the Log Analysis Access group to access the page.

Ensure that administrator users have been granted rights to run jobs:

  • Jobs - Access global access right
  • Jobs - Run - All (or at least the appropriate instance-level right for running the Tomcat access log import/analyze job)

 

How do I run the job?

Run Tomcat access log import/analyze.

  • Run in scheduled mode with Log Date blank: the default value will be the previous day.
  • Run in immediate mode: enter today’s date as the Log Date value.

Note: To troubleshoot a specific issue, it’s recommended to run the job immediately, on demand, so that the collected data includes the most recently generated lines in the logs.

 

How are the results presented?

Go to the Administration menu in the Security and Diagnostics section and click on Log Analysis.

The page contains four tabs:

  • System stats by hour/day
  • Average response time by server
  • Session counts by server
  • Slow actions by day

Sample: Average response time by server

 

Sample: Slow actions by day


 

System Stats by Month:

  • Monthly page views
  • Monthly page views by server
  • Monthly session counts
  • Monthly XOG requests
  • Monthly XOG requests by server

 

Sample: Monthly page views


 

Sample: Monthly session counts


  

Daily Page/XOG Views:

  • Slow pages top 20
  • Page views by service
  • Page views by server
  • Average response time by day
  • Daily page views
  • XOG requests by service
  • XOG requests by server
  • Average XOG response time by day
  • XOG calls by day

 

Sample: Slow pages top 20


 

Sample: Average response time by day


 

Log Data Export: Each row of exported data shows the date, total page views by all users, average page response time in seconds, total XOG requests, average XOG response time, and unique session count for that specific day.


 

Log analysis is a graphical tool: a reading of the logs. The portlets are self-explanatory; no special guidance is needed to understand them. Just bear in mind that you need to set values in the filter to show data: you must always select a Date, Environment and Hostname in the filter bar.

 

In my experience, initial root cause analysis of a performance issue takes 5-20 minutes with this tool, compared to several hours reading logs and asking questions. This new component is a true time saver.

 

For readers interested in more detail, check out DocOps. I encourage you to participate in the best-in-class CA Communities site, where you have access to your peers, events and support. You can also reach out to CA Services for information about CA PPM Release 15.2 upgrades/implementations, and individualized business outcome references and analysis. Feel free to post in the comments section of this blog or contact me directly via email and Twitter @aurora_ppm.


 

In the past few years, many CA customers have migrated their CA PPM environment from on-premise to SaaS. The first step in this process is to run a discovery script to ensure that the customer’s CA PPM instance is SaaS compliant and identify any customizations that have been developed.

 

I can hear you saying, “The discovery script tool sounds great. It should be available to CA PPM administrators and advanced users for health checks, customization reports and/or troubleshooting, and reports should be generated in a few easy steps.”

 

Well, I’m happy to announce that a new customization discovery analysis report has been integrated as an out-of-the-box feature in CA PPM. Read on for more details.

 

In what CA PPM version is the new customization discovery analysis report available?

It’s in CA PPM 15.2 and higher versions.

 

What information does the report provide?

This report was designed for CA Technologies Global Delivery teams, CA Support, and advanced administrators to help them prepare to migrate from an on-premise to a SaaS environment. Bear in mind that some customizations allowed in an on-premise system are not permitted in SaaS. The report also includes an estimated level of complexity for the migration, based on non-compliant objects and other factors.

 

How do I run the report?

 

1. Prepare to run the report.

Rights: Only administrators should be allowed access. Ensure that your administrators have the following rights:

  • Administration - Application Setup global access right
  • Administration - Access (to access the administrator menu)
  • Jobs - Access global access right
  • Jobs - Run - All (or at least the instance-level right for running the PPM Customization Discovery Analysis job)

Administrators must also be added to the Log Analysis Access group to access the health report page.


 

2. Run the report. There are two different ways to run it:

a) “One Click” from the Health Report:

  • Go to the Security and Diagnostics section of the Administration menu and click on Health Report.
  • Click on Download Discovery Analysis Report.

b) From Jobs (optional): the suggested method for large deployments.

  • Go to the Administration menu, under Data Administration, and click Jobs.
  • Search for the PPM Customization Discovery Analysis job.
  • Select the Active check box.

 


 

The job results are delivered in an Excel spreadsheet, which can also be emailed to a recipient if one is specified in the job parameters at execution time.


 

Do you have any advice on how to analyze the report?

The output contains many tabs of data: Cover Page, Non-Compliant Objects, Non-Compliant DB Source, Custom Triggers Detail, Custom Procedures Detail, Custom Functions Detail, Custom Views Detail, Custom Synonyms Detail, Custom Packages Detail, Grid Portlets, Grid Object Portlets, Graph Portlets, Filter Portlets, HTML Portlets, Interactive Portlets, NSQL Code, Reports, Custom Studio Objects, Custom Attributes, Gel Scripts, Processes, Scheduled Jobs, TimeSlices, Email Anomalies, Install History, File Store, Dynamic Lookups, Custom Groups, Transactions, License Counts, Custom Java jobs, Custom Stored Proc Jobs and GD Components.

 


Note: Data analysis requires advanced technical skills.

 

The report contains very detailed information with names, IDs, and source code. Determining why an object has been flagged as non-compliant is a task that requires hours of deep-dive analysis. Some of the most common causes are (see the query sketch after this list):

  • Customizations done directly in the database instead of through CA PPM Studio, such as creating triggers, stored procedures, tables, views or indexes.
  • Portlets, dynamic lookups and scripts that rely on non-compliant database customizations, or hard-coded URLs in the custom code.
  • Custom integrations.
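As a quick first pass on the first cause, you can list what actually lives in the application schema. A rough Oracle sketch, run as the CA PPM schema owner (this only lists candidates; you still need the report to tell vendor objects from true customizations):

SELECT OBJECT_TYPE, OBJECT_NAME, LAST_DDL_TIME
FROM   USER_OBJECTS
WHERE  OBJECT_TYPE IN ('TRIGGER', 'PROCEDURE', 'FUNCTION', 'VIEW', 'PACKAGE')
ORDER  BY OBJECT_TYPE, OBJECT_NAME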

 

Analysis requires collaboration between skilled technical and functional resources who know the particular environment’s specifics. Once the discovery analysis has provided a full scan of the system, the organization must conduct a cleanup to remove obsolete customizations and make the system SaaS compliant.

Some organizations document all technical changes and enhancements, but that documentation can also become obsolete. And in my experience, many unsupported and non-compliant customizations never make it into customers’ documentation or private knowledge bases at all. The discovery analysis report is an excellent tool for getting a detailed picture of environment customizations.

 

For readers interested in more detail, check out DocOps. I encourage you to participate in the best-in-class CA Communities site, where you have access to your peers, events and support. You can also reach out to CA Services for information about CA PPM release 15.2 upgrades/implementations and individualized business outcome references and analysis. Feel free to post in the comments section of this blog or contact me directly via email and Twitter @aurora_ppm.

 

We’re lucky to live in a world where everything is dynamic and evolving. Someone, somewhere is always making enhancements to the tools we use on a regular basis—including software. However, when new functionality is incorporated into applications, it is sometimes rightly viewed as adding complexity to infrastructure environments.

 

But it doesn’t have to be that way. What we really need is a good set of administration tools, like a complete health report. Summarizing the status of all systems on a single page can save tons of troubleshooting time and overhead.

 

You may be thinking: “That’s awesome! Do we have a health report in CA PPM?” The answer is yes: the CSA page contains a health report tab. It is a legacy from previous CA PPM versions, before the data warehouse and the embedded Jaspersoft reporting tool were incorporated, when we weren’t able to generate extended reports. The good news is that CA PPM itself can now supply health reports.

 

Read on for more details.

 

 

In what CA PPM version is the new health report available?

It’s in CA PPM 15.1.0.4 (patch 4), 15.2.x and higher versions. The new health report is in the administration menu and shows the status of all systems (application, database, data warehouse and reporting). It detects incorrect setups and any non-compliant governor limits.

 

 

How does it work?
First of all, take the following preliminary steps:


Rights: Only administrators should be allowed access. Ensure that your administrators have the following rights:

  • Administration - Application Setup global access right
  • Jobs - Access global access right
  • Jobs - Run - All (or at least the instance-level right for running the Tomcat access log import/analyze job)
  • Administration - Access (to access the administrator menu)


Also, administrators must be added to the Log Analysis Access group to access the health report page.


Jobs: Run Tomcat access log import/analyze.


Run Health Report:

  • From the UI (User Interface), go to the Administration menu in the Security and Diagnostics section and click on Health Report.

  • On the command line, enter the following command: admin healthcheck

 

Analyze Results:

Now we’re ready to finally review the report, which contains five tabs:

 

  • Summary: This tab shows the “% Completed” calculated from the total number of settings that require a value during a new install or upgrade. 

  • Green icon: 100% completed.
  • Red icon: The percentage that has been completed correctly. For example, in this screenshot, 81% of database settings have been completed correctly, so the remaining 19% of database settings are incorrect.

 

  • Application: This tab shows application server details. Settings include file locations, URLs, Java version, JVM, LDAP, SSO, SSL, ports, and instance-specific configurations for CSA, background (bg), and beacon services.

 

  • Database: This tab provides information about database connectivity, table spaces, options, and parameters.

 

  • Data Warehouse: This tab provides information about data warehouse connectivity, table spaces, options, and parameters. Settings include the JDBC URL, database time zone mismatch, database size, and the ETL job timeout for the Load Data Warehouse job. This tab also includes setup information for time slices and other application settings for the data warehouse.

 

  • Reporting: This tab identifies the health status of the Jaspersoft report server.

 

 

What does the “Fix It” link mean?

The Fix It link appears when the non-compliant setting can be corrected from an application page. For example, in this screenshot, the red icon and Fix It link appear to help you set up Weekly Slices.

 

 

Click on Fix It to navigate to the Time Slices page.

 

 

Can we download the Health Report?

Yes. You can download it in two different ways:

  • From the Download Health Report button
  • From the Excel Icon (top-right icon)

 

 

Is the health report feature available in SaaS environments?

Yes, but with restrictions. Database and data warehouse tabs do not list the following items:

  • JDBC URL
  • Database parameters
  • SQLNET parameters

 

 

For readers interested in more detail, check out DocOps. I encourage you to participate in the best-in-class CA Communities site, where you have access to your peers, events and support. You can also reach out to CA Services for information about CA PPM Release 15.2 upgrades/implementations, and individualized business outcome references and analysis. Feel free to post in the comments section of this blog or contact me directly via email and Twitter @aurora_ppm.

There are many posts dedicated to the new CA PPM release. No wonder, since the release has many new enhancements and functionalities.

 

How many times have we heard the request, “Can we add a ‘forgot password’ functionality?” (If only I had a penny for every time...) Well, it’s finally here. CA PPM 15.2 presents the out-of-the-box Password Reset on the login screen.

 

Because it does not require the help of your CA PPM administrators or support team, this function will reduce the administrative support your organization needs for CA PPM. So encourage your users to reset their passwords by themselves!

 

While everyone has reset their passwords in other software solutions, I’ll lead you through a step-by-step guide for doing it in CA PPM. After the guide, I’ll answer several frequently asked questions about password reset in CA PPM.

 

How does it work?

 

 

  • Enter your CA PPM user name:

 

 

  • Click Send Email. You will receive the email below at the email address associated with your CA PPM account:

 

  • Click on the link provided in the email and you will arrive at the screen below. Enter and confirm your new password and click Continue:

 

  • So simple!

 

 

How do we enable/disable it?

It’s enabled by default in the new UX and can’t be disabled.

Does it work for SSO and LDAP users?

Currently, it is not available for SSO or LDAP users.

Can we change the text of the email notification?

No.

Can a user reuse the link in the email to reset passwords in the future?

No. Users should not bookmark or reuse links provided in the emails. They should make a new request each time they need to reset a password. If a user tries to reuse the link, error API-1041 will appear.

 

 

 

Do we need to grant any special right for CA PPM users to use this function?

No. It's not related to any right in CA PPM.

 

What happens if a user is locked or inactive in CA PPM?

The notification email will not be sent to the user.

 

For readers interested in more detail, check out DocOps. I encourage you to participate in the best-in-class CA Communities site, where you have access to your peers, events and support. You can also reach out to CA Services for information about CA PPM Release 15.2 upgrades/implementations, and individualized business outcome references and analysis. Feel free to post in the comments section of this blog or contact me directly via email and Twitter @aurora_ppm.


Many times I get the following question: “What are best practices/recommendations for CA PPM housekeeping?”

 

In this post, I will explain some good practices for the CA PPM application using OOTB (out-of-the-box) jobs and workflows. Infrastructure housekeeping maintenance will not be covered.

 

 

OOTB Scheduled Jobs:

Schedule the following general jobs to run on a regular basis:

 

  • Purge Audit Trail: It’s a good practice to set a maximum period to keep records in the table. You need to set it on each object where the audit trail is used. I’d recommend a maximum of 90 or 120 days, but it all depends on how many attributes are being audited and on your business needs.

 

In my experience, when the table grows beyond 1 million records, it causes performance issues.

Run the following query to check the table size:

   SELECT COUNT(*) FROM CMN_AUDITS

Ensure it does not grow beyond 1 million records.
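If the count is high, a breakdown helps you decide where to shorten retention or trim audited attributes. A sketch, assuming CMN_AUDITS carries OBJECT_CODE and ATTRIBUTE_CODE columns (verify the column names in your release):

SELECT OBJECT_CODE, ATTRIBUTE_CODE, COUNT(*) AS AUDIT_ROWS
FROM   CMN_AUDITS
GROUP  BY OBJECT_CODE, ATTRIBUTE_CODE
ORDER  BY AUDIT_ROWS DESC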

 

  • Purge Documents: WATCH OUT! Do not run this job on a regular basis unless it is really what you want: it permanently deletes documents! The CA PPM administrator should always back up documents (the “file store” files, or a DB dump in case they are stored in the DB) before running the job.

 

The job allows you to filter based on:

  • Purge All Documents for the Following Objects
  • [Or] Purge Documents and Versions Not Accessed for [n] Days
  • [Or] Retain the [n] Most Recent Versions and Purge the Prior Versions

and to scope the purge to:

  • All Projects, a Project OBS, or a Specific Project
  • All Resources, a Resource OBS, or a Specific Resource
  • All Companies, a Company OBS, or a Specific Company
  • The Knowledge Store

 

  • Purge Notifications: It’s a good practice to delete old notifications; in my experience, users don’t do it themselves. Rely on the “From Created” and “To Created” parameters to purge notifications older than n days; otherwise, the table will grow and may cause performance issues (similar to the audit trail).
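The same kind of size check applies here. A sketch, assuming notifications are stored in a CMN_NOTIFICATIONS table (the table name is an assumption; confirm it in your schema before relying on it):

SELECT COUNT(*) FROM CMN_NOTIFICATIONS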

 

  • Delete Log and Analysis: This job is already scheduled by default to run once per day. Do not cancel it; just move the scheduled time into your non-working business hours.

 

  • Delete Process Instance: It’s a good practice to delete “completed” and “aborted” processes older than n days on a regular basis, depending on how long you need to keep the details. Bear in mind that going beyond 200,000 completed processes may cause slowness or performance issues (see the query below).
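To see how close you are to that threshold, count instances by status in BPM_RUN_PROCESSES, the same table used by the cleanup queries later in this post:

SELECT STATUS_CODE, COUNT(*) AS INSTANCES
FROM   BPM_RUN_PROCESSES
GROUP  BY STATUS_CODE
ORDER  BY INSTANCES DESC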

 

  • Oracle Table Analyze Job (only if the database vendor is Oracle and CA PPM is on-premise): It’s a good practice to run it weekly during non-working business hours. In case of general performance issues, and based on CA Support or CA Services recommendations, you may run it daily.

 

This job refreshes the statistics that the database uses to determine the best execution path for a query. Statistics should be re-analyzed under certain circumstances, such as when the schema or the data volume has changed.
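For reference, what this job automates is roughly an optimizer statistics refresh via DBMS_STATS. A minimal Oracle sketch; the NIKU schema name is the common default and an assumption here, so coordinate with your DBA before running it manually:

-- Refresh optimizer statistics for the CA PPM schema
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'NIKU', cascade => TRUE);
END;
/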

 

Processes:

 

  • Review all failed processes on a daily basis: Retry them and, if they still fail, troubleshoot the error messages. Do not leave them unattended in “error/failed” status.

 

  • When aborting (cancelling) processes, ensure they do not get stuck in “aborting” status. If they do, run the following query:

UPDATE BPM_RUN_PROCESSES
SET STATUS_CODE = 'BPM_PIS_ABORTED'
WHERE STATUS_CODE = 'BPM_PIS_ABORTING'

      and restart the BG service.

 

  • Delete “Completed” processes via job Delete Process Instance.

 

  • Delete “Aborted” processes via job Delete Process Instance.

 

  • Ensure there are no orphan records:

SELECT * FROM BPM_RUN_PROCESSES
WHERE PROCESS_VERSION_ID NOT IN (SELECT ID FROM BPM_DEF_PROCESS_VERSIONS)

         If it returns results, proceed with the following:

DELETE FROM BPM_RUN_PROCESSES
WHERE PROCESS_VERSION_ID NOT IN (SELECT ID FROM BPM_DEF_PROCESS_VERSIONS)

         and restart the BG service.

 

  • Ensure there are no orphan Process Engine records: It’s a good practice to remove outdated and unused process engines. You can run the following queries to identify the inactive process engines and delete them:

      For Oracle:

SELECT * FROM BPM_RUN_PROCESS_ENGINES
WHERE END_DATE IS NOT NULL AND END_DATE <= SYSDATE  -- a non-NULL END_DATE means the engine has shut down

      If it returns results, then proceed with the following:

DELETE FROM BPM_RUN_PROCESS_ENGINES
WHERE END_DATE IS NOT NULL AND END_DATE <= SYSDATE

      For MSSQL:

SELECT * FROM BPM_RUN_PROCESS_ENGINES
WHERE END_DATE IS NOT NULL AND END_DATE <= GETDATE()

      If it returns results, then proceed with the following:

DELETE FROM BPM_RUN_PROCESS_ENGINES
WHERE END_DATE IS NOT NULL AND END_DATE <= GETDATE()

       Restart the APP and BG services.

 

 

That’s all. Thanks for reading this far. Did you like it? Please don’t be shy; share it!

Recently I had the opportunity to visit a customer with many partitions and performance issues. One of the questions that came up was: “What is the maximum number of partitions within the same CA PPM instance?”

 

  • CA PPM documentation does not provide a specific number…
  • CA Support advises, as a good practice, no more than 3…
  • On the other hand, the tool does not actually limit you to 3…

 

So, what is the right answer? From my point of view, there is no concrete number. It will depend on what you have configured “inside” the partition.

 

Key elements to consider:

 

Infrastructure:

  • Ensure the right CA PPM sizing for your environment.
  • Monitor database performance. Engage your DBA team.
  • Monitor CA PPM Java memory usage (it should not go higher than 80%). You can easily check it via URL:

http://<your_ca_ppm:port>/niku/nu#action:security.caches

or, if SSL:

https://<your_ca_ppm:port>/niku/nu#action:security.caches

Refresh the web browser several times to observe the peaks (if any).

 

Application Studio Configuration:

One of the key elements to bear in mind is Studio view configuration. This is what drags down most of your front-end memory and performance.

 

  • Do not abuse custom attributes per object: Best practice is no more than 100; the technical limit of the tool is 500 (see the query sketch after this list).
  • Do not abuse display conditions (subpages): Best practice is no more than 10 per view. The tool does not impose a specific technical limit.
  • Use the AVP (Attribute Value Protection) settings for portlet list views wisely.
  • Decide whether you are really willing to allow users to configure portlet lists. Some users will add every attribute they can see and use CA PPM as a data dump. Best practice is 20-25 columns (with no attachments, URLs or images) or 10-15 columns when using expensive attributes.
  • Ensure “Rows per Page” is 20.
  • We recommend using “Do not show results until I filter” to speed up navigation from one portlet page to another.
  • Do not use the same data provider (especially an out-of-the-box one with many custom attributes) to build all your portlet variations: NSQL queries should be built as dedicated data providers.
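To keep an eye on the 100-attribute guideline from the first bullet, you can count custom attributes per object in the ODF metadata. A sketch, assuming an ODF_CUSTOM_ATTRIBUTES catalog table with an OBJECT_NAME column (metadata table and column names vary by release, so verify in your schema first):

SELECT OBJECT_NAME, COUNT(*) AS CUSTOM_ATTRIBUTES
FROM   ODF_CUSTOM_ATTRIBUTES
GROUP  BY OBJECT_NAME
ORDER  BY CUSTOM_ATTRIBUTES DESC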

 

 

But again, this is not rocket science, and it always requires an analysis of each customer’s environment and business needs.

 

From my point of view, any page taking more than 3-4 seconds to display could be considered slow and a performance issue. But (there is always a but) heavily configured environments or pages could take 5-6 seconds and still be accepted as good performance from the user’s perspective.

 

 

That’s all. Thanks for reading this far. Did you like it? Please don’t be shy; share it!