
DevTest Community


I mentioned earlier in this blog series that not many organizations are able to avoid “the adoption chasm.” Even after initial service virtualization (SV) success, organizational progress on the SV journey can stall or regress as organizations undertake efforts to create scale and operationalize. As successes occur, organizations inevitably add governance and process-oriented tasks. High-performing teams are often asked to document more thoroughly. Status reports and deadlines receive more scrutiny. Teams are expected to follow a new set of good practices, and post-its listing tasks proliferate on Kanban boards. The list of expectations and commitments grows, which translates to longer task lists for team members. Leaders tend to generate reports and measure project success based on completion of outputs.

Please understand: I am not discounting the importance of identifying, managing and reporting on SV project commitments. I am proposing that we avoid quantifying SV value in these terms.

Keep Your Teams Focused on Outcomes

Distinguishing between outputs and outcomes may seem like semantics to some and be nebulous to others. A colleague of mine uses this analogy to distinguish the two:

If you wanted to improve your fitness (an outcome), you might purchase fitness gear, join a health club and schedule workouts (all outputs), but these outputs alone would not improve your fitness. On the other hand, if you engage a personal trainer, he or she would ask clarifying questions to define “fitness.” Do you want to improve strength and stamina? Do you want to get closer to your target body mass index? Are you interested in improving heart health and/or reducing cholesterol levels? Answers to these questions help the trainer design a regimen that helps you achieve your desired outcome—improving your fitness. Similarly, the trainer measures your achievements based on your desired outcome, not the number of stations you visit in each workout.

We must understand and communicate the business outcome(s) that SV helps us achieve. Organizations that successfully identify and measure SV’s contribution to business outcomes will have greater success than those reporting on outputs. We increase stakeholder buy-in when we communicate how SV is helping the business achieve its desired outcomes. The graphic further illustrates outputs versus outcomes:

Look to Metrics for Help

Measuring outcomes can be difficult because it requires connecting SV with business-relevant data. IT teams should rely on their business counterparts to identify simple, trackable metrics that gauge value. However, the metrics must be more impactful than simply the number of virtual services delivered or the number of times a virtual service is used.

Key concepts to think about when defining metrics are:

  • Do not substantiate the value of SV only once. Do it for every SV project.
  • Express metrics as they relate to outcomes. You may have to answer the question “Why are we doing this?” many times to identify the outcome the business is seeking.
  • Use metrics to incentivize and/or create healthy competition among development teams.
  • As part of your decision-making process, measure the potential value of a virtualization against the cost of building the virtualization; SV may be prohibitively expensive in some circumstances.
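The last point above, weighing a virtualization’s potential value against its build cost, can be sketched as simple arithmetic. Here is a toy Java sketch; every figure and the hurdle factor are invented for illustration and are not CA guidance:

```java
// Illustrative only: a toy value-vs-cost check for a candidate virtualization.
// All figures and the 3x hurdle are hypothetical assumptions.
public class SvCandidateCheck {

    // A team-chosen hurdle: only build when estimated value exceeds cost by this factor.
    static final double HURDLE = 3.0;

    // value: estimated quarterly savings the virtual service would unlock
    // (e.g. fewer environment bookings, earlier performance testing).
    // cost: estimated one-time effort to build and maintain the virtual service.
    static boolean worthBuilding(double value, double cost) {
        return value >= cost * HURDLE;
    }

    public static void main(String[] args) {
        // 45K quarterly value vs. 12K build cost clears a 3x hurdle.
        boolean go = worthBuilding(45_000.0, 12_000.0);
        System.out.println(go ? "BUILD" : "DEFER");  // prints "BUILD"
    }
}
```

The point is not the arithmetic itself but making the decision explicit and repeatable for every candidate virtualization.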

Develop SV Project Selection Criteria

Selecting the wrong projects can affect team morale and diminish SV ROI. Organizations can avoid many problems by establishing clear SV project selection criteria and adjusting them as needed. Consider these criteria:

  • A well-articulated business problem or challenge
  • Stakeholder sponsorship and commitment
  • The existence of before-and-after metrics for developing value statements
  • A good understanding of complexity and data dependencies
  • Availability of skills and project team willingness to support SV

SV adoption and transformation are personally significant for people involved in these projects. The above tactics can help teams focus on delivering long-term success.

My next blog will peek into federation and communities of practice. Your comments and questions are always welcome.


Maximizing Your DevTest Investment

Making the Most of Your Service Virtualization Assets

SV Requires Transformation at Scale

Change Management: A Key Element in Your SV Strategy

To make genuine, transformative progress through Service Virtualization (SV), organizations should adopt a 360-degree view that connects four critical elements: the technical SV solution, people, process and measurable business outcomes. By explicitly focusing on these elements and how they intersect, teams can develop effective tactics to improve SV maturity, increase scale and derive greater value from their SV investment. Additionally, when teams map the value of SV to business outcomes, they connect SV to business priorities, help validate and demonstrate SV contributions and establish measures for ongoing improvement.

Start with People—Your Most Important Asset

Organizational and individual resistance to change can impede the best plans and greatest opportunities for SV adoption. Organizations can beat this inertia by helping people answer the question: “Why change?” To be convincing, the answers must connect to business outcomes and their value. Some tactics to employ include:

  • Showing how SV aligns to executive priorities
  • Educating line-of-business and IT leaders on ways to communicate SV’s importance and relevance in achieving business outcomes
  • Developing an accountable core team of knowledge experts and deploying them on campaigns that support the use of SV on high-impact projects
  • Identifying processes and methods that impede change within the organization—and eliminating them
  • Allocating resource time to integrate SV into application lifecycle methods and processes
  • Communicating business outcomes facilitated by SV via lunch-and-learn sessions, internal demos and communications.

Process, Process, Process to Drive Scale

The absence of process-related artifacts, templates, and metrics that report value can inhibit organizational success. By creating reusable patterns and templates, SV can be operationalized at scale, thus delivering the greatest value and exposing new opportunities for SV adoption. Formalizing the following activities helps in developing scale:

  • SV discovery processes
  • SV project backlogs and pipelines
  • SV support processes
  • SV skills, learning and self-enablement processes
  • SV asset management and support processes
  • SV utilization metrics, socialization and consumer advocacy processes

Be Driven by Outcomes, Not Outputs

Organizations often find themselves trapped in the paradigm of socializing the value of SV by expressing value in terms of output, not outcomes. Whenever possible, don’t use an output metric to describe an outcome. Consider the impact of project value statements for the same project communicated by the PM in two different ways:

  • Example A: “The team had a virtual service with 900+ million hits and other virtual services with 500+ million hits.”
  • Example B: “We used SV to build better applications faster by ensuring that customer-facing applications could handle expected transaction volumes without experiencing outages and by providing a platform that enables daily performance tests rather than monthly or quarterly tests. Our approach resulted in reducing volume-related outages by 80%, completing performance testing 97% sooner in the application lifecycle and reducing infrastructure costs by $100K per quarter.”


In example A, the numbers are measures of output; they don’t quantify or qualify how SV contributed to achieving a business outcome. We can only assume that the results had an impact on the business.


In example B, I can hear executives asking, “How did you do that? Can we replicate these outcomes?” It’s easy to feel the pride and excitement of everyone who contributed to these outcomes and to wonder if the focus on business outcomes motivated them to raise the bar for SV scale and value even higher!


I welcome your comments and questions below.


Links to previous posts:

Maximizing Your DevTest Investment on Your Way to SV Maturity

Making the Most of Your Service Virtualization Assets

Change Management: A Key Element of Your SV Strategy

To sustain progress and achieve substantial business value, organizations should aggressively integrate Service Virtualization (SV) into their software development lifecycle (SDLC). In fact, this may be your most important move in driving meaningful change that leads to increasing IT value.


Although this type of change is critical, it can also be overwhelming for many organizations, given the pressures on IT regarding time, budgets, competing priorities, staff availability, etc. Fortunately, metrics demonstrate the value of SV, and these metrics can be convincing when presented to business leaders.


Given the availability of metrics, why do many organizations miss the opportunity to leverage SV to drive change - specifically to transform processes?


Perhaps the reason is that change requires change.


Change is perpetual in IT, which may explain why the industry isn’t short on change management methodologies and practices. It seems that most standards bodies assume that process improvement is a technical challenge rather than a social and cultural challenge. Another perspective is that automation will compel needed change.


A change management function must focus on developing principles, practices and processes that enable organizations to improve software development outcomes by paying attention to the cultural and social aspects of change. Some key questions addressed in change management are:


  • How are data-rich SV metrics used to obtain buy-in at all levels of the organization?
  • How do organizational process changes related to SV mitigate IT risk?
  • Which SV-related change initiatives bring about the greatest organizational value?
  • What process improvement metrics aid in evaluating the success of change?


SV activities and maturity strategies such as business process alignment, staffing and organizational alignment, and initiative identification and measurement intersect with change management. Organizations benefit by identifying a skilled practitioner who understands these intersection points and defines and facilitates the necessary cultural and social changes. Change management activities that focus on the following questions can help:


  • How does change management drive executive and stakeholder understanding of value?
  • How does SV alignment with executive initiatives increase business value and outcomes?
  • What assessment approaches best measure the impact of behavioral changes?


Focusing on these areas helps teams develop a change management ethos that can be socialized across stakeholders at all levels and connects directly to desired business outcomes for SV. One of the best places to start is by identifying a talented SV practitioner or team who can step beyond the technology to link process and cultural/social changes to benefits the organization will experience due to change. In doing so, the SV conversation and its value are elevated to the enterprise. When organizations elevate the conversation, the discussion shifts from a focus on technical outputs to SV’s impact on desired business outcomes.


Stay tuned. The next blog examines people and process changes that help drive the value of SV. In the meantime, your comments and questions are always welcome. 


Previous posts:

Maximizing Your DevTest Investment

Making the Most of Your Service Virtualization Assets

When IT organizations don’t integrate service virtualization (SV) into their culture and IT decision-making processes, they under-utilize its potential. When SV is integrated in a holistic manner, SV maturity increases and the organization begins to develop the muscle and flexibility necessary to deliver the requisite scale, speed, velocity and value that business leaders expect.


In his book, Digitally Remastered: Building Software into Your Business DNA, Otto Berkes identifies IT’s optimization of continuous development and delivery capabilities as important ingredients for building a Modern Software Factory. He notes that robust Agile and DevOps capabilities are essential to driving speed of innovation and responsiveness to deliver real customer value. In the Modern Software Factory era, IT teams—with support from business stakeholders—must:

  • Foster environments where continuous improvement, led or influenced by IT, is always top of mind
  • Create and operate software, and focus on its business value as a core capability—or better—a competitive differentiator
  • Operate in the mindset of developing muscle and flexibility through software
  • Figuratively put software at the center of the business.

The Importance of Defining an SV Maturity Model

Organizations that develop a robust SV maturity model or roadmap and integrate SV into their software development lifecycle (SDLC) are generally more successful. These organizations target low-effort/high-value virtualizations, demonstrate business value using SV, create standardized processes and methods for delivering SV, integrate SV into the SDLC, and find ways to federate SV delivery to move faster and generate greater business value. On the other hand, organizations that under-invest in continuous improvement of people and SV processes miss the mark and undermine the value-generating ability of their SV assets.


Airplane pilots carefully plan flights by incorporating frequent checkpoints to gauge progress and enable course correction. Similarly, organizations should plan the milestones and activities necessary to continuously improve SV maturity, monitor progress, and correct course as needed. CA’s approach to adoption and maturity establishes a model with customizable, sequenced activities that help organizations create scale. This crawl-walk-run approach separates activities into logical focus areas.


Early-stage activities in the adoption model drive business value, increase knowledge of SV, and create reference implementations of SV artifacts. Middle-stage activities increase utilization and streamline SV delivery; many of these activities focus on people and process. Final-stage activities optimize and deliver SV at scale in the application lifecycle ecosystem. This graphic illustrates some of the activities and focus areas in the maturity model:



SV Integration into the SDLC is Critical

When presented with the model above, customers sometimes ask which activities are most important for developing a maturity model. While they’re all important, failing to integrate SV with SDLC processes and people will impede maturity. Integrating SV into the SDLC enables organizations to identify the points in the lifecycle processes where SV generates maximum value. This enables the constituent teams (business and product owners, architects, developers, managers, quality assurance, etc.) to understand where SV is expected to provide value. Without this understanding, teams are less effective in driving business outcomes.


A related issue is that many organizations have the initial SV conversation in the lifecycle’s development phase—too late to allow time for SV to generate positive impact. SV should be discussed when service needs are initially identified. At the latest, SV should be discussed during the analysis phase, when the service contract is being created or reviewed by an architect.


With the advent of Agile, test-driven, and behavior-driven development methodologies, organizations are striving to shift left. As they do, alignment between SV and the SDLC becomes even more critical. SV touch points must shift as developers and quality assurance personnel work side by side to iteratively deliver features.


While creating a roadmap to maturity and aligning SV with the SDLC are critical, they aren’t the only elements an organization must address to build SV muscle and flexibility. In a future blog, we’ll focus on change management as a key element of SV strategy.


I welcome your comments and questions below.


Related posts:

Maximize Your DevTest Investment

This blog is based on recent and past requests about how to get the transaction count by virtual service operation.

Below are the steps required to get the transaction count by virtual service/VSI operation.


1. Connectivity to the database so you can query the table that provides the transaction count, plus a database tool or script to fetch the data.

2. An admin user able to modify the DevTest property files.


Before the configuration, the database table will contain the information below (transactions as seen on the Portal).


Steps to Get VS Operations:

1. Shutdown all the DevTest Services.

2. Open the DevTest property file, add the following line, and save the file:

   lisa.vse.metrics.txn.counts.level=operation

   (Available values are service, which is the default, operation, or arguments.)

This turns on transaction counts at the operation level. Make sure that no space gets added at the end of the line.

3. Once the file is saved, restart all the DevTest services. Consume the virtual service, and the transaction count can be seen in the database as shown below:
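Since DevTest property files use standard Java properties syntax, a quick sanity check for the trailing-space pitfall warned about in step 2 can be sketched like this (the file contents are inlined here for illustration; on a real install you would load the property file you edited):

```java
import java.io.StringReader;
import java.util.Properties;

// Sanity-check sketch: confirm the metrics property reads back as exactly
// "operation", with no trailing space on the line.
public class MetricsPropertyCheck {

    static String levelFrom(String fileContents) throws Exception {
        Properties props = new Properties();
        props.load(new StringReader(fileContents));
        return props.getProperty("lisa.vse.metrics.txn.counts.level");
    }

    public static void main(String[] args) throws Exception {
        String contents = "lisa.vse.metrics.txn.counts.level=operation\n";
        String level = levelFrom(contents);
        // Properties.load preserves trailing spaces in values, so an
        // accidental "operation " would fail this check.
        if (!"operation".equals(level)) {
            throw new IllegalStateException("bad value: [" + level + "]");
        }
        System.out.println("ok");  // prints "ok"
    }
}
```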

In his highly regarded book, “Crossing the Chasm”, Geoffrey Moore presents a model for broad commercial adoption of new technologies. Moore’s macro-view of the technology adoption lifecycle and the chasm created by new or disruptive technologies has been a guide to many in the IT industry. Moore’s crossing-the-chasm concept can be loosely applied at a micro-level to customers hoping to drive adoption of technologies such as CA DevTest®. While few, if any, customers avoid Moore’s adoption chasm altogether, many use fast-track techniques that reduce time, cost, effort and frustration. This series of blogs will share insights and best practices from customers and CA Services, focusing on CA Service Virtualization (SV).


Consider the graphic:

Generally, CA Service Virtualization projects start well and make genuine progress in a short timeframe. Successes in this initial phase cause expectations to track upward. However, when CA Services disengages, some customer teams begin to slide down a slippery slope and find themselves in “the adoption chasm.” Some of the challenges that manifest are:

  • Early successes cause adopters to prematurely declare victory or shift critical attention away from further developing adoption strategies.
  • Unjustified hope persists that superior technology alone will result in strong adoption.
  • Setbacks after the CA Services engagement ends cause projects to be de-resourced or de-prioritized.
  • Organizations postpone SV training, integration and business process changes.
  • Good practices are not ingrained.
  • Effort and budget expended on implementation and early deployment leave insufficient resources for high-value integrations, solution architecture improvements and business process transformation.
  • Metrics describing business outcomes are not developed and reported.
  • Operations teams are tasked with both running an ‘adequate’ implementation and discovering, recommending and applying improvements to the deployment.


These challenges, and more, result in higher long-term costs and greater near-term frustration within the organization. When one or more of these challenges slows the pace of adoption, a gap develops between expectations and delivered value, as seen in the following graphic.

Organizations that successfully address these challenges do so by implementing tactics, good practices, and strategies that shorten the trip through the chasm (or avoid the chasm altogether) and produce measurable business outcomes.   


Through our own transformation and hundreds of successful SV engagements, CA has accrued and developed insights and strategies that help organizations build robust and mature SV capabilities. So in this series of blogs, we’ll share some of these maturity strategies. The blogs will highlight key activities for developing SV maturity and building the scale necessary to consistently deliver the right business outcomes. The series will include topics such as:

  • Making the Most of Your Service Virtualization Assets
  • Change Management: A Key Element of Your Service Virtualization Strategy
  • What Type of Adoption Models Support SV Expansion
  • SV Requires Transformation to Drive Maturity
  • Common Pitfalls on the Path to SV Maturity
  • What Do Federation and Communities of Practice Look Like


Stay tuned for more very soon.  In the meantime, I welcome your comments and questions below.

Today, we are excited to announce a new and lightweight way to define virtual services and achieve even faster time to value. We’re calling it CodeSV and it’s built to look just like a typical developer interface or IDE.  There is no need to learn a new tool because it works exactly as any software engineer would expect.


Here are some of the benefits for using CodeSV:


  • Simple - The simple, fluent, Java-based interface (API) lets developers virtualize HTTP traffic in just a few minutes. By using CodeSV, you or your team no longer need to spend unnecessary time configuring endpoints. Endpoints are auto-generated. “In-process virtualization” (IPV) artifacts are created on the fly, saving time and avoiding the annoyance of doing it manually.
  • Versatile - Request/Response (RR) pairs can be imported and exported so that virtual assets can be reused and even shared across teams. This eliminates the need to create new assets from scratch and, again, accelerates time to test.
  • For Developers - CodeSV has been built by developers for developers. It supports virtualizing both REST and SOAP services. So, we’ve got your web service virtualization covered.
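To give a feel for what a fluent, Java-based virtualization interface looks like, here is a toy sketch of the style. To be clear, this is not the actual CodeSV API (see the GitHub project for the real entry points); the forGet/doReturn names and the dispatch method are invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Illustration ONLY: a minimal fluent interface in the spirit the post
// describes, mapping a GET path to a canned response. Not the CodeSV API.
public class FluentStubSketch {

    private final Map<String, String> getResponses = new HashMap<>();

    // Fluent pairing: forGet("/path").doReturn("body")
    public GetStub forGet(String path) {
        return new GetStub(path);
    }

    public class GetStub {
        private final String path;
        GetStub(String path) { this.path = path; }
        public FluentStubSketch doReturn(String body) {
            getResponses.put(path, body);
            return FluentStubSketch.this;
        }
    }

    // Stand-in for dispatching a virtualized GET request.
    public String handleGet(String path) {
        return getResponses.getOrDefault(path, "404");
    }

    public static void main(String[] args) {
        FluentStubSketch vs = new FluentStubSketch();
        vs.forGet("/accounts/42").doReturn("{\"balance\": 100}");
        System.out.println(vs.handleGet("/accounts/42"));
    }
}
```

The appeal of the fluent style is that the stub definition reads like the test’s intent, with no separate configuration step.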


Built off the experience of a decade’s worth of virtualizations and testing, CA Service Virtualization is now optimized for your desktop. At its core, CodeSV is a lightweight yet powerful tool that brings real value through its simplicity, automation, and sharing features.


Best of all, CodeSV is a free tool you can download today from GitHub.


Support for SV as Code is provided as a community effort and can be found under the CodeSV category on the CA DevTest Community.


 If you ever need help with anything, here is the place for you. Here are simple steps to post questions to the community and stay informed:


  • Search Existing Content: Just type your question in the top box.



  • To start a new question, simply click “Ask my Question.” Enter your question, and feel free to attach documents if you think they’ll help.

  • Before you finish posting, make sure to tag your question (#CodeSV). Most important, select CodeSV as the question’s category.


If you want to track all information tagged and categorized as CodeSV, simply select the content category filter on the left panel. And if you want to stay constantly informed of the latest postings related to CodeSV, you can create an RSS feed.


Enjoy CodeSV! And drop by our community to let us know what you think of the tool… 


Download today from the GitHub page.


As you might already know, End of Service for DevTest versions 8.1, 8.2, 8.3 and 8.4 is coming up soon on July 1, 2017. This means that the above-mentioned versions will not be supported after July 1. If you are still on those releases (or even prior ones), this would be a good time to start planning your upgrade to the latest release, 10.1.


I would be more than happy to discuss this with you. Please reach out to me with any questions or concerns, or simply to discuss the upgrades.



 - Koustubh

Now that DevTest 10.1 is GA, attached is an upgrade guide that will take you through the steps and the important considerations for upgrading to the latest release of DevTest.

Here is the location of the PDF - How To - Upgrade DevTest 8.x To DevTest 10.1 


Remember, we now support in-place upgrades from 8.x to 10.1, which means you can install in the same location as your prior release. You can also use your existing database, which will be upgraded automatically for the 10.1 release.

Special thanks to mazda03 (Dan Mazzei) for updating the doc with the latest screenshots and new content.





- Koustubh

DevTest 10.1 - CA Service Virtualization /CA Application Test

We are pleased to announce that DevTest 10.1 including CA Service Virtualization and CA Application Test is generally available (GA) as of April 18, 2017.  We have several new and exciting features in 10.1.  Below are the highlights in our new release.  Keep an eye out for additional videos, how-to's and best practices as we head into the new year.


Key New Features in DevTest 10.1:


New – Create Virtual Services from Swagger 2.0 Specifications using the DevTest Portal – The DevTest Portal now supports the creation of virtual services described by Swagger 2.0 specifications (the de facto standard for REST API definition).


New –  Implementation of Invoke 2.0 APIs in DevTest Portal – Creation of virtual services (via Recording, RR-Pairs or Specification) is now done using the new Invoke 2.0 APIs and provides an improved workflow experience


Key Enhancements in DevTest 10.1:


Enhanced - Support for Swagger 2.0 – Additional support for Swagger 2.0 when creating virtual services and test cases from Swagger 2.0 specifications in the DevTest Workstation:

  • Reference Object
  • Example Object
  • Form-Data Form type
  • Default parsing of optional parameters


Related Links


Download DevTest 10.1


Video - Create Virtual Services from Swagger 2.0 In DevTest10.1

What's New in DevTest 10.1

Release Notes - DevTest 10.1

User Docs - DevTest 10.1

Upgrade Center - DevTest 10.1


CA DevTest Team


Copyright © 2017 CA. All Rights Reserved. All trademarks, trade names, service marks, and logos referenced herein belong to their respective companies. This document does not contain any warranties and is provided for informational purposes only. Any functionality descriptions may be unique to the customers depicted herein and actual product performance may vary.

This How To blog is for those who signed up for the "Bring Your Own License" (BYOL) offering on either the AWS Marketplace or Microsoft Azure and need to insert their license and start the CA Service Virtualization tool. Here are the steps to follow to insert your license and start up the SV services on your new virtual machine.


1. Find your license email. This is typically from a "" email address with a subject line including the words "License Activation". Attached to the email will be a file called "devtestlic.xml". This is the file you need to add to your virtual machine.


2. Place the devtestlic.xml file in the install directory for CA Service Virtualization - C:\Program Files\CA\DevTest


3. Start the DevTest services in the order listed below. Note: all services are readily accessible on the bottom toolbar on your desktop.



  1. Start Enterprise Dashboard Server (wait until ready)
  2. Start Registry (wait until ready)
  3. Start Virtual Service Environment
  4. Start Portal
  5. Start Simulator
  6. Start Coordinator


4. Open the Portal UI and Get Started!

Double-click on the "DevTest Portal UI" and sign in with the default user name/password: admin/admin
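The ordered, wait-until-ready startup in step 3 can be sketched as a simple loop. The readiness probe below is a stand-in; a real script would poll each service's port or log instead:

```java
import java.util.List;
import java.util.function.Predicate;

// Sketch of the ordered startup: bring up each DevTest service in turn and
// wait for readiness before moving on. Service names come from the steps
// above; the readiness check is a hypothetical stand-in.
public class OrderedStartupSketch {

    static final List<String> STARTUP_ORDER = List.of(
        "Enterprise Dashboard Server",
        "Registry",
        "Virtual Service Environment",
        "Portal",
        "Simulator",
        "Coordinator");

    static int started = 0;

    // Hypothetical readiness probe: here, always ready immediately.
    static Predicate<String> isReady = name -> true;

    static void startAll() {
        for (String service : STARTUP_ORDER) {
            // In reality: launch the service from the toolbar or a script...
            while (!isReady.test(service)) {
                // ...and poll/sleep until it reports ready.
            }
            started++;
            System.out.println("started: " + service);
        }
    }

    public static void main(String[] args) {
        startAll();
    }
}
```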



Here's a video on using the tool...

How to Get Started with CA Service Virtualization on Demand in Microsoft Azure - YouTube

When it comes to testing, core Java developers around the world tend to be a "do-it-yourself" bunch. And this is certainly true for testing components such as Java objects while they are under development. Testing Java classes and Java Remote Method Invocation (RMI) servers requires hours of tricky coding and test client development, so often developers will wait until much later in the development process to test, when the object is being rolled into a larger system.


Prior to the introduction of tools like CA’s DevTest Framework for Java & RMI, Java class or framework testing was only possible with a programmer's IDE in Java or JavaScript. Since this code-based approach can only be used by the developer, often while in the middle of coding the app, the tests that emerge tend to take the "happy path" of testing only for the conditions the developer expects.


Better unit and regression testing practices (with JUnit/Ant and other tools) have become par for the course in enterprise Java development. This offers a starting point for quality, but unit testing alone is simply too manual to properly exercise your components early in development, when problems are much easier to fix.


To really exercise these complex components early, we need to stop writing code to test code. We need to test the functionality and performance of the component, not create new volumes of test scripts. We need to help QA and business requirements owners functionally validate our components earlier in the project. CA’s DevTest was built from the ground up for this purpose. The DevTest framework includes CA Application Test and CA Service Virtualization.


Features Include

  • Inline testing. Break the curse of the code-compile-run cycle, and never write a test client again. Execute tests while you're authoring and adjusting them. You will have to see to believe the level of continual feedback you get while developing with DevTest.
  • Built on and for Java. DevTest is pure Java, and is innately aware of Java objects and RMI at a granular level, whether they are under development or "legacy" components you need to leverage. As you should expect, DevTest runs on any Java-ready client (Unix/Linux, Win, OSX) and can easily test objects on any Java-compliant server.
  • Easy enough for non-developers. DevTest is no-code automated testing, meaning developers save time by no longer having to script tests. Non-programming QA and business requirements team members can also get involved in testing the functional logic of these components.
  • Multiple systems. One DevTest test case can follow a complex workflow and validate multiple web sites, web services, Java objects and application servers. DevTest supports active sessions, SSL, authentication and magic strings, so it lets you test systems just as your end users will. On the back end, DevTest provides live interaction and deep testing of any component or service you can access over the Internet.
  • Across the full development lifecycle. Test components iteratively during development with immediate point-and-click responses. Then roll those same unit and functional tests into regression tests included in your Ant/Make builds. Then take those same test cases and turn them into load and stress tests that can run continuously.



  • DevTest for Java/RMI provides an ideal automated testing environment for Java objects and components, offering a new level of breadth and depth that allows the entire team to own quality.
  • True no-code automated testing with compelling features for both developers and QA professionals.
  • Eliminates the need to program test clients and maintain fragile test scripts.
  • Provides an immediate way to accelerate "test first," XP development practices.
  • Brings QA into the iterative development cycle for early and continuous testing.
  • Works with any J2EE application server (Java App Server, WebLogic, WebSphere, JBoss, more).
  • Leverages and runs with your existing build and deploy environment of JUnit/Ant scripts.
  • Point-and-click testing of Java classes, RMI, and other objects (CORBA, etc.) in a way no other product can.
  • Browse, analyze and make live assertions against any available logic or controls in the application.
  • Manage properties and test assertions.
  • Turn your Java/RMI test runs into load tests, and scheduled regression and performance tests over time.
  • Great for analyzing and testing legacy code and objects that you may need to integrate.
  • Test instrument your custom application code with the powerful Extension SDK.
  • Built for the future: keep your test cases and extend them to web interfaces, EJBs, JMS queues, JDBC databases, web services and JMX metrics.
  • Share test cases and test runs as simple XML files and incorporate them into your existing groupware, issue tracking and requirements management processes.

CA’s Application Test - The Test Harness for Custom Java Applications

Are you running proprietary or custom software? Where most testing solutions would give up, DevTest’s integration abilities let developers "test enable" software. With minimal incremental effort, developers can make their code testable with our simple API. The whole team can get rich metrics and debugging information on applications behind the scenes.

The more complex your enterprise applications are, the more compelling testing with DevTest becomes. Quality is not just a tool you can buy, it is an experience your whole team needs to own.





CA Application Test provides a single platform to execute automated tests across systems, APIs and services. Test execution can be performed cross-browser and on mobile, and can use real devices, simulators, labs or cloud platforms. CA Application Test builds portable, executable test suites that are easy to extend, easy to chain into workflows with other tests, and simple to integrate with existing test repositories. With workflows, the results from one test can feed and kick off the next, saving hours of time and enabling continuous testing.

Multiple verification steps can be chained together. A test step is a workflow test case element that performs a basic action to validate a business function in the system under test. Steps can be used to invoke portions of the system under test. These steps are typically chained together to build workflows as test cases in the model editor. From each step, you can create filters to extract data or create assertions to validate response data.
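The step-filter-assertion chaining described above can be sketched in plain Java. This is an illustrative model only, not the DevTest API: names like TestStep and runWorkflow are hypothetical, and the shared context map stands in for the properties that filters extract and pass between steps.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Minimal sketch of a chained test workflow: each step reads a shared
// context, performs its action, and writes extracted values ("filters")
// back into the context for the next step to consume and assert on.
public class WorkflowSketch {
    interface TestStep extends Function<Map<String, String>, Map<String, String>> {}

    static Map<String, String> runWorkflow(Map<String, String> context, TestStep... steps) {
        for (TestStep step : steps) {
            context = step.apply(context); // the result of one step feeds the next
        }
        return context;
    }

    public static void main(String[] args) {
        // Hypothetical steps: "login" extracts a session id, and the next
        // step asserts on it before extracting further response data.
        TestStep login = ctx -> { ctx.put("sessionId", "abc123"); return ctx; };
        TestStep fetchBalance = ctx -> {
            if (!ctx.containsKey("sessionId")) {
                throw new AssertionError("login step must run first");
            }
            ctx.put("balance", "100.00");
            return ctx;
        };
        Map<String, String> result = runWorkflow(new HashMap<>(), login, fetchBalance);
        System.out.println(result.get("balance")); // prints 100.00
    }
}
```

The design point the sketch captures is that steps are composable: because every step consumes and produces the same context shape, any step can be reordered, reused in another workflow, or wrapped with additional assertions without rewriting its neighbors.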

The tool can include many different types of component verification steps. Below is an example of a multi-step test that crosses various layers of a financial application.


Multi-step test cases can then be staged together into test suites and executed with a single command. A test suite can be a combination of different tests, so a user who wants to test channels sequentially - e.g., web first and then mobile - can accomplish that as well.


You can test the interface. You can test your source code. But how do you know the "loosely coupled" components of your application are talking to each other? Messaging is the core connection point for any distributed application. Until now, your options for testing the integration that connects the dots on your infrastructure were largely manual endeavors.

The automated testing tool that speaks the language of your systems

CA Application Test for JMS, part of CA’s DevTest Framework, provides teams with a no-code way to discretely and deeply test messaging queues of almost any known flavor. Rather than forcing you to custom-code or adapt a test client, CA Application Test can become a test harness that publishes and subscribes to JMS frameworks. Finally, without a massive implementation effort, you can decouple your systems for flexibility and test for reliability at the same time.


CA Application Test was built from the ground up for distributed architectures. If your strategy relies on JMS messaging to enable cross-component and cross-application integration, you should take a closer look at CA Application Test for JMS.

  • Inline testing. You'll never write a test client again, and you can execute tests as you design them. You have to see it to believe the level of information you get while designing and maintaining JMS frameworks with CA Application Test. Break the code-compile-run cycle.
  • Built on and for JMS standards. CA Application Test can talk and listen to most known JMS frameworks. As you migrate systems, you can get a wealth of reusable test cases and accompanying validation of your asynchronous messaging environment over time.
  • Multiple roles. CA Application Test is no-code automated testing, meaning developers can invoke JMS calls without having to script tests, and non-programming team members in QA and business requirements teams can also get involved in testing on their own terms.
  • Multiple systems. CA Application Test is pure Java, and is innately aware of Java objects, EJBs, RMI, JMS queues, JDBC databases, web services and JMX metrics. As you should expect, CA Application Test runs on any Java-ready client (Unix/Linux/Solaris, Win, OSX) and can easily test any server that routes JMS. CA Application Test provides live interaction and deep testing of any JMS messaging you can access locally or over the Internet.
  • Across the full development lifecycle. Roll the same unit and functional tests you design into regression tests included in your Ant/Make builds. Then take those same test cases and turn them into load and stress tests that can run continuously from CA Application Test Server.



  • Advanced publish and subscribe abstraction layer to test and load JMS frameworks for any message queue protocol (TIBCO, IBM MQseries, more).
  • Allows you to rapidly and deeply exercise JMS messaging queues across disparate systems, including guaranteed delivery settings and JNDI.
  • Create and/or assume temporary JMS Queues and Topics to snoop through the entire life cycle of a message as it moves from message handler to message handler.
  • Validate that web services, EJBs and Message Driven Beans (MDB) are receiving/sending the appropriate message payloads.
  • True no-code automated testing with compelling features for both developers and QA professionals.
  • Eliminates the need to program test clients and maintain fragile test scripts.
  • Provides an immediate way to enable "test first," XP development practices.
  • Brings QA into the iterative development cycle by including JMS calls within their test cases.
  • Works with any J2EE application server (Java App Server, WebLogic, WebSphere, JBoss, more).
  • Leverages your existing build and deploy environment of JUnit/Ant.
  • Browse, analyze and make live assertions against any available logic or controls in the application.
  • Manage properties and test assertions.
  • Instantly turn all unit/functional/regression test runs into load tests and scheduled performance tests.
  • Set alerts for boundary or failure conditions within any test run.
  • Test instrument your custom application code with the powerful CA Application Test Extension SDK.
  • Share test cases created in CA Application Test and test runs as simple XML files and incorporate them into your groupware, issue tracking and requirements management processes.
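The publish-and-subscribe test pattern behind these features can be sketched without a broker. The class below is an illustrative stand-in, not CA Application Test or the javax.jms API: an in-memory BlockingQueue plays the role of the JMS destination, and the test step publishes a payload, then subscribes with a timeout and validates what arrives.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Broker-free sketch of the publish/subscribe test pattern. A real JMS
// harness would use Session/MessageProducer/MessageConsumer against a
// provider; here an in-memory queue stands in for the destination.
public class JmsHarnessSketch {
    private final BlockingQueue<String> destination = new ArrayBlockingQueue<>(10);

    // Publish a message payload to the destination.
    void publish(String payload) {
        destination.offer(payload);
    }

    // Subscribe with a timeout, as an asynchronous test step must:
    // returns null if no message arrives in time.
    String awaitMessage(long timeoutMillis) {
        try {
            return destination.poll(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }

    public static void main(String[] args) {
        JmsHarnessSketch harness = new JmsHarnessSketch();
        harness.publish("{\"orderId\": 42, \"status\": \"NEW\"}");
        String msg = harness.awaitMessage(500);
        System.out.println(msg != null && msg.contains("\"status\": \"NEW\"")
                ? "payload validated" : "validation failed"); // prints "payload validated"
    }
}
```

The timeout in awaitMessage is the essential part: asynchronous messaging tests cannot assert immediately after publishing, so a harness must wait a bounded time for delivery and treat a missing message as a failure rather than hanging the build.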


The Test Harness for Custom Applications

Are you running proprietary or custom software? Where most testing solutions would give up, CA Application Test's integration abilities let developers "test enable" software. With minimal incremental effort, developers can make their code testable with our simple API. The whole team can get rich metrics and debugging information on applications behind the scenes.

Testing Software AG/webMethods-powered Integration, Business Process and SOA Applications

Enterprises adopt the Software AG suite to accelerate the development and alignment of IT around the core processes of the business. But whether you are approaching Software AG (webMethods) solutions as an enterprise integration challenge, or using the company's IT governance platform, CentraSite, as a way to embark on an SOA strategy, quality is essential from the start. CA's DevTest Framework includes CA Service Virtualization and CA Application Test, and offers Software AG (webMethods) customers a no-code way to test, validate and virtualize every layer and every phase of Software AG implementations - at design time, run time and change time - ensuring trust in your critical business applications.


Ensuring Quality and Agility for webMethods Customers

The possibilities for deploying Software AG (webMethods) solutions for enterprise application integration (EAI), or as an enabler for SOA (Service-Oriented Architecture) initiatives, are as limitless as the ways a company can configure its business processes. But one aspect that all Software AG (webMethods) customers share is a need to integrate heterogeneous technology assets - from legacy apps to newer technologies - to build a larger business context. Conventional testing methodologies can be both costly and ineffective in these environments.

Why is achieving test automation so critical in Software AG (webMethods) integrations?


  • High cost and effort of test creation and maintenance. Conventional UI-oriented and unit test coding approaches take a procedural approach that results in brittle tests that are invalidated by dynamic change in the system.
  • Lack of complete validation of the business workflow. Many testing tools oversimplify the approach for messaging and BPM workflow testing to only component testing at a single endpoint or messaging technology type.
  • Quality must maintain a business context across multiple layers of the application. In order to validate that a business requirement is met, integration testing must not skip verification of any of the layers in the architecture, whether these layers are newly integrated, or under the authority of a third party.
  • Inability to test at every stage of the application lifecycle. Testing as an "acceptance" phase or event prior to deployment can no longer provide sufficient coverage for a constantly changing, heterogeneous application structure. Quality must be a continuous part of design, development, build and deployment in order to ensure trust in the application.
  • Constraints due to unavailable or "not yet ready" services and components limit agility and lengthen delivery times for needed functionality. Virtualization of these components is needed to enable distributed teams to design and deliver functionality in parallel.



  • CA’s DevTest framework is a comprehensive automated testing, validation and virtualization tool built from the ground up for SOA and composite application integration. If your integration strategy relies on SAG solutions, CA Application Test's declarative, no-code testing approach offers a compelling solution for maximizing delivered quality and minimizing business risk.
  • Native integration with CentraSite Active SOA. Out of the box, DevTest directly interacts with Software AG's CentraSite platform, providing an immediate reference point for adding quality to your overall service management and SOA governance efforts. The ability to validate policies and SOA endpoints at both a management and an implementation level with CA Application Test ensures reliability and trust that the system will work as defined.
  • Built for messaging standards. CA Application Test can talk and listen to webMethods Broker messaging layers, as well as most known JMS/MQ frameworks. As many implementations must integrate with - or migrate functionality from - other systems, teams using CA Application Test can get a wealth of reusable test cases and accompanying validation of the asynchronous messaging environment over time.
  • BPM process-aware functional and load testing and verification. Break the code-compile-run cycle. CA Application Test is no-code automated testing that normalizes your integration layers into a common, point-and-click test interface, allowing developers and business stakeholders to validate software in business process terms. Tests are highly reusable as orchestrated validation suites. CA Application Test extends the value of these efforts to performance testing activities, verifying that scalability or latency issues do not appear in production.
  • Multiple integrated systems. CA Application Test is pure Java, and natively tests Java objects, EJBs, RMI, JMS queues, JDBC databases, as well as web services (WSDL/SOAP) and file systems. As you should expect, CA Application Test runs on any Java-ready client (Unix/Linux/Solaris, Win, OSX) and can easily test any server that routes JMS. CA Application Test provides live interaction and deep testing of any applications assembled using Software AG integration platforms.


 Features of CA Application Test for Software AG

  • Make live assertions against available services and implementation layers in Software AG and webMethods-based integrations, both for business functionality and SLA validation, and as load and performance testing for IT Operations activities.
  • Create and/or assume temporary Broker JMS Queues and Topics to snoop through the entire life cycle of a message as it moves from message handler to message handler. Validate every layer of SOA or composite apps, ensuring web services, HTTP, Broker, and JMS endpoints are receiving/sending the appropriate message payloads at a unit level, or as part of the same test workflow.
  • Ensure integrity of migrations by validating file systems and transferred data within any JDBC data source. No need to program test clients and maintain fragile test scripts. Developers and QA teams stay engaged in testing throughout the application lifecycle with a high level of test automation and reuse. Business process owners can utilize CA Application Test's subprocesses to understand the validity of a process - even if they do not know exactly what components are tested.
  • Rich test metrics and feedback (SNMP and others) from webMethods Optimize, with monitoring and alerting to report boundary or failure conditions within Integration Server or Broker.
  • Virtualize dependent components with CA Service Virtualization across all technology layers within the SoftwareAG implementation, including Web Services, messaging services, data sources and underlying systems, allowing 24/7 availability of a valid environment for testing and validation for parallel development and integration.
  • Create a baseline test from a set of transactions using the Application Insight functionality.


DevTest for Software AG/webMethods - Testing, Validation and Virtualization Across the Platform



CA Application Test coverage for Software AG integration efforts.

CA Application Test provides teams with a 360-degree view of the quality and reliability of the application, and its underlying implementation layers, throughout its lifecycle: at design time, run time, and change time.

Extending quality within your implementation process

The partnership of CA’s DevTest and Software AG extends beyond software, because an SOA strategy isn't something you buy, it is something you do. Best practices for lifecycle quality using CA Application Test fit within Software AG's best practice methodologies, and the end result is increased reliability and reduced cost and implementation risk.

In addition, enterprise applications will always contain some legacy or custom functionality that needs to be tested in order to fully validate a business process. CA Application Test Trace Kit's integration abilities let developers "test enable" software. Test Runner lets you incorporate tests into a continuous build workflow. Or, use Test Runner with JUnit to run standard JUnit tests in Ant or some other build tool. The whole team can get rich metrics and debugging information on applications behind the scenes.
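The build-integration idea above - a runner that a tool like Ant can invoke and that fails the build when tests fail - can be sketched in a few lines of Java. This is an illustrative stand-in, not the DevTest Test Runner interface; the class and method names are hypothetical.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.BooleanSupplier;

// Illustrative stand-in for a build-integrated test runner: execute named
// checks in order and report how many failed. A nonzero process exit code
// is all a build tool such as Ant needs in order to fail the build.
public class BuildGateSketch {
    static int runAll(Map<String, BooleanSupplier> checks) {
        int failures = 0;
        for (Map.Entry<String, BooleanSupplier> e : checks.entrySet()) {
            boolean ok = e.getValue().getAsBoolean();
            System.out.println((ok ? "PASS " : "FAIL ") + e.getKey());
            if (!ok) {
                failures++;
            }
        }
        return failures;
    }

    public static void main(String[] args) {
        // Hypothetical checks standing in for exported test cases.
        Map<String, BooleanSupplier> checks = new LinkedHashMap<>();
        checks.put("service responds", () -> true);
        checks.put("payload schema valid", () -> true);
        System.exit(runAll(checks) == 0 ? 0 : 1); // nonzero exit fails the build
    }
}
```

Because the contract is just "exit nonzero on failure," the same runner works unchanged from Ant, Make, or any continuous-integration scheduler, which is what makes rolling unit and functional tests into automated regression runs cheap.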