Rick Brown's Blog


I wrote this on 17 December 2014, but never uploaded it here. As DevTest development has become even more agile, and discussions have continued regarding version control, perhaps I should publish it?

 

We’ve all been there. In a meeting with an existing or a prospective customer, the question will be asked

Does your product integrate with our project planning* / ALM* / source control* / logging* / ESB* / banking system* / SAP* / spreadsheet* / word processing* / search tool* / development tool* / continuous delivery mechanism* / single sign-on*?

*delete or add as appropriate

The answer is sometimes yes, but more often no.

Why is this? What are the implications? Should we provide integrations, and is there value in doing so?

 

As a real example, my partner wanted to upload a photo at the weekend, because the Internet doesn’t have enough cat photos. Her go-to photo sharing application has changed over the years, as many sites have that functionality. She had a choice of Facebook, Instagram, Photobucket, Flickr, Picasa, LiveJournal, iCloud, or even our own media server at home from where we can make photos available on the cloud. She decided that Flickr would be where she wanted to put this photo.

 

Fast forward 30 minutes and much swearing at poor user experiences, and the photo appeared on Imgur.

The photo

A quick aside here – the hyperlink above might not display correctly in all browsers, and isn’t as in-line as others you might have seen. This is because I am writing this article in MS Word rather than in a browser-based editor. I have been stung too many times by integration failures: pressing the “backspace” key performs a “back” in my browser and loses my whole entry, an integration link between cloud-based storage and cloud-based version control fails and loses my whole entry, or things just don’t work cleanly because providers try, and fail, to provide seamless integrations.

 

It’s hardly a ground-breaking photo (the soppy cat is cute, even if I don’t think much of the accompanying human), but it should have been simple to share, so I asked her what had been wrong with Flickr that meant she uploaded to Imgur instead. She mentioned a painful login process, not knowing where her old Flickr login name and password had gone, wondering why the login page was plastered with Tesco advertisements and why Yahoo! logos were all over the place, and a complete lack of enjoyment in the experience.

 

I am aware that Flickr was bought by Yahoo! a few years ago, and that it was a great photo sharing site before then, but has something gone wrong with it? I must admit, I don’t take many photos, and when I do, my phone just syncs them to whatever service I have it linked to. I’m not certain, but I presume my Android phones automatically upload to some Google service, and my iPhone automatically uploads to Dropbox (unless I have blown my usage limit … again). So, perhaps some investigation is needed into Flickr, and into how that site lost its place as the preferred way of sharing photos.

 

I found various articles talking about a slow demise; an increasing lack of relevance of Flickr for today’s consumers. This is strange, as Flickr had user-contributed tags, comments, multiple photo sizes – in fact, all the facilities I would expect in a photo sharing service – and it had these 10 years ago, when Facebook was a university project, when Twitter was what birds did, when Imgur was simply a misspelling. Investigating further, it seems that the Flickr engineering teams were tasked with integrating Flickr into the Yahoo! ecosystem instead of creating the new, innovative facilities that would have kept it at the forefront of photo sharing. The descriptions of its decreased usage talked about mobile strategy being dictated by Yahoo!, about how a large company can mismanage a small one whilst making apparently good business decisions, how the reasons for a takeover can sometimes not match the perception of either company in the marketplace, and how user requirements can be neglected in the push for business requirements.

 

I wondered if I could draw parallels from this in my own working life, as I have been employed by a couple of companies that have been subject to takeover, and I have occasionally been frustrated that our product does not integrate cleanly, or sometimes at all, with complementary products, both those sold by our company and third-party products.

 

What is integration? It sounds easy, and it should be, but there are actually two distinct areas that can sometimes get blurred:

  1. Front-end integration, where I perform a specific action to link what I’m doing to something elsewhere. An example of this might be pressing a button in Imgur to send my photo to Facebook, or it might be to drag service performance profiles from APM into a virtual service.
  2. Back-end integration, where a server-side linkage makes a difference to what services are provided to me. An example of this might be the Yahoo! single sign-on for Flickr, or the way your mobile and online banking keep your data in sync.

 

These integrations all sound great! Why wouldn’t we want all these seamless links? They all make our lives better, increasing usability and adding value to every service. In fact, some of the facilities on which we rely just wouldn’t be possible without these integrations. All of this is true, but there are some caveats and problems that don’t become evident until we investigate further.

 

Let’s look at what happens with a product that is designed from the ground up to be integrated. In the “consumer world”, this type of application would include modern photo sharing apps (I’ll come back to Flickr later), payments for online shopping, mobile banking and checking in for a flight. In the “corporate world”, it would include a continuous delivery tool chain and straight-through processing in an investment bank. In every one of these types of applications, there is a common theme of developing small pieces of functionality very quickly.

If the functionality is not developed quickly and released frequently, problems appear, such as the issues ING bank had when its mobile app was voted by users the worst of all banking apps. How did they fix it? They moved to frequent updates, so the app had added facilities every day. This meant that users could provide a different voting score every day, comment on enhancements, and add to the requirements of the app knowing that those requirements would be quickly catered for, and the ING mobile app has recently been voted by its users the best of all banking apps. Empowering users like this is a great way to build brand loyalty, as users feel part of the improvement process.

 

What about applications that aren’t constructed with integration at their heart? For consumers, this might be MS Office. For corporates, this might be SAP. There are various after-market additions that can be made to most consumer applications, and the ability to export files in multiple formats, but for corporates, a long-running integration effort needs to be undertaken to move the data from the place it is created to the place that needs to consume it. This is why SAP implementations can take multiple years, why your Excel spreadsheet doesn’t quite display correctly in LibreOffice, and why version control for documents has never completely taken over from file system storage.

 

Returning to Flickr … it was an extremely early implementation of social networking. It had built-in facilities that we take for granted nowadays, but there was nothing for it to integrate with, so when it was purchased by Yahoo!, the integration options were mainly other Yahoo! services – hence the Yahoo! single sign-on option for logging in to Flickr, or the fact that a Yahoo! search will display indexed Flickr photos. However, this is not what users now want: users don’t want to have to log in at all, they want everything indexed rather than just one provider’s photos, they want to blog with embedded photos, they want a rich mobile app experience, they want a Web 2.0 experience, and Flickr hasn’t kept up with these trends in user requirements. Visionaries in Flickr and visionaries in Yahoo! had different ideas, and somehow, this meant that users didn’t see continuous improvement.

 

What is “continuous improvement”? A trite response is “making things better, and doing it constantly”. The Flickr / Yahoo! story is about making things better, but who were they making things better FOR? Flickr developers were concentrating on making things better for the Yahoo! corporation, which was not necessarily making things better for users. While they were doing this, users saw no improvement of services and saw impediments to performing the actions they wanted (for example, needing a Yahoo! login, and the mobile app simply being a façade over the web page), and so other online services gained traction.

 

Let’s bring this to corporate services. We work for a huge software company, where there are some overlapping products and some complementary products. We also have competitors with tools in similar areas, and there are third-party tools that could provide interesting value propositions. So, what should we do about them?

In CA, we have a large number of developers, both for specific product lines and specifically for integration. Corporate pressures mean that cost justification needs to be made for product enhancements, and the implementation of inter-product integrations can divert development funds away from product lines. This justification is good and necessary for the company, but not necessarily good for users. Users have specific requirements for integration, and we need to make sure we continue to do the right thing for users, assuming it’s also the right thing for the company, whilst making sure we don’t ignore either users OR the company.

User requirements can be gathered in a number of ways. For us, the CA Communities are a great resource, where users are able to input their requirements and other users are able to vote on them. If enough users add their vote to a requirement, product management are able to view it and decide whether the user requirement matches corporate direction. If so, it can be added to the backlog of future enhancements. A backlog implies that any development would happen in an agile manner, so the specific piece of functionality to match a validated requirement can be delivered while it’s still relevant to user needs.

 

What about other integration requirements? I know that users have requested version control integration in LISA since I started working with it 5 years ago (5-year anniversary last week!), and that users have always asked about more complete integrations with ALM tools, project management integration, direct linking from development tools and, more recently, continuous delivery and agile portfolio management. Where it has made the most sense, both with internal tools and in providing automation to select third-party tools, LISA (now “CA DevTest”) already contains integration points, but there is a vast number of other tools where integration might be useful.

 

What kinds of integration would be useful? Earlier in this article, I mentioned two types of integration. For our platform, and the use cases that provide the most value for users, the initial integration point is the front end, or automation of user operations. This means that DevTest needs to be able to be orchestrated by external tools, such as CA Release Automation, or even Jenkins or Ant.
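As a minimal sketch of that front-end orchestration, the fragment below shows how a CI job (Jenkins, Ant or anything else that can run a script) might drive a test suite from the command line. It is written in Python, and the executable name, flags and file paths are assumptions made purely for illustration rather than the documented DevTest interface:

    # Minimal sketch of a CI build step driving a command-line test runner.
    # The runner name, flags and paths below are hypothetical placeholders.
    import subprocess
    import sys

    def run_suite(suite_path: str, staging_doc: str) -> int:
        """Invoke the (hypothetical) runner and return its exit code."""
        cmd = [
            "testrunner",           # hypothetical executable name
            "-suite", suite_path,   # hypothetical flag: suite to execute
            "-stage", staging_doc,  # hypothetical flag: staging document
        ]
        result = subprocess.run(cmd, capture_output=True, text=True)
        print(result.stdout)
        return result.returncode

    if __name__ == "__main__":
        # A CI tool would run this as a build step and fail the build on a
        # non-zero exit code.
        sys.exit(run_suite("Tests/regression-suite.ste", "Tests/staging.stg"))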

 

The secondary integration point would be the back end, or facilities providing data to DevTest from remote applications. This could be things like version control, PPM or a service catalogue. All of these integrations would be useful, but do they add value both to users and to the company, while being more important than the other functionality we need to keep implementing to keep DevTest innovative, thought-leading and ahead of any competition? This is potentially far more difficult than the initial integration points, as we need to ensure that DevTest does not lose focus while still keeping everyone happy. I am not sure that we are best positioned to actually do this work!

 

That is a vast and potentially divisive statement! What on earth could I mean by this? Well, keeping DevTest bounded to what it excels at is the best way of not losing focus. Much development has been done in the recent past, and is still ongoing, to provide RESTful interfaces to the major functionality of DevTest, and DevTest has supported command-line interactions for a long time. By continuing to enhance these open integration server components, we provide the ability for others to do the integration work. If others do that work, we don’t have control. What we do have is the focus of doing what we do to the best of our ability. If we don’t have control, who does? Whoever actually writes an integration layer has the control. Integration layers are backbone systems that have been bubbling under the radar of most people for many years, all the way from point-to-point synchronisation servers that move a single piece of data from one database to another, through to enterprise service buses that allow applications to publish information and others to subscribe to it.
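To make the RESTful option concrete, here is a sketch of what an external integration layer might do with such an interface. The base URL, resource path and payload shape are assumptions for illustration only; the real resource names belong to whichever DevTest release is in use:

    # Sketch of an external tool driving DevTest over a RESTful interface.
    # The base URL, resource path and payload below are hypothetical.
    import requests

    DEVTEST_BASE = "http://devtest.example.com:1505/api"   # hypothetical base URL

    def trigger_run(test_name: str) -> dict:
        """POST a run request to a hypothetical 'runs' resource and return the reply."""
        response = requests.post(
            f"{DEVTEST_BASE}/runs",                  # hypothetical resource
            json={"test": test_name},                # hypothetical payload
            headers={"Accept": "application/json"},
            timeout=30,
        )
        response.raise_for_status()
        return response.json()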

 

But how would it work in our environment?

 

We have an integration team. Product lines fund them. Product lines only have a finite amount of funds, so any integration work must divert funds from core functionality, limiting how many innovations can be included. So they need a different funding model. Assuming they add value to cross-product sales, it would logically come from cross-product revenue (e.g. ELA renewals). If these integrations aren’t adding value, the technology would presumably be made open source instead of a product.

 

This might, actually, be a good idea. Open-sourcing some internal CA integration bus development work, providing third parties the ability to create their own back-end integrations with our tools, would mean that we have the potential for first-class integrations of our own tools to many applications from many vendors and open source projects. The profile of CA Technologies as a FOSS-friendly company would increase, we would have the opportunity to become thought leaders in the technology side of the application economy, our corporate marketing could make a big push on this and we would even be able to evolve “Business, Rewritten By Software” to something like “Business Reimagined By Software”.

In this blog entry, I intend to argue that test management facilities need to be replaced with test data management functionality, to automate the creation of our tests, to remove the need for micro-managing the testing process, and to increase the velocity of release from testing phases whilst improving the quality of those releases.

 

Ad-hoc Testing

My entire working life has involved testing. I joined TIS Ltd in 1987 as a Commissioning Engineer, and my main task was to configure hardware and install Unix SVR2 onto imported Convergent Technologies systems and soak-test them by running file system checks for days on end. To ease my workload and increase repeatability, I learned how to write and use shell scripts. After a few days, I would simply tick the commissioning document to show that the systems were functional before shipping them.

 

I was first involved with more structured testing when I worked for Mannesmann Keinzle in 1991, when we needed to understand the scalability of the solicitors’ practice management application we were about to release. We had a person co-ordinating a load test by standing at the front of a training room and shouting for everyone to press “Enter” on their keyboards at the same time. A group of us would run around all the terminals pressing the key and counting how long it took for the application to respond. Database deadlocks were found and the procedure was classed as a success.

 

The Mercury Interactive Years

My personal involvement with test management started when I joined Mercury Interactive in 1997, and my product training included TestDirector v1.52b. My trainer, Elspeth, described it as the defect tracking tool that our developers wrote for themselves, which we were now providing to customers because it could execute tests written in our “WinRunner” tool.

 

In the UK, the Mercury Interactive pre-sales and sales teams started heavily promoting test management as a way to test more effectively, which we claimed should be a precursor to testing more efficiently. Leading with test management before test automation made sure the right tests were being run. In this way, we were able to gain competitive advantage over the other automated testing vendors, who were at that time Rational (with Robot) and Compuware (who were just releasing their own test execution tool, after previously reselling WinRunner).

 

It took some persuasion from the UK team to Mercury Interactive Product Management to raise the internal profile of TestDirector, but we were able to get better test storage & execution, the addition of requirements, and improved metrics & reporting. The big change, however, came with the release of TestDirector 6.0, which provided test management functionality in a browser.

 

On investigation of the different versions of TestDirector, I discovered that v2 was written in Borland Delphi coupled with a Borland Paradox database – a straightforward desktop application. TestDirector v4 added support for client-server through the BDE database abstraction layer. On seeing v6 (there was no TestDirector v3 or v5, by the way), I noticed the icons were still classic Borland shapes, with the stylised ticks and crosses to denote passes and failures, so I asked about the technology and was told that the front end was still coded in Delphi, but compiled to an ActiveX control so it could be executed in an ActiveX container, such as Microsoft Internet Explorer.

 

Over the years, we repeatedly asked for TestDirector (and “TestDirector for Quality Center”, and “Quality Center”) to be a “real” web application, but the only change made was to provide a limited defect-tracking view in pure HTML to allow developers to access defects without needing to install the whole application into Internet Explorer.

 

As far as I’m aware, the current release of TestDirector, “HP ALM v12”, is still a Delphi application running inside Internet Explorer. There are metrics analysis facilities provided by HP that are built in more recent technologies, but the core product is the same as it was 15 years ago.

 

The IBM View

I left Mercury Interactive in 2007, after we were bought by HP. I moved to IBM, where I was involved in the beta testing of the new quality management application, “Rational Quality Manager”, built on the Rational “Jazz” platform. I liked the fact that the storage of test artefacts was synchronised between client nodes, so there was no need for a big central server, although the application still looked and felt like the Eclipse platform in which it was built, and the platform was more important than user requirements. When you need to understand workspaces and perspectives, regardless of your function in the SDLC, the barrier to usage is increased.

 

Oracle

When I moved to Oracle in 2008, to rejoin Elspeth in a new company, one of the things that attracted me was the ease-of-use advantage that those testing tools claimed over the HP/Mercury tools. I found that this wasn’t actually the case: it was really just a different way of looking at testing (directly using the DOM instead of abstracting it away into object maps), and the test management platform was being re-architected to use Oracle facilities such as WebLogic and the Oracle 11g database. The thought was always that HP/Mercury had the mind-share in test management, and competing head-to-head was always going to be difficult. Having said that, the quality artefacts stored inside Oracle Test Manager are requirements, tests, test runs and defects, with a metrics & reporting layer crossing them, just like ALM, so it looks like Oracle are happy to let HP be the thought leader in this market and simply claim advantage through a better ESB layer.

 

iTKO & CA

I joined iTKO in December 2009, and it was a completely different way of looking at testing. “Invoke And Verify” was the watch-phrase, performing an automated task at one level in an application and checking for correct behaviour inside the application architecture. Test management was largely ignored, as being process-heavy for this kind of testing. LISA (the automated test tool) tests could be linked to TestDirector using the TestDirector “VAPI-XP” test type (a way to use various scripting languages directly from TestDirector to perform simple automation tasks), and also linked to Rational Quality Manager through the new IBM Open LifeCycle Management API. At this point, I started to wonder at the ongoing viability of “classic” test management platforms. As the SDLC has moved away from waterfall & V-Model methodologies towards agile delivery, DevOps and continuous delivery, I have come to realise that test management might be dying, and test data management is replacing it.

 

Test Management is Dead

Ok, this might be a divisive statement, but bear with me.

So, you’re a test management professional. What are you actually using test management for? These are usually the main reasons:

  • Requirements Management
  • Linking requirements to tests
  • Storage of tests
  • Grouping tests into test runs
  • Input of defects
  • Linking defects to test runs
  • Linking defects to requirements
  • Analysis

 

Let’s arrange those reasons a little differently:

  • Process stuff
    • Requirements management
    • Linking requirements to tests
    • Storage of tests
    • Grouping tests into test runs
    • Input of defects
    • Linking defects to test runs
  • Analysis stuff
    • Linking defects to requirements
    • Analysis

 

Process Stuff

Process stuff is the “what we use test management for” section. It’s not specifically business benefits; it’s use cases.

Requirements Management

Is the test management platform really the best place to store requirements? Requisite-Pro, Caliber-RM, Select Enterprise, etc, all do better jobs of storing requirements. More recently, epics and user stories have started to replace classic requirements, allowing for faster delivery of features that are important to users. The main reason for requirements to be embedded in the test management platform is for linking to tests.

Linking requirements to tests

We need to know that we are creating tests for a reason, and we need to know that our coverage of requirements with tests is complete. There is no point creating a test if it does not meet a requirement. But what should the linkage be? Should it be a description detailing the use case, or should it be the use case itself, as defined, these days, in Cucumber or some other machine-friendly syntax?

Actually, the linkage really should be the data that exercises the specific business flow, and therefore the requirements management platform should hold a description of that business flow into which we can add all the data to exercise both positive and negative paths. So we need to revisit requirements management platforms and use something that is data-centric, built around user stories and epics rather than classic requirements.
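As a rough sketch of what “data-centric” means in practice, the fragment below (Python with pytest) treats the business flow as a single function and the requirement as a table of positive and negative data rows; the flow, names and values are invented for illustration:

    # Sketch of a data-centric test: one business flow, many data rows.
    # The flow ("transfer_funds") and the data values are invented examples.
    import pytest

    SCENARIOS = [
        # (amount, balance, expected_ok) – positive and negative flows together
        (100.00, 500.00, True),    # happy path
        (500.00, 500.00, True),    # boundary: exact balance
        (500.01, 500.00, False),   # negative: insufficient funds
        (-10.00, 500.00, False),   # negative: invalid amount
    ]

    def transfer_funds(amount, balance):
        """Stand-in for the real business flow under test."""
        return 0 < amount <= balance

    @pytest.mark.parametrize("amount,balance,expected_ok", SCENARIOS)
    def test_transfer_requirement(amount, balance, expected_ok):
        assert transfer_funds(amount, balance) == expected_ok

The requirement becomes the flow description plus the data table, so adding a new scenario is a one-line change rather than a newly authored test case.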

Storage of tests

Tests are stored and linked in the test management tool, along with metadata about the test. But a test is either source code or object code, and source code repositories already exist (such as Git or SVN), as do object code repositories (such as Artifactory or Nexus), so the storage of tests inside test management is really just so they can be grouped for execution.

Grouping tests into test runs

I am going to run my tests as a group. Why am I going to do this? Because my business flow has a specific order in which things will be executed.

Where is my business flow described amongst the requirements that have been captured? We haven’t actually defined it, as the number of tools providing this capability is severely limited. We’ve already said that data and business flows are closely linked, so let’s investigate that area instead of manually defining business flows here, in the spirit of automation.

Input of defects

Defects in an application are managed, progressed, analysed and stored in developer-centric defect tracking tools. The only reason for adding defects to test management is to link the defects to the runs, so the developer knows what test caused a defect to occur.

Linking defects to test runs

So the developer needs to enter test management to see what tests caused a defect to happen? There are a couple of reasons for this:

  1. The defects were manually input into the test management platform and are not available elsewhere, because no automated record was kept of how the test interacted with the application architecture – nothing was passively recorded and logged automatically.
  2. The data for the business process wasn’t defined and shared well enough, so the data in the application under test can’t be mined to retrieve details of the problem.

If these two points are fixed, there is no need to join the defect with the run – the defect is simply something unexpected that happens when a specific combination of data causes the business process to act in an unforeseen manner.

 

Analysis Stuff

Analysis stuff is “why we use test management”. What are we testing? Why are we testing it? It’s the business value of test management.

Linking defects to requirements

The process section linked defects to test runs, grouped tests into test runs, and linked tests to requirements. But there is a common theme in the textual arguments in each section, and that theme is data-centric business flows. Go back and re-read those bits, as they will become increasingly important - I’ll wait here for you.

 

So, if we can get our data right, and get our business flows right, and do it in an automated manner, we remove lots of manual processing overhead.

Analysis

So finally we come to the big reason; the actual reason; the lynchpin as to why we are using test management at all. Test management is there specifically to answer one question. It answers bits of the question in different ways, and it provides measures of justification, risk mitigation and visibility, but when you strip away all the verbiage there is really only one question that test management answers, and that question is:

When can we stop testing?

What a strange thing! That can’t be right! What about “what is our testing progress?”, “what is our requirements coverage?”, or “how many defects have we found in the past week?”? If you look at each of these questions, they all have a subtext, and they are all really getting at “when can we stop testing?”, so our analysis is there to answer that question.

When Can We Stop Testing?

So now we know that test management is there only to answer the question “when can we stop testing?”, and everything else is done best by the use of data-centric business flows, how is analysis providing this answer? Metrics and reporting.

We can retrieve graphs and reports for requirements coverage, test execution coverage, test progress, defects found over time, and any number of other graphs and tables showing the thousands of tests, test runs and defects generated since the start of this phase of the SDLC, and perhaps even compare them against previous releases. All are valuable measures, but perhaps the most valuable is the bell curve that shows us the trend of defect closure. This graph is not a count of raw defects, or even of closed defects; it is the trend of the rate at which defects are being closed against the rate at which they are being discovered, over time. Of course, the larger the testing effort, the longer it takes to get to the ideal trending point, and so some testing phases of projects will be curtailed to meet required release dates, and the analysis metrics will provide some kind of a risk profile for the statement “we need to release now, regardless of whether you have completed your testing”.
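A tiny worked example of that closure trend, with made-up weekly counts, shows that the signal being watched is the ratio of closures to discoveries rather than the raw defect numbers:

    # Sketch of the defect closure trend: closure rate vs discovery rate per week.
    # The weekly counts are invented purely for illustration.
    found_per_week  = [12, 18, 22, 19, 14, 9, 5]   # defects discovered each week
    closed_per_week = [ 4,  9, 15, 18, 16, 12, 8]  # defects closed each week

    for week, (found, closed) in enumerate(zip(found_per_week, closed_per_week), start=1):
        ratio = closed / found if found else float("inf")
        trend = "closing faster than finding" if ratio > 1 else "still finding faster"
        print(f"week {week}: found={found:3d} closed={closed:3d} ratio={ratio:.2f} ({trend})")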

 

So, there are two answers to “When can we stop testing?”. One is “when our quality curve has hit a certain angle of decay” and the other is “when we have run out of time”. The answer possibly should be “when we have finished our testing and passed our quality gate”, but we always have too many test cases and never enough time.

 

Test Data Management is the new Test Management

We have too many test cases because we haven’t used data-centric business processes. We have added all our requirements into our test management tool and we’ve crafted tests to meet those requirements, joining those tests together to make sensible runs. We then need to add data to all of that, and suddenly our test effort extends from weeks of careful planning and construction to months of trying to work out what data does what to our business processes, what data scenarios can be lifted from our live databases, how we can circumvent Sarbanes-Oxley, PCI DSS and HIPAA regulations that stop us using our own data to make sure new versions of our applications work as expected. We suddenly have a perfect storm of complications that mean we can never finish our testing!

 

If we had put test data at the heart of our testing, we could have avoided all these problems, but this has its own issues: we need to extract data from live systems where cleanliness is unknown, we need to discover what pieces of data are relevant, we need to mask the data that is sensitive, we need to make sure that we retain referential integrity across data sources that might not be linked, we need to determine negative and boundary scenarios along with positive ones, we need to understand where data is used in business processes, we need to create a store of all this data, we need to find gaps in the data we can mine from production and we need to generate relevant minimised subsets of data to get best coverage from the smallest possible data sets. In effect, we need to implement disciplined, complete test data management. From this test data management, we need to map business processes and generate tests and back-end data to exercise our applications that we need to test.
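As a sketch of two of those steps – masking sensitive values and subsetting while preserving referential integrity – the Python fragment below uses invented table and column names; real test data management tooling does this across whole schemas, but the principle is the same:

    # Sketch of masking and referentially-intact subsetting on invented data.
    import hashlib

    customers = [
        {"id": 1, "name": "Alice Smith", "card": "4111111111111111"},
        {"id": 2, "name": "Bob Jones",   "card": "5500000000000004"},
    ]
    orders = [
        {"order_id": 10, "customer_id": 1, "amount": 25.00},
        {"order_id": 11, "customer_id": 2, "amount": 99.99},
        {"order_id": 12, "customer_id": 1, "amount":  5.50},
    ]

    def mask(value: str) -> str:
        """Deterministic, irreversible masking so joins still line up afterwards."""
        return hashlib.sha256(value.encode()).hexdigest()[:12]

    # Subset: keep only the customers needed for this scenario ...
    subset_ids = {1}
    subset_customers = [
        {**c, "name": mask(c["name"]), "card": mask(c["card"])}
        for c in customers if c["id"] in subset_ids
    ]
    # ... and preserve referential integrity by keeping only the orders that reference them.
    subset_orders = [o for o in orders if o["customer_id"] in subset_ids]

    print(subset_customers)
    print(subset_orders)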

 

Test data management has existed for as long as test management has existed, but test data is a more difficult challenge to manage effectively. The barriers to test management have always been lower, so test management has been implemented and data has become side-lined. Few organisations have persevered with the knotty mathematical problems associated with combinatorial testing techniques – but no longer!

 

Advances in test data management techniques, along with the ability to define business processes from a data-centric viewpoint and the need to implement continuous delivery practices, have recently enabled us to minimise the number of tests being executed while achieving far better coverage. Instead of applying data sets of thousands or millions of rows of little-understood test data to tests, we can now provide testers with the smallest sets of data they require to completely and properly exercise their applications. Instead of test runs taking weeks to execute, they can now take minutes, with more combinations being validated. We no longer need to agonise over “When can we stop testing?” because our minimal set of tests, running in minutes, presents our developers with all the defects. Developers can see where tests fail, why tests fail, what data combinations cause tests to fail, what paths are taken by testers through their application infrastructure, how to triage those defects and the root cause of problems.
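The combinatorial idea behind “fewer tests, better coverage” can be sketched in a few lines: instead of executing the full cartesian product of parameter values, greedily pick rows until every pair of values has been seen at least once (all-pairs coverage). The parameters below are invented, and real tools use far more sophisticated algorithms, but the shrinkage is representative:

    # Sketch of greedy all-pairs selection over invented test parameters.
    from itertools import combinations, product

    parameters = {
        "browser":  ["Chrome", "Firefox", "Safari"],
        "account":  ["retail", "business"],
        "currency": ["GBP", "USD", "EUR"],
    }

    names = list(parameters)
    all_rows = [dict(zip(names, values)) for values in product(*parameters.values())]

    def pairs(row):
        """All (parameter, value) pairs covered by one candidate test row."""
        return set(combinations(sorted(row.items()), 2))

    uncovered = set().union(*(pairs(row) for row in all_rows))
    chosen = []
    while uncovered:
        # Greedy step: take the row covering the most still-uncovered pairs.
        best = max(all_rows, key=lambda row: len(pairs(row) & uncovered))
        chosen.append(best)
        uncovered -= pairs(best)

    print(f"full product: {len(all_rows)} rows; all-pairs subset: {len(chosen)} rows")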

 

There are a few productised aspects to this:

  1. Complete test data management
  2. Business process design
  3. Passive recording of test paths through applications
  4. Automated data-centric testing

 

These four facilities completely remove the need for test management, because defects are automatically linked to data scenarios, the appropriate business processes are automatically executed, and it all happens at the press of a button at code check-in time, at build time and at deploy time. It is also happening every few minutes, so if some code is released through an exception process, it is tested before any tester or user is even aware of the change.

 

In Conclusion

We are living in the age of “automate the automation”. Historic manual processes are being removed, technical testers are able to use their expertise to deliver quality functionality, business testers are able to run their ad-hoc business processes without the need to log functionality problems, developers are directed to the place in their code where problems exist, and the entire SDLC process is streamlined.

 

The goal that we can now reach, by implementing these new automation processes and concentrating on test data management rather than test management, is:

Question: “When can we stop testing?”

Answer: “We already finished testing, the developers already fixed their code, we already verified that it’s working, and the users are already running through their edge cases”

 

Capabilities mentioned here are contained within the following products:

 

CA DataFinder

CA Agile Designer

CA Service Virtualization

CA Continuous Application Insight

 

Copyrights acknowledged for all third-party tools