
I awoke with a start at 3am this morning, with a “Eureka” moment, and I had to get up and try it out!

 

This is a convergence between my home automation IoT interest and DevTest, where a couple of things have come together to add lots of value to both.

 

First, the home automation side:

I already have both Hive (for my heating and hot water) and Hue (for lights in my top-floor bedrooms). They are both Zigbee gateways, but neither of them provides the flexibility of a general-purpose hub. I wanted to investigate low-power sensors, and I would need to link them into the rest of my automated home, so I investigated whether any of the available smart hubs would integrate with Node-RED or MQTT. Nothing does, out-of-the-box, but there are a couple of options. Custom firmware for the Wink hub can include MQTT, but the Wink is difficult to get in the UK. There’s an open-source project to link SmartThings to MQTT, and Amazon had a good deal for a SmartThings starter kit, so I jumped at it.

 

I configured my new SmartThings hub, installed the smartthings-mqtt-bridge under pm2 alongside Node-RED, and I could now see the SmartThings sensor data when I subscribed to the correct topics.

 

SmartThings sensors are fairly well-priced, but I want to go completely overkill on my sensor readings. Browsing the GearBest website, I found some great deals on Xiaomi Aqara sensors, so I purchased 10 temperature & humidity sensors, a water leak sensor, a motion detector and a magic cube. I connected all of these to SmartThings using user-provided device handlers, and added them to the smartthings-mqtt-bridge, so now Node-RED sees all the new sensor data. Lovely!

 

Now I need to monitor MQTT, so I can see what topics exist in case my automation is publishing to topics that I’m not subscribed to. Astonishingly, the MQTT protocol provides no way to enumerate the topics on a broker, so I’m stuck!

 

Now to DevTest:

I’ve been having conversations with MarcyNunns about which client-side JARs we need for ActiveMQ JMS support, and which we can ignore, because DevTest uses ActiveMQ internally and so we can’t simply use the “apache-activemq-all.x.x.x.jar” file for communication. This prompted me to look at all the possible ActiveMQ JAR files, and I see some files that I don’t understand, which seem to indicate functionality for AMQP, REST, MQTT, Stomp, and more. I wonder what those are for?

 

Fast forward to 3am this morning:

What if ActiveMQ is able to be an MQTT broker?

What if I’m able to view the topics using the ActiveMQ admin page?

What if ActiveMQ will somehow bridge between MQTT and JMS?

Could I integrate DevTest with this?

Would that allow me to test and virtualize MQTT services?

Does MQTT have enough structure to provide the payload, header properties, encoding information, and other requirements for scalable enterprise use (i.e., is it mature enough) to let me create something useful in DevTest?

I had to try it out!

 

First thing – investigate ActiveMQ. I immediately fell down a rabbit-hole of ActiveMQ -> ServiceMix -> Mule -> WSO2 -> JBoss Fuse. Hmm … reading about them, we should be able to apply service virtualization to all of these technology stacks with little difficulty, so long as the mock endpoints are coded during development, but that is all outside the scope of my requirement. I don’t need to integrate with routing gateways, service catalogues, OSGi, Enterprise Integration Patterns, or anything like that right now (although I can see immediately where DevTest adds value to all of those and how we could improve integration with them), so I back out and return to the ActiveMQ documentation.

 

According to the documentation, there’s a config file somewhere that lets ActiveMQ be a broker for lots of different protocols, and it apparently translates between them all invisibly! This is looking hopeful!

 

So I install ActiveMQ onto my monitoring server. I configure ActiveMQ for automatic start-up, and I find the config file. It looks like many of the transports are enabled out-of-the-box, and the MQTT connector is running on the default MQTT port, so I halt my Mosquitto broker and start ActiveMQ. My MQTT client processes don’t even notice the change – they just continue working!

 

I visit the ActiveMQ default monitoring page, with hope in my heart. All of the topics are there, and ActiveMQ has even added advisory topics for each one so it can be monitored by JMX. I’m impressed! This solves my home automation problem, giving me a more enterprise-grade ESB. But what’s the cost? ActiveMQ uses the Apache licence, so there are no licensing costs. Looking at my system monitoring, I see an extra 10% of RAM used on my server, but CPU and disk space don’t seem to be affected. So ActiveMQ is heavier than Mosquitto, but I can cope with that overhead for the benefits I’m getting.

...

 

 

What about the DevTest side? I read a little about how ActiveMQ translates between MQTT and JMS. It looks like “/” separators are replaced by “.”, and “#” is replaced by “>”. It doesn’t look like $SYS/Broker topics are translated, but JMX is a better solution, so I’m not worried about that.
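As an illustration of that mapping (a sketch of my own for this blog, not an ActiveMQ API; the helper name and example topic are made up), converting an MQTT topic name to its JMS equivalent looks like this:

// Sketch of the documented MQTT-to-JMS topic name translation
String mqttToJmsTopic(String mqttTopic) {
    return mqttTopic.replace('/', '.')   // MQTT level separator becomes the JMS dot
                    .replace("#", ">");  // MQTT multi-level wildcard becomes the JMS ">"
}

// e.g. mqttToJmsTopic("smartthings/lounge/temperature") returns "smartthings.lounge.temperature"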

 

I configure a JMS topic in DevTest that happens to have the same name as one of the MQTT topics, and I create a quick subscriber in DevTest. I then publish a message on MQTT, and I immediately see DevTest reading the message! Marvellous! But hold on – what’s this? DevTest is seeing it as a binary message, but everything in my home automation is text strings, JSON objects, XML and arrays. I’ll need to investigate this, after I test virtual service functionality.

 

I configure Node-RED to use a pair of topics client-side and a pair of topics server-side.

I build a JSON message and configure it to send to the client-side request topic.

I link the server-side request and server-side response topics.

I listen to the client-side response topic and log the output.

 

 

 

I configure DevTest to use a pair of proxy topics and a pair of live topics. In DevTest nomenclature, “proxy” is shorthand for topics that are usually client-side and changeable by the team performing service virtualization, and “live” is shorthand for topics that are usually server-side and not changeable by that team.

 

I go through the Virtual Service generation wizard, selecting the options to record JMS virtual services.

I start the flow in Node-RED.

 

DevTest starts recording services immediately. I’m impressed!

 

During the recording, I look at the meta-data. MQTT isn’t sending any custom header properties. It looks like it doesn’t support advanced messaging facilities like this.

I look at the recorded data. It’s being shown as binary. I need to investigate this now.

I look again at the meta-data. DevTest is seeing the message as a javax.jms.BytesMessage, but I’m sending text from MQTT.

I go to Node-RED and change my message to a JSON object instead of a JSON string. DevTest sees it as the same sort of message.

I go to Node-RED and change my message to a piece of raw text. DevTest sees it as the same sort of message.

I go to Node-RED, but I can’t see any way of reporting what type of message is being sent.

I read the ActiveMQ documentation. It says that MQTT doesn’t have the capability to notify subscribers of message type, and messages can contain any type of data, so ActiveMQ always reports javax.jms.BytesMessage for safety, and leaves it to the subscriber to understand what is being sent and decode it.

Therefore, I will need to tell DevTest what to expect, and deal with it accordingly.

 

In DevTest, we have the Scriptable Data Protocol Handler, so I take advantage of this facility to decode and normalize the messages.

 

Request-side scriptable DPH:

%beanshell%
import org.apache.commons.lang3.*;

// The MQTT payload arrives as raw bytes: decode it to text and un-escape
// any XML entities before storing it as the request body
byte[] mqttMessage = lisa_vse_request.getBodyBytes();
String textMessage = new String(mqttMessage);
lisa_vse_request.setBodyText(StringEscapeUtils.unescapeXml(textMessage));

 

Response-side recording scriptable DPH:

%beanshell%
import org.apache.commons.lang3.*;

// Same decoding for the recorded response: bytes to text, then un-escape XML entities
byte[] mqttMessage = lisa_vse_response.getBodyBytes();
String textMessage = new String(mqttMessage);
lisa_vse_response.setBodyText(StringEscapeUtils.unescapeXml(textMessage));

 

Response-side replay scriptable DPH:

%beanshell%
// On replay, convert the stored response text back into the raw bytes
// that the MQTT subscriber expects
String textMessage = lisa_vse_response.getBodyText();
byte[] mqttMessage = textMessage.getBytes();
lisa_vse_response.setBodyBytes(mqttMessage);

 

I complete recording and deploy the virtual service.

 

I go to Node-RED and start the flow. My virtual service responds correctly. DevTest is now perfectly supporting MQTT for service virtualization.

 

I go to DevTest Workstation and send a JMS message to the request “live” topic. DevTest receives a response from Node-RED. DevTest is now perfectly supporting MQTT for testing.

 

So, simply by deploying ActiveMQ and making sure the transports are enabled, DevTest now has perfect functionality for testing and virtualizing MQTT. Assuming the other transports in ActiveMQ work in the same way, DevTest now has perfect functionality for testing and virtualizing all of them. For protocols that DevTest will work with in this manner, see the webpage https://activemq.apache.org/protocols.html

Over the past couple of years, we've solicited opinion in the DevTest community about MQTT, but there's been little customer traction as yet. I use MQTT at home (see My Current State of Home Automation for an introduction to my setup), and I'm sure I would make use of DevTest with MQTT, so for a few months I've been wondering whether to enable support.

 

I'm not a member of the Product organisation at CA, so nothing I write is ever going to form any official CA position on any subject - I'm just a hacker who happens to work in Pre-sales, and has lots of experience with DevTest (since LISA v4.x).

 

The final impetus for creating this blog entry was a question on the DevTest community, Does DevTest support pub/sub for MQTT?, and so I wondered whether my interest would be more than personal.

The first thing to investigate is whether there's an open-source MQTT client presented as a Java library. An instant response from Google pointed me at the Eclipse Paho project, as explained and described in the link. The link also includes a simple publisher, pasted from a Paho example.

 

Subscribing is a little trickier, as message reading uses a callback. Fortunately, talented developers around the world share their source code, and I could nab some to re-use in DevTest. I based my subscriber on solace-samples-mqtt/TopicSubscriber.java at master · SolaceSamples/solace-samples-mqtt · GitHub

 

So I have a pair of simple MQTT test steps.

My Publish step is what I pasted in the previous link, but here it is for completeness:

 

 

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

 

String topic = "MQTT Examples";
String content = "Message from MqttPublishSample";
int qos = 2;
String broker = "tcp://192.168.1.21:1883";
String clientId = "JavaSample";
MemoryPersistence persistence = new MemoryPersistence();

 

MqttClient sampleClient = new MqttClient(broker, clientId, persistence);
MqttConnectOptions connOpts = new MqttConnectOptions();
connOpts.setCleanSession(true);
_logger.info("Connecting to broker: {}", broker);
sampleClient.connect(connOpts);
_logger.info("Connected");
_logger.info("Publishing message: {}", content);
MqttMessage message = new MqttMessage(content.getBytes());
message.setQos(qos);
sampleClient.publish(topic, message);
_logger.info("Message published");
sampleClient.disconnect();
_logger.info("Disconnected");
return "Message sent";

My Subscribe step is based on that Solace code:

 


import java.sql.Timestamp;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

 

import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;
import org.eclipse.paho.client.mqttv3.MqttCallback;

 


String topic = "MQTT Examples";
String content = "";
int qos = 2;
String broker = "tcp://192.168.1.21:1883";
String clientId = "DevTest_subscribe2"; // this needs to be unique. If previously failed, you 'll need a new one
long timeout = 30000; // timeout in milliseconds


MqttClient sampleClient = new MqttClient(broker, clientId);
MqttConnectOptions connOpts = new MqttConnectOptions();
connOpts.setCleanSession(true);
_logger.info("Connecting to broker: {}", broker);
sampleClient.connect(connOpts);
_logger.info("Connected");
//_logger.info("Publishing message: "+content);
//MqttMessage message = new MqttMessage(content.getBytes());
// Latch used for synchronizing b/w threads
final CountDownLatch latch = new CountDownLatch(1);

 

// Topic filter the client will subscribe to
final String subTopic = topic;

 

// Callback - Anonymous inner-class for receiving messages
sampleClient.setCallback(new MqttCallback() {

 

  public void messageArrived(String topic, MqttMessage message) throws Exception {
    // Called when a message arrives from the server that
    // matches any subscription made by the client
    String time = new Timestamp(System.currentTimeMillis()).toString();
    content = new String(message.getPayload());
    _logger.info("\nReceived a Message!\n\tTime: {}\n\tTopic: {}\n\tMessage: {}\n\tQoS: {}", time, topic, content, message.getQos() + "\n");
    latch.countDown(); // unblock main thread
  }

 

  public void connectionLost(Throwable cause) {
    _logger.info("Connection to broker lost! {}", cause.getMessage());
    latch.countDown();
  }

 

  public void deliveryComplete(IMqttDeliveryToken token) {
  }

 

});

 

// Connection Details
connection_properties = connOpts.getDebug(); // untyped; BeanShell has no "var" keyword
_logger.info("MQTT connection properties: {}", connection_properties);

 

// Subscribe client to the topic filter and a QoS level of 0
_logger.info("Subscribing client to topic: {}", topic);
sampleClient.subscribe(topic, qos);
_logger.info("Subscribed");

 

// Wait for the message to be received
try {
  latch.await(timeout, TimeUnit.MILLISECONDS); // block here until a message is received and the latch flips;
                                               // with no arguments, await() would listen forever
} catch (InterruptedException e) {
  _logger.info("I was awoken while waiting");
}

 

// Disconnect the client
sampleClient.disconnect();
_logger.info("Exiting");

 

return content;

And so now I can send and receive simple messages over MQTT. I could expand the scripts with MqttConnectOptions to add things like SSL support, but I don't currently have a requirement to do that.
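If I did need SSL, it would only take a few extra lines. A minimal sketch, assuming a broker listening for TLS on port 8883 and certificates already trusted by the JVM (the credentials here are made up):

import javax.net.ssl.SSLSocketFactory;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;

MqttConnectOptions connOpts = new MqttConnectOptions();
connOpts.setCleanSession(true);
connOpts.setSocketFactory(SSLSocketFactory.getDefault()); // TLS socket, trusting the JVM trust store
connOpts.setUserName("myUser");                           // hypothetical credentials
connOpts.setPassword("myPass".toCharArray());
// ...and connect using an ssl:// broker URL, such as "ssl://192.168.1.21:8883"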

 

I scanned through the MQTT specifications for some of the more advanced messaging concepts, but I didn't see information about things like correlation IDs. I also didn't notice anything regarding message structure, and the only thing I found about topic naming was to avoid Unicode U+0000 (null). Enforced metadata is also lacking.

 

So, this will do for now, as a simple driver for MQTT. Any implementation I do with this will include the generation and encoding of data in different steps, along with abstracting the MQTT configuration options. These steps (with minimal changes, such as the subscriber not timing out) can also be the foundation for a half-bridge virtual service, much like the Tibrv one I blogged about.

I was going to write up some MQTT work with DevTest here, but I noticed that I haven't posted an update about my home automation setup for a couple of years, since https://communities.ca.com/blogs/rickTesting/2016/07/15/iot-testing-next-small-steps, so I should do that first, as a precursor to why I'm interested in MQTT.

 

Since then, we’ve had some home renovation work done, and I took the opportunity to add more smart devices.

I integrated Hue strip lights and downlighters on my top floor, so I’ve added a Hue gateway (it would be great if a Zigbee controller hub would control all my Zigbee devices, but Hue and Hive don’t interoperate).

I added a Broadlink RM2 Pro Plus (and a RM Mini) for replication of IR and RF remote control functionality.

I bought some simple RF sockets, so I can start to have control of electrical items. I use one for a standard lamp, and others for Christmas lights.

I installed Node-RED to complement my OpenHAB installation, and migrated much of my IoT functionality to Node-RED, as its user experience suits me better. OpenHAB is built around Eclipse SmartHome, and Node-RED was written by IBM in Hursley (if all this stuff had existed when I worked for IBM, I might never have left!), and it’s obvious that they have origins in different decades. As an occasional hacker, I find that Node-RED’s model-based approach to home automation matches the CA Service Virtualization approach to model-based virtual services and API tests, and the CA Agile Requirements Designer approach to model-based testing, so it all synchronises nicely.

I upgraded my home wifi by installing BT Whole Home Wifi, so I get better wifi on the upper floors of my house. I upgraded from SKY+ to SKY Q. So I now have two wifi mesh networks: one for general networking and one dedicated to TV.

I bought my wife a gaming PC, so I added some AV1300 Powerline boxes to give her fast, reliable wired networking.

I implemented the TICK stack for monitoring my home network. This uses Telegraf to capture server stats, InfluxDB to hold time series data, and Chronograf for metrics visualisation.

After implementing all the above, I realised that I needed a broker to store simple numeric data, so I wouldn’t need to hard-code any point-to-point integrations. I know and understand MQ, JMS and AMQP, but none of these is designed for the level of chaos that I would undoubtedly reach with all my disparate systems, so I did a quick test installation of the Mosquitto MQTT broker, and when I realised how easy it would be to read-and-write metric data in topics there, I decided that this would be my preference.

So, all my monitoring devices output to MQTT, my control devices read from MQTT, and I can visualise what’s happening.

I upgraded my broadband with BT, from “Superfast 2” (80Mbps download, 20Mbps upload) to “Ultrafast 2” (314Mbps download, 45Mbps upload). BT wanted to sell me “Ultrafast” (100Mbps download), but I work with BT, so I know the packages they have newly introduced, and I requested the fastest package possible. The BT sales person said that this was their first sale of Ultrafast 2, and when the BT Openreach engineer turned up, he said that mine was only the third Ultrafast 2 installation in the UK, so I feel like a guinea pig. I asked about the theoretical maximum performance that BT could support using this technology, and the engineer said he’s been told it can support 825Mbps, so there’s somewhere for me to upgrade to, once BT work out how to package that service. This is starting to expose performance issues elsewhere on my LAN, as the broadband link is no longer the bottleneck!

This is what my default monitoring page looks like:

Hmm, it looks (from the first graph above) like my Broadlink isn’t reporting the current temperature in the lounge. After a quick investigation …

The IP address for the RM2 Pro Plus is different (I power-cycled it a week ago so I could add a USB extension cable to it).

I change the IP address from .68 to .157, and the relevant section of the metrics graph (showing the past 6 hours instead of 30 days, to help visualise the change) updates to:

So now I’m getting lounge temperature reporting again.

The other thing to note from the main metrics graph is that I’m seeing about 270Mbps download and 50Mbps upload. This is being measured by Node-RED on my monitoring server, which uses hard-wired networking in my server farm. I check the connection to OpenDNS every few seconds. If it’s successful, I run a speed test every 5 minutes. I implemented that functionality using this Node-RED flow:

I expect to see a lower “real-world” performance than the link speed as measured on the BT SmartHubX, so I’m happy that I’m getting more than 300/50 from the hub to the exchange.

What each of my flows does

I don't intend uploading JSON depictions of my flows, as everyone's flows will be different.

Hive flow

This gets the current temperature (as measured by the Hive controller, which is located in the hallway) and the target temperature using Hive APIs, and writes those numbers to MQTT.

Nest flow

This flow checks the status of all the smoke alarms.

Hue flow

I have a Hue motion sensor, and this device writes the current temperature at its location (top floor) to MQTT.

I can turn the lights on or off on-demand.

Scan LAN flow

This scans the network on deploy, and scans the Broadlink devices on demand.

Public Services flow

As well as getting broadband performance, this flow also gets the weather forecast, storing the current temperature to a topic that Chronograf will read and add to visualisation.

Broadlink flow

This flow enables me to learn RF codes as well as checking status and sending temperatures.

It also determines whether to switch any of the lights on, through the RF sockets.

Lights Schedule flow

This determines when to turn the RF lights on or off. It will also report what it does using text-to-speech. I‘m a fan of cheesy ‘70s sci-fi TV programmes, and there was one called Blake’s Seven, which had three talking computers, all of which would be classed as AI these days. One of the computers was called Slave, and responded obsequiously, and this is how the flow responds to a change of state in the “Standard Lamp” node:

Dashboard Console flow

Node-RED can have a dashboard, so a web page can be built to monitor and control nodes & flows. This flow determines what will be shown there.

The web page looks like this:

 

Persistent Storage flow

This flow integrates MQTT with InfluxDB.

Alexa flow

I don’t own an Amazon Echo device, but there are third-party ways to connect this device to Node-RED, so this is implemented in case I decide to purchase one.

Tasker MQTT flow

I’m not a fan of Tasker on Android, because my phone always feels sluggish after installing it. It can, however, have a direct connection to MQTT.

 

So, this is how my home is configured. You can see that most of my flows use MQTT, and you can see from the first metrics screenshot that I'm writing and reading perhaps 300 messages per second. I only have my implementation, so I don't know if 300 messages per second is small or large; I just know that it's what I'm storing so I can visualise (and debug) what I need.

Now I've introduced you to my setup, I suppose I should get back to DevTest with MQTT, as a different follow-up to IoT Testing - next small steps, so I'll do that in my next blog post.

Introduction

In many situations, API request messages contain a number of repeating blocks of data. CA Service Virtualization (DevTest) will generate virtual services to match the data scenarios that are seen during the generation process. If, on replay, a different number or a different combination of repeating nodes & elements is seen, DevTest will drop through to a “no match found” response, and the client application will receive this as a 404 error, with the text “Please consider expanding your virtual service”.

 

An example of the problem:

Let’s consider a shopping cart. During the generation process, perhaps 1 item is listed in the cart, in which case DevTest will see one block of data for the shopping cart contents. On replay, there might be 2 items in the cart, and there’s a good chance that DevTest won’t understand what to do with 2 items:

 

Example:

<Cart>
    <item product_code="123">
        <name>Mobile Phone</name>
        <description>Samsung Galaxy 9</description>
        <quantity>1</quantity>
    </item>
</Cart>

DevTest will see the arguments like this:

Cart_item_name = Mobile Phone

Cart_item_description = Samsung Galaxy 9

Cart_item_quantity = 1

To add “product_code” to the list, the attribute needs to be moved to an argument, and this would be done by including an additional Data Protocol Handler (DPH), sketched below.
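A minimal sketch of that additional DPH (BeanShell, my own illustration; the regex, the argument naming convention and the numbering are my assumptions, not part of the standard XML DPH):

%beanshell%
import com.itko.util.ParameterList;
import com.itko.util.Parameter;
import java.util.regex.*;

// Pull each product_code attribute out of the raw request body and add it as an argument
String body = lisa_vse_request.getBodyText();
Matcher m = Pattern.compile("product_code=\"?(\\w+)\"?").matcher(body);
ParameterList args = lisa_vse_request.getArguments();
int i = 1;
while (m.find()) {
    // Number the arguments the same way DevTest numbers repeating blocks
    args.addParameter(new Parameter("Cart_item_product_code_" + i++, m.group(1)));
}
lisa_vse_request.setArguments(args);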

 

On replay:

<Cart>
    <item product_code="123">
        <name>Mobile Phone</name>
        <description>Samsung Galaxy 9</description>
        <quantity>1</quantity>
    </item>
    <item product_code="124">
        <name>SIM</name>
        <description>PAYG</description>
        <quantity>1</quantity>
    </item>
</Cart>

DevTest will now see the arguments like this:

Cart_item_name_1 = Mobile Phone

Cart_item_description_1 = Samsung Galaxy 9

Cart_item_quantity_1 = 1

Cart_item_name_2 = SIM

Cart_item_description_2 = PAYG

Cart_item_quantity_2 = 1

 

The DevTest pattern matching will only know about Cart_item_name, Cart_item_description and Cart_item_quantity (and product_code if the extra DPH was included), so it can’t know how to deal with the elements matching “*_1” and “*_2”.

 

Historically, we would have suggested that the service image gets updated with a specific match against 2 items being in the shopping cart, and this specific scenario would then match correctly.

 

Of course, there can be any number of items in the shopping cart, so there would need to be the same number of matching virtual service instances.

 

If any data needs to change outside these blocks, every virtual service instance needs to be updated with the change, and this is now starting to become unwieldy.

 

I have seen situations where specific blocks require different processing, such as the data block containing “SIM” responding with extra fields like an IMEI number, or some responses adding a future shipping date, or different product codes including different response elements, and suddenly the number of virtual service responses explodes with specific matches, making them impossible to search and discouraging their use.

 

If we’ve added the DPH to cope with “product_code” above, we have an even worse problem, in that we now need to support specific combinations of items, and suddenly we need combinatorics to determine how many responses we need in our virtual service. It’s not uncommon to have 30 different “product_code” entries possible, and with each combination of response blocks needing a different response instance, that is 2^30, over a billion, possible combinations, so now we have billions of responses. DevTest can’t be architected to support this without the inclusion of alternative methods of processing the request.

 

There are two current out-of-the-box solutions for this, but I’m going to explain a third method.

  1. Incorporate CA TDM into the virtual service generation process. This is the preferred mechanism, as it offloads all processing to TDM, which was specifically built to support this kind of scenario, putting each repeating block into a different database table and providing the tester with a self-service web page to generate their test data (and virtual service data) on demand, so the required services are deployed and provided at the right time. The argument against the use of this mechanism is simply a commercial one, as it requires a separate product.
  2. Investigate Data-Driven Virtual Services. This is a mechanism within the DevTest Portal to attempt support of this kind of virtual service. It replaces optional segments of the virtual service with @name@ blocks, the data for these blocks coming from an attached Excel spreadsheet. Arguments against this mechanism are that it can only support protocols enabled in Portal, it can only support DPHs enabled in Portal, and some DevTest intelligence is impossible because the service image no longer includes valid response data.
  3. Read on for my third method of supporting this scenario. It uses DevTest Workstation, adding additional virtual services for the repeating blocks, adding request and response scriptable DPHs, adding an extra loop inside the service model and modifying parts of both the request and the response on-the-fly. It is not as user-friendly as the first solution, nor as Portal-friendly as the second, and it needs you to understand scripting, but it’s all self-contained, takes only a few hours to configure completely, and provides better responses in most scenarios.

 

Attached as a document so screenshots are immediately embedded.

There is the occasional requirement for virtual services to be created for protocols where DevTest does not have out-of-the-box support for the transport protocol(s) in use. There are various ways to create deployable virtual services in these cases, the most common of which is the creation of a custom Transport Protocol Handler. However, this is not the only way to design and deploy virtual services for unknown protocols. This document will explain an alternative method, which is useful for cases where we have test steps for a technology but we don’t have virtual service support. The technique is commonly known as “half-bridging”, as we will be creating a matched pair of custom virtual services, to translate (or bridge) between the unsupported protocol and a supported protocol. After activating these two virtual services, we will be able to use one of the standard supported mechanisms in DevTest for the creation, storage and deployment of virtual services, providing all the usual advantages of DevTest over other service virtualisation tools or stubs & mocks.

The Request Listener

The first step is to create a virtual service from scratch, into which we will insert a listener test step for the protocol, a transformer step for formatting the request into a supported data protocol, and a transmission step to forward the message over a supported transport protocol.

Depending on the transport requirements, the transmission step might provide the facility to record a standard virtual service, or it might provide message samples to use as “r-r pairs” for importing into a recording.

Depending on the data requirements, the transformer step might discard control fields, translate to a common standardised and supported data protocol, create meta-data information or store the entire data message.

The Response Listener

The next step is to create a matching virtual service, but listening for the response. It should have the same facilities as the request listener.

The Responder

This is the opposite of the response listener. It should take a VSI response, translate it into the required data protocol and transmit it over the original transport protocol.

The standard virtual service

The most common virtual service to use by default is the HTTP virtual service, as it has no external requirements. Other types of virtual service can be used, depending on the requirements such as multiple responses to a single message, reliable messaging, asynchronous considerations, etc.

The custom protocol

This document will explore the requirements of the TIBCO Rendezvous protocol. I chose this for a number of reasons:

  1. I had a customer requirement to do this
  2. It can be downloaded from the vendor website quickly
  3. The installation process is simple
  4. It is trivial to begin sending and receiving data
  5. DevTest includes a test step for this protocol
  6. It needs a lot of processing to correctly replay messages
  7. The message format is documented

Any techniques explained here should be transferable to other protocols, such as FTP. The specifics might change, but the goal should be the same and the result should be working, supportable virtual services.

If you would prefer an out-of-the-box solution, a Professional Services engagement would be better for you, where transport and data protocol customisations are all hidden in code.

Download & Install of TIBCO Rendezvous

Rendezvous downloads are available for Windows and for Linux. With a simple registration, or connection through Google+, downloads are immediately available.

Installation takes only a few minutes. After install, copy the C:\tibco\tibrv\8.4 folder (or whichever version of Rendezvous you've downloaded) to C:\tibco\tibrv so TIBCO’s execution assumptions are met.

Configuration of DevTest to support Rendezvous

The DevTest documentation explains what files are required from TIBCO to be copied into DevTest.

https://docops.ca.com/devtest-solutions/10-1/en/administering/general-administration/third-party-file-requirements#Third-PartyFileRequirements-TIBCOFileRequirements

I found that not all the mentioned JAR files were installed by TIBCO. I copied this set of files into DEVTEST_HOME/lib/shared:

So I’m missing the JMS files. No problem – I can use native communication in DevTest.

A simple TIBCO Rendezvous environment is started by launching the Rendezvous daemon, rvd. I put it in a batch file for ease of not having to remember any command-line arguments:
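The batch file amounts to a single line (an assumption-laden sketch: the rvd flags and the default ports are taken from the Rendezvous documentation):

rvd -http 7580 -listen tcp:7500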

This provides a simple web server to show the status of its running Rendezvous environment.

A Simple Rendezvous message publisher & listener

Choose a subject, and start listening to it. Do this in a new shell or command prompt:
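I use the tibrvlisten sample program that ships with Rendezvous; the subject name here is one I made up for testing:

tibrvlisten -service 7500 -daemon tcp:7500 "mySubject"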

You will receive an error every few seconds about the licence being expired. I believe the installation page gave me 30 days, so I’m not sure why it complains immediately, but it doesn’t affect functionality.

Now write something to that subject, using another new shell or command prompt:
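The matching tibrvsend sample program does this (same illustrative subject):

tibrvsend -service 7500 -daemon tcp:7500 "mySubject" "myMessage"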

Look again at the listener window. It has been updated with that message:

Ok, everything looks good. The message looks like text, although it has a strange wrapping around it of {DATA=”…”}. This will bite us later, but for now, it’s all working.

The TIBCO web page has also been updated, with the registered listener:

TIBCO Rendezvous API Documentation

The TIBCO Rendezvous message format is called “TibrvMsg”, and is documented at:

https://docs.tibco.com/pub/rendezvous/8.3.1_january_2011/pdf/tib_rv_java_reference.pdf

This gives us lots of hints about what that “{DATA=” is doing in the listener log.

TIBCO Rendezvous Message Requirements

We need to decode & encode the TIBCO Rendezvous message format. In the TIBCO logs, the messages look like text, but they contain more information.

A TIBCO Rendezvous message is a grouping of name-value pair objects, with the addition of “id” and “type” fields.

I need to deal with “aeRvMsg” messages, which can include name values that are not XML-safe.

When a field’s “type” is set to “TibrvMsg.MSG”, its data is itself a message nested inside the outer message, so we need to consider how to use recursion to walk through the entire message and extract names, ids, types and values. In Rendezvous, these are called “name”, “id”, “type” and “data”.

One use case for TIBCO Rendezvous is fast messaging. Therefore, timestamps and other dates in TIBCO Rendezvous fields use TIBCO-specific date formats, which can include tenth-of-a-microsecond timings. General-purpose operating systems are unable to specify or use dates & times to such precision, and the “TibrvMsg.DATETIME” type will notify us that we need to send a timestamp through the TibrvDate class.

A DevTest Rendezvous message sender

We have a simple message being published and received when using built-in Rendezvous facilities. Let’s move on to DevTest.

Before reading the API documentation, I presumed that I would simply be able to send “myMessage” or “{DATA=”myMessage”}” and have it send correctly. When I did this, however, the listener showed “lisa-field {DATA=”myMessage”}”. When I subscribed to “mySubject” in DevTest and immediately republished the captured message, the listener log showed it correctly, so something more involved is happening.

After reading the API documentation, I realised that “myMessage” is the “data” part of the TibrvMsg. It also needs “name” to be set, which would make sure it doesn’t default to “DATA” or to “lisa-field”. The “id” and “type” fields can be ignored for the next couple of paragraphs.

So, I must send an object over the required subject, and this object must have both “name” and “data” associated with it. In DevTest, it’s time for scripting. Let’s create a test with two steps, the first of which will be a script step:
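A minimal sketch of that script step (the field name and value are placeholders; the real message comes later):

import com.tibco.tibrv.TibrvMsg;

// Build a TibrvMsg with an explicit field name, so the listener doesn't
// fall back to "DATA" or "lisa-field"
TibrvMsg rvMsg = new TibrvMsg();
rvMsg.add("myField", "myMessage");
testExec.setStateValue("rvMsg", rvMsg);
return rvMsg;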

In my DevTest publish step, I send the object “rvMsg”, and the listener log displays it correctly.

Ok, this is what I need to expand on, but I will need to create a complete message like the ones I’ll be using on-site, to make sure I don’t forget to add support for some features that are used. Note that I have masked all potentially identifiable data, so the script shows what I’m doing and how I do it, without including PII.

But wait – there’s something extra in my “real” message. What are those date entries? Well, because Rendezvous does strange things with date precision, I find that I need to use its specific date facilities rather than assuming some compliance with Java’s SimpleDateFormat. The TibrvMsg.add method also allows “type” to be defined, and the “type” for a date field is “TibrvMsg.DATETIME”.
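In script form, adding one of those date fields looks something like this (a sketch; the field name is made up):

import com.tibco.tibrv.TibrvMsg;
import com.tibco.tibrv.TibrvDate;
import java.util.Date;

// TibrvDate wraps the date in Rendezvous's own precision rules, and
// TibrvMsg.DATETIME tells the receiver how to decode the field
rvMsg.add("timestamp", new TibrvDate(new Date()), TibrvMsg.DATETIME);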

I execute my test, and the listener logs it correctly. Now we’re cooking with gas!

I could do with abstracting the TIBCO structure out of my Rendezvous step, so it can be controlled by DevTest config files. This will become important in the next steps, so let’s do that now.

Firstly, restart the listener on the service that I’ll be using:

Then let’s add the TIBCO service definition into project.config:
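Mine looks something like this (the property names are my own convention and the values are the Rendezvous defaults; rename to suit):

# TIBCO Rendezvous connection details
TIBRV_SERVICE=7500
TIBRV_NETWORK=;
TIBRV_DAEMON=tcp:7500
TIBRV_SUBJECT=mySubject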

Now let’s edit the publish step to use those definitions:

Ok, now, if I want to write to different services or subjects, or send to remote Rendezvous servers, I can do that in the config file and leave the test as-is. This would also mean that, eventually, when I want to record different subjects, I can deploy a single virtual service with different config files to record multiple subjects concurrently.

DevTest includes a service called “Continuous Validation Service” (CVS). If I deploy this test to CVS, and run it on a schedule, I should be able to use it as a constant source of TIBCO Rendezvous messages, which will speed implementation and debugging of everything I need.

 

Test Harness

Now we have a publisher, we can craft a responder. Using exactly the same creation mechanisms as above, but using a response message instead of a request message, we build a service model to create and publish a message. Insert a subscriber at the start, and loop from responder to subscriber. Insert a “Messaging Virtualization Marker” step at the start, and we have a deployable responder.

If we deploy the virtual service, we will not see transactions incrementing in the Portal. This is because we’ve hand-crafted our responder, so it doesn’t include a transaction incrementer. I modified my project.config file to separate the request subject and the response subject.

Now we have a request being sent to the request subject every minute, and a response being sent to the response subject as soon as a request is read.

Design Decisions of the Half-bridge

Before we go any further, we need to make some protocol-specific design decisions.

Our requirement is to virtualise TIBCO Rendezvous. This typically uses fire-and-forget publishing, which means that, to record, we need to be subscribing at the moment of publishing, otherwise we won’t see the message.

Other systems use different mechanisms. For example, IBM MQ Series typically uses queues, which persist messages until they are read, and HTTP uses on-the-wire interception, which will fail until a server endpoint is made available.

There are a few options open to us. We need to be constantly listening to Rendezvous subjects, so we can capture the messages, but we need to decide how & where to store them, so our recorder can use a standard mechanism.

We could republish messages to some queues that we can read later, but we would have an external dependency on the queue-based ESB, and I don’t want external dependencies in this situation.

We could try to mash together listening and recording, but we would probably miss many response messages, which makes this approach unsuitable.

We could store messages on the file system, but we would need to make sure we have file system permissions, that the directory structure is known, and the recording would change to generation from message samples, which isn’t our goal.

Thinking more about it, storing messages on the file system is probably ideal in this specific instance, as we can store meta-data along with the messages, we won’t lose any messages, we can make sure they follow whatever correlation scheme we need, we can record at our leisure and everything will be reliable. Also, think times in TIBCO Rendezvous are irrelevant, and we can re-read messages as many times as we want for debugging purposes. The more I think about it, the better this approach looks, so this is what we’ll do.

The Request-Recording Half-bridge

We need to subscribe to the request subject. We need to make sure the timeout is so large that it will effectively listen forever (DevTest documentation claims that a timeout of 0 seconds will force a forever listen, but it just returns immediately, so we will set it to a large number, such as 99999 seconds). We need to receive the message and translate it into something that DevTest can consume, so it should include an operation name, all data to correlate against, all data to match against and all necessary dates, and we need to store it where we can grab it later for virtual service generation. We need to do all this inside a virtual service.

A service image shouldn’t be necessary. We need to use a service model to do this.

Messaging Virtualization Marker step allows this service model to be launched from VSE.

The RV sub step is a standard subscriber to the Rendezvous subject.

It needs a filter to store the TibrvMsg into a property.

The Process TibrvMsg script stores the data that we need:

 

import com.tibco.tibrv.TibrvMsg; //load the API

TibrvMsg message = testExec.getStateValue("rvMsg"); //retrieve the message

opname = message.get("^data^").get("^class^"); //create operation name from one of the fields

idObj = message.get("^tracking^").get("^id^"); //make sure we store correlation data

payload = message.get("^data^").get("XMLPeticion").get("XMLData"); //this is the data we want in the VSI request

 

payloadInsertPoint = payload.lastIndexOf("</"); //insert our correlation data

payload = payload.substring(0, payloadInsertPoint) + "<id>" + idObj + "</id>" + payload.substring(payloadInsertPoint);

 

stanza = payload.indexOf("xml version="); //If we have a stanza, we want to add our operation name after it

if(stanza > 0) {

    opnameInsertPoint = payload.indexOf("<", stanza);

    payload = payload.substring(0,opnameInsertPoint) + "<" + opname + ">" + payload.substring(opnameInsertPoint) + "</" + opname + ">";

} else {

    payload = "<" + opname + ">" + payload + "</" + opname + ">";

}

 

filenameStartPos = payload.indexOf("<SCRID>") + 7; //Another piece of correlation data. We will use this for correlating filenames

filenameEndPos = payload.indexOf("</SCRID>", filenameStartPos);

filename = payload.substring(filenameStartPos, filenameEndPos) + "-req.xml";

filename = testExec.getStateString("basePath", "") + "/" + filename;

testExec.setStateValue("filename", filename);

return payload;

The Response-Recording Half-bridge

The response half-bridge needs to be an exact representation of the response message, including all TibrvMsg field names, ids, types and data, to allow the response-replay half-bridge to reconstruct the response message exactly.

A service image shouldn’t be necessary. We need to use a service model to do this.

The Messaging Virtualization Marker and RV sub steps are identical to the Request Half-bridge (but listening to the response subject).

The Process TibrvMsg script is more involved.

 

import com.tibco.tibrv.TibrvMsg;

import com.tibco.tibrv.TibrvMsgField;

import org.apache.commons.lang.StringEscapeUtils;

 

TibrvMsg rvMessage = testExec.getStateValue("rvMsg");

 

String decodeTibrvMsg(TibrvMsg rvMessage) {

    String xmlMessage = "";

    int total = rvMessage.getNumFields();

    for(int idx = 0; idx < total; idx++) {

        TibrvMsgField field = rvMessage.getFieldByIndex(idx);

        String segment = "";

        if(field.type == TibrvMsg.MSG) {

            segment = decodeTibrvMsg((TibrvMsg)field.data);

            newline = "\n";

 

        } else {

            segment = "" + field.data;

            newline = "";

            if(field.name.equals("XMLData")) {

                segment = StringEscapeUtils.escapeXml(segment);

            }

        }

        xmlMessage = xmlMessage + "<rv_" + field.name + " ID=\"" + field.id + "\" Type=\"" + field.type + "\">" + newline + segment + "</rv_" + field.name + ">\n";

 

    }

    return xmlMessage;

}

finalMessage = "<TibrvMsg>" + decodeTibrvMsg(rvMessage).replace("^", "caret").replace("(", "openBracket").replace(")", "closeBracket") + "</TibrvMsg>";

filenameStartPos = finalMessage.indexOf("lt;SCRID") + 12;

filenameEndPos = finalMessage.indexOf("lt;/SCRID", filenameStartPos) - 1;

filename = testExec.getStateString("basePath", "") + "/" + finalMessage.substring(filenameStartPos, filenameEndPos) + "-rsp.xml";

testExec.setStateValue("filename", filename);

 

return finalMessage;

The standard virtual service

A regular virtual service will be used, with minimal configuration. It is simplest, in this instance, to use a standard HTTP virtual service with XML data protocol. This affects the response-replay half-bridge, as that half-bridge needs to listen to HTTP.

There is a small amount of configuration work to be done, because TIBCO Rendezvous uses date formats that aren’t configured in lisa.properties. I won’t explain datechecker here, but an example of the date format I see in converted TIBCO Rendezvous response messages (converting a date from a TibrvMsg.DATETIME field) is:

Mon May 29 00:38:02 BST 2017

 

The three lines I added to lisa.properties to support that date format are:

 

 lisa.vse.datechecker.rvtimestampregex=(Mon|Tue|Wed|Thu|Fri|Sat|Sun) (Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec) (([12]\\d)|(3[01])|(0?[1-9])) (([012]?\\d)|(2[0123])):(([012345]\\d)|(60)):(([012345]\\d)|(60)) ([A-Za-z][A-Za-z][A-Za-z]) \\d\\d\\d\\d

 

lisa.vse.datechecker.rvtimestampformat=EEE MMM dd HH:mm:ss z yyyy

 

lisa.vse.datechecker.rvtimestampformat&\
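As a quick standalone sanity check (my own sketch, using Locale.UK so that the “BST” zone name parses), the format string does match the example timestamp above:

import java.text.SimpleDateFormat;
import java.util.Locale;

SimpleDateFormat fmt = new SimpleDateFormat("EEE MMM dd HH:mm:ss z yyyy", Locale.UK);
System.out.println(fmt.parse("Mon May 29 00:38:02 BST 2017")); // throws ParseException on a mismatch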

 

After this, we can generate our service image from rr-pairs, using the standard generation wizard, API call or command-line utility, and it acts like any other well-defined and well-behaved protocol.

The Replay Bridge

Finally, we need to get a request, translate it to XML, forward it to our standard virtual service, get the response from our standard virtual service, translate it into a TibrvMsg and publish it to the response subject.

A service image shouldn’t be necessary. We need to use a service model to do this.

The “RV sub” and “Process TibrvMsg” steps are copied from the request-recording half-bridge, but without saving the file. The “web service” step calls our standard virtual service with the XML representation of the message, storing the response. The “process response” step is a new script to translate the response XML into a TibrvMsg, and “RV pub” publishes that message to the response subject.

The “process response” script is more complicated than the recording scripts. Firstly, it needs to convert the XML string to a Document, then it needs to parse the nodes in the document, converting data types as necessary, before storing the TibrvMsg messages and combining them into one response object.

import org.xml.sax.InputSource;

import org.w3c.dom.*;

import javax.xml.parsers.*;

 

import com.tibco.tibrv.TibrvMsg;

import com.tibco.tibrv.TibrvDate;

 

import java.text.SimpleDateFormat;

import java.util.Date;

 

Object createObjectOfContent(String text, short type) {
    Object result = null;
    switch (type) {
    case 0:                      // binary content, decoded from Base64
    case 7:
        result = Base64.decode(text); // Base64 helper assumed to be on the classpath
        break;
    case 3:                      // TibrvMsg.DATETIME
        SimpleDateFormat format = new SimpleDateFormat("EEE MMM dd HH:mm:ss z yyyy");
        Date date = format.parse(text);
        result = new TibrvDate(date);
        break;
    case 8:                      // string
        result = text;
        break;
    case 9:                      // boolean
        result = Boolean.valueOf(text);
        break;
    case 14:
    case 15:
    case 16:                     // types mapped to Short
        result = Short.valueOf(text);
        break;
    case 18:
    case 19:
    case 20:
    case 21:
    case 24:
    case 25:
    case 32:                     // types mapped to Integer
        result = Integer.valueOf(text);
        break;
    case 64:                     // type mapped to Long
        result = Long.valueOf(text);
        break;
    default:
        result = text;
        break;
    }
    return result;
}

 

//String to document

private static Document convertStringToDocument(String xmlStr) {

    DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();

    factory.setValidating(true);

    factory.setNamespaceAware(true);

    factory.setIgnoringElementContentWhitespace(true);

    DocumentBuilder builder;

    builder = factory.newDocumentBuilder(); 

    Document doc = builder.parse( new InputSource( new StringReader( xmlStr ) ) );

    doc.getDocumentElement().normalize();

    return doc;

}

 

String unXmlSafe(String safeString) {

    safeString = safeString.substring(safeString.indexOf("_") + 1);

    return safeString.replace("caret","^").replace("openBracket","(").replace("closeBracket",")");

}

 

TibrvMsg convertDocToRvMsg(Node node) {

    TibrvMsg rvSection = new TibrvMsg();

    String thisName = "";

    String thisData = "";

    int thisId = 0;

    short thisType = 0;

    Object thisDataObj = null;

 

    NodeList nodeList = node.getChildNodes();

    for (int nodeNo = 0; nodeNo < nodeList.getLength(); nodeNo++) {

        Node currentNode = nodeList.item(nodeNo);

        thisName = unXmlSafe(currentNode.getNodeName());

        thisData = currentNode.getTextContent();

        if (currentNode.getNodeType() == Node.ELEMENT_NODE) {

            if (currentNode.hasAttributes()) {

                // get attributes names and values

                NamedNodeMap nodeMap = currentNode.getAttributes();

                for (int attrNo = 0; attrNo < nodeMap.getLength(); attrNo++) {

                    Node attrNode = nodeMap.item(attrNo);

                    if(attrNode.getNodeName().equals("ID")) thisId = Integer.parseInt(attrNode.getNodeValue());

                    if(attrNode.getNodeName().equals("Type")) thisType = Short.parseShort(attrNode.getNodeValue());

                }

                thisDataObj = createObjectOfContent(thisData, thisType);

                _logger.debug("This name: {}", thisName);

                _logger.debug("This ID: {}", thisId);

                _logger.debug("This Type: {}", thisType);

                _logger.debug("This Data Object: {}", thisDataObj);

                _logger.debug("TibrvMsg.MSG = {}", TibrvMsg.MSG);

                if(thisType == TibrvMsg.MSG) {

                    TibrvMsg rvSubsection = convertDocToRvMsg(currentNode); // recurse into the nested message

                    rvSection.add(thisName, rvSubsection, thisType);

                } else {

                    rvSection.add(thisName, thisDataObj, thisType);

                }

            }

        }

    }

    return rvSection;

}

 

xmlMessage = testExec.getStateString("xmlMessage", "");

myDoc = convertStringToDocument(xmlMessage);

TibrvMsg responseMessage = convertDocToRvMsg(myDoc.getDocumentElement());

return responseMessage;

Operation

The standard virtual service is deployed. The replay bridge virtual service is deployed. My “tibrvlisten” monitor is showing that the response message is as I expect.

It takes perhaps 100 ms to respond, and my scripting is bound to have introduced memory leaks, so this isn't suitable for performance testing. My requirement is for 1:1 request:response. If your requirements differ, you may want to select alternative components for parts of your half-bridge.

So there we have it. A half-bridge implementation for providing virtual services, using out-of-the-box facilities in DevTest and providing both generation and replay of virtual services.

The next stage is to test it with my real TIBCO Rendezvous implementation, to see if there are any tweaks I need to add. As far as I can tell, though, I've supported everything I need to do.

There are times when I need to change the request arguments en masse during service image generation. In historic and current versions of CA SV (DevTest), the arguments are stored in what is called a “ParameterList”, and each argument is a “Parameter” inside that ParameterList.

 

The Scriptable Data Protocol Handler (DPH) is what I use for manipulating these items. DevTest provides the ability to code DPHs, but I try not to use this facility, because it needs re-compiling whenever a DevTest API changes, it cannot be owned or changed by the people who will use it, it complains that “ParameterList” is deprecated, and my coding isn’t good enough to produce anything that I would be comfortable providing.

 

The scriptable DPH provides the following comment when it’s added to a recording:

/*
// You can use %beanshell%, %groovy% or %javascript% or some other installed JSR-223 scripting language
// This example is for beanshell
import com.itko.util.ParameterList;

// Manipulate operation
String operation = lisa_vse_request.getOperation();
lisa_vse_request.setOperation(operation + " - updated");

// This is implicitly set by calling setBodyText() or setBodyBytes
boolean isBinary = lisa_vse_request.isBinary();
lisa_vse_request.setBinary(false);

// Manipulate request body text
String theBody = lisa_vse_request.getBodyText();
lisa_vse_request.setBodyText("New body");

// Manipulate request body as binary
byte[] b = lisa_vse_request.getBodyBytes();
lisa_vse_request.setBodyBytes(b);

// Other
String asString = lisa_vse_request.toString();
long id = lisa_vse_request.getId();

// Arguments, Attributes, and Metadata are all ParameterList
ParameterList args = lisa_vse_request.getArguments();
lisa_vse_request.setArguments(args);
ParameterList attributes = lisa_vse_request.getAttributes();
lisa_vse_request.setAttributes(attributes);
ParameterList metadata = lisa_vse_request.getMetaData();
lisa_vse_request.setMetaData(metadata);

// Working with ParameterList
ParameterList p = new ParameterList();

// Do we want to allow dupes or not?
p.setAllowDupes(true);
boolean areDupesAllowed = p.isDupesAllowed();

// Adding parameters
p.addParameters("key1=val1&key2=val2");  // many at once
p.addParameter(new Parameter("key3", "val3")); // one at a time

// Looking up parameters
String theVal = p.getParameterValue("key1");

// Updating parameters
p.setParameterValue("key3", "newVal");

// Removing parameters
p.removeParameter("key1");

// Removing all parameters
p.clear();
*/

This is fine as far as it goes, and it hints that there might be other possibilities, but what if I need to do anything more advanced? This week, I had the need to parse-and-change some request blocks. I had parameters called:

Root_path_featureType

Root_path_featureCode

Root_path_optionalFlag

Unfortunately, the request message had a variable number of these blocks, so I ended up with the following (dummy data used):

Root_path_featureType_1  = "provider"
Root_path_featureCode_1  = "DevTest"
Root_path_optionalFlag_1 = "O"
Root_path_featureType_2  = "version"
Root_path_featureCode_2  = "10.0.0"
Root_path_optionalFlag_2 = "O"
etc.

 

I determined that the actual data that a virtual service would care about was featureCode; the other parameters were simply markers for the application to separate each featureCode. DevTest can’t understand the implications of this out-of-the-box, so its behavior might not be what’s expected. In my case, what I got was this:

Response_path_provider = {{=request_Root_path_featureCode_1/*"DevTest"*/}}
Response_path_version  = {{=request_Root_path_featureCode_2/*"10.0.0"*/}}

Unfortunately, the application sent the request blocks in random order, so I occasionally responded with:

Response_path_provider = "10.0.0"
Response_path_version  = "DevTest"

DevTest is unable to do “=request_Root_path_featureCode WHERE request_Root_path_featureType = ‘provider’ “, because there’s no database-like indexing in messages, so I need to give it some help. I reckon DevTest would do best if it stored the parameters as:

Root_path_featureCode_{{featureType}}{{optionalFlag}}

For the above, it would translate to:

Root_path_featureCode_providerO   =   "DevTest"

Root_path_featureCode_versionO     =   "10.0.0"

Much better! Easier to read, easier to understand, and it cuts the number of parameters by two-thirds!

 

In this way, DevTest is far less likely to hit matching inconsistencies for featureCode, unless featureType and optionalFlag are both identical (in which case, the featureCode values won’t need to be in any specific order, so it’s likely to work anyway).

 

So, what I need to do is read all the parameters, search for the strings “featureCode”, “featureType” and “optionalFlag”, and, if I find one of them, work out how to replace, remove and add parts. What I need is to be able to iterate through the ParameterList as if it were a Java Map – it’s very similar to a Map, but not similar enough. What I therefore need is some extra methods for ParameterList to make it more flexible.

 

Fortunately, extra methods exist – they just aren’t documented anywhere I can find them!

 

To see what methods are available, you could load the class in an IDE, but that’s too much like developer work for me – what I want is something in DevTest Workstation that’ll let me see what’s happening.

  • In a test, add a step; it should be a Dynamic Java Execution step.
  • Set “Make new object of class:” to com.itko.util.UniqueKeysParameterList
  • Click “Construct/Load Object…”
  • Constructor: UniqueKeysParameterList( )
  • Finish
  • Highlight the root element in the Object Call Tree, and you’ll see the “Data Sheet” tab filled
  • Click on the “Call Sheet” tab

Ok, here are all the methods we can use against ParameterLists! Handy!
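(If you’d rather script it than click through Workstation, a Beanshell step can dump the same information via standard Java reflection – a minimal sketch, nothing DevTest-specific about it:

import com.itko.util.ParameterList;
import java.lang.reflect.Method;

// List every public method on ParameterList, with its return type
for (Method m : ParameterList.class.getMethods()) {
    _logger.info("{}  {}", m.getReturnType().getSimpleName(), m);
}

Either way, you end up with the same list.)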

Here’s the complete list:

Boolean                      containsKey( java.lang.String key )
Boolean                      equals( java.lang.Object o )
Boolean                      hasDupeKey( boolean ignoreCase )
Boolean                      hasEmptyKey( )
Boolean                      hasMatchingParameter( com.itko.util.Parameter p )
Boolean                      isDupesAllowed( )
Boolean                      isEncoded( )
com.itko.util.Parameter      get( int index )
com.itko.util.Parameter      getMatchingParameter( com.itko.util.Parameter p )
com.itko.util.Parameter      getParameter( java.lang.String key )
com.itko.util.Parameter      getParameterByValue( java.lang.String value )
com.itko.util.Parameter      getTerm( java.lang.String key )
com.itko.util.ParameterList  buildTempUnmergedList( )
com.itko.util.ParameterList  cloneQuick( )
Integer                      capacity( )
Integer                      hashCode( )
Integer                      size( )
Integer                      totalWeight( )
Class                        getClass( )
Object                       clone( )
String                       get( java.lang.String key )
String                       getDelimiter( )
String                       getMergeDupesDelimiter( )
String                       getParameterValue( java.lang.String key )
String                       getTermValue( java.lang.String key )
String                       toArgumentString( )
String                       toAttributeString( )
String                       toDecodedArgumentString( )
String                       toEncodedArgumentString( )
String                       toString( )
String                       writeXMLString( int indent )
String[]                     getKeyArray( )
String[]                     getValueArray( )
String[]                     unMergeParameterValues( com.itko.util.Parameter p )
java.util.Enumeration        terms( )
java.util.Iterator           iterator( )
java.util.Map                getAllKeyValuePairs( )
java.util.Spliterator        spliterator( )
Void                         addAll( com.itko.util.ParameterList arg1 )
Void                         addAll( com.itko.util.ParameterList newOnes, boolean allowDupes )
Void                         addAll( java.util.Map newOnes )
Void                         addChildNodes( org.w3c.dom.Element parent )
Void                         addNodeAttributes( org.w3c.dom.Element el )
Void                         addParameter( com.itko.util.Parameter arg1 )
Void                         addParameter( int arg1, com.itko.util.Parameter arg2 )
Void                         addParameters( java.lang.String paramData )
Void                         addParameters( java.lang.String paramData, boolean encoded )
Void                         clear( )
Void                         decodeParamsFromUnicode( )
Void                         encodeParamsToUnicode( )
Void                         forEach( java.util.function.Consumer arg1 )
Void                         initialize( org.w3c.dom.Element parent )
Void                         moveDown( int row )
Void                         moveUp( int row )
Void                         put( java.lang.String key, java.lang.String value )
Void                         readEncryptedPropertyFile( java.lang.String fname )
Void                         readPropertyFile( java.lang.String fname )
Void                         readPropertyStream( java.io.InputStream inputStream )
Void                         remove( int index )
Void                         removeAllParameters( )
Void                         removeParameter( java.lang.String key )
Void                         removeParamsWithEmptyKey( )
Void                         removeTerm( java.lang.String key )
Void                         reset( )
Void                         setAllowDupes( boolean allowDupes )
Void                         setDelimiter( java.lang.String delimiter )
Void                         setEncoded( boolean encoded )
Void                         setMergeDupesDelimiter( java.lang.String mergeDupesDelimiter )
Void                         setParameterValue( java.lang.String key, java.lang.String value )
Void                         sortOnKeys( )
Void                         sortOnValues( )
Void                         writeEncryptedPropsFile( java.io.OutputStream os )
Void                         writeEncryptedPropsFile( java.io.PrintWriter ps )
Void                         writePropertyFile( java.io.PrintWriter ps )
Void                         writeSimpleUnsafeXML( java.io.PrintWriter pw, int indent )
Void                         writeXML( java.io.PrintWriter pw, int indent )
Void                         writeXML( java.io.PrintWriter pw, int indent, boolean writeTypeMap )
Void                         writeXML( java.io.PrintWriter pw, int indent, boolean writeTypeMap, boolean usesUnicode )

I can do just about anything I can think of with all these methods! So how can I leverage them for this use case? I need to import ParameterList (and Parameter, as I will be adjusting what’s inside the ParameterList), create a map through which I can iterate, search for the strings marking the data I want to combine, and then remove the matched parameters and add the new ones. Here’s how I did it:

import com.itko.util.ParameterList;
import com.itko.util.Parameter;
import java.util.Map;

ParameterList args = lisa_vse_request.getArguments();
Map argMap = args.getAllKeyValuePairs();

int arrayLength = 200;
String[][] argArray = new String[arrayLength][4];

for (String key : argMap.keySet()) {
    if (key.contains("featureType")) {
        int lineNo = Integer.parseInt(key.substring(key.lastIndexOf("_") + 1));
        argArray[lineNo][0] = args.getParameterValue(key);
        args.removeParameter(key);
        _logger.debug("Key = {}, {}", key, argArray[lineNo][0]);
    } else if (key.contains("featureCode")) {
        int lineNo = Integer.parseInt(key.substring(key.lastIndexOf("_") + 1));
        argArray[lineNo][1] = args.getParameterValue(key);
        argArray[lineNo][3] = key.substring(0, key.lastIndexOf("_"));   // keep the key stem for rebuilding
        args.removeParameter(key);
        _logger.debug("Key = {}, {}, {}", key, argArray[lineNo][1], argArray[lineNo][3]);
    } else if (key.contains("optionalFlag")) {
        int lineNo = Integer.parseInt(key.substring(key.lastIndexOf("_") + 1));
        argArray[lineNo][2] = args.getParameterValue(key);
        args.removeParameter(key);
        _logger.debug("Key = {}, {}", key, argArray[lineNo][2]);
    }
}

// Rebuild one combined parameter per block: <stem>_<featureType><optionalFlag> = <featureCode>
for (int counter = 1; counter < arrayLength; counter++) {
    if (argArray[counter][0] != null)
        args.addParameter(new Parameter(argArray[counter][3] + "_" + argArray[counter][0] + argArray[counter][2], argArray[counter][1]));
}

Are there more elegant ways of doing this? Undoubtedly (the simplest might be to iterate argMap.entrySet() instead of calling args.getParameterValue for every key)! But this is the way that I could easily understand and that I could quickly script! If there comes a time when I need improved performance or have to debug non-functional issues, I can look at using different Java.
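For what it’s worth, here’s a sketch of that tidier variant, iterating the map entries directly (untested, and the bucketing logic is elided):

import com.itko.util.ParameterList;
import java.util.Map;

ParameterList args = lisa_vse_request.getArguments();
Map argMap = args.getAllKeyValuePairs();

for (entry : argMap.entrySet()) {
    String key = entry.getKey();
    String value = entry.getValue();   // no getParameterValue() lookup needed
    // ... same contains()/substring() bucketing as above, using value directly ...
}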

I get this requirement a lot. More than I thought I would. More than I think I should. If I can be blunt, the question is wrong. It demonstrates a lack of understanding of virtual services. It shows that, as a pre-sales person, I haven’t positioned virtual services correctly; it shows that the thoughts of the user are stuck in old ways of working; it demonstrates that we have a long way to go before a general understanding of virtual services is common.

 

But Rick, how can you say that? It’s an ideal use case for virtual services, as all the functionality to enable keeping count is built-in to DevTest! This is true, and DevTest can also monitor live systems, it can perform software-defined routing, it can do load balancing, it can provide security against SQL injection, it can control IoT devices, and it can (probably – don’t try this one) perform DDoS attacks. But we would never do any of these things with it, because there are other products, better suited to perform all those functions.

 

If you ask me to keep count in a virtual service, three things will happen:

  1. I will tell you you’re wrong.
  2. I will argue against you.
  3. I will laugh at you.

If you still want me to tell you how to keep count in a virtual service, I will explain, and that is what this document is for.

 

Before I do, I need to provide some explanation as to why you’re wrong. Let’s use pieces of this diagram, which is something that I present when introducing the concepts of service virtualization. As you can see, it shows an application that we care about (in this situation, “Order Management”) and many back-end connections that are removed by the implementation of virtual services, allowing us to test Order Management without the constraints of mainframe scheduled access, system-of-record synchronized data, ERP provisioning and lack of availability of third-party links.

It all looks good, and we can provide service responses to any requests made by Order Management. These responses should have certain characteristics:

  • They should be data-aware.
  • They should be context-aware.
  • They should be date-aware.

The three different kinds of awareness listed above are technical inferences that come from the request and matched response. They are completely portable, they are immediately re-creatable, they are always reliable, they are unconstrained. There is no business knowledge required to generate, manage, maintain and create a virtual service with these characteristics.

 

It’s only a short step from those three kinds of awareness to keeping count, surely?

 

Actually, no! As soon as we try to do any more in a virtual service than what’s listed above, we need to be aware of how the back-ends work, and none of us signed up for being aware of how back-ends work! Keeping count is one of the simplest business functions, but it is a business function nevertheless.

 

Why should something this trivial be such a problem? Surely, Rick, virtual services can’t be so restrictive that we can’t do this?

 

Of course we CAN do this. It’s just that we SHOULDN’T! Think about keeping count. Let’s continue with “Order Management”. What might need keeping count? Stock quantity and order value are the obvious ones. Ok, let’s think these through:

 

Stock quantity:

NFT

We have 1,000,000 doo-widgets in the inventory management system. This is in our ERP system, so we’ve virtualized that constraint away.

My test orders 1 doo-widget.

I want to performance test my order management system. My performance test wants to run for 8 hours, at 100 orders per second, to replicate what might happen on Black Friday.

 

Is it important to my test that I have 999,999 doo-widgets available after I’ve ordered one? Is it important to my test that I have 500,000 doo-widgets left after I’ve ordered 500,000 of them? Is it important to my test that it falls over in a heap after 3 hours because I hit an inventory level of 0? If the answer to all of these questions is “no”, then it doesn’t matter what quantity is returned, and you don’t need to keep count in your virtual service; in fact, keeping count is an unnecessary overhead to responding as fast as you want. If the answer to any of these questions is “yes”, then you’re performing functional testing, not performance testing.

 

Ok, ok, so you’re doing functional testing, not performance testing. Let’s reset and start again.

 

FT

We have 1,000,000 doo-widgets in the inventory management system. This is in our ERP system, so we’ve virtualized that constraint away.

My test orders 1 doo-widget.

I want to functionally test my order management system. I want to make sure I can order items. I want to make sure I get a nice message when I’m out of stock. I want to make sure I can only request a valid number of doo-widgets.

 

How long will it take me to run through 1,000,000 inventory items, so I get to 0 and can perform my negative test? Too long. Ok, let’s reset the data to 10 doo-widgets and start again.

 

I run through my test 9 times, and it works each time. I run through it a 10th time, and I expect something different to happen. But this is a service, and services are shared, so there’s no guarantee that I’m the only person requesting inventory. In which case, I can’t be sure that I can run my test 9 times with the same expected result. In any case, why run through the same test 9 times, when this doesn’t hit a boundary condition? It would be better to test one positive condition, the negative condition, one timeout, one instance of each back-end error, one malformed response, each different response linked to a different inventory line item. By changing the way I test to do this, I increase my test coverage and decrease the amount of over-testing I do, whilst making my virtual service accurate, valuable, maintainable and throwaway.

 

But, I hear you argue, my front-end application keeps inventory count, and checks inventory values! I need to keep count. My response to this is that you need to log a defect against your front-end application, as it should never be keeping its own count. You need one master record, and this should be at the back-end.

 

Ok, the doo-widget is a simple counting function. What about order values? Surely these are more important, more complicated, more valuable and a better candidate for keeping count?

 

No, they aren’t. The exact same argument can be used for order values as is used for inventory tracking. If we aren’t hitting a boundary condition, why waste testing time by repeating the same test with the same expected result, rather than expanding our tests with boundary and negative conditions? Moving to the DevTest demo application: if I’ve given you a demonstration of DevTest, I will have shown you that the current balance after an initial $100 deposit is $1100. I will also have told you that, on replay, this is our expected result. If the replay shows a different value, the application under test is doing something wrong.

 

Have I convinced you yet that keeping count in a virtual service is wrong? If not, please respond to this so we can discuss your specific use case. There is a chance that you might have a use case where you need to share data between different invocations of a service, and I’ve seen valid examples of these in the past, but I always treat these as exceptions to good service virtualization practice.

 

Now you know it’s wrong, and you know why it’s wrong, it’s time to explain how we enable you to do the wrong thing.

There are various mechanisms that we can use in DevTest to do this, depending on the scope required. The scope can be:

  • Contained within a single invocation of a running virtual service.
  • Shared within multiple invocations of a virtual service within one virtual service environment.
  • Shared within multiple invocations of multiple virtual services within one virtual service environment.
  • Shared within multiple invocations of multiple virtual services running in multiple virtual service environments.

Where might each of these be created?

Scope                                                     | Conversational Service Image | Service Model and/or Image | Shared Model Map | Persistent Model Map
Self-contained                                            | ✓                            |                            |                  |
Shared amongst invocations of a single virtual service    |                              | ✓                          |                  |
Shared amongst multiple virtual services in one VSE       |                              |                            | ✓                |
Shared amongst multiple virtual services in multiple VSEs |                              |                            |                  | ✓

We need some examples of these. Before we do, there are some restrictions we need to be aware of:

Anything in a block of XML is text. There is no concept of variable typing inside a message. But if we add two pieces of text together, we get concatenated values (“2” + “2” = “22”). So, if we’re using XML, we might need to convert from strings to numbers so we can perform simple mathematical operations, and back again to make them valid in a message. Some of the techniques listed below will contain this restriction, but DevTest is, wherever possible, type-less, so direct in-line property manipulation will work as expected (2 + 2 = 4).
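As a quick illustration of that conversion dance in a script step (a minimal Beanshell sketch; the property names here are invented):

String a = testExec.getStateString("left", "2");
String b = testExec.getStateString("right", "2");
_logger.info("As text: {}", a + b);   // "22" - strings concatenate

// Convert to numbers for the arithmetic, then back to a string for the message
int sum = Integer.parseInt(a) + Integer.parseInt(b);   // 4
testExec.setStateValue("sum", String.valueOf(sum));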

 

As we progress towards the right in the above table, things get more flexible and more complicated (such as needing to be aware of type, as explained above). While we can use a persistent model map even for a self-contained service, it is usually easier and more intuitive to use in-line functionality.

 

We will be making extensive use of DevTest properties in the next few sections. If you need an introduction to these, you might like to refer to the DevTest Scripting Guide. I will try to explain what I’m using, but the scope of this document doesn’t extend to a scripting primer.

 

There are various places in a virtual service where properties can be defined and manipulated. These include:

  • Datasets applied to steps in service models
  • Script steps in service models
  • Scriptable data protocol handlers in service models
  • Routing steps in service models
  • Match scripts in service images
  • Inline scripts in service image data values

All of these are explained in the DevTest Scripting Guide. The following sections use a couple of them.

 

Self-contained

There is a pre-requisite for a conversational service image. It needs to have a token or session marker, sent in a response, which is re-used in each subsequent request. If your server application doesn’t have this, you will have trouble generating a conversational service image, and you will need to find a different mechanism for keeping count.

A self-contained counter should allow addition and subtraction for one client, without altering the value for any other client. An example of this kind of virtual service is included in your installation of DevTest, in the “examples” project: the “kioskV5-dynamic” service image. Let’s break it down to see how it works:

The first thing is the session marker. This is in the response to login, and is represented in the service image as {{lisa.vse.session.key}}. This is a special property generated by DevTest, providing a unique and unchanging number to use throughout the conversation.
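In other words, wherever the recorded login response carried the session token, the image stores the property instead; something like this (the tag name here is made up):

<sessionToken>{{lisa.vse.session.key}}</sessionToken>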

What about a counter for the current balance? The initial balance is set in the metadata to the “login” request. The login has no balance in the response, so the current balance is calculated in metadata:

The balance needs to be set here, so that it’s initialized to a value. The balance is then maintained in each response. It is possible that your transport protocol is sensitive to custom metadata keys, in which case you’ll need to set the property in a match script instead of the transaction metadata. A match script might contain:

testExec.setStateValue("currentBalance", 10000);
return defaultMatcher.matches();

DepositMoney response contains this:

so it is adding request_amount to currentBalance, and storing the result in currentBalance. Note the syntax for this kind of operation is {{resultProperty=sourceProperty1+sourceProperty2;}}. Don’t forget that semi-colon, in any of these property manipulations.
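Since that screenshot doesn’t reproduce well here, the metadata entry in question looks something like this:

{{currentBalance=currentBalance+request_amount;}}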

WithdrawMoney has no balance in the response, so the current balance is calculated in metadata:

This is subtracting request_amount from currentBalance, and storing the result in currentBalance.
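Again, in text form, something like:

{{currentBalance=currentBalance-request_amount;}}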

GetAccount reports the current value:
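That is, the response body just emits the property wherever the balance value appeared in the recording:

{{currentBalance}}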

By modifying each response in this way, the current balance is local to the session in which it’s being manipulated, so there is no pollution of data and the response values are always direct calculations. This means that no asynchronous processes can update the balance, such as interest being added by the server, or account debits taken by the server.

 

Shared between invocations:

If our virtual service is stateless, the same balance manipulation can be used as in the previous example. However, every time someone logs into the virtual service, as any user, the current balance is overwritten, and every transaction overwrites the balance, regardless of the user or account being used. The data is completely polluted, and the tester can never rely on an expected response value. At the same time, the current balance isn’t exposed to other virtual services, so the same property name can be used by different virtual services concurrently with no danger of inter-service interaction.

 

Shared amongst virtual services inside a single VSE:

Now we get to some of the more sophisticated property manipulations in DevTest, starting with the sharedModelMap. Despite the strange name, it is simply a Java Map, designed for service models, with data shared inside a single Java virtual machine (JVM). All virtual services running in a single VSE share the same JVM, so we have control over what we store and how we use these properties.

All values stored in a shared model map are strings, so any manipulation of a value will need to convert the string to a number, and back to a string before storing it in the map. This would be very confusing to do inside property blocks in a service image, so we want to find other ways of adding scripting functionality for maximum flexibility, while keeping the technicalities easy to use (or even hidden).

My currentBalance value is a number, so we need to convert between strings and the number format. In the case of integers, we can do the following:

int currentBalanceInt = Integer.parseInt(currentBalanceString);

String currentBalanceString = String.valueOf(currentBalanceInt);

However, the numbers in the demo application are decimals, and there’s specific support in Java for decimal numbers:

BigDecimal currentBalance = new BigDecimal("10000.00");   // create from a string literal
String currentBalanceString = currentBalance.toString();  // convert back to a string for the map

BigDecimal currentBalance = new BigDecimal(currentBalanceString);  // re-create from the stored string
currentBalance = currentBalance.add(amount);       // deposit
currentBalance = currentBalance.subtract(amount);  // withdrawal

There are many things that can be done with a sharedModelMap, and they are described in the DevTest Scripting Guide. We will mainly be using the get and set methods.

com.itko.lisa.vse.SharedModelMap.put(namespace, key, value);

com.itko.lisa.vse.SharedModelMap.get(namespace, key);

The “namespace” argument is a way of separating the same variable against different criteria. This provides us extra flexibility. For example, I might use:

com.itko.lisa.vse.SharedModelMap.put(accountIDString, "currentBalance", currentBalanceString);

String currentBalanceString = com.itko.lisa.vse.SharedModelMap.get(accountIDString, "currentBalance");

With some careful use of type conversion, I can do the same things to “currentBalance” in my response as in the previous examples. In addition, I have the ability to get and set the currentBalance for the specific account numbers, storing and manipulating the balance of every account without polluting the balance of any other account. Each balance is also shared between any virtual services running in the same VSE, so we’ve enabled data sharing inside our container as well as keeping count.

If we’re going to try to use shared model maps in-line, it’s going to get very complicated. It will be more readable (and maintainable) if we put them into scripts.

Because we’re setting properties in a script, away from the actual message, we will need to change our data values to properties in the service image. {{currentBalance}} would be used wherever the current balance is required.

Firstly, a response-side scriptable data protocol handler, to link login names and checking accounts:

%beanshell%

opName = incomingRequest.getOperation();

_logger.debug("\n\n\n\n\nResponse DPH: Operation = {}", opName);

if("listUsers".equals(opName)) {

    _logger.debug("setting account numbers for login names ...");

    myMessage = lisa_vse_response.getBodyText();

    String[] segment = myMessage.split("<return>");

    for(i=1;i<segment.length; i++) {

        if(segment[i].contains("CHECKING")) {

            loginStartPos = segment[i].indexOf("<login>") + 7;

            loginEndPos = segment[i].indexOf("</login>");

            idStartPos = segment[i].indexOf("<id>") + 4;

            idEndPos = segment[i].indexOf("</id>");

            String login = segment[i].substring(loginStartPos, loginEndPos);

            String id = segment[i].substring(idStartPos, idEndPos);

            _logger.debug("Response DPH: login: {}, id: {}", login, id);

            com.itko.lisa.vse.SharedModelMap.put("account", login, id);

            _logger.debug("Response DPH: Check - Retrieved ID: {}", com.itko.lisa.vse.SharedModelMap.get("account", login));

        }

    }

}

_logger.debug("\n\n\n");

A quick note here – we have control of the message, and what to do inside it. I could have chosen to convert the response message to an XML Document and performed XML parsing on it. However, it occurred to me that, although the message format resembles XML, it can contain any content, so it’s just as valid to perform string manipulation as DOM traversal, and it’s easier for me to understand, so I’ll do it that way!

Now for match scripts:

Login:

//getNewToken match script

if (!incomingRequest.getOperation().equals("getNewToken"))

  return defaultMatcher.matches();

import com.itko.util.ParameterList;

import java.math.BigDecimal;

ParameterList args = incomingRequest.getArguments();

_logger.debug("\n\n\n\n\nMatch script getNewToken: Argument list:\n{}", args);

String login = args.getParameterValue("username");

if(login!=null) {

    BigDecimal currentBalance=new BigDecimal("10000.00");

    _logger.debug("Match script getNewToken: Allocating user {} an amount of {}", login, currentBalance);

    String accountID = com.itko.lisa.vse.SharedModelMap.get("account", login);

    _logger.debug("Match script getNewToken: ... to accountID: {}", accountID);

    com.itko.lisa.vse.SharedModelMap.put(accountID, "currentBalance", currentBalance.toString());

}

return defaultMatcher.matches();

Deposit:

// depositMoney match script

if (!incomingRequest.getOperation().equals("depositMoney"))

  return defaultMatcher.matches();

import com.itko.util.ParameterList;

import java.math.BigDecimal;

ParameterList args = incomingRequest.getArguments();

String amountString = args.getParameterValue("amount");

if(amountString != null) {

    _logger.debug("depositMoney match script: Amount = {}", amountString);

    BigDecimal amount = new BigDecimal(amountString);

    String accountID = args.getParameterValue("accountID");

    BigDecimal currentBalance = new BigDecimal(com.itko.lisa.vse.SharedModelMap.get(accountID, "currentBalance"));

    currentBalance = currentBalance.add(amount);

    com.itko.lisa.vse.SharedModelMap.put(accountID, "currentBalance", currentBalance.toString());

    testExec.setStateValue("currentBalance", currentBalance.toString());

}

return defaultMatcher.matches();

Get balance:

// getAccount match script

if (!incomingRequest.getOperation().equals("getAccount"))

  return defaultMatcher.matches();

import com.itko.util.ParameterList;

import java.math.BigDecimal;

ParameterList args = incomingRequest.getArguments();

String accountID = args.getParameterValue("accountID");

BigDecimal currentBalance = new BigDecimal(com.itko.lisa.vse.SharedModelMap.get(accountID, "currentBalance"));

testExec.setStateValue("currentBalance", currentBalance.toString());

return defaultMatcher.matches();

Withdraw:

// withdrawMoney match script

if (!incomingRequest.getOperation().equals("withdrawMoney"))

  return defaultMatcher.matches();

import com.itko.util.ParameterList;

import java.math.BigDecimal;

ParameterList args = incomingRequest.getArguments();

String amountString = args.getParameterValue("amount");

if(amountString != null) {

    BigDecimal amount = new BigDecimal(amountString);

    String accountID = args.getParameterValue("accountID");

    BigDecimal currentBalance = new BigDecimal(com.itko.lisa.vse.SharedModelMap.get(accountID, "currentBalance"));

    currentBalance=currentBalance.subtract(amount);

    com.itko.lisa.vse.SharedModelMap.put(accountID, "currentBalance", currentBalance.toString());

    testExec.setStateValue("currentBalance", currentBalance.toString());

}

return defaultMatcher.matches();

I have removed all the mathematical functions from in-line properties in my service image, and I simply use {{currentBalance}} in my responses.

 

Sharing amongst multiple VSEs:

A Persistent Model Map works in the same way as a Shared Model Map, but its scope is greater. It stores data in the DevTest database to which the Registry is connected. It can therefore share data between all components running against that Registry.

 

The getMapValue and putMapValue APIs in a Persistent Model Map work the same way as the get & put APIs in the Shared Model Map, the difference being that the values are persisted across reboots, across multiple VSEs and across multiple tests, for as long as needed (30 days is the default elapsed time before values are deleted).
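As a sketch, here is the persistent variant of the depositMoney update from above – I’m assuming the class lives alongside the shared model map, at com.itko.lisa.vse.PersistentModelMap, and that accountID and amount have been extracted from the request as in the earlier match scripts:

import java.math.BigDecimal;

// Same namespace/key convention as the SharedModelMap examples above
String stored = com.itko.lisa.vse.PersistentModelMap.getMapValue(accountID, "currentBalance");
BigDecimal currentBalance = new BigDecimal(stored);
currentBalance = currentBalance.add(amount);
com.itko.lisa.vse.PersistentModelMap.putMapValue(accountID, "currentBalance", currentBalance.toString());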

 

Remember, the Persistent Model Map permits values to be stored and retrieved by anything connected to the Registry, so you could use a test running in CVS to add interest to various accounts, take asynchronous debits from accounts, make the current balance invalid, and do any of the other things that would make your testing impossible to manage deterministically.

 

I hope you can use my examples here to make your own virtual services keep count and share data. I hope MORE that you can look at my examples and decide that you would prefer your virtual services to add maximum benefit to the removal of constraints in test by not implementing storage and sharing of data, so you’re able to discard them at will, telling your tester what the expected result should be for any combination of input parameters.

So, you've read my previous blog post? No? Start at API Testing for IoT and then come back.

 

So, you've now read my previous blog post. I've updated that UDP query script to return a sorted tree map, so duplicate UDP responses are automatically removed, providing a list of each advertised service, attached here as "ssdp search.tst".

 

I thought it would probably be a good idea to add a port scanning facility for any device on my LAN. In my usual manner, I adapted some code I found on the Internet, so it can now take some (optional) DevTest properties:

hostToScan = the IP address on your LAN that you want to scan, e.g.: 192.168.1.178

ports = group of ports to scan, e.g.: 8000-9000

protocol = IP protocol to scan, tcp or udp.

Attached here as "portscan.tst"
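If you don't want to download the attachment, the heart of it is small enough to sketch here (TCP only – the attached test also honours the protocol property, and the 200 ms timeout is my choice):

import java.net.InetSocketAddress;
import java.net.Socket;

String host  = testExec.getStateString("hostToScan", "192.168.1.178");
String ports = testExec.getStateString("ports", "8000-9000");
int from = Integer.parseInt(ports.split("-")[0]);
int to   = Integer.parseInt(ports.split("-")[1]);

for (int port = from; port <= to; port++) {
    try {
        Socket socket = new Socket();
        socket.connect(new InetSocketAddress(host, port), 200);   // 200 ms connect timeout
        socket.close();
        _logger.info("Open tcp port: {}", port);
    } catch (Exception e) {
        // closed or filtered - ignore and move on
    }
}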

 

Here's a screenshot of what I've created for my Hive installation: [screenshot: Screen Shot 2016-07-15 at 21.58.08.png]

 

For Nest, I need two tests, because access token provisioning is a separate process to executing API commands:

[screenshots: Screen Shot 2016-07-15 at 21.59.39.png, Screen Shot 2016-07-15 at 22.00.18.png]

 

More to come as I find more public APIs for the IoT devices I own. My Panasonic phone appears to have zero open TCP ports, and many UDP ports that time out but don't respond, and neither that device nor my SKY+HD receivers have APIs that I've been able to find documentation for.

Rick.Brown

API Testing for IoT

Posted by Rick.Brown Employee Jul 3, 2016

APIs are central to the Internet of Things (IoT). DevTest is an extremely capable API testing tool. Many corporate organisations are already implementing IoT devices, such as set-top boxes (STBs), smart meters, point-of-sale devices, etc, and even more are developing software to run in an IoT infrastructure, like banking apps.

 

Over the past couple of days, I've been looking at my home network with regard to IoT. I didn't think I had much call for management, but I find that my home LAN includes:

2 SKY+HD STBs

3 Roku LT players

3 Nest smoke alarms

Panasonic wifi speakers

     Sound bar, subwoofer & 2 satellite speakers for the lounge TV

     2 speakers in the basement allocated to L and R channels for immersive music

     1 speaker in both of the 1st floor bedrooms

BG Hive hub

     Zigbee master thermostat

     Zigbee boiler & water tank controller

LAN servers

     WD myBookWorld, originally for iTKO backups, but now running a version of Debian Linux

     Amahi home server for DNS, DHCP, network shares, etc

     Proxmox virtualised server platform

          Plex software for networked video

          Logitech Squeeze Server software for networked audio

          My wife's business website server at http://www.ena-marine.co.uk

          Nessus for LAN security

          etc

1 Panasonic wifi landline phone

1 Panasonic smart TV

1 Samsung wifi blu-ray player

1 Epson wifi printer

1 BT Home Hub 5 wifi router

1 Thomson wifi access point (powerline ethernet to wifi router)

1 Nintendo Wii

1 Android tablet

5 Android phones

1 iPhone

1 Windows laptop

2 Mac laptops

1 nettop PC to explore low-power media player software

 

All of this stuff needs monitoring, so I'm wondering whether CA UIM would be useful, or whether I would do better completing my evaluation of openHAB, which seems to be designed for IoT monitoring. But everything has also been tested and continues to need testing, and I know that the software release cycle for my smoke alarms is different to that of my speakers, because they auto-upgrade at different times.

 

Someone must be testing all the APIs contained in each of the devices. Every device is a server, and every one of them communicates over the LAN, some upstream to b2c servers, some downstream to clients on my LAN. For example, my personal mobile phone includes the Panasonic Smart Phone app to take landline calls, the Plex client to watch my classic Doctor Who collection, the Nest app to alert me when I'm burning dinner again (for details, visit http://cookerydisasters.blogspot.co.uk), the Hive app to turn the heating on because Summer still hasn't arrived (it's JULY already!), and many others.

 

So, as I said, every device is a server. Servers need APIs. What, where and how could an API work for my speakers, for example? Well, I noticed that, when I start the Panasonic music player app on my phone, it forces me to wait for a few seconds before it shows all my speakers. There must be some discovery happening on my network, and network discovery is something that I should be able to accomplish in DevTest, in preparation for querying and updating via standard APIs.

 

LAN discovery is generally accomplished by querying using "Simple Service Discovery Protocol" (ssdp), enhanced by Universal Plug'n'Play (uPnP), using multicast HTTP (HTTP over UDP, not over TCP). UDP is a strange protocol. There's no point in properly performance testing UDP, because servers advertise when they are initially connected and then repeat the advertisement at their expire time. Clients send a discovery query with a timeout, and the functional specification for ssdp says that every server must respond with a message for every service it provides before the timeout period is reached. Network load is directly proportional to the number of services advertised on the LAN multiplied by the number of client requests over time, but because there is no direct connection between clients and servers, there is no way to test dropped messages or to assert on errors. Therefore, it's simply a measure of network traffic, and doesn't need a testing tool.

 

So, what about functional testing? Specifically, how would a query be performed that will provide a response that can be used as a part of a business process? It might be best to show a client discovery request and a response.

 

Request:

M-SEARCH * HTTP/1.1

Host: 239.255.255.250:1900

MX: 5

Man: "ssdp:discover"

ST: ssdp:all

 

Note the blank line at the end of the request.

 

Sample response:

INFO  - NOTIFY * HTTP/1.1

INFO  - Host: 239.255.255.250:1900

INFO  - Cache-Control: max-age=1800

INFO  - Location: http://192.168.1.165:2870/dmr.xml

INFO  - NT: urn:schemas-upnp-org:service:RenderingControl:1

INFO  - NTS: ssdp:alive

INFO  - Server: NFLC/2.3 UPnP/1.0 DLNADOC/1.50

INFO  - USN: uuid:12345678-1234-1234-1234-123456789abc::urn:schemas-upnp-org:service:RenderingControl:1

 

This response provides us some useful information. Interesting things for this specific response might be:

It contains a URL, which we can then use to query the server

It is a RenderingControl, so it's something to do with media

It is DLNA, so it conforms to the Digital Living Network Alliance operability guidelines for media appliances

 

Navigating to the URL provided gives this response:

<root>

     <specVersion>

          <major>1</major>

          <minor>0</minor>

     </specVersion>

     <device>

          <dlna:X_DLNADOC>DMR-1.50</dlna:X_DLNADOC>

          <dlna:X_DLNACAP/>

          <deviceType>urn:schemas-upnp-org:device:MediaRenderer:1</deviceType>

          <friendlyName>KitchenR</friendlyName>

          <manufacturer>Qualcomm AllPlay</manufacturer>

          <manufacturerURL>http://www.qualcomm.com</manufacturerURL>

          <modelDescription>AllPlay capable network audio module.</modelDescription>

          <modelName>SamAudio</modelName>

          <modelNumber>CUS227 1.0</modelNumber>

     further details snipped

 

So we can see that this is one of my Panasonic speakers; specifically, the one allocated to the right stereo channel in the Kitchen.

 

Ok, so that is an example of the data format; now to start doing it in DevTest! So far, I have been looking at this in my spare time this weekend around the sport, and there has been a LOT of sport on TV (Euro2016 football, ODI cricket, Wimbledon, Tour de France, F1 GP, etc), so I have implemented a simple SSDP test step in Beanshell, adapted from something I found on the Internet:

 

import java.io.BufferedInputStream;

import java.io.BufferedOutputStream;

import java.io.ByteArrayInputStream;

import java.io.IOException;

import java.io.InputStream;

import java.io.OutputStream;

import java.net.DatagramPacket;

import java.net.InetAddress;

import java.net.MulticastSocket;

import java.net.SocketTimeoutException;

 

/**

* UPNP/SSDP client to demonstrate the usage of UDP multicast sockets.

*

* @throws IOException

*/

public void multicast() throws IOException {

    int numberOfResponses = 0;

    try {

        InetAddress multicastAddress = InetAddress.getByName("239.255.255.250");

        // multicast address for SSDP

        final int port = 1900; // standard port for SSDP

        MulticastSocket socket = new MulticastSocket(port);

        socket.setReuseAddress(true);

        socket.setSoTimeout(15000);

        socket.joinGroup(multicastAddress);

 

        // send discover

        byte[] txbuf = DISCOVER_MESSAGE.getBytes("UTF-8");

        DatagramPacket hi = new DatagramPacket(txbuf, txbuf.length,

                multicastAddress, port);

        socket.send(hi);

        _logger.debug("SSDP discover sent");

 

        do {

            byte[] rxbuf = new byte[8192];

            DatagramPacket packet = new DatagramPacket(rxbuf, rxbuf.length);

            socket.receive(packet);

            dumpPacket(packet);

            numberOfResponses++;

        } while (true); // should leave loop by SocketTimeoutException

    } catch (SocketTimeoutException e) {

        _logger.debug("Multicast timed out after {} responses", numberOfResponses);

    }

}

 

private void dumpPacket(DatagramPacket packet) throws IOException {

    InetAddress addr = packet.getAddress();

    _logger.debug("Response from: {}", addr);

    ByteArrayInputStream in = new ByteArrayInputStream(packet.getData(), 0, packet.getLength());

    copyStream(in, System.out);

}

 

private void copyStream(InputStream in, OutputStream out) throws IOException {
    BufferedInputStream bin = new BufferedInputStream(in);
    BufferedOutputStream bout = new BufferedOutputStream(out);
    int c = bin.read();
    while (c != -1) {
        bout.write(c);   // write through the buffered stream (the original wrote to the raw stream, making the flush a no-op)
        c = bin.read();
    }
    bout.flush();
}

 

private final static String DISCOVER_MESSAGE

        = "M-SEARCH * HTTP/1.1\r\n"

        + "HOST: 239.255.255.250:1900\r\n"

        + "MAN: \"ssdp:discover\"\r\n"

        + "MX: 5\r\n"

        + "ST: ssdp:all\r\n"

        + "\r\n";

 

multicast();

 

Run this step and it'll spend a good few seconds being unresponsive, but it'll eventually complete with a large amount of logging, showing all (well, perhaps not all - it depends on how well your servers respond to discovery messages) the SSDP & UPnP servers on your network.

Introduction

There are various trading messaging protocols used by investment banks, some of which are supported within DevTest out-of-the-box and some that need some work in DevTest to provide virtual services. FIX is one of those protocols that need some work.

We have virtualized FIX a few times, for a number of banks, but we’ve never had a critical mass to add support to the base product. A field-developed FIX add-in was created a number of years ago, but subsequent versions of DevTest have superseded some of the functionality provided, and the protocol support is only partially functional in the latest version of DevTest (this journal entry is written for DevTest v8.4). If the FIX add-in is updated after the release of this document, that will be the preferred reference.

Technicalities

The FIX trading protocol is a data format, so it should fit neatly as a Data Protocol in DevTest. It runs directly over the TCP transport, so there is the potential for missing or out-of-order reception of transmitted messages; the protocol therefore includes mandatory fields for message sequence numbers, body length calculation and body checksum. It also uses its own field delimiter character and a string as the record delimiter, and there are some request messages that expect multiple responses. Multiple date/time strings are used, and the formats do not match standards used elsewhere (and thereby supported out-of-the-box in DevTest). All of these technicalities have the potential to cause issues for virtual services, so I will describe each of them in detail.

We’ll go in order of when each of these becomes important in the process of virtualization, but before we do so, let’s start with a frightening thing:

A Sample Message

A FIX message contains name-value pairs, with delimiters. Each of the names is a number that refers to a specific piece of data. The FIX protocol is open-source, so there are look-up tables online that explain what each of the numbers refer to. I occasionally visit these websites for reference:

http://fixwiki.org/fixwiki/FIXwiki

http://www.onixs.biz/fix-dictionary/

http://www.fixtradingcommunity.org/

FIX has been around since the early 1990s, so there have been many versions of the protocol. The most commonly used versions are 4.x, but the current version is 5.0sp2.

Here is a sample, with the SOH delimiters shown as | (see “Delimiter Characters” below). To find this message in the above websites, we need to look at the 8= and the 35= fields:

8=FIX.4.4|9=79|35=A|56=Server|49=Client|34=1|52=20151015-11:04:23.242|98=0|108=60|141=Y|10=236|

This message has a number of name-value pairs. Let’s look at what this means:

8=FIX.4.4

  • “8=” means “BeginString”. It is always the first field in a FIX message, it is always unencrypted, it is required.
  • “FIX.4.4” means “FIX version 4.4”

9=79

  • “9=” means “BodyLength”. It is always the second field in a FIX message, it is always unencrypted, it is required.
  • “79” means there are 79 characters between the end of this field and the start of the “10=” field. If the number here does not match the actual length of the FIX message, the FIX component receiving the message should throw an error.

35=A

  • “35=” means “MsgType”. It is always the third field in a FIX message, it is always unencrypted, it is required.
  • “A” means “Logon”. This is the first message in a conversation.

56=Server

  • “56=” means “TargetCompID”. It is the destination of this message. It is always unencrypted, it is required.
  • “Server” means the server hostname (or other connection description information)

49=Client

  • “49=” means “SenderCompID”. It is the source of the message. It is always unencrypted, it is required.
  • “Client” means the client hostname (or other connection description information)

34=1

  • “34=” means “MsgSeqNum”. This is required, and is a constantly-incrementing number
  • “1” means the sequence number sent in this message. The destination checks to make sure this number is higher than any previous FIX message sent from “TargetCompID” to “SenderCompID”, unless field 141 is set to “Y”, in which case, the value here should be “1”. TargetCompID->SenderCompID is a different counter to SenderCompID->TargetCompID, and both machines check the message sequence number of messages sent to them.

52=20151015-11:04:23.242

  • “52=” means “SendingTime”. It is a timestamp, and is required.
  • “20151015-11:04:23.242” is the time this message was sent. The format used here is yyyyMMdd-HH:mm:ss.SSS and is represented in UTC timezone

98=0

  • “98=” means “EncryptMethod”. It is always unencrypted, it is required.
  • “0” means “not encrypted”

108=60

  • “108=” means “HeartBtInt”. It is required
  • “60” means “60 seconds between heartbeat messages being sent”. Heartbeat messages should be sent every 60 seconds by both SenderCompID and TargetCompID, unless a transactional message has been sent the other way in the past 60 seconds.

141=Y

  • “141=” means “ResetSeqNumFlag”.
  • “Y” indicates that both sides of the conversation should reset the value in their 34 field to 1

10=236

  • “10=” means “CheckSum”. It is always the final field in a FIX message. It is always unencrypted, it is required.
  • “236” is the modulo-256 sum of the byte values of the message preceding the “10=” field.

FIX Technicalities

Delimiter Characters

The FIX field delimiter is the ASCII character commonly referred-to as SOH. This is Control-A, or 0x01, and when looking at FIX discussions online, people sometimes use the Caret character, sometimes the pipe character, sometimes the newline character, sometimes <SOH> and occasionally whatever makes most sense in the discussion. Where possible in this document, I will use | [to be confirmed, when I review what I have used in this document]

The FIX record delimiter is an ASCII string of 8 characters in length. The FIX protocol states that the checksum field must come last in a record, so the string is <field_delimiter><checksum_marker><field_delimiter>. I will explain the checksum later in this document; for now, it is sufficient to say that an example of a FIX record delimiter might be |10=123|

DateTime Formats

Some fields, such as SendingTime, use the datetime format of yyyyMMdd-HH:mm:ss.SSS. Others, such as SettlDate, use yyyyMMdd. To perform useful mathematics on date values, DevTest mustn’t mis-identify 8-digit field values as dates, as this would invalidate other field values.

DevTest needs to correctly identify the longer datetime format, but this is not one of the standard formats built in to DevTest. There are two ways of supporting it:

  1. Alter the lisa.properties file to add this date format
  2. Convert this date format to a format that is built-in to DevTest

 

Message Sequence Number

As explained in the message sample section, the message sequence number is a counter per connection, starting from 1 whenever ResetSeqNumFlag is set to “Y” and incrementing for every message sent.

DevTest needs to keep count of MsgSeqNum for each connection between itself and the client that requests messages from it.
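After everything I said above about keeping count in virtual services, this is one of the legitimate exceptions. A sketch, using the shared model map from earlier with one counter per direction (the CompID variables are assumed to have already been pulled from the request):

// Increment and return the per-connection MsgSeqNum
String direction = senderCompID + "->" + targetCompID;
String stored = com.itko.lisa.vse.SharedModelMap.get("fixMsgSeqNum", direction);
int seqNum = (stored == null) ? 1 : Integer.parseInt(stored) + 1;
com.itko.lisa.vse.SharedModelMap.put("fixMsgSeqNum", direction, String.valueOf(seqNum));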

Body Length

The value of BodyLength is set at runtime to the number of characters between the end delimiter of the BodyLength field and the start delimiter of the CheckSum field.

DevTest needs to calculate BodyLength for each response at the time when that response is selected.

Checksum

The value of CheckSum is set at runtime to the modulo-256 of the byte value of each character in the message, from the beginning of the message until after the start delimiter of the CheckSum field.

DevTest needs to calculate CheckSum for each response at the time when that response is selected.
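Both calculations are small enough to sketch here. This assumes the response already has its real SOH delimiters restored, and that the “replace_at_runtime” placeholders are then swapped for the results:

String SOH = Character.toString((char) 0x01);   // the FIX field delimiter

// BodyLength: character count from just after the BodyLength field
// up to (and including) the delimiter before the "10=" field
int bodyLength(String msg) {
    int nineTag   = msg.indexOf(SOH + "9=") + 1;    // start of the BodyLength field
    int afterNine = msg.indexOf(SOH, nineTag) + 1;  // first counted character
    int checkTag  = msg.indexOf(SOH + "10=") + 1;   // start of the CheckSum field
    return checkTag - afterNine;
}

// CheckSum: sum of the byte values of everything before "10=", modulo 256,
// always written as three digits
String checkSum(String msg) {
    int sum = 0;
    int end = msg.indexOf(SOH + "10=") + 1;
    for (int i = 0; i < end; i++) sum += msg.charAt(i);
    return String.format("%03d", sum % 256);
}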

Multiple Responses

Some requests made to a server require multiple responses. For example, a TradeCaptureReportRequest expects a TradeCaptureReportRequestAck followed by a number of TradeCaptureReport messages.

DevTest needs to keep the TCP connection open and return multiple responses when a request requires them.

The FIX Add-in

A FIX transport & data protocol handler was written in the field, a few years ago. It pre-dates the TCP transport support in DevTest. As DevTest has changed and matured over the years, the amount of functionality provided by this add-in has either decreased or been made unreliable. Looking at the source code, it has its own TCP transport handler, it includes FIX field delimiters, it converts FIX field numbers into FIX field names in the request, and it writes MsgSeqNum, BodyLength and CheckSum. Of these, the only ones that reliably work are the request field-name conversion and the field delimiter support, so those are what I use it for; I do everything else myself, in scriptable data protocol handlers.

Scriptable Data Protocol Handler

Scriptable support in DevTest is in multiple places. For FIX support, we will create three scripts. The first script is for any additional manipulation required of the request messages, the second is to make sure the response is stored in a nicely-formatted manner in the service image, and the third is where the heavy lifting happens, to perform all the runtime calculations that are needed to ensure successful acceptance of the response.

Request Script

%beanshell%
import com.itko.util.ParameterList;
import com.itko.util.Parameter;
import java.text.SimpleDateFormat;

private convertDateFromFIX(dateTime) {
    // HH, not hh: FIX times use a 24-hour clock; parse() ignores any trailing milliseconds
    SimpleDateFormat sdf = new SimpleDateFormat("yyyyMMdd'-'HH:mm:ss");
    Date parsedDate = sdf.parse(dateTime);
    sdf.applyPattern("yyyy-MM-dd'T'HH:mm:ss.SSSZZZZ");
    dateTime = sdf.format(parsedDate);
    return dateTime;
}

private convertDateFromFIXShort(dateTime) {
    SimpleDateFormat sdf = new SimpleDateFormat("yyyyMMdd");
    Date parsedDate = sdf.parse(dateTime);
    sdf.applyPattern("yyyy-MM-dd");
    dateTime = sdf.format(parsedDate);
    return dateTime;
}

ParameterList args = lisa_vse_request.getArguments();

// Change all the date formats. There are two date formats in use, in four fields
String sendingTime = args.getParameterValue("SendingTime_52");
String OrigSendingTime = args.getParameterValue("OrigSendingTime_122");
String transactTime = args.getParameterValue("TransactTime_60");
String LegSettlDate = args.getParameterValue("LegSettlDate_588");

sendingTime = convertDateFromFIX(sendingTime);
args.setParameterValue("SendingTime_52", sendingTime);

if (OrigSendingTime != null) {
    OrigSendingTime = convertDateFromFIX(OrigSendingTime);
    args.setParameterValue("OrigSendingTime_122", OrigSendingTime);
}
if (transactTime != null) {
    transactTime = convertDateFromFIX(transactTime);
    args.setParameterValue("TransactTime_60", transactTime);   // key must match the one we read above
}
if (LegSettlDate != null) {
    LegSettlDate = convertDateFromFIXShort(LegSettlDate);
    args.setParameterValue("LegSettlDate_588", LegSettlDate);
}
// end date formats

// Store unwanted arguments as attributes, so they play no part in matching
ParameterList atts = lisa_vse_request.getAttributes();
String[] argsToRemove = {
    "BodyLength_9",
    "MsgType_35",
    "MsgSeqNum_34",
    "CheckSum_10"
};
for (argToRemove : argsToRemove) {
    String thisValue = args.getParameterValue(argToRemove);
    args.removeParameter(argToRemove);
    atts.addParameter(new Parameter(argToRemove, argToRemove, thisValue));
}
lisa_vse_request.setAttributes(atts);
lisa_vse_request.setArguments(args);

Response Record Script

%beanshell%
import com.itko.lisa.ext.util.fix.FIXDictionaryParser;
import java.text.SimpleDateFormat;

private convertDateFromFIX(dateTime) {
    // HH, not hh: FIX times use a 24-hour clock; parse() ignores any trailing milliseconds
    SimpleDateFormat sdf = new SimpleDateFormat("yyyyMMdd'-'HH:mm:ss");
    Date parsedDate = sdf.parse(dateTime);
    sdf.applyPattern("yyyy-MM-dd'T'HH:mm:ss.SSSZZZZ");
    dateTime = sdf.format(parsedDate);
    return dateTime;
}

private convertDateFromFIXShort(dateTime) {
    SimpleDateFormat sdf = new SimpleDateFormat("yyyyMMdd");
    Date parsedDate = sdf.parse(dateTime);
    sdf.applyPattern("yyyy-MM-dd");
    dateTime = sdf.format(parsedDate);
    return dateTime;
}

private getFIXFieldName(FIXFieldNumber) {
    FIXDictionaryParser fixDict = FIXDictionaryParser.getInstance();
    FIXFieldName = fixDict.getFieldName(FIXFieldNumber);
    _logger.debug("Field map get value for key {} = {} ", FIXFieldNumber, FIXFieldName);
    if (FIXFieldName == null) FIXFieldName = "fix_unknown_tag";
    return FIXFieldName;
}

boolean runningAsVSE;
String message = testExec.getStateValue("flMessage");
if (message == null) {
    message = lisa_vse_response.getBodyText();
//    args = lisa_vse_request.getArguments();
    runningAsVSE = true;
} else {
    runningAsVSE = false;
}

output = "<fix_response>\n";
messageLine = message.split("\n");

for (thisLine : messageLine) {
    linePart = thisLine.split("=", 2);
    fieldName = getFIXFieldName(linePart[0]);
    fieldValue = linePart[1];
    tagStartValue = "<" + fieldName + "_" + linePart[0] + ">";
    tagEndValue = "</" + fieldName + "_" + linePart[0] + ">";
    _logger.debug("Start tag: {}", tagStartValue);
    _logger.debug("End tag: {}", tagEndValue);

    //// manipulate the message sequence number
    //if(fieldName.equals("MsgSeqNum"))   fieldValue = "{{=request_MsgSeqNum_34+1;/*" + linePart[1] + "*/}}";
    if (fieldName.equals("MsgSeqNum"))      fieldValue = "replace_at_runtime";
    //// manipulate body length and checksum, as we'll do those at runtime
    if (fieldName.equals("BodyLength"))     fieldValue = "replace_at_runtime";
    if (fieldName.equals("CheckSum"))       fieldValue = "replace_at_runtime";
    // manipulate the date if it's a date field
    if (fieldName.equals("SendingTime"))     fieldValue = convertDateFromFIX(fieldValue);
    if (fieldName.equals("OrigSendingTime")) fieldValue = convertDateFromFIX(fieldValue);
    if (fieldName.equals("TransactTime"))    fieldValue = convertDateFromFIX(fieldValue);
    if (fieldName.equals("SettlDate"))       fieldValue = convertDateFromFIXShort(fieldValue);
    if (fieldName.equals("LegSettlDate"))    fieldValue = convertDateFromFIXShort(fieldValue);
    if (fieldName.equals("TradeDate"))       fieldValue = convertDateFromFIXShort(fieldValue);
    outputLine = tagStartValue + fieldValue + tagEndValue;
    _logger.debug("Message line: {}", outputLine);
    output = output + outputLine + "\n";
}

output = output + "</fix_response>";

_logger.debug("Parsed message:\n{}\n\n", output);
if (runningAsVSE) lisa_vse_response.setBodyText(output);
return output;

Response Replay Script

%beanshell%
import java.text.SimpleDateFormat;
import com.itko.lisa.ext.util.fix.FIXDictionaryParser;

// Shared functions
private getFIXValue(String key, String message) {
    // FIX delimiter is SOH (AKA 0x01 or Control-A)
    char CtrlA = 0x1;
    String controlA = Character.toString(CtrlA);

    MsgValueStartPos = message.indexOf(controlA + key + "=") + key.length() + 2;
    MsgValue = message.substring(MsgValueStartPos, message.indexOf(controlA, MsgValueStartPos));
    if(MsgValue == null) MsgValue = "";
    return MsgValue;
}

private convertDateLong(dateTime) {
    SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSSZZZZ");
    Date parsedDate = sdf.parse(dateTime);
    sdf.applyPattern("yyyyMMdd'-'HH:mm:ss.SSS");
    dateTime = sdf.format(parsedDate);
    return dateTime;
}

private convertDateShort(dateTime) {
    SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd");
    Date parsedDate = sdf.parse(dateTime);
    sdf.applyPattern("yyyyMMdd");
    dateTime = sdf.format(parsedDate);
    return dateTime;
}

private getFIXkey(FIXvalue) {
    // Reverse lookup: walk the dictionary's field map until we find the name we want
    FIXDictionaryParser fixDict = FIXDictionaryParser.getInstance();
    fieldMap = fixDict.getFieldMap();
    Iterator iterator = fieldMap.keySet().iterator();
    String key = "";
    String value = "";
    while(iterator.hasNext() && !value.equals(FIXvalue)) {
        key   = iterator.next();
        value = fieldMap.get(key);
    }
    return key;
}

//
// Read the raw message
//
byte[] rawMessage;
boolean runningAsVSE;

String message = testExec.getStateValue("flMessage");
if (message == null) {
    rawMessage = lisa_vse_response.getBodyBytes();
    message = new String(rawMessage);
    runningAsVSE = true;
} else {
    runningAsVSE = false;
}

String myBody = message;

_logger.debug("Message to parse is {}", myBody);
// Strip the enclosing <fix_response> wrapper, leaving one XML-tagged field per line
myBody = myBody.substring(myBody.indexOf(">") + 2, myBody.lastIndexOf("<") - 1);

// Convert each <FieldName_tag>value</FieldName_tag> line back to tag=value
line = myBody.split("\n");
myBody = "";
for(thisLine : line) {
    endTagPos = thisLine.indexOf(">");
    startTagPos = thisLine.substring(0,endTagPos).lastIndexOf("_") + 1;
    startValPos = endTagPos + 1;
    endValPos = thisLine.lastIndexOf("<");
    resultLine = thisLine.substring(startTagPos, endTagPos) + "=" + thisLine.substring(startValPos, endValPos);
    myBody = myBody + resultLine + "\n";
}

// Expand all the properties. We need to do this before calculating the body length
// and checksum, and it might be a good idea to do it before we replace the ^J with ^A,
// so let's try it here
myBody = testExec.parseInState(myBody);

// Convert all the date fields back to FIX format
String[] longDateFields = {
    "SendingTime",
    "OrigSendingTime",
    "TransactTime"
};
String[] shortDateFields = {
    "TradeDate",
    "LegSettlDate"
};

for(dateField : longDateFields) {
    dateFieldKey = getFIXkey(dateField);
    startPos = myBody.indexOf("\n" + dateFieldKey + "=");
    endPos = myBody.indexOf("\n", startPos + 1);
    // This is general for a set of virtual services, but only some FIX services use
    //   the fields that we're looking for. Therefore, let's check to make sure we
    //   aren't trying to replace a non-existent value.
    if(startPos > 0) {
        dateToConvert = myBody.substring(startPos + dateFieldKey.length() + 2, endPos);
        convertedDate = convertDateLong(dateToConvert);
        myBody = myBody.substring(0,startPos) + "\n" + dateFieldKey + "=" + convertedDate + myBody.substring(endPos);
    }
}
for(dateField : shortDateFields) {
    dateFieldKey = getFIXkey(dateField);
    startPos = myBody.indexOf("\n" + dateFieldKey + "=");
    endPos = myBody.indexOf("\n", startPos + 1);
    // Same check as above: skip fields that aren't present in this message
    if(startPos > 0) {
        dateToConvert = myBody.substring(startPos + dateFieldKey.length() + 2, endPos);
        convertedDate = convertDateShort(dateToConvert);
        myBody = myBody.substring(0,startPos) + "\n" + dateFieldKey + "=" + convertedDate + myBody.substring(endPos);
    }
}

testExec.setStateValue("Whole message", message);

char CtrlA = 0x1;
String controlA = Character.toString(CtrlA);
// Perhaps I should leave something for the FIX DPH to do?
//controlA = "\n";

myBody = myBody.replace("\n", controlA);
testExec.setStateValue("myBody", myBody);

// FIX requires certain control fields to have the correct values:
// "MsgSeqNum" must increment with every message sent from client to server, with a separate
//   counter for server to client. The client will check that MsgSeqNum is higher than the
//   previous one, unless ResetSeqNumFlag_141 is set to Y, in which case it resets to 1.
// "BodyLength" is the number of characters (including delimiters) between the delimiter
//   ending the "BodyLength" field and the beginning of the "CheckSum" field.
// "CheckSum" is the byte sum of the message preceding the CheckSum field, modulo 256.

// Store the response MsgType in case we need to loop responses
testExec.setStateValue("responseMsgType", getFIXValue(getFIXkey("MsgType"), myBody));

// Use a SharedModelMap for MsgSeqNum, so we can have lots of counters,
// each one defined for a specific server and client
SMM_ns = getFIXValue(getFIXkey("SenderCompID"), myBody);
SMM_key = getFIXValue(getFIXkey("TargetCompID"), myBody);
resetMsgSeqNum = getFIXValue(getFIXkey("ResetSeqNumFlag"), myBody);
String newMsgSeqNumStr = "1";
if(resetMsgSeqNum.equals("Y")) {
    com.itko.lisa.vse.SharedModelMap.put(SMM_ns, SMM_key, "1");
    newMsgSeqNumStr = "1";
} else {
    newMsgSeqNumStr = com.itko.lisa.vse.SharedModelMap.get(SMM_ns, SMM_key);
    if(newMsgSeqNumStr == null) newMsgSeqNumStr = "1";
    int newMsgSeqNum = Integer.parseInt(newMsgSeqNumStr);
    newMsgSeqNum++;
    newMsgSeqNumStr = String.valueOf(newMsgSeqNum);
    com.itko.lisa.vse.SharedModelMap.put(SMM_ns, SMM_key, newMsgSeqNumStr);
}
MsgSeqNumField = getFIXkey("MsgSeqNum");
MsgSeqNumFieldValue = getFIXValue(MsgSeqNumField, myBody);
myBody = myBody.replace(controlA + MsgSeqNumField + "=" + MsgSeqNumFieldValue, controlA + MsgSeqNumField + "=" + newMsgSeqNumStr);

// Count characters for BodyLength
startPosBodyLength = myBody.indexOf(controlA + getFIXkey("BodyLength") + "=");
startPos = myBody.indexOf(controlA, startPosBodyLength + 1) + 1;
endPos = myBody.indexOf(controlA + getFIXkey("CheckSum") + "=") + 1;
String bodyPart = myBody.substring(startPos, endPos);
int bodyLength = bodyPart.length();
myBody = myBody.substring(0, startPosBodyLength) +
    controlA +
    getFIXkey("BodyLength") + "=" + Integer.toString(bodyLength) +
    controlA +
    myBody.substring(startPos);

// Calculate the correct checksum for this message
endPos = myBody.indexOf(controlA + getFIXkey("CheckSum") + "=") + 1;
char[] inputChars = myBody.substring(0, endPos).toCharArray();
int checkSum = 0;
for(char aChar : inputChars) {
    checkSum += aChar;
}
String myChecksum = Integer.toString(checkSum % 256);
while(myChecksum.length() < 3) {
    myChecksum = "0" + myChecksum;   // CheckSum is always three digits, zero-padded
}
myBody = myBody.substring(0, endPos) +
    getFIXkey("CheckSum") + "=" + myChecksum +
    controlA;

rawMessage = myBody.getBytes();
_logger.debug("Final message in text:\n{}", myBody);
_logger.debug("Final message in bytes\n{}", rawMessage);
if(runningAsVSE) lisa_vse_response.setBody(myBody);
return myBody;
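
As a quick sanity check of those runtime rules, here is a tiny standalone sketch (my own, using an invented three-field body) that applies the same BodyLength and CheckSum arithmetic as the script above:

char SOH = 0x1;
String body = "35=AE" + SOH + "49=SENDER" + SOH + "56=TARGET" + SOH;

// BodyLength (tag 9) counts every character, including SOH delimiters, between
// the SOH that ends the BodyLength field and the start of "10="
String head = "8=FIX.4.4" + SOH + "9=" + body.length() + SOH;

// CheckSum (tag 10) is the byte sum of everything before "10=", modulo 256,
// always rendered as three digits
int sum = 0;
for(char c : (head + body).toCharArray()) sum += c;
String checkSum = Integer.toString(sum % 256);
while(checkSum.length() < 3) checkSum = "0" + checkSum;

String message = head + body + "10=" + checkSum + SOH;
System.out.println(message.replace(SOH, '|'));   // '|' makes the delimiters visible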

Multiple Responses

There are many ways in which multiple responses can be supported in DevTest. The method I chose was to create a scripted assertion. This assertion loops back to the response-selection step when the response MsgType matches specific values, which I set to TradeCaptureReportRequestAck and TradeCaptureReport. To enable this, the response replay script above sets a property called “responseMsgType”, which is read by this scripted assertion:

// This script should return a boolean result indicating the assertion is true or false

/*
We need to get the response type from the response.
We also need to have a counter, so we know how many times we've responded.

This assertion needs to make sure that, when we've responded to a TradeReportRequest with a
TradeReportRequestAck, we will then respond with a number of TradeRequestReport messages.

I have created multiple responses to TradeReportRequest, and I hope to be able to simply
loop around the VSM to pick the next response from the list, until my counter is satisfied.
*/

_logger.debug("TradeReportLoopAssertion: Looping to find multiple responses");

int responsesToSend = 10;
int responseNumber;

myResponse = testExec.getStateString("responseMsgType", "none");
_logger.debug("TradeReportLoopAssertion: Found MsgType {}", myResponse);

switch(myResponse) {
    case "AE":
        // we are looping, up to responsesToSend times
        responseNumber = testExec.getStateInt("responseNumber", 0);
        responseNumber++;
        testExec.setStateValue("responseNumber", responseNumber);   // persist the counter for the next pass
        if(responseNumber <= responsesToSend) {
            _logger.debug("TradeReportLoopAssertion: Looping turn {} of {}", responseNumber, responsesToSend);
            return true;
        } else {
            _logger.debug("TradeReportLoopAssertion: Finished looping after turn {} of {}", responseNumber, responsesToSend);
        }
        break;
    case "AQ":
        // we want to start looping
        testExec.setStateValue("responseNumber", 1);
        _logger.debug("TradeReportLoopAssertion: Setting responseNumber to 1");
        break;
    default:
        // this isn't a response we need to loop
        _logger.debug("TradeReportLoopAssertion: Not looping response MsgType of {}", myResponse);
        break;
}

return false;

One thing to note here is that multiple responses to a single TCP request are not a common use case. The DevTest recording wizard does not support it, so I needed to find a different way to add multiple responses. The client application for this FIX server was logging all FIX messages, so I was able to create a test in DevTest to parse the log files and generate request-response messages, which could then be included in my service images as multiple responses to a single request.
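
To illustrate the log-parsing idea, here is a minimal sketch. The log format is an assumption for illustration (one raw FIX message per line, with '|' standing in for SOH), and the file name is invented:

import java.io.BufferedReader;
import java.io.FileReader;

BufferedReader reader = new BufferedReader(new FileReader("fix-client.log"));   // hypothetical log file
String line;
while((line = reader.readLine()) != null) {
    String msgType = "";
    for(String field : line.split("\\|")) {
        String[] kv = field.split("=", 2);
        if(kv[0].equals("35")) msgType = kv[1];   // tag 35 is MsgType
    }
    // Messages can now be grouped into request-response pairs for the service image,
    // for example by pairing each request MsgType with the responses that follow it
    System.out.println("MsgType " + msgType + ": " + line);
}
reader.close();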

 

Recording Sequence

Make sure the FIX JAR add-in is copied to the DevTest hotDeploy directory. This will add FIX as an option in Transport Protocols (which I do not use), in FIX delimiters (which I use) and in the FIX Data Protocol (which I use).

(Screenshots Picture1.png to Picture7.png show the recording wizard steps.)

 

After Recording

Add the scripted assertion to the respond step

 

(Screenshot Picture8.png shows the scripted assertion attached to the respond step.)

Final Thoughts

If the FIX add-in were updated to full functionality for DevTest 8.x, would I use it? I probably would, as long as I could rely on it to perform the required runtime actions and to convert the response message to XML. I would probably still need to add a scriptable DPH for the date conversions, and to extract the response MsgType so I could loop for multiple responses when required.

Is my script optimal? Actually, no. There are many ways to perform these actions in Java-style scripts, and you can see in my examples that I use iterators, arrays and for loops with no concept of a style guide. When I see a style guide, I will update my scripts accordingly.

I wrote this on 17 December 2014, but never uploaded it here. As DevTest development has become even more agile, and discussions have continued regarding version control, perhaps I should publish it?

 

We’ve all been there. In a meeting with an existing or a prospective customer, the question will be asked:

Does your product integrate with our project planning* / ALM* / source control* / logging* / ESB* / banking system* / SAP* / spreadsheet* / word processing* / search tool* / development tool* / continuous delivery mechanism* / single sign on*?

*delete or add as appropriate

The answer is sometimes yes, but more often no.

Why is this? What are the implications? Should we provide integrations, and is there value in doing so?

 

As a real example, my partner wanted to upload a photo at the weekend, because the Internet doesn’t have enough cat photos. Her go-to photo sharing application has changed over the years, as many sites have that functionality. She had a choice of Facebook, Instagram, Photobucket, Flickr, Picasa, LiveJournal, iCloud, or even our own media server at home from where we can make photos available on the cloud. She decided that Flickr would be where she wanted to put this photo.

 

Fast forward 30 minutes and much swearing at poor user experiences, and the photo appeared on Imgur.

The photo

A quick aside here – the hyperlink above might not display correctly in all browsers, and isn’t as in-line as others you might have seen. This is because I am writing this article in MS Word rather than in a browser-based editor. I have been stung too many times by integration failures: where pressing the “backspace” button does a “back” in my browser and loses my whole entry; where an integration link between cloud-based storage and cloud-based version control fails and loses my whole entry; where things just don’t work cleanly because providers try, and fail, to provide seamless integrations.

 

It’s hardly a ground-breaking photo – the soppy cat is cute, even if I don’t think much of the accompanying human – but it should have been simple to share, so I asked her what had been wrong with Flickr that meant she uploaded to Imgur. She mentioned a painful login process, not knowing where her old Flickr login name and password had gone, a login page plastered with Tesco advertisements, Yahoo! logos all over the place, and a complete lack of enjoyment in the experience.

 

I am aware that Flickr was bought by Yahoo! a few years ago, and that it was a great photo sharing site before then, but has something gone wrong with it? I must admit, I don’t take many photos, and when I do, my phone just syncs them to whatever service I have it linked to. I’m not certain, but I presume my Android phones automatically upload to some Google service, and my iPhone automatically uploads to Dropbox (unless I have blown my usage limit … again). So perhaps some investigation is needed into Flickr, and into how it lost its place as the preferred site for sharing photos.

 

I found various articles talking about a slow demise; an increasing lack of relevance of Flickr for today’s consumers. This is strange, as Flickr had user-contributed tags, comments, multiple photo sizes – in fact, all the facilities I would expect in a photo sharing service – and it had these 10 years ago, when Facebook was a University project, when Twitter was what birds did, when Imgur was simply a misspelling. Investigating further, it seems that the Flickr engineering teams were tasked with integrating Flickr into the Yahoo! ecosystem instead of creating innovative new facilities that would have kept it at the forefront of photo sharing. The descriptions of its decreased usage explained how the mobile strategy came from Yahoo!, how a large company can mismanage a small one whilst making good business decisions, how the reasons for a takeover can fail to match the perception of either company in the marketplace, and how user requirements can be neglected in the push for business requirements.

 

I wondered if I could draw parallels from this in my own working life, as I have been employed by a couple of companies that have been subject to takeover, and I have occasionally been frustrated that our product does not integrate cleanly, or sometimes at all, with complementary products, both those sold by our company and those from third parties.

 

What is integration? It sounds easy, and it should be, but there are actually two distinct areas that can sometimes get blurred:

  1. Front-end integration, where I perform a specific action to link what I’m doing to something elsewhere. An example of this might be pressing a button in Imgur to send my photo to Facebook, or it might be to drag service performance profiles from APM into a virtual service.
  2. Back-end integration, where a server-side linkage makes a difference to what services are provided to me. An example of this might be the Yahoo! single sign-on for Flickr, or the way your mobile and online banking keep your data in sync.

 

These integrations all sound great! Why wouldn’t we want all these seamless links? They all make our lives better, increasing usability and adding value to every service. In fact, some of the facilities on which we rely just wouldn’t be possible without these integrations. All of this is true, but there are some caveats and problems that don’t become evident until we investigate further.

 

Let’s look at what happens with a product that is designed from the ground up to be integrated. In the “consumer world”, this type of application would include modern photo sharing apps (I’ll come back to Flickr later), payments for online shopping, mobile banking and checking in for a flight. In the “corporate world”, this would include a continuous delivery tool chain and straight-through processing in an investment bank. All of these applications have one thing in common: small pieces of functionality are developed very quickly. If functionality is not developed quickly and released frequently, problems appear, such as the issues ING bank had when its mobile app was voted by users the worst of all banking apps. How did they fix it? They moved to frequent updates, so the app had added facilities every day. Users could provide a different voting score every day, comment on enhancements, and add to the requirements of the app knowing that their requirements would be catered for almost instantly, and the ING mobile app has recently been voted by its users the best of all banking apps. Empowering users like this is a great way to build brand loyalty, as users feel part of the improvement process.

 

What about applications that aren’t constructed with integration at their heart? For consumers, this might be MS Office. For corporates, this might be SAP. There are various after-market additions that can be made to most consumer applications, and the ability to export files in multiple formats, but for corporates, a long-running integration effort needs to be undertaken to make data available from the place it is created to the place that needs to consume it. This is why SAP implementations can take multiple years, why your Excel spreadsheet doesn’t quite display correctly in Libre Office, and why version control for documents has never completely taken over from file system storage.

 

Returning to Flickr … it was an extremely early implementation of social networking. It had built-in facilities that we take for granted nowadays, but there was nothing for it to integrate with, so when it was purchased by Yahoo!, the integration options were mainly with other Yahoo! services – hence the Yahoo! single sign-on option for logging in to Flickr, and the fact that a Yahoo! search will display indexed Flickr photos. However, this is not what users now want. Users don’t want to have to log in, they want everything indexed rather than a single provider of photos, they want to blog with embedded photos, they want a rich mobile app experience, they want a Web 2.0 experience, and Flickr hasn’t kept up with the trends in user requirements. Visionaries in Flickr and visionaries in Yahoo! had different ideas, and somehow, this meant that users didn’t see continuous improvement.

 

What is “continuous improvement”? A trite response is “making things better, and doing it constantly”. The Flickr / Yahoo! story is about making things better, but who were they making things better FOR? Flickr developers were concentrating on making things better for the Yahoo! corporation, which was not necessarily making things better for users. While they were doing this, users saw no improvement in services, and saw impediments to performing the actions they wanted (for example, needing a Yahoo! login, and the mobile app simply being a façade to the web page), and so other online services gained traction.

 

Let’s bring this to corporate services. We work for a huge software company, where there are some overlapping products and some complementary products. We also have competitors with tools in similar areas, and there are third-party tools that could provide interesting value propositions. So, what should we do about them? In CA, we have a large number of developers, both for specific product lines and specifically for integration. Corporate pressures mean that cost justification needs to be made for product enhancements, and the implementation of inter-product integrations can divert development funds away from product lines. This justification is good and necessary for the company, but not necessarily good for users. Users have specific requirements for integration, and we need to make sure we continue to do the right thing for users, assuming it’s also the right thing for the company, whilst making sure we ignore neither users nor the company. User requirements can be gathered in a number of ways. For us, the CA Communities are a great resource, where users are able to input their requirements and other users are able to vote on them. If enough users add their vote to a requirement, product management are able to view these and decisions can be made as to whether the user requirements match corporate direction. If so, they can be added to the backlog of future enhancements. A backlog intimates that any development would happen in an agile manner, so the specific piece of functionality matching a validated requirement can be delivered while it’s still relevant to user needs.

 

What about other integration requirements? I know that users have requested version control integration in LISA since I started working with it 5 years ago (5 year anniversary last week!!), and that users have always asked about more complete integrations with ALM tools, project management integration, direct linking from development tools and, more recently, continuous delivery and agile portfolio management. Where it has made the most sense, both with internal tools and in providing automation for select third-party tools, LISA (now “CA DevTest”) already contains integration points. But there is a vast number of other tools where integration might be useful.

 

What kinds of integration would be useful? Earlier in this article, I mentioned two types of integration. For our platform, and the use cases that provide the most value for users, the initial integration point is the front-end, or automation of user operations. This means that DevTest needs to be able to be orchestrated by external tools, such as CA Release Automation, or even Jenkins or Ant.

 

The secondary integration point would be the back-end, or facilities providing data to DevTest from remote applications. This could be things like version control, PPM or a service catalogue. All of these integrations would be useful, but do they add value both to users and to the company, while being more important than other functionality that we need to keep implementing to keep DevTest innovative, thought-leading and ahead of any competition? This is potentially far more difficult than the initial integration points, as we need to ensure that DevTest does not lose focus while still keeping everyone happy. I am not sure that we are best positioned to actually do this work!

 

That is a vast and potentially divisive statement! What on earth could I mean by this? Well, keeping DevTest bounded to what it excels at is the best way of not losing focus. Much development has been done in the recent past, and is still ongoing, to provide RESTful interfaces to the major functionality of DevTest, and DevTest has supported command-line interactions for a long time. By continuing to enhance these open integration server components, we provide the ability for others to do the actual integration work. If others do that work, we don’t have control. What we do have is the focus of doing what we do to the best of our ability. If we don’t have control, who does? Whoever writes the integration layer has the control. Integration layers are backbone systems that have been bubbling under the radar of most people for many years, all the way from point-to-point synchronisation servers that move a single piece of data from one database to another, through to enterprise service buses that allow applications to publish information and others to subscribe to it.
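
To make that concrete, this is the sort of call an external orchestration layer could make against such an interface. A sketch only: the host, port and endpoint path below are invented for illustration, not the documented DevTest API:

import java.net.HttpURLConnection;
import java.net.URL;

URL url = new URL("http://devtest-server:1505/api/tests/run?test=MyTest");   // hypothetical endpoint
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setRequestMethod("POST");
System.out.println("DevTest returned HTTP " + conn.getResponseCode());
conn.disconnect();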

 

But how would it work in our environment?

 

We have an integration team. Product lines fund them. Product lines only have a finite amount of funds, and therefore any integration work must divert funds from core functionality, hampering how many innovations can be included. So they need a different funding model. Assuming they add value to cross-product sales, funding would logically come from cross-product revenue (e.g. ELA renewals). If these integrations aren’t adding value, the technology would presumably be made open source instead of a product.

 

This might, actually, be a good idea. Open-sourcing some internal CA integration bus development work, providing third parties the ability to create their own back-end integrations with our tools, would mean that we have the potential for first-class integrations of our own tools to many applications from many vendors and open source projects. The profile of CA Technologies as a FOSS-friendly company would increase, we would have the opportunity to become thought leaders in the technology side of the application economy, our corporate marketing could make a big push on this and we would even be able to evolve “Business, Rewritten By Software” to something like “Business Reimagined By Software”.

In this blog entry, I intend to argue that test management facilities need to be replaced with test data management functionality, to automate the creation of our tests, to remove the need for micro-managing the testing process, and to increase the velocity of release from testing phases whilst improving the quality of those releases.

 

Ad-hoc Testing

My entire working life has involved testing. I joined TIS Ltd in 1987 as a Commissioning Engineer, and my main task was to configure hardware and install Unix SVR2 onto imported Convergent Technologies systems and soak-test them by running file system checks for days on end. To ease my workload and increase repeatability, I learned how to write and use shell scripts. After a few days, I would simply tick the commissioning document to show that the systems were functional before shipping them.

 

I was first involved with more structured testing when I worked for Mannesmann Kienzle in 1991, when we needed to understand the scalability of the Solicitors’ practice management application we were about to release. We co-ordinated a load test by having a person stand at the front of a training room and shout for everyone to press “Enter” on their keyboards at the same time. A group of us would run around all the terminals pressing the key and counting how long it took for the application to respond. Database deadlocks were found and the procedure was classed as a success.

 

The Mercury Interactive Years

My personal involvement with test management started when I joined Mercury Interactive in 1997, and my product training included TestDirector v1.52b. My trainer, Elspeth, described it as the defect tracking tool that our developers wrote for themselves, which we were now providing to customers because it could execute tests written in our “WinRunner” tool.

 

In the UK, the Mercury Interactive pre-sales and sales teams started heavily promoting test management as a way to test more effectively, which we claimed should be a precursor to testing more efficiently. We led with test management before test automation, to make sure the right tests were being run. In this way, we were able to gain competitive advantage over the other automated testing vendors, who were at that time Rational (with Robot) and Compuware (who were just releasing their own test execution tool, after previously reselling WinRunner).

 

It took some persuasion from the UK team to Mercury Interactive Product Management to raise the internal profile of TestDirector, but we were able to get better test storage & execution, the addition of requirements, and improved metrics & reporting. The big change, however, came with the release of TestDirector 6.0, which provided test management functionality in a browser.

 

On investigating the different versions of TestDirector, I discovered that v2 was written in Borland Delphi coupled with a Borland Paradox database – a straightforward desktop application. TestDirector v4 added support for client-server through the BDE database abstraction layer. On seeing v6 (there was no TestDirector v3 or v5, by the way), I noticed the icons were still classic Borland shapes, with the stylised ticks and crosses to denote passes and failures, so I asked about the technology, and was told that the front-end was still coded in Delphi, but compiled to an ActiveX control so it could be executed in an ActiveX container, such as Microsoft Internet Explorer.

 

Over the years, we repeatedly asked for TestDirector (and “TestDirector for Quality Center”, and “Quality Center”) to be a “real” web application, but the only change made was to provide a limited defect-tracking view in pure HTML to allow developers to access defects without needing to install the whole application into Internet Explorer.

 

As far as I’m aware, the current release of TestDirector, “HP ALM v12”, is still a Delphi application running inside Internet Explorer. There are metrics analysis facilities provided by HP that are built in more recent technologies, but the core product is the same as it was 15 years ago.

 

The IBM View

I left Mercury Interactive in 2007, after we were bought by HP. I moved to IBM, where I was involved in the beta-testing of the new quality management application, “Rational Quality Manager”, built on the Rational “Jazz” platform. I liked the fact that the storage of test artefacts was synchronised between client nodes, so there was no need for a big central server. However, the application still looked and felt like the Eclipse platform in which it was built, and the platform was more important than user requirements. When you need to understand workspaces and perspectives, regardless of your function in the SDLC, the barrier to usage is increased.

 

Oracle

When I moved to Oracle in 2008, to rejoin Elspeth in a new company, one of the things that attracted me was the ease-of-use advantage that those testing tools claimed over the HP/Mercury tools. I found that this wasn’t actually the case; it was really just a different way of looking at testing (directly using the DOM instead of abstracting it away into object maps). The test management platform was being re-architected to use Oracle facilities, such as WebLogic and the Oracle 11g database, but the thought was always that HP/Mercury had the mind-share in test management, and competing head-to-head was always going to be difficult. Having said that, the quality artefacts stored inside Oracle Test Manager are requirements, tests, test runs and defects, with a metrics & reporting layer across them, just like ALM, so it looks like Oracle are happy to let HP be the thought leader in this market, and simply claim advantage through a better ESB layer.

 

iTKO & CA

I joined iTKO in December 2009, and found a completely different way of looking at testing. “Invoke And Verify” was the watch-phrase: performing an automated task at one level in an application and checking for correct behaviour inside the application architecture. Test management was largely ignored, as being process-heavy for this kind of testing. LISA (the automated test tool) tests could be linked to TestDirector using its “VAPI-XP” test type (a way to use various scripting languages directly from TestDirector to perform simple automation tasks), and also linked to Rational Quality Manager through the new IBM Open LifeCycle Management API. At this point, I started to wonder at the ongoing viability of “classic” test management platforms. As the SDLC has moved away from waterfall & V-Model methodologies towards agile delivery, DevOps and continuous delivery, I have come to realise that test management might be dying, and test data management is replacing it.

 

Test Management is Dead

Ok, this might be a divisive statement, but bear with me.

So, you’re a test management professional. What are you actually using test management for? These are usually the main reasons:

  • Requirements Management
  • Linking requirements to tests
  • Storage of tests
  • Grouping tests into test runs
  • Input of defects
  • Linking defects to test runs
  • Linking defects to requirements
  • Analysis

 

Let’s arrange those reasons a little differently:

  • Process stuff
    • Requirements management
    • Linking requirements to tests
    • Storage of tests
    • Grouping tests into test runs
    • Input of defects
    • Linking defects to test runs
  • Analysis stuff
    • Linking defects to requirements
    • Analysis

 

Process Stuff

Process stuff is the “what we use test management for” section. It’s not specifically business benefits; it’s use cases.

Requirements Management

Is the test management platform really the best place to store requirements? Requisite-Pro, Caliber-RM, Select Enterprise, etc., all do a better job of storing requirements. More recently, epics and user stories have started to replace classic requirements, allowing for faster delivery of features that are important to users. The main reason for requirements to be embedded in the test management platform is for linking to tests.

Linking requirements to tests

We need to know that we are creating tests for a reason, and we need to know that our coverage of requirements with tests is complete. There is no point creating a test if it does not meet a requirement. But what should the linkage be? Should it be a description detailing the use case, or should it be the use case itself, as defined, these days, in Cucumber or some other machine-friendly syntax?

Actually, it really should be the data that exercises the specific business flow, and therefore the requirements management platform should be a description of that business flow into which we can add all the data to exercise both positive and negative flows. So we need to revisit requirements management platforms to use something that is data-centric, with user stories and epics, more than classic requirements.

Storage of tests

Tests are stored and linked in the test management tool, along with metadata about the test. But a test is either source code or object code, and source code repositories already exist (such as Git or SVN), as do object code repositories (such as Artifactory or Nexus), so the storage of tests inside test management is really so they can be grouped for execution.

Grouping tests into test runs

I am going to run my tests as a group. Why am I going to do this? Because my business flow has a specific order in which things will be executed.

Where is my business flow described amongst the requirements that have been captured? We haven’t actually defined it, as the number of tools providing this capability is severely limited. We’ve already said that data and business flows are closely linked, so let’s investigate that area instead of manually defining business flows here, in the spirit of automation.

Input of defects

Defects in an application are managed, progressed, analysed and stored in developer-centric defect tracking tools. The only reason for adding defects to test management is to link the defects to the runs, so the developer knows what test caused a defect to occur.

Linking defects to test runs

So the developer needs to enter test management to see what tests caused a defect to happen? There are a couple of reasons for this:

  1. The defects were manually input into the test management platform, and are not available elsewhere, with no automated record kept of how the test interacted with the application architecture (something that could have been passively recorded and logged automatically).
  2. The data for the business process wasn’t defined and shared well enough, so the data in the application under test can’t be mined to retrieve details of the problem.

If these two points are fixed, there is no need to join the defect with the run – the defect is simply something unexpected that happens when a specific combination of data causes the business process to act in an unforeseen manner.

 

Analysis Stuff

Analysis stuff is “why we use test management”. What are we testing? Why are we testing it? It’s the business value of test management.

Linking defects to requirements

The process section linked defects to test runs, grouped tests to test runs, and linked tests to requirements. But there is a common theme in the textual arguments in each section, and that theme is data-centric business flows.  Go back and re-read those bits, as they will become increasingly important - I’ll wait here for you.

 

So, if we can get our data right, and get our business flows right, and do it in an automated manner, we remove lots of manual processing overhead.

Analysis

So finally we come to the big reason; the actual reason; the lynchpin as to why we are using test management at all. Test management is there specifically to answer one question. It answers bits of the question in different ways, and it provides measures of justification, risk mitigation and visibility, but when you strip away all the verbiage there is really only one question that test management answers, and that question is:

When can we stop testing?

What a strange thing! That can’t be right! What about “what is our testing progress?”, “what is our requirements coverage?”, or “how many defects have we found in the past week?”? If you look at each of these questions, they all have a subtext, and they are really getting at “when can we stop testing?”, so our analysis is there to answer that question.

When Can We Stop Testing?

So now we know that test management is there only to answer the question “when can we stop testing?”, and everything else is done best by the use of data-centric business flows, how is analysis providing this answer? Metrics and reporting.

We can retrieve graphs and reports for requirements coverage, test execution coverage, test progress, defects found over time, and any number of other graphs and tables showing the thousands of tests, test runs and defects generated since the start of this phase of the SDLC, and perhaps even compare them against previous releases. All valuable measures, but perhaps the most valuable is the bell-shaped graph that shows the measure of defect closure. This graph is not a count of raw defects, or even of closed defects; it is the trend of the rate at which defects are being closed against the rate at which they are being discovered, over time. Of course, the larger the testing effort, the longer it takes to reach the ideal trending point, so some testing phases will be curtailed to meet required release dates, and the analysis metrics will provide some kind of risk profile for the statement “we need to release now, regardless of whether you have completed your testing”.

 

So, there are two answers to “When can we stop testing?”. One is “when our quality curve has hit a certain angle of decay” and the other is “when we have run out of time”. The answer possibly should be “when we have finished our testing and passed our quality gate”, but we always have too many test cases and never enough time.

 

Test Data Management is the new Test Management

We have too many test cases because we haven’t used data-centric business processes. We have added all our requirements into our test management tool and we’ve crafted tests to meet those requirements, joining those tests together to make sensible runs. We then need to add data to all of that, and suddenly our test effort extends from weeks of careful planning and construction to months of trying to work out what data does what to our business processes, what data scenarios can be lifted from our live databases, how we can circumvent Sarbanes-Oxley, PCI DSS and HIPAA regulations that stop us using our own data to make sure new versions of our applications work as expected. We suddenly have a perfect storm of complications that mean we can never finish our testing!

 

If we had put test data at the heart of our testing, we could have avoided all these problems, but this has its own issues: we need to extract data from live systems where cleanliness is unknown, we need to discover what pieces of data are relevant, we need to mask the data that is sensitive, we need to make sure that we retain referential integrity across data sources that might not be linked, we need to determine negative and boundary scenarios along with positive ones, we need to understand where data is used in business processes, we need to create a store of all this data, we need to find gaps in the data we can mine from production and we need to generate relevant minimised subsets of data to get best coverage from the smallest possible data sets. In effect, we need to implement disciplined, complete test data management. From this test data management, we need to map business processes and generate tests and back-end data to exercise our applications that we need to test.

 

Test data management has existed for as long as test management has, but test data is a more difficult challenge to manage effectively. The barriers to test management have always been lower, so test management has been implemented and data has become side-lined. Few organisations have persevered with the knotty mathematical problems associated with combinatorial testing techniques, but no longer!

 

Advances in test data management techniques, along with the ability to define business processes from a data-centric viewpoint and the need to implement continuous delivery practices, have recently enabled us to minimise the number of tests being executed while achieving far better coverage. Instead of applying data sets of thousands or millions of rows of little-understood test data to tests, we can now provide testers with the smallest sets of data they require to completely and properly exercise their applications. Instead of test runs taking weeks to execute, they can now take minutes, with more combinations being validated. We no longer need to ask “When can we stop testing?”, because our minimal set of tests runs in minutes and presents our developers with all the defects. Developers can see where tests fail, why tests fail, what data combinations cause tests to fail, what paths are taken by testers through their application infrastructure, how to triage those defects and the root cause of problems.
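
As a toy illustration of the combinatorial idea (my own sketch, not a description of any product's algorithm): three parameters with three values each need 27 exhaustive combinations, but the nine rows below, a standard orthogonal array, cover every pair of parameter values, as the loop verifies:

import java.util.HashSet;
import java.util.Set;

int[][] rows = {
    {0,0,0}, {0,1,1}, {0,2,2},
    {1,0,1}, {1,1,2}, {1,2,0},
    {2,0,2}, {2,1,0}, {2,2,1}
};
// For every pair of parameter columns, check that all 9 value pairs appear
for(int a = 0; a < 3; a++) {
    for(int b = a + 1; b < 3; b++) {
        Set seen = new HashSet();
        for(int[] row : rows) seen.add(row[a] + "," + row[b]);
        System.out.println("Columns " + a + "/" + b + ": " + seen.size() + " of 9 pairs covered");
    }
}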

 

There are a few productised aspects to this:

  1. Complete test data management
  2. Business process design
  3. Passive recording of test paths through applications
  4. Automated data-centric testing

 

These four facilities completely remove the need for test management, because defects are automatically linked to data scenarios, the appropriate business processes are executed automatically, and it all happens at the press of a button at code check-in time, at build time and at deploy time. It also happens every few minutes, so if some code is released through an exception process, it is tested before any tester or user is even aware of the change.

 

In Conclusion

We are living in the age of “automate the automation”. Historic manual processes are being removed, technical testers are able to use their expertise delivering quality functionality, business testers are able to run their ad-hoc business processes without the need to log functionality problems, developers are directed at the place in their code where problems exist, and the entire SDLC process is streamlined.

 

The goal that we can now reach, by implementing these new automation processes and concentrating on test data management rather than test management is:

Question: “When can we stop testing?”

Answer: “We already finished testing, the developers already fixed their code, we already verified that it’s working, and the users are already running through their edge cases”

 

Capabilities mentioned here are contained within the following products:

 

CA DataFinder

CA Agile Designer

CA Service Virtualization

CA Continuous Application Insight

 

Copyrights acknowledged for all third-party tools