11. The Trend Analysis Web Application

Using the Trend Analysis Graphic, SLA and tooling configuration

The Trend Analysis Web Application is at the heart of Mark59, and where we believe the real value of the framework lies. It consists of a number of web pages which perform the following functions:

  • The Application Dashboard. Overview page of all recorded applications.

  • The Trend Analysis Graphic. Graphical representation of the run history of an application.

  • Application Run List. Maintenance and summary of the runs for an application.

  • SLA Transactions and SLA Metrics. Used to set expected Application SLAs.

  • Event Mapping Admin. Used to match, map and name a test result to a specific Metric data type.

  • Graph Mapping Admin. Panel which allows maintenance of the graphs list in Trend Analysis.

The Application Dashboard

The current status of applications under test at a glance. The SLA state refers to the last test for the application. 'Active' or 'All' applications can be displayed. An entire application can be deleted from here - it must be set to inactive first.

By default only Active applications are listed in the Trend Analysis drop down

The Trend Analysis Graphic

The main selectors

Should be sufficient for the majority of cases (listed across the top of the page):

  • Active selector - change to list inactive applications as well.

  • applications - the application to display, as set during the database load

  • graph - selection of graph (as defined on the Graph Mapping Admin page)

  • latest runs - number of runs (from the most recent) to display

  • latest baselines - how many extra baselines, outside the 'latest runs', to display

  • display txn names at point - helps make the graph more readable at some angles

  • range bars - toggles the range bar display

  • Reset Graph Button - resets the graph back to its original aspect

  • Table Button - display/hide the comparison table

    • 'More Runs' button will show data for up to 5 further runs

    • "Txn Name", {metric}, Difference, Change(percent) can be used to sort data

  • Draw Button - re-draws the graph using the chosen selection criteria

The Trend Analysis main selectors

Advanced Filters

Are meant to be used less often, intended for when you are drilling down into the data for a specific reason - perhaps to present a particular result you want to highlight to the user community. There are subtleties in the way the selectors work and interact with each other; a good way to learn is simply to play around, and the DataHunter samples should be good for this.

Transaction Display Filters:

  • 'select txn (SQL like)', 'exclude txn (NOT like)' : refines the selection of transactions using the SQL 'like' syntax. For example, entering del_te in 'select txn' for the DataHunter sample (and clicking Draw) will display just the DH-lifecycle-0100-deleteMultiplePolicies and DH-lifecycle-9999-finalize-deleteMultiplePolicies transactions on the graph (a rough SQL equivalent is sketched after this list)

  • Manually Select Txns - allows a list of transactions to be specified

  • use raw 'Txn' SQL [Experimental] with 'select transactions SQL' : presents a text box with a rather scary looking bit of SQL. This is the SQL that is used to select the transaction names to display on the graph. You can use it as a basis to select pretty well whatever set of transactions you want; particularly powerful once you know the metrics database structure. There is no validation - you need to know what you are doing.

    • Hint 1 : The columns need to remain named as is for the main SELECT statement

    • Hint 2 : Use MySQL Workbench to help work out complex queries
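
To give a feel for what these filters translate to, here is a rough sketch of a query equivalent to the del_te example above. The table and column names are illustrative assumptions only, not Mark59's actual SQL:

  -- Illustrative sketch only: table/column names are assumed.
  -- 'select txn (SQL like)' of del_te behaves roughly like the LIKE below;
  -- 'exclude txn (NOT like)' would add a NOT LIKE clause in the same way.
  SELECT DISTINCT TXN_ID
  FROM   TRANSACTION
  WHERE  APPLICATION = 'DataHunter'
  AND    TXN_ID LIKE '%del_te%'          -- 'select txn (SQL like)'
  -- AND TXN_ID NOT LIKE '%finalize%'    -- 'exclude txn (NOT like)'
  ORDER  BY TXN_ID;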

Run Display Filters:

Work in the same way as described for the Transaction Display Filters, except of course allowing run selection. There is some leniency in that date formatting is ignored. For example, entering either '2019.__.10T11:__' or '2019__1011:__' in the 'run date-time (SQL like)' field selects the same runs (runs occurring on the 10th day of any month in 2019, started in the hour from 11 a.m. on that day).
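
A rough sketch of why the two example patterns are equivalent is shown below. It assumes run date-times are held as a digits-only key (something like yyyymmddhhmm) and that the formatting characters typed into the filter ('.', 'T', ':') are ignored before the 'like' comparison; the table and column names are illustrative assumptions only:

  -- Illustrative sketch only: table/column names and the run time format are assumed.
  -- Once formatting characters are ignored, both example inputs reduce to the same
  -- pattern, so they select the same runs.
  SELECT APPLICATION, RUN_TIME
  FROM   RUNS
  WHERE  APPLICATION = 'DataHunter'
  AND    RUN_TIME LIKE '2019__1011__';   -- 10th day of any month in 2019, 11 a.m. hour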

The Trend Analysis Advanced Filters

The graphical display

Should, we hope, generally make sense, although there are a few things worth commenting on.

The range bars (and the Range Bar Legend in the bottom right of the graph canvas) will depend on the graph being displayed. If you want to know exactly what the Range Bar is for a graph, you can see the SQL being used in the Graph Mapping Admin page.

A transaction breaching an SLA will be displayed on the graphic with a red exclamation mark (!) beside it. In the Table data, the transaction name will be in red. However, only the metric values that caused the failure will be shown in red. In the following example, the Pass Count SLA has been edited for transaction DH-lifecycle-0100-deleteMultiplePolicies so that it fails that SLA, but its 90th percentile response time is okay.

SLA Pass Count failure on a transaction as displayed on the TXN_90TH graph
SLA Pass Count failure on a transaction as displayed on the TXN_PASS graph

Where a transaction name exists on the SLA tables and has an SLA set for it, but the transaction doesn't appear in the results for the last displayed run, it will appear in a table called 'Missing Transactions List'.

A similar table, the "Missing Metrics List", will be displayed on the graph relevant to a metric that has had a metric SLA set but was not captured during the run.

One further option worth mentioning is the ability to remove transactions that, for whatever reason, you don't want to see on the graph. You set the flag via the Transaction SLA Reference Maintenance screen. Ignored transactions will be listed below the main transaction table as a reminder.

Runs List

A straightforward page providing functionality to delete an application run, or to edit it. The most important aspect of the edit function is the ability to mark a run as 'Baseline'. This allows selection of the run as a baseline on the Trend Analysis Graph. The latest baseline can also be used to automatically create transactional SLAs with defaulted values, as described in the next section.

Transaction SLA Maintenance

Maintenance of transaction-based SLAs for an application, also bulk creation of SLA data for an entire application, and the ability to copy or delete an entire application's SLA data. We suggest you play around with the SLA data uploaded from the supplied DataHunter sample to get a feel for the screens. Any changed SLAs are applied to the Trend Analysis page graphic as soon as you re-Draw the graph, so you can see what constitutes passed/failed/ignored transactions as you make changes.

Summary of field values:

  • Transaction - matched to the transaction names as loaded into Trend Analysis.

  • Application - application + transaction form a unique key for SLA checking.

  • Ignore Txn on Graphs? - not really SLA related, but the only logical place to put the flag. If set to Y(es), the transaction will not appear on the Trend Analysis Graphic.

  • 90thResponse - the maximum response time allowed for a transaction at the 90th percentile.

  • Pass Count and Pass Count Variance % - the number of successful transactions accepted for a test, +/- the variance. For example a Pass Count of 500 with a Variance of 10% allows for a range of 450 to 550 transactions inclusive (see the sketch after this list).

  • Fail Count - the maximum number of failures allowed for a transaction.

  • Failure Percent - the maximum percentage of failures allowed for a transaction.

  • Reference URL - useful for noting when/why an SLA was set. It is auto-populated with the baseline run date when using the bulk load option.
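
As a rough illustration of the Pass Count check mentioned above (the observed count of 525 is a made-up figure, and this is not Mark59's actual implementation):

  -- Illustrative only: a Pass Count of 500 with a 10% variance accepts 450 to 550
  -- passed transactions; 525 stands in for the count observed in the run under test.
  SELECT CASE
           WHEN 525 BETWEEN 500 * (1 - 10/100.0)   -- lower bound: 450
                        AND 500 * (1 + 10/100.0)   -- upper bound: 550
           THEN 'PASS' ELSE 'FAIL'
         END AS PASS_COUNT_SLA_RESULT;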

The SLA Bulk Load Option

The idea of this option is to let you create a set of default-valued transactional SLAs when you add or significantly change an application. The list of transaction names to be added will be based on the most recent baseline for the application. Only those transactions that do not already have an SLA entry will be added. Default values can be set, which will be the values placed on the new SLAs. The transaction Pass Count SLA will be based on the count for the transaction in the baseline.
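
A sketch of the selection logic might look like the query below. The table and column names (TRANSACTION, RUNS, SLA, a BASELINE_RUN flag, TXN_PASS) are illustrative assumptions, not Mark59's actual SQL:

  -- Illustrative only: pick transactions from the baseline run that do not yet have
  -- an SLA row (selection of the most recent baseline is simplified to a flag check).
  SELECT t.TXN_ID, t.TXN_PASS             -- TXN_PASS would seed the Pass Count SLA
  FROM   TRANSACTION t
  JOIN   RUNS r ON  r.APPLICATION = t.APPLICATION
                AND r.RUN_TIME    = t.RUN_TIME
  WHERE  t.APPLICATION = 'DataHunter'
  AND    r.BASELINE_RUN = 'Y'
  AND    NOT EXISTS (SELECT 1 FROM SLA s
                     WHERE s.APPLICATION = t.APPLICATION
                     AND   s.TXN_ID      = t.TXN_ID);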

The 'Copy All' and 'Delete All' options

Can be handy when you are setting up a new application id where the application is a close copy of an existing test.

Metric SLA Maintenance

Maintenance of metric based SLAs for an application - DATAPOINT, CPU_UTIL and MEMORY statistics. As for transactional SLAs, any changes are applied to the Trend Analysis page graphic as soon as you re-Draw the graph, so you can see what constitutes passed or failed SLAs as you make changes.

Summary of field values:

  • Application - The Trend Analysis application id.

  • Transaction - matched to the transaction names as loaded into Trend Analysis.

  • Metric Type - DATAPOINT, CPU_UTIL or MEMORY. Application + Transaction + Metric Type form a unique key for SLA checking.

  • Value Derivation - choice of: Minimum, Maximum, Average, StdDeviation, 90th, Pass, Fail, Stop, First, Last, Sum, PercentOver90. Not all values make sense for all metric types - discussed next.

  • Min Value, Max Value - set the allowed range of the metric.
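
For example, a minimal sketch of the range check (the captured value of 37.5 and the bounds are made-up numbers, and this is not Mark59's actual implementation):

  -- Illustrative only: a metric SLA passes when the captured value for the
  -- Application / Transaction / Metric Type / Value Derivation combination
  -- falls within the Min Value .. Max Value range.
  SELECT CASE
           WHEN 37.5 BETWEEN 0 AND 60 THEN 'PASS'  -- e.g. CPU_UTIL Average, Min 0, Max 60
           ELSE 'FAIL'
         END AS METRIC_SLA_RESULT;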

Not all Value Derivations for all Metric Types are graphed, or even captured, as they may be marginal or nonsensical. A default value of -1 is used when a metric value derivation is not recorded. The "Stop" value derivation, for example, is only relevant to certain LoadRunner transactions, so is never captured for JMeter tests. The Minimum recorded value for CPU_UTIL is actually captured, but is generally considered of little value so is not graphed. However, if for some reason you wanted to set an SLA against CPU_UTIL Minimum, you could:

Setting an unusual metric SLA : CPU_UTIL Minimum

Such an SLA will still be checked - it just will not appear on any graph.

Recorded failure of the CPU_UTIL Minimum metric SLA

If you considered this SLA metric relevant to your test results, it is actually possible to create a graph for it, as covered in the Graph Mapping Administration section below.

Metrics Event Mapping Administration

This page provides a mechanism to match groups of similarly named metric transactions of a metric type to a set of attributes relevant to those transactions. This functionality tends to be particularly relevant to LoadRunner, but can be very useful in JMeter too.

In LoadRunner, you can choose which SiteScope entries you want (Metric Source of Loadrunner_SiteScope) by setting SQL 'like' checks in the 'Match When Source Name Like' field, and then setting which Mark59 metric data type those entries should map to (CPU_UTIL, MEMORY or DATAPOINT). It's a similar process for Loadrunner_DataPoint. Also, SiteScope entries tend to be very long, so if you can find a common Left Boundary and/or Right Boundary around the actual metric name you want to graph or set an SLA against, you can set boundaries (potentially useful for JMeter testing too).
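
As a rough illustration of the effect of boundary trimming (the SiteScope label and boundaries below are made-up, and the MySQL string functions are only used to demonstrate the trim, not how Mark59 implements it):

  -- Illustrative only: with a Left Boundary of '% used on ' and a Right Boundary of
  -- ' (SiteScope)', a long source name is trimmed down to just the part you want.
  SELECT SUBSTRING_INDEX(
           SUBSTRING_INDEX('Memory % used on SERVER123 (SiteScope)', ' (SiteScope)', 1),
           '% used on ', -1) AS MAPPED_METRIC_NAME;   -- returns 'SERVER123'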

For Unix machines you are monitoring, it's possible you can only capture the idle percentage (via LPARSTAT idle). In that case, it can be 'inverted' to a CPU_UTIL by setting the 'Is Inverted %' flag to 'Y'.
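
A minimal sketch of the inversion (the idle figure of 85 is made-up):

  -- Illustrative only: with 'Is Inverted %' set to Y, a captured idle percentage is
  -- reported as utilisation, i.e. 100 - value.  An LPARSTAT idle of 85 therefore
  -- appears as a CPU_UTIL of 15.
  SELECT 100 - 85 AS CPU_UTIL;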

Basic samples of the above situations are provided in the sample data.

In JMeter, a potentially useful feature is the ability to re-map an input 'metric source' to a different metric type. The most important re-mapping is probably from an input 'metric source' of Jmeter_TRANSACTION to the metric data types of CPU_UTIL or MEMORY. This will be relevant when you are capturing metrics via some third-party tool, and cannot set the data type of the transaction (e.g. via a Mark59-aware program).

Pretending a DataHunter transaction is actually a CPU_UTIL ...
.. will mean the transaction behaves as if it was a CPU_UTIL in Trend Analysis

Note that this re-mapping does not apply to Enhanced JMeter reporting, as just the raw csv JMeter output is used.

The matching procedure and using a generic 'catch all' entry.

As alluded to above, during the Trend Analysis Database Load (runcheck), an attempt is made to match transaction/metric entries against the 'Match When Source Name Like' field using a SQL 'like' query. In the situation where multiple matches are possible, the topmost entry on the page is the one that will be matched. See EventMappingDAOjdbcTemplateImpl.findAnEventForTxnIdAndSource() for a description of the ordering algorithm.

One particular entry type is worth noting: a Metric Source with a 'Match When Source Name Like' field of just '%'. If one of these entries exists, it will always be the last entry matched against, but it also means that Metric Source will always have a match. It's possible you may be in a situation, for example, in which you only want DATAPOINTs that specifically match the other entries in the mapping table to be included in Trend Analysis. In that case you would want to remove this type of 'catch-all' entry, so that DATAPOINTs you don't want are simply bypassed.
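
A sketch of the matching idea, assuming an illustrative EVENTMAPPING table with a MATCH_WHEN_LIKE column (the real ordering lives in EventMappingDAOjdbcTemplateImpl.findAnEventForTxnIdAndSource(); the ORDER BY below is only a simplification of "topmost entry wins, '%' considered last"):

  -- Illustrative only: the incoming source name is compared against each
  -- 'Match When Source Name Like' pattern; a plain '%' catch-all is considered
  -- last, so it only wins when nothing more specific matches.
  SELECT METRIC_SOURCE, MATCH_WHEN_LIKE, MAPPED_DATA_TYPE
  FROM   EVENTMAPPING
  WHERE  'CPU Utilisation on SERVER123' LIKE MATCH_WHEN_LIKE
  ORDER  BY CASE WHEN MATCH_WHEN_LIKE = '%' THEN 1 ELSE 0 END,
            LENGTH(MATCH_WHEN_LIKE) DESC
  LIMIT  1;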

If it exists the "%" only catch-all entry is always selected last

Graph Mapping Administration

Is an administrative function that we do not expect most users to use often, if at all. The main graphs that have been found useful over time are supplied in the sample data. Tweaking a Range Bar for a graph might be useful. We suggest the best way to get a feel for the options available is to examine the graph setups provided by the samples, and to look at the details provided on the 'Add new Graph' and 'Edit Graph Mapping Details' pages.

Just to give an example of adding a new graph, we will add a graph for Minimum CPU_UTIL - which would display the unusual SLA that was discussed in the Metric SLA Maintenance section above. The following entry would create a graph type called LOWEST_CPU_UTIL:

Next, we will change the permitted range to something more sensible (just to get the range bars to display nicely):

And here is the graph you would see: