9. JMeter execution

Hints and Tips for running JMeter Test Plans which use Mark59 scripts.

There's nothing particularly special about running a JMeter test plan that includes one or more Mark59 selenium scripts. This chapter's primary purpose is to give suggestions and discuss some techniques we have found useful.

For a sample, please review the JMeter Test Plan DataHunterSeleniumTestPlan.jmx in the dataHunterPVTest project.

Thread Group Iteration Pacing

One difficulty we found with JMeter is that the provided Timers did not produce consistent pacing for Mark59 selenium scripts (in particular towards the end of a test, where threads tended to fire off too close together). Our solution has been to create a BeanShell Timer, the Iteration Pacing Timer (see the DataHunterSeleniumTestPlan). It is passed a parameter giving the number of seconds between the start of script runs per thread, and an optional second parameter allowing for randomisation of start times. If a script overruns the next scheduled start time, the next run just starts immediately.

If you look at the code for the 'Iteration Pacing Timer' you will see that it contains several commented-out println statements like: //System.out.println(" pausing for calculated delay of " + delay + " ms" );  If you run the Thread Group with a single user, preferably over several iterations, with the println statements un-commented, you get a good idea of how the timing calculation works, and also how long your script typically runs for.
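
As a rough illustration of the approach (a sketch only, not the actual timer code from the DataHunterSeleniumTestPlan - the variable names and the randomisation handling shown here are assumptions), the delay a BeanShell Timer of this kind returns boils down to something like:

    // Sketch of an iteration pacing calculation.  A BeanShell Timer returns the pause in ms.
    // 'iterationPacingSecs' and 'randomiseSecs' stand in for the two timer parameters;
    // 'scheduledStartTime' stands in for a per-thread value holding when this thread's
    // current iteration was due to start.
    long iterationPacingMs = iterationPacingSecs * 1000L;
    long now = System.currentTimeMillis();

    long nextStartTime = scheduledStartTime + iterationPacingMs;
    if (randomiseSecs > 0) {
        nextStartTime += (long) (Math.random() * randomiseSecs * 1000);
    }

    long delay = nextStartTime - now;
    if (delay < 0) {
        delay = 0;            // the script overran the next start time - start immediately
        nextStartTime = now;  // re-base so later iterations don't bunch up
    }
    scheduledStartTime = nextStartTime;  // remember for the next iteration
    return delay;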

Scripts implementing SeleniumIteratorAbstractJavaSamplerClient

These scripts should have all pacing and control achieved using the provided 'iteration settings' parameters (details and example settings are available in the class Javadoc).

SeleniumIteratorAbstractJavaSamplerClient should only be used when necessary. As noted in the Javadoc, keeping a single browser / thread open and iterating for an entire test means no intermediate results can be reported, as all the data needs to be held in memory. It also makes debugging, and sometimes the script logic itself, more challenging, particularly when re-start conditions after failure have to be considered.

The classic use case for this sort of iterative processing goes something like "the users stay logged on for four hours at a time, and so each user in the test can only log on and off at the start and end". Well, maybe, but our own experience is that for the vast majority of applications it simply makes little or no difference. Where it may be justified is in applications where (in-memory) session data keeps growing significantly for each business iteration a user performs, which really should be called out by the application team and explicitly tested (in practice this should be rare - it's hard to think of a typical business scenario where this would need to be part of a sound design). If the issue is the behaviour of the corporate Identity Management system, then perhaps that should be explicitly tested separately rather than as part of an application test... A compromise may be to run a reasonable number of iterations, say 10 to 20 or so per cycle, rather than potentially hundreds if you just let the iterateSeleniumTest() method loop for an entire test.
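
That compromise is set through the same 'iteration settings' parameters, typically via the parameter override mechanism used by Mark59 scripts. A minimal sketch follows - the parameter names shown are indicative only, so confirm the exact names and defaults against the SeleniumIteratorAbstractJavaSamplerClient Javadoc:

    // Sketch: inside a script extending SeleniumIteratorAbstractJavaSamplerClient,
    // cap each browser 'cycle' at a modest number of iterations rather than letting it
    // loop for the entire test.  (java.util.Map / LinkedHashMap imports assumed;
    // parameter names are indicative only.)
    @Override
    protected Map<String, String> additionalTestParameters() {
        Map<String, String> jmeterAdditionalParameters = new LinkedHashMap<>();
        jmeterAdditionalParameters.put("ITERATE_FOR_NUMBER_OF_TIMES", "15");  // ~10 to 20 per cycle
        jmeterAdditionalParameters.put("ITERATION_PACING_IN_SECS", "60");     // seconds between iteration starts
        return jmeterAdditionalParameters;
    }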

Summary Report

When creating a JMeter test plan we suggest always including a Summary Report Listener. For the JMeter summary report, it is recommended that the default configuration is used (which will produce a .csv formatted file). The one addition we suggest may be of benefit is ticking the "Save Hostname" option (it helps when reporting on distributed tests). The filename must end in .csv, .xml or .jtl.

Other configurations may still work. For example, the framework allows for XML file output ("Save As XML" ticked). In fact, provided at least the fields "timeStamp", "elapsed", "label", "dataType" and "success" are present, Sub Results are saved, and (for csv) a valid header is present, the file may be capable of being processed by the Mark59 framework. However, unless there is good reason, we suggest just sticking to the recommended settings.
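
For reference, a results file written with the default configuration starts with a header line similar to the one below (the exact columns depend on your JMeter version and any jmeter.properties overrides; the fields mentioned above are among them, and ticking "Save Hostname" adds a Hostname column):

    timeStamp,elapsed,label,responseCode,responseMessage,threadName,dataType,success,failureMessage,bytes,sentBytes,grpThreads,allThreads,URL,Latency,IdleTime,Connect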

Upon completion of test execution, the output file can then be used as input to the Mark59 framework.

The recommended Summary Report Listener settings

Note that the test output file name can also be specified from the command line (when running in non-GUI mode). Details are in the Continuous Integration notes (to be linked).
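
For example, a non-GUI run naming the results file explicitly would look something like this (the test plan and output paths are placeholders only):

    jmeter -n -t DataHunterSeleniumTestPlan.jmx -l C:\Jmeter_Results\DataHunter\DataHunterTestResults.csv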

Server Metrics Capture

Included as part of the Mark59 framework is the mark59-server-metrics project, a simple agent-less metrics monitoring feature which enables you to monitor Windows, Linux and Unix servers, and to capture New Relic metrics.

In a JMeter Test Plan, you use the class com.mark59.servermetrics.ServerMetricsCapture (or com.mark59.servermetrics.NewRelicServerMetricsCapture for New Relic) to capture the server metrics.

A quick guide to the available parameters (an example configuration is shown after the list):

  • MONITOR CPU, MONITOR MEMORY : set to TRUE or FALSE

  • SYSTEM INFO : currently unused

  • OPERATING SYSTEM : set to WINDOWS, LINUX or UNIX

  • SERVER : if not 'localhost', USERNAME and PASSWORD / PASSWORD_CIPHER must be specified.

  • PASSWORD_CIPHER : a basic obfuscation just to prevent clear-text passwords. Please review com.mark59.core.utils.SimpleAES for usage (see also the sketch after this list).

  • ALTERNATE SERVER ID : a useful parameter when you are reporting on multiple 'localhost' servers. Generally this occurs in a distributed test when you are capturing metrics for the test injectors on the injectors themselves, by creating a ServerMetricsCapture request for each injector that will only run locally. That can be done, for example, by using the "Restrict_To_Only_Run_On_IPs_List" parameter. Unless you set this parameter, the injectors would all be reported as 'localhost' - and their metrics get combined. To distinguish between them you can use this parameter (setting it to 'mylocal', for instance, reports the server as 'mylocal'), but generally the recommendation is to use the special value HOSTID, in which case the host id of the computer will be used.

  • CONNECTION PORT, CONNECTION TIMEOUT : required for Linux/Unix.

  • "Restrict_To_Only_Run_On_IPs_List". In case of a multi node distributed test, this can be used to specify the machine IP on which the Server Monitoring script is configured to run from. For example if the value is set to "111.222.333.04,111.222.333.05" then slave with IP 111.222.333.05 will run the thread, but slave 111.222.333.06 won't. Please review com.mark59.core.IpUtilities.localIPisNotOnListOfIPaddresses() for details.

Summary of commands used to derive CPU/memory output values (an illustration of the Linux mpstat output is shown after the list):

  • Windows CPU: wmic cpu get loadpercentage

  • Windows MEMORY: wmic OS get FreePhysicalMemory / wmic OS get FreeVirtualMemory

  • Linux CPU: mpstat 1 1

  • Linux MEMORY: free -m 1 1

  • UNIX CPU: lparstat 5 1

  • UNIX MEMORY: a shell script is executed, designed to give values relevant to a Unix LPAR server -

    • pgsp_aggregate_util (pagespace percent) - should not increase to more than 50

    • pinned_percent - should not increase to more than 30

    • numperm_percent - considered informational only (memory being used for file caching)
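
To give a feel for what these commands return, mpstat 1 1 output on Linux looks roughly like the example below (formatting varies by distribution). A CPU utilisation figure is typically derived from the 'Average' line, for instance as 100 minus %idle - refer to the driver classes in the mark59-server-metrics project for how each command's output is actually parsed:

    Linux 4.18.0 (linuxapp01)   02/01/2021   _x86_64_   (4 CPU)

    10:15:01 AM  CPU   %usr  %nice   %sys %iowait   %irq  %soft %steal %guest %gnice  %idle
    10:15:02 AM  all   6.28   0.00   1.51    0.25   0.00   0.25   0.00   0.00   0.00  91.71
    Average:     all   6.28   0.00   1.51    0.25   0.00   0.25   0.00   0.00   0.00  91.71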

Limitations of Server Metric Capture

Please be aware that at this point Server Metrics Capture has only been completely tested running from a Windows machine (ie, we know it can execute against, and successfully report on, all the listed operating systems when run from Windows).

In our next major release, we hope to substantially improve our capacity for metric capture. In particular:

  • ensure execution from Linux (possibly including AWS instances)

  • enablement of server metrics from Cloud back to non-Cloud servers (possibly via API/http calls)

    UPDATE: We have built a basic web application to do this - hopefully will be in the 2.3 release (the mark59-server-metrics-web project - currently in our 'wip' repo).

New Relic Metrics

Functionality is included for basic New Relic CPU and memory statistics capture, using its own set of New Relic specific parameters.

Output is of the form:

  • CPU_{application_instances_id}

  • MEMORY_{application_instances_id}

Refer to com.mark59.servermetrics.driver.NewRelicServerMetricsDriver for more information, or as a basis for your own customisation.