Performance Test Methodology and Test Automation - EcoSys - Help - Hexagon PPM

EcoSys Performance Tuning


Pre-Flight Checklist for Executing Performance Tests

  1. Capture environment details for Environment Configuration (see template below)

  2. Verify that the environment matches the test plan and is configured with the appropriate components and connections. Verify that the dataset loaded in the database is intended for the scenarios to be tested.

  3. Verify that log retention is sufficient to capture the required details. On the application server, edit /EcoSys/ESFM_HOME/log4j2.xml, and set SizeBasedTriggeringPolicy to 20000KB, with DefaultRolloverStrategy set to 5 or higher. After making the changes, you must restart the application service.

  4. Start an Oracle AWR snapshot, with statistics collection set so that execution plans and OS timings are captured:

    1. alter system set statistics_level = all;

    2. exec dbms_workload_repository.create_snapshot;

  5. Verify that no users other than those taking part in the test are logged on to the system. You can view users in System Info > Current Sessions. You can lock out other users and prevent additional logins using Sys Admin > Application Settings > Maintenance Mode.

  6. Verify that no application servers that are not part of the test are connected to the database.

  7. Set application tracing suitable for the test. Go to Sys Admin > Application Settings > Application Tracing. Click Set to Defaults, and then configure SERVER METRICS to update the log every 60 seconds, with a SAMPLE interval of 5 or 10 seconds.
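For reference, the log retention change in step 3 might look like the following sketch inside log4j2.xml. The appender name and file paths are illustrative assumptions; merge the policy elements into your existing RollingFile appender rather than copying this block verbatim:

```xml
<RollingFile name="EcoSysLog" fileName="logs/ecosys.log"
             filePattern="logs/ecosys-%i.log">
  <Policies>
    <!-- Roll the log once it reaches 20000KB -->
    <SizeBasedTriggeringPolicy size="20000KB"/>
  </Policies>
  <!-- Keep at least 5 rolled-over log files -->
  <DefaultRolloverStrategy max="5"/>
</RollingFile>
```

Remember that the application service must be restarted for the change to take effect.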

Benchmark Test Steps

The template steps for performance/benchmark tests are listed below.

  1. Log on to EcoSys as the application user designated for the test

  2. If the application server has not been restarted as part of the preparation, clear the query cache (System Info > Cache > Query Cache > Clear).

  3. Apply a log marker (CTRL+Double-click on the tools icon) to mark the beginning of the test

  4. Modify tracing if non-default values are applicable to this test (Sys Admin > Application Settings > Application Tracing)

  5. If possible, run each process two or three times to measure effects of caching. Apply a log marker before each run. For each test, capture the following data in a spreadsheet:

    1. The application user running the test

    2. The menu path invoked to load the screen

    3. All input runtime parameters

    4. The steps taken and values used for edit/save/refresh

    5. The expected/observed results.

  6. Apply a log marker for the end of the tests.

  7. Reset application tracing to defaults, if needed.

Test Result Data Capture

After the test is complete, capture the following data:

  1. EcoSys application logs. You can access these logs in the /EcoSys/logs folder or within the application using Sys Admin > Display Log > Download. Submit the logs in a .zip file to the analysts.

  2. Oracle AWR in text format, using $ORACLE_HOME/RDBMS/ADMIN/awrrpt.sql

  3. Operating system and network performance data (if part of the capture; collection methods are platform dependent)

  4. Observed results and detailed steps from test run.

Multi-User Load Test Methodology

This section is intended to provide a starting methodology for multi-user load tests.

Step 1:  Model User Activity by Role

  1. Choose 2 – 4 roles of users (such as cost account manager, portfolio manager, financial administrator) that will use EcoSys. 

  2. Identify the security rights for each role in the software. For example, you can assign read/write for one project, read for all projects, or system global admin.

  3. For each role, identify two or three key reports, spreadsheets, and actions they will use. For these, write down:

    1. Subject area and columns of the report, spreadsheet, or action

    2. How many rows are touched by each role

    3. How many times per week does each role view the data, and how many times per week does each role edit/save the data

  4. Translate those roles and user actions into realistic wait times, apply the appropriate scaling factor for the load to be generated, and then compute the virtual user load to run.
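The load computation in step 4 can be sketched in JavaScript as follows. Every number here (role size, weekly action counts, think time, scaling factor) is an illustrative assumption, not an EcoSys recommendation:

```javascript
// Sketch: derive a virtual-user count from modeled role activity.
// All inputs are illustrative assumptions.
function virtualUsers(role) {
  const actionsPerWeek = role.viewsPerWeek + role.editsPerWeek;
  // Average seconds between actions for one real user over a 5x8-hour week.
  const workWeekSeconds = 5 * 8 * 3600;
  const secondsPerAction = workWeekSeconds / actionsPerWeek;
  // One virtual user firing every `waitSeconds` generates the same request
  // rate as (secondsPerAction / waitSeconds) real users.
  const realUsersPerVirtualUser = secondsPerAction / role.waitSeconds;
  return Math.ceil((role.userCount * role.scalingFactor) / realUsersPerVirtualUser);
}

const costAccountManager = {
  userCount: 200,      // real users in this role (assumed)
  viewsPerWeek: 25,
  editsPerWeek: 10,
  waitSeconds: 60,     // scripted think time between requests
  scalingFactor: 1.5,  // headroom above the expected load
};
console.log(virtualUsers(costAccountManager)); // -> 5
```

The point of the sketch is only the arithmetic: weekly activity converts to a per-user request rate, and the scripted wait time determines how many real users one virtual user represents.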

Step 2:  Prepare System Under Test

  1. Review the system settings and configuration with EcoSys support.

  2. Follow the recommended performance tuning and system maintenance for the environment to be tested.

  3. Seed any data required for the test, such as generating data in cost objects, transactions, and so forth.

Step 3:  Script and Execute the Tests

  1. Write the scripts to simulate virtual users. EcoSys has used three different load test tools: 

    • HP LoadRunner (powerful, expensive)

    • WebLOAD (a mid-tier, fully featured product with a commercial or open source license)

    • JMeter (more technically oriented, flexible, free). 

  2. Test the scripts to confirm that they can run repeatedly without generating errors and that their wait times are adjustable. We recommend adding a slight random factor to wait times so that virtual users do not fire requests in lockstep, producing a more uniform load.

  3. Build scenarios to run: one scenario for each user role, ramping up from one user to the target concurrent load. Then, if needed, build a combined scenario that models the expected real-world ratio of the different roles (for example, one financial administrator, three portfolio managers, and 10 cost account managers, each with their respective wait times).

  4. Execute the tests. For the period of the test, capture the following metrics as a function of the number of virtual users:

    1. Response times of the different scripted requests

    2. Any response failures of the different scripted requests

    3. CPU load on application server

    4. Java JVM heap usage on the application server

    5. Network I/O on the application server

    6. CPU load on the database server

    7. Memory utilization on the database server

    8. Disk I/O on the database server

    9. Database-specific performance metrics (such as, on Oracle, the AWR report)

  5. Save the application logs from the EcoSys application.
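The randomized wait times recommended in item 2 above can be sketched in plain JavaScript. The function name is our own; in a WebLOAD agenda you would pass the computed value to its sleep call:

```javascript
// Return a wait time jittered +/- `jitterFraction` around `baseMs`,
// so that virtual users do not all fire requests at the same instant.
function jitteredWait(baseMs, jitterFraction) {
  const delta = baseMs * jitterFraction;
  return baseMs - delta + Math.random() * 2 * delta;
}

// Example: a nominal 10-second think time varying between 8 and 12 seconds.
console.log(jitteredWait(10000, 0.2));
```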
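The role ratio from item 3 (for example 1:3:10) can be turned into per-role virtual user counts with a small helper. This is an illustrative sketch, not part of any load test tool:

```javascript
// Split a target virtual-user count across roles by ratio, e.g. 1:3:10.
function usersPerRole(totalUsers, ratios) {
  const ratioSum = ratios.reduce((sum, r) => sum + r.weight, 0);
  return ratios.map(r => ({
    role: r.role,
    users: Math.round(totalUsers * r.weight / ratioSum),
  }));
}

const mix = usersPerRole(140, [
  { role: "financial administrator", weight: 1 },
  { role: "portfolio manager", weight: 3 },
  { role: "cost account manager", weight: 10 },
]);
// 140 users at 1:3:10 -> 10, 30, 100
```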

Step 4:  Gather and Analyze the Results

  1. Review the EcoSys application log for any errors, exceptions, or warnings logged during tests.

  2. Correlate the performance metrics with the script timings, and consolidate them into a single report by test type. This is typically done as a graph with number of virtual users on the x-axis, and the captured metrics and timings above on the y-axis.

  3. Determine if/when the response times exceed acceptable limits.

Step 5:  Performance Tuning and Troubleshooting

  1. Identify error conditions (for example, out of memory).

  2. Identify resource bottlenecks. For example: CPU at 100%, memory at 100%, I/O at 100%. Apply tuning and/or augment resources as appropriate (for example, increase memory available to the database, or upgrade to faster CPU, and so forth.).

  3. Identify which spreadsheets, reports, and actions are exceeding target response times. For each one, apply the performance troubleshooting guidelines to identify the bottlenecks and tune the specific configurations.

Automated Test Script Recommendations

When using an automated test scripting tool, a few considerations and settings are necessary for working with EcoSys. Some of the settings below are specific to the WebLOAD tool, but most apply to any platform.

RandomKey Parameter Generation

EcoSys uses random key generation to prevent automatic replay of requests by some proxy servers. The EcoSys server checks for duplicate calls and will not execute a duplicate request within a user session. Automated test scripts that make GET and POST calls (wlHttp.Post or wlHttp.Get) containing the RandomKey URL parameter should generate this value in the script to ensure its uniqueness. Otherwise, replayed requests appear as duplicates to the server.
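As a sketch in plain JavaScript, a unique RandomKey can be generated and appended before each request. The helper function is our own illustration; in a WebLOAD agenda you would use its built-in wlRand facility instead, as shown further below:

```javascript
// Append a per-request RandomKey so the EcoSys server does not reject the
// replayed call as a duplicate within the session. The URL is illustrative.
function withRandomKey(url) {
  const key = Date.now() + "." + Math.floor(Math.random() * 1000);
  return url + (url.includes("?") ? "&" : "?") + "RandomKey=" + key;
}

console.log(withRandomKey("doLogin.action"));
```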

Random Key Syntax for URL Parameters

!doLogin.action?RandomKey=797654427237.888

In WebLOAD Script

wlRandom = wlRand.Range(1000000, 9999999);

Proxy Timeout

The default proxy timeout must be increased to prevent timeouts during recording when working with large spreadsheets. If you are using WebLOAD, edit <WebLOAD>\Include\wlproxyinclude.js and change the value:

ProxyObject.RProxyCOptConnectionTimeOut = 1000;

XML Buffer Size

Increase the default buffer value to allow room for large amounts of XML data to be saved. In <WebLOAD>\bin\webload.ini, change the value:


Preventing XML Parsing by the Test Script

WebLOAD parses XML and builds a structure that can be queried later. Performing this parsing on the client side incurs a performance hit, which manifests as heavy CPU usage on the system generating the test load (not on the application server).

If you experience slow times during spreadsheet loading, you can override the data type and have the server stream a different data type. After recording, alter the login portion of the script as shown below:

  • After Recording


  • Override to Plain Text


This instructs the EcoSys application to serve all XML as plain text, which lets WebLOAD skip parsing that XML and reduces the workload on the system generating the test requests. It does not reduce the actual load placed on the EcoSys application server.

Transaction and Sleep Boundaries

Sleep times let the script more accurately simulate what a real user would be doing, and they should sit outside of transaction definitions. WebLOAD provides timings for each individual call if needed, so a transaction is usually a larger measurement: for example, a login might consist of an authentication call and several HTML calls for screen display. WebLOAD times each individual statement, but we must define the "login" transaction ourselves.

/***** After browser launch, sleep 5 seconds before login *****/
Sleep(5000);  // WebLOAD built-in; takes milliseconds

/***** Begin Login Transaction *****/
BeginTransaction("Login");

/***** All login code goes here *****/

/***** End Login Transaction *****/
EndTransaction("Login", WLSuccess);

/***** After login, sleep 10 seconds before next action *****/
Sleep(10000);