Performance Testing

Performance testing determines how a system behaves in terms of responsiveness and stability under a particular workload. It can also investigate, measure, validate, or verify other quality attributes of the system, such as scalability, reliability, and resource usage.

To reduce the risk of degrading server performance for customers’ applications running on shared NDIT infrastructure, all web applications (NDIT-written and vendor) must be performance tested before production use. NDIT maintains a testing environment that enables our staff to verify that web applications perform efficiently. A performance test is executed before an application’s initial production deployment and before a modified application is redeployed to production. Cosmetic changes are exempt from this requirement.

Types of Performance Testing

  • Load testing is the process of putting demand on a system and measuring its response. It is performed to determine a system’s behavior under both normal and anticipated peak load conditions, and it helps identify an application’s maximum operating capacity, any bottlenecks, and which element is causing degradation.
  • Stress testing is deliberately intense or thorough testing used to determine the stability of a given system. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results.
  • Volume testing refers to testing a software application with a certain amount of data. In generic terms, that amount can be the size of the database or the size of an interface file that is the subject of the test.
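Assuming a Python toolchain, the load-testing idea above can be sketched with nothing but the standard library: a throwaway local HTTP server stands in for the application under test, and a thread pool simulates concurrent virtual users. All names and numbers here are illustrative, not NDIT's actual tooling.

```python
import http.server
import statistics
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Stand-in for the application under test: a trivial local HTTP server.
class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, format, *args):
        pass  # silence per-request logging

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

def one_request(_):
    # Time a single request and record whether its status was acceptable.
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        ok = 200 <= resp.status < 400  # 2XX/3XX are acceptable
    return time.perf_counter() - start, ok

# Simulate 20 concurrent virtual users issuing 100 requests in total.
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(one_request, range(100)))

latencies = [t for t, _ in results]
errors = sum(1 for _, ok in results if not ok)
print(f"requests: {len(results)}, errors: {errors}")
print(f"avg: {statistics.mean(latencies):.4f}s  max: {max(latencies):.4f}s")
server.shutdown()
```

A real engagement uses a dedicated load-testing tool, but the shape is the same: generate concurrency, capture per-request latency, and count failed responses.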


Performance Testing Services

The NDIT performance tester will:

  • Analyze the application’s testing needs.
  • Suggest possible performance test scenarios that will optimize customers’ use of performance testing services.
  • Mentor the customer in completing the test requirements needed for setting up test cases.
  • Initiate the test execution process. 
  • Assist the customer in reviewing and analyzing the performance test results. The tester will provide information on the condition of the application being tested and on specific areas where it can be improved.
  • Retain test scripts for subsequent testing. Note, however, that changes to the GUI may require starting the process over so the changes can be incorporated into the automated test scripts.

Requesting Services

  • Applications written by NDIT will go through performance testing as part of the normal project development lifecycle.
  • Applications purchased as a package or written by another vendor will require that the agency submit a Work Management System (WMS) “Load Test” Service Request specifying the application that requires performance testing.
  • Before a performance test begins, the NDIT performance tester must gather relevant data. A meeting with the requestor will be set up to collect this information; to prepare, please review the performance test interview questions so you can give your resource the information needed. A demo of the application will help the NDIT performance tester determine which areas of the application should be tested.
  • A WMS Load Test request should be submitted at least two weeks before the “required completion” date to allow for resource availability, project schedules, other load test requests, and any issues that may arise.

Effective Performance Test Scenarios

  • To perform a thorough performance test, all major functionality within the application must be exercised.
  • In practice, this means the scenarios you create should mirror the data-intensive processes a typical end user performs in your application. Scenarios must include steps that read, add, and update data; if the application generates reports, report generation must be included as well.
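As a sketch of what a read/add/update scenario covers, the Python snippet below models one virtual user's scenario against an invented in-memory stand-in for an application; the record store, step names, and report are all hypothetical.

```python
# Hypothetical "application": a dict of records keyed by ID.
records = {}

def add_record(rid, data):
    records[rid] = dict(data)          # the "add" step

def read_record(rid):
    return records[rid]                # the "read" step

def update_record(rid, **changes):
    records[rid].update(changes)       # the "update" step

def run_report():
    return sorted(records)             # stand-in for report generation

# One virtual user's scenario: add, read back, update, then run a report,
# mirroring the data-intensive actions a typical end user performs.
scenario = [
    lambda: add_record("case-1", {"status": "open"}),
    lambda: read_record("case-1"),
    lambda: update_record("case-1", status="closed"),
    lambda: run_report(),
]
for step in scenario:
    step()
print(records["case-1"]["status"])  # closed
```

In a real test each step would be a recorded interaction with the application, but the principle holds: a scenario is a repeatable sequence of read, add, update, and report actions.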


Rates

Rates for performance testing are variable and based on the estimated number of users for the application. Performance testing fees cover a portion of the license and support costs for the testing tool.

  • For new applications and vendor applications, costs range from $500.00 to $1,000.00 based on the number of users:
    • Up to 100 users - $500.00
    • Up to 200 users - $600.00
    • Up to 300 users - $700.00
    • Up to 400 users - $800.00
    • Up to 500 users - $1,000.00
  • For existing applications, costs range from $100.00 to $600.00 based on the number of users:
    • Up to 100 users - $100.00
    • Up to 200 users - $200.00
    • Up to 300 users - $300.00
    • Up to 400 users - $400.00
    • Up to 500 users - $600.00
  • Time spent by the NDIT Performance Testing staff will be billed to the Work Management service request using the following rate guidelines: 
    • Applications written by NDIT will be billed at the NDIT developer’s current rate.
    • Applications purchased as a package or written by another vendor will be billed at NDIT’s consultant rate, because this testing is treated as a service to that vendor rather than to our internal staff.
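The tier schedule above amounts to a simple lookup. A Python sketch, assuming the "Up to N users" tiers shown and assuming counts above 500 users require separate arrangement (neither assumption comes from the schedule itself):

```python
# (max_users, dollars) tiers copied from the rate schedule above.
NEW_APP_TIERS = [(100, 500), (200, 600), (300, 700), (400, 800), (500, 1000)]
EXISTING_APP_TIERS = [(100, 100), (200, 200), (300, 300), (400, 400), (500, 600)]

def rate(users, tiers=NEW_APP_TIERS):
    """Return the fee for the first tier the user count fits into."""
    for max_users, dollars in tiers:
        if users <= max_users:
            return dollars
    # Hypothetical: the published schedule stops at 500 users.
    raise ValueError("more than 500 users: contact NDIT")

print(rate(90), rate(250))                 # 500 700
print(rate(450, EXISTING_APP_TIERS))       # 600
```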

Key Performance Indicators (KPIs)

Key Performance Indicators (KPIs) are quantifiable measures an organization uses to evaluate its success or the success of a particular activity. For performance testing, KPIs are used to ensure efficient server performance.

If minimum KPIs are not met, a dedicated environment may be required which will result in additional costs to the agency.

Application | Key Performance Indicator | Target | Description / Comment
Any application | HTTP Response Code | 2XX and 3XX | Response codes beginning with 2 or 3 (e.g., 200, 302) are acceptable. Codes beginning with 4 or 5 are not, e.g., 401 Unauthorized, 403 Forbidden, 404 Not Found (often missing images or includes), 407 Proxy Authentication Required, 500 Internal Server Error, 502 Bad Gateway (proxy errors), 503 Service Unavailable.
Any application | End User Response Time | <= 1.5 sec per action step | Response times above 1.5 seconds per action step need to be investigated. Known to have above-average response times: file uploads and downloads, Cognos and Crystal report generation, and GIS.
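The two application-level KPIs reduce to simple predicates. A Python sketch, with the thresholds taken from the table above:

```python
# Acceptable HTTP status codes per the KPI: anything in the 2XX/3XX range.
def status_acceptable(status: int) -> bool:
    return 200 <= status < 400

# End-user response time KPI: at most 1.5 seconds per action step.
def response_time_ok(seconds: float, threshold: float = 1.5) -> bool:
    return seconds <= threshold

print([s for s in (200, 302, 401, 404, 500, 503) if not status_acceptable(s)])
# [401, 404, 500, 503]
print(response_time_ok(1.2), response_time_ok(2.4))  # True False
```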
Database | Key Performance Indicator | Target | Description / Comment
SQL Server | Buffer Cache Hit Ratio | >= 90% | Measures the health of the data cache (buffer pool); a low value indicates poorly performing queries. At < 90%, the application may be performing large table or index scans, or the memory allocated to the buffer pool may be too small.
SQL Server | Cache Hit Ratio | >= 90% | Measures whether SQL statements are properly prepared; a low value indicates dynamic SQL without bind variables. At < 90%, change the code to use bind variables or use the sp_executesql stored procedure.
SQL Server | Average Disk Queue Length | <= 2 | The number of disk requests waiting to be serviced. A high value usually indicates full table or clustered index scans.
SQL Server | Avg. Wait Time | < 500 ms | Measures lock waits. A high average wait means too many locks are held for too long, and the code may need to be tuned.
SQL Server | Full Scans/Sec | = baseline | Table scans executed instead of index lookups; compare against the baseline. A high number of table scans will cause bottlenecks.
SQL Server | % Processor Time (CPU) | <= 80% | Percentage of time executing non-idle threads on the server. If > 80%, the application code may be looping excessively or performing table/index scans that cause locks within the database.
Oracle | Workload Repository (CPU time) | <= 20% | Database (DB) time divided by elapsed time (minutes). If CPU time exceeds 20% of clock time, this and other Oracle indicators will be investigated.
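The bind-variable remedy for a low Cache Hit Ratio can be illustrated in Python with sqlite3 standing in for SQL Server; sqlite3 is only a stand-in here, and on SQL Server the same effect comes from parameterized statements or sp_executesql.

```python
import sqlite3

# With bind variables there is one SQL string that the database can prepare
# once and reuse; with inlined literals every distinct value produces a
# distinct statement, defeating the statement cache the KPI measures.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("ada",), ("bob",)])

# Bad: a distinct SQL string per literal value.
row = conn.execute("SELECT id FROM users WHERE name = 'ada'").fetchone()

# Good: one SQL string, the value supplied as a bind parameter.
row = conn.execute("SELECT id FROM users WHERE name = ?", ("ada",)).fetchone()
print(row[0])  # 1
```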
Application Server Platform | Key Performance Indicator | Target | Description / Comment
Linux | CPU Utilization | <= 40% | The percentage of CPU time used by processes. Values > 40% for extended periods should be investigated.
Linux | Memory Used | <= 80% | Amount of physical RAM used by processes, computed as (RAM used - RAM buffered - RAM cached) / total RAM. Values > 80% should be investigated.
Linux | Swap Space | = 0% | The server should not swap before, during, or after the test (swap is similar to the Windows page file); measured as swap used / total swap size. Values > 0% should be investigated.
Windows | CPU Utilization | <= 70% | The percentage of time the threads of this process spend using the processors. Values > 70% for extended periods should be investigated.
Windows | Memory\Available MBytes | >= 20% | Amount of physical RAM available. Values < 20% should be investigated.
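The Linux memory threshold formula can be checked with a quick calculation; the figures below are hypothetical sample readings, not measurements from any NDIT server.

```python
# Hypothetical readings in MB, as reported by a tool such as `free`.
total_mb, used_mb, buffered_mb, cached_mb = 16000, 14000, 1000, 6000

# KPI formula: (RAM used - RAM buffered - RAM cached) / total RAM.
# Buffers and cache are subtracted because the kernel releases them on
# demand, so they are not memory the applications actually need.
ratio = (used_mb - buffered_mb - cached_mb) / total_mb
print(f"{ratio:.2%}")  # 43.75%, within the <= 80% target
```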
Application Server | Key Performance Indicator | Target | Description / Comment
IIS | CPU Utilization | <= 70% | The percentage of time the threads of this process spend using the processors. Values > 70% for extended periods should be investigated.
IIS | Handle Count | <= 5000 | Number of handles open in this process. Handle counts often climb when the load is too high.
IIS | Requests Rejected | = 0 | Number of rejected requests; requests are rejected when one of the queue limits is exceeded. Values > 0 should be investigated.
IIS | Application Restarts | = 0 | The number of application restarts. Values > 0 should be investigated.
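Taken together, KPI evaluation is a comparison of measured values against targets. A minimal Python sketch using the IIS targets above; the measurement names and sample values are invented for illustration.

```python
# Each KPI maps to a predicate encoding its target from the table above.
KPI_TARGETS = {
    "iis_cpu_pct": lambda v: v <= 70,
    "iis_handle_count": lambda v: v <= 5000,
    "iis_requests_rejected": lambda v: v == 0,
    "iis_app_restarts": lambda v: v == 0,
}

# Hypothetical measurements collected during a test run.
measured = {
    "iis_cpu_pct": 55,
    "iis_handle_count": 3200,
    "iis_requests_rejected": 0,
    "iis_app_restarts": 0,
}

# Any KPI whose predicate fails needs investigation.
failures = [name for name, ok in KPI_TARGETS.items() if not ok(measured[name])]
print(failures)  # []
```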