Performance Testing and Core Performance Testing Activities



Performance testing is intended to determine the throughput, reliability, responsiveness and scalability of a system under a given workload. 
Performance testing is used to accomplish the following:
- Assess production readiness
- Evaluate against performance criteria
- Compare performance characteristics of multiple systems or system configurations
- Find the source of performance problems
- Support system tuning
- Find throughput levels

Why do Performance Testing?

At the highest level, performance testing is almost always conducted to address one or more risks related to expense, opportunity costs, continuity and corporate reputation.

Some more specific reasons for conducting performance testing include:

1. Assessing release readiness by:
- Enabling you to predict or estimate the performance characteristics of an application in production and evaluate whether or not to address performance concerns based on those predictions. These predictions are also valuable to the stakeholders who make decisions about whether an application is ready for release or capable of handling future growth or whether it requires a performance improvement/hardware upgrade prior to release.
- Providing data indicating the likelihood of user dissatisfaction with the performance characteristics of the system.
- Providing data to aid in the prediction of revenue losses or damaged brand credibility due to scalability or stability issues or due to users being dissatisfied with application response time.

2. Assessing infrastructure adequacy by:
- Evaluating the adequacy of current capacity.
- Determining the acceptability of stability.
- Determining the capacity of the application’s infrastructure, as well as determining the future resources required to deliver acceptable application performance.
- Comparing different system configurations to determine which works best for both the application and the business.
- Verifying that the application exhibits the desired performance characteristics, within budgeted resource utilization constraints.

3. Assessing adequacy of developed software performance by:
- Determining the application’s desired performance characteristics before and after changes to the software.
- Providing comparisons between the application’s current and desired performance characteristics.

4. Improving the efficiency of performance tuning by:
- Analyzing the behavior of the application at various load levels.
- Identifying bottlenecks in the application.
- Providing information related to the speed, scalability, and stability of a product prior to production release, thus enabling you to make informed decisions about whether and when to tune the system.

Types of Performance Testing

The following are the most common types of performance testing for Web applications.

1. Performance Testing:
Performance testing is a technical investigation done to determine or validate the speed, responsiveness, scalability, and stability characteristics of the product under test.
Purpose:
To determine or validate speed, scalability and stability.
Benefits:
- Determines the speed, scalability and stability characteristics of an application, thereby providing an input to making sound business decisions.
- Focuses on determining if the user of the system will be satisfied with the performance characteristics of the application.
- Identifies mismatches between performance-related expectations and reality.
- Supports tuning, capacity planning, and optimization efforts.
Challenges:
- May not detect some functional defects that only appear under load.
- If not carefully designed and validated, may only be indicative of performance characteristics in a very small number of production scenarios.
- Unless tests are conducted on the production hardware, from the same machines the users will be using, there will always be a degree of uncertainty in the results.

2. Load Testing:
- Load testing is conducted to verify that your application can meet your desired performance objectives; these performance objectives are often specified in a service level agreement (SLA). A load test enables you to measure response times, throughput rates, and resource-utilization levels, and to identify your application’s breaking point, assuming that the breaking point occurs below the peak load condition.
- Endurance Testing is a subset of load testing. An endurance test is a type of performance test focused on determining or validating the performance characteristics of the product under test when subjected to workload models and load volumes anticipated during production operations over an extended period of time.
- Endurance Testing may be used to calculate Mean Time Between Failure (MTBF), Mean Time To Failure (MTTF), and similar metrics.
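For illustration, MTBF can be estimated from endurance-test observations by dividing the total operating time by the number of failures observed. The sketch below uses hypothetical figures and only shows the arithmetic; it is not output from any particular test.

// Hypothetical endurance-test figures, used only to illustrate the MTBF arithmetic.
public class EnduranceMetrics {
    public static void main(String[] args) {
        double totalOperatingHours = 72.0; // duration of the endurance test
        int failuresObserved = 3;          // failures recorded during that period

        double mtbfHours = totalOperatingHours / failuresObserved;
        System.out.println("Estimated MTBF: " + mtbfHours + " hours"); // prints 24.0 hours
    }
}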
Purpose:
To verify application behavior under normal and peak load conditions.
Benefits:
- Determines the throughput required to support the anticipated peak production load.
- Determines the adequacy of a hardware environment.
- Evaluates the adequacy of a load balancer.
- Detects concurrency issues.
- Detects functionality errors under load.
- Collects data for scalability and capacity planning purposes.
- Helps to determine how many users the application can handle before performance is compromised.
- Helps to determine how much load the hardware can handle before resource utilization limits are exceeded.
Challenges:
- Load testing is not designed primarily to focus on response speed.
- Results should only be used for comparison with other related load tests.

3. Stress Testing:
- The goal of stress testing is to reveal application bugs that surface only under high load conditions. These bugs can include such things as synchronization issues, race conditions and memory leaks. Stress testing enables you to identify your application’s weak points and shows how the application behaves under extreme load conditions.
- Spike testing is a subset of stress testing. A spike test is a type of performance test focused on determining or validating the performance characteristics of the product under test when subjected to workload models and load volumes that repeatedly increase beyond anticipated production operations for short periods of time.
Purpose:
To determine or validate an application’s behavior when it is pushed beyond normal or peak load conditions.
Benefits:
- Determines if data can be corrupted by overstressing the system.
- Provides an estimate of how far beyond the target load an application can go before causing failures and errors in addition to slowness.
- Allows you to establish application-monitoring triggers to warn of impending failures.
- Ensures that security vulnerabilities are not opened up by stressful conditions.
- Determines the side effects of common hardware or supporting application failures.
- Helps to determine what kinds of failures are most valuable to plan for.
Challenges:
- Because stress tests are unrealistic by design, some stakeholders may dismiss test results.
- It is often difficult to know how much stress is worth applying.
- It is possible to cause application or network failures that may result in significant disruption if not isolated to the test environment.

4. Capacity Testing:
- Capacity testing is conducted in conjunction with capacity planning, which you use to plan for future growth, such as an increased user base or increased volume of data. For example, to accommodate future loads, you need to know how many additional resources (such as processor capacity, memory usage, disk capacity or network bandwidth) are necessary to support future usage levels.
- Capacity testing helps you to identify a scaling strategy in order to determine whether you should scale up or scale out.
Purpose:
- To determine how many users and/or transactions a given system can support while still meeting its performance goals.
Benefits:
- Provides information about how workload can be handled to meet business requirements.
- Provides actual data that capacity planners can use to validate or enhance their models and predictions.
- Enables you to conduct various tests to compare capacity planning models or predictions.
- Determines the current usage and capacity of the existing system to aid in capacity planning.
- Provides the usage and capacity trends of the existing system to aid in capacity planning.
Challenges:
- Capacity model validation tests are complex to create.
- Not all aspects of a capacity-planning model can be validated through testing at a time when those aspects would provide the most value.

Core Activities of Performance Testing

Performance testing is typically done to identify bottlenecks in a system, establish a baseline for future testing, support a performance tuning effort, determine compliance with performance goals and requirements and collect other performance related data to help stakeholders make informed decisions related to the overall quality of the application being tested. The results from performance testing and analysis can help you to estimate the hardware configuration required to support the application when you “Go Live” to production operation.


Activity 1. Identify the Test Environment:

Identify the physical test environment and the production environment as well as the tools and resources available to the test team. The physical environment includes hardware, software, and network configurations. Having a thorough understanding of the entire test environment at the outset enables more efficient test design and planning and helps you identify testing challenges early in the project. In some situations, this process must be revisited periodically throughout the project’s life cycle.
The degree of similarity between the hardware, software, and network configuration of the application under test conditions and under actual production conditions is often a significant consideration when deciding what performance tests to conduct and what size loads to test. It is important to remember that it is not only the physical and software environments that impact performance testing, but also the objectives of the test itself. Often, performance tests are applied against a proposed new hardware infrastructure to validate the supposition that the new hardware will address existing performance concerns.


Consider the following key points when characterizing the test environment:
- Although few performance testers install, configure, and administer the application being tested, it is beneficial for the testers to have access to the servers and software, or to the administrators who do.
- Identify the amount and type of data the application must be seeded with to emulate real-world conditions.
- Identify critical system components. Do any of the system components have known performance concerns? Are there any integration points that are beyond your control for testing?
- Get to know the IT staff. You will likely need their support to perform tasks such as monitoring overall network traffic and configuring your load-generation tool to simulate a realistic number of Internet Protocol (IP) addresses.
- Check the configuration of load balancers.
- Validate name resolution with DNS. This may account for significant latency when opening database connections.
- Validate that firewalls, DNS, routing, and so on treat the generated load similarly to a load that would typically be encountered in a production environment.
- It is often appropriate to have systems administrators set up resource-monitoring software, diagnostic tools and other utilities in the test environment.
Input:
1. Logical and physical production architecture
2. Logical and physical test architecture
3. Available tools
Output:
1. Comparison of test and production environments
2. Environment-related concerns
3. Determination of whether additional tools are required

Activity 2. Identify Performance Acceptance Criteria:

Identify the response time, throughput and resource utilization goals and constraints. In general, response time is a user concern, throughput is a business concern and resource utilization is a system concern. Additionally, identify project success criteria that may not be captured by those goals and constraints; for example, using performance tests to evaluate what combination of configuration settings will result in the most desirable performance characteristics.
It generally makes sense to start identifying, or at least estimating, the desired performance characteristics of the application early in the development life cycle.
This can be accomplished most simply by noting the performance characteristics that your users and stakeholders equate with good performance. The notes can be quantified at a later time.
Classes of characteristics that frequently correlate to a user’s or stakeholder’s satisfaction typically include:
- Response time: For example, the product catalog must be displayed in less than three seconds.
- Throughput: For example, the system must support 25 book orders per second.
- Resource utilization: For example, processor utilization is not more than 75 percent. Other important resources that need to be considered for setting objectives are memory, disk input/output (I/O) and network I/O.
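To make these examples concrete, the following is a minimal sketch of how such acceptance criteria might be encoded and checked automatically against results collected from a test run. All threshold and measured values are hypothetical and only echo the examples above.

// Minimal sketch (hypothetical values) of checking measured results against acceptance criteria.
public class PerformanceCriteriaCheck {
    // Example thresholds taken from the text above.
    static final double MAX_RESPONSE_TIME_SECONDS = 3.0;  // product catalog display time
    static final double MIN_ORDERS_PER_SECOND = 25.0;     // throughput goal
    static final double MAX_CPU_UTILIZATION = 0.75;       // resource utilization goal

    public static void main(String[] args) {
        // Hypothetical measurements collected by a load-testing tool.
        double measuredResponseTime = 2.4;  // seconds
        double measuredThroughput = 27.0;   // orders per second
        double measuredCpu = 0.71;          // fraction of processor used

        boolean pass = measuredResponseTime <= MAX_RESPONSE_TIME_SECONDS
                && measuredThroughput >= MIN_ORDERS_PER_SECOND
                && measuredCpu <= MAX_CPU_UTILIZATION;

        System.out.println(pass ? "Acceptance criteria met" : "Acceptance criteria not met");
    }
}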

Consider the following key points when identifying performance criteria:
- Business requirements
- User expectations
- Contractual obligations
- Regulatory compliance criteria and industry standards
- Service Level Agreements (SLAs)
- Resource utilization targets
- Various and diverse, realistic workload models
- The entire range of anticipated load conditions
- Conditions of system stress
- Entire scenarios and component activities
- Key performance indicators
- Previous releases of the application
- Competitor’s applications
- Optimization objectives
- Safety factors, room for growth and scalability
- Schedule, staffing, budget, resources and other priorities

Input:
1. Client expectations
2. Risks to be mitigated
3. Business requirements
4. Contractual obligations
Output:
1. Performance-testing success criteria
2. Performance goals and requirements
3. Key areas of investigation
4. Key performance indicators
5. Key business indicators

Activity 3. Plan and Design Tests:

Identify key scenarios, determine variability among representative users and how to simulate that variability, define test data and establish metrics to be collected.
Consolidate this information into one or more models of system usage to be implemented, executed and analyzed.
Planning and designing performance tests involves identifying key usage scenarios, determining appropriate variability across users, identifying and generating test data and specifying the metrics to be collected. Ultimately, these items will provide the foundation for workloads and workload profiles.
When designing and planning tests with the intention of characterizing production performance, your goal should be to create real-world simulations in order to provide reliable data that will enable your organization to make informed business decisions. Real-world test designs will significantly increase the relevancy and usefulness of results data.
Key usage scenarios for the application typically surface during the process of identifying the desired performance characteristics of the application. If this is not the case for your test project, you will need to explicitly determine the usage scenarios that are the most valuable to script.

Consider the following when identifying key usage scenarios:
- Contractually obligated usage scenario(s)
- Usage scenarios implied or mandated by performance-testing goals and objectives
- Most common usage scenario(s)
- Business-critical usage scenario(s)
- Performance-intensive usage scenario(s)
- Usage scenarios of technical concern
- Usage scenarios of stakeholder concern
- High-visibility usage scenarios
When identified, captured and reported correctly, metrics provide information about how your application’s performance compares to your desired performance characteristics. In addition, metrics can help you identify problem areas and bottlenecks within your application. It is useful to identify the metrics related to the performance acceptance criteria during test design so that the method of collecting those metrics can be integrated into the tests when implementing the test design. When identifying metrics, use either specific desired characteristics or indicators that are directly or indirectly related to those characteristics.
Consider the following key points when planning and designing tests:
- Realistic test designs are sensitive to dependencies outside the control of the system, such as humans, network activity and other systems interacting with the application.
- Realistic test designs are based on what you expect to find in real-world use, not theories or projections.
- Realistic test designs produce more credible results and thus enhance the value of performance testing.
- Component-level performance tests are integral parts of realistic testing.
- Realistic test designs can be more costly and time-consuming to implement, but they provide far more accuracy for the business and stakeholders.
- Extrapolating performance results from unrealistic tests can create damaging inaccuracies as the system scope increases and frequently lead to poor decisions.
- Involve the developers and administrators in the process of determining which metrics are likely to add value and which method best integrates the capturing of those metrics into the test.
- Beware of allowing your tools to influence your test design. Better tests almost always result from designing tests on the assumption that they can be executed and then adapting the test or the tool when that assumption is proven false, rather than by not designing particular tests based on the assumption that you do not have access to a tool to execute the test.
Input:
1. Available application features or components
2. Application usage scenarios
3. Unit tests
4. Performance acceptance criteria
Output:
1. Conceptual strategy
2. Test execution prerequisites
3. Tools and resources required
4. Application usage models to be simulated
5. Test data required to implement tests
6. Tests ready to be implemented

Activity 4. Configure the Test Environment:

Prepare the test environment, tools and resources necessary to execute each strategy as features and components become available for test. Ensure that the test environment is instrumented for resource monitoring as necessary.
Preparing the test environment, tools and resources for test design implementation and test execution prior to features and components becoming available for test can significantly increase the amount of testing that can be accomplished during the time those features and components are available.
Load-generation and application-monitoring tools are almost never as easy to get up and running as one expects. Whether issues arise from setting up isolated network environments, procuring hardware, coordinating a dedicated bank of IP addresses for IP spoofing or version compatibility between monitoring software and server operating systems, issues always seem to arise from somewhere. Start early, to ensure that issues are resolved before you begin testing.
Additionally, plan to periodically reconfigure, update, add to or otherwise enhance your load-generation environment and associated tools throughout the project. Even if the application under test stays the same and the load-generation tool is working properly, it is likely that the metrics you want to collect will change. This frequently implies some degree of change to or addition of, monitoring tools.

Consider the following key points when configuring the test environment:
- Determine how much load you can generate before the load generators reach a bottleneck. Typically, load generators encounter bottlenecks first in memory and then in the processor (a small self-monitoring sketch follows this list).
- Although it may seem like a commonsense practice, it is important to verify that system clocks are synchronized on all of the machines from which resource data will be collected. Doing so can save you significant time and prevent you from having to dispose of the data entirely and repeat the tests after synchronizing the system clocks.
- Validate the accuracy of load test execution against hardware components such as switches and network cards. For example, ensure the correct full-duplex mode operation and correct emulation of user latency and bandwidth.
- Validate the accuracy of load test execution related to server clusters in load-balanced configuration. Consider using load-testing techniques to avoid affinity of clients to servers due to their using the same IP address. Most load-generation tools offer the ability to simulate usage of different IP addresses across load-test generators.
- Monitor resource utilization (CPU, network, memory, disk and transactions per time) across servers in the load-balanced configuration during a load test to validate that the load is distributed.
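The following is a minimal, self-contained sketch of the kind of self-monitoring a load generator can do to detect when it, rather than the system under test, is becoming the bottleneck. The class name and sampling interval are illustrative; most load-generation tools expose equivalent counters of their own.

import java.lang.management.ManagementFactory;

public class LoadGeneratorMonitor {
    public static void main(String[] args) throws InterruptedException {
        // Process-level CPU load is available through the com.sun.management extension of the OS MXBean.
        com.sun.management.OperatingSystemMXBean os =
                (com.sun.management.OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
        Runtime runtime = Runtime.getRuntime();

        while (true) {
            long usedHeapMb = (runtime.totalMemory() - runtime.freeMemory()) / (1024 * 1024);
            double processCpu = os.getProcessCpuLoad(); // 0.0 to 1.0; may be negative until the first sample
            System.out.printf("Load generator: %d MB heap in use, %.0f%% CPU%n", usedHeapMb, processCpu * 100);
            Thread.sleep(5000); // sample every 5 seconds while the load test runs
        }
    }
}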
Input:
1. Conceptual strategy
2. Available tools
3. Designed tests
Output:
- Configured load-generation and resource-monitoring tools
- Environment ready for performance testing.

Activity 5. Implement the Test Design:

Develop the performance tests in accordance with the test design. The details of creating an executable performance test are extremely tool-specific. Regardless of the tool that you are using, creating a performance test typically involves scripting a single usage scenario and then enhancing that scenario and combining it with other scenarios to ultimately represent a complete workload model. 
Load-generation tools inevitably lag behind evolving technologies and practices. Tool creators can only build in support for the most prominent technologies and, even then, these have to become prominent before the support can be built. This often means that the biggest challenge in a performance-testing project is getting your first relatively realistic test implemented, with users simulated in such a way that the application under test cannot legitimately tell the difference between the simulated users and real users. Plan for this, and do not be surprised when it takes significantly longer than expected to get it all working smoothly.

Consider the following key points when implementing the test design:
- Ensure that test data feeds are implemented correctly. Test data feeds are data repositories in the form of databases, text files, in-memory variables or spreadsheets that are used to simulate parameter replacement during a load test. For example, even if the application database test repository contains the full production set, your load test might only need to simulate a subset of products being bought by users due to a scenario involving, for example, a new product or marketing campaign. Test data feeds may be a subset of production data repositories (a minimal sketch follows this list).
- Ensure that application data feeds are implemented correctly in the database and other application components. Application data feeds are data repositories, such as product or order databases, that are consumed by the application being tested. The key user scenarios run by the load-test scripts may consume a subset of this data.
- Ensure that validation of transactions is implemented correctly. Many transactions are reported as successful by the Web server but fail to complete correctly. Examples of validation include checking that database entries are inserted with the correct number of rows, that product information is returned, and that the correct content is returned in the HTML sent to the clients.
- Ensure hidden fields or other special data are handled correctly. This refers to data returned by the Web server that needs to be resubmitted in a subsequent request, such as a session ID, or a product ID that must be incremented before being passed to the next request.
- Validate the monitoring of key performance indicators (KPIs).
- Add pertinent indicators to facilitate articulating business performance.
- If the request accepts parameters, ensure that the parameter data is populated properly with variables and unique data to avoid any server-side caching.
- If the tool does not do so automatically, consider adding a wrapper around the requests in the test script in order to measure the request response time.
- It is generally worth taking the time to make the script match your designed test, rather than changing the designed test to save scripting time.
- Significant value can be gained from evaluating the output data collected from executed tests against expectations in order to test or validate script development.
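The sketch below is a minimal, tool-agnostic illustration of several of the points above: feeding parameter data from a test data file so that each request uses unique values, wrapping each request to measure its response time, and validating the returned content rather than trusting the status code alone. The file name, URL and validation rule are hypothetical; in practice your load-generation tool provides these facilities.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class ParameterizedRequestSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Test data feed: one product ID per line, so each request uses unique data
        // and server-side caching of a single value is avoided.
        List<String> productIds = Files.readAllLines(Path.of("product-ids.txt"));

        for (String productId : productIds) {
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("http://example.test/catalog?productId=" + productId)).GET().build();

            long start = System.nanoTime(); // simple wrapper to measure the request response time
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;

            // Validate the transaction: the status code alone does not prove the request
            // completed correctly, so also check that the expected content came back.
            boolean valid = response.statusCode() == 200 && response.body().contains(productId);
            System.out.println(productId + ": " + elapsedMs + " ms, valid=" + valid);
        }
    }
}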
Input:
- Conceptual strategy
- Available tools/environment
- Available application features or components
- Designed tests
Output:
- Validated, executable tests
- Validated resource monitoring
- Validated data collection

Activity 6. Execute the Test:

Run and monitor your tests. Validate the tests, test data and results collection. Execute validated tests for analysis while monitoring the test and the test environment. Executing tests is what most people envision when they think about performance testing. It makes sense that the process, flow and technical details of test execution are extremely dependent on your tools, environment and project context. Even so, there are some fairly universal tasks and considerations that need to be kept in mind when executing tests. Much of the performance testing–related training available today treats test execution as little more than starting a test and monitoring it to ensure that the test appears to be running as expected. In reality, this activity is significantly more complex than just clicking a button and monitoring machines.

Consider the following key points when executing the test:
- Validate test executions for data updates, such as orders in the database that have been completed.
- Validate that the load-test script is using the correct data values, such as product and order identifiers, in order to realistically simulate the business scenario.
- Whenever possible, limit test execution cycles to one to two days each. Review and reprioritize after each cycle.
- If at all possible, execute every test three times. Note that the results of first-time tests can be affected by loading Dynamic-Link Libraries (DLLs), populating server-side caches or initializing scripts and other resources required by the code under test. If the results of the second and third iterations are not highly similar, execute the test again and try to determine what factors account for the difference (a small comparison sketch follows this list).
- Observe your test during execution and pay close attention to any behavior you feel is unusual. Your instincts are usually right, or at least valuable indicators.
- No matter how far in advance a test is scheduled, give the team 30-minute and 5-minute warnings before launching the test (or starting the day’s testing) if you are using a shared test environment. Additionally, inform the team whenever you are not going to be executing for more than one hour in succession so that you do not impede the completion of their tasks.
- Do not process data, write reports or draw diagrams on your load-generating machine while generating a load, because this can skew the results of your test.
- Turn off any active virus-scanning on load-generating machines during testing to minimize the likelihood of unintentionally skewing the results of your test.
- While load is being generated, access the system manually from a machine outside of the load-generation environment during test execution so that you can compare your observations with the results data at a later time.
- Remember to simulate ramp-up and cool-down periods appropriately.
- Do not throw away the first iteration because of application script compilation, Web server cache building or other similar reasons. Instead, measure this iteration separately so that you will know what the first user after a system-wide reboot can expect.
- Test execution is never really finished, but eventually you will reach a point of diminishing returns on a particular test. When you stop obtaining valuable information, move on to other tests.
- If you feel you are not making progress in understanding an observed issue, it may be more efficient to eliminate one or more variables or potential causes and then run the test again.
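As a small illustration of the "execute every test three times" guideline above, the following sketch compares two iterations and flags them when they are not highly similar. The averages and the 10 percent threshold are hypothetical and only demonstrate the comparison.

// Hypothetical response-time summaries (in milliseconds) from repeated executions of the same test.
public class IterationComparison {
    public static void main(String[] args) {
        double secondRunAvgMs = 812.0;
        double thirdRunAvgMs = 845.0;

        double relativeDifference =
                Math.abs(secondRunAvgMs - thirdRunAvgMs) / Math.min(secondRunAvgMs, thirdRunAvgMs);

        // Treat runs differing by more than 10 percent as "not highly similar"
        // and therefore worth re-executing and investigating (threshold is illustrative).
        if (relativeDifference > 0.10) {
            System.out.println("Results differ by " + Math.round(relativeDifference * 100)
                    + "% - execute the test again and look for the cause.");
        } else {
            System.out.println("Second and third iterations are consistent.");
        }
    }
}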
Input:
- Task execution plan
- Available tools/environment
- Available application features or components
- Validated, executable tests
Output:
- Test execution results

Activity 7. Analyze Results, Report and Retest:

Consolidate and share results data. Analyze the data both individually and as a cross-functional team. Re-prioritize the remaining tests and re-execute them as needed. When all of the metric values are within accepted limits, none of the set thresholds have been violated and all of the desired information has been collected, you have finished testing that particular scenario on that particular configuration.
Managers and stakeholders need more than just the results from various tests — they need conclusions, as well as consolidated data that supports those conclusions.
Technical team members also need more than just results — they need analysis, comparisons and details behind how the results were obtained. Team members of all types
get value from performance results being shared more frequently. Before results can be reported, the data must be analyzed.

Consider the following important points when analyzing the data returned by your performance test:
- Analyze the data both individually and as part of a collaborative, cross-functional technical team.
- Analyze the captured data and compare the results against the metric’s acceptable or expected level to determine whether the performance of the application being tested shows a trend toward or away from the performance objectives.
- If the test fails, a diagnosis and tuning activity are generally warranted.
- If you fix any bottlenecks, repeat the test to validate the fix.
- Performance-testing results will often enable the team to analyze components at a deep level and correlate the information back to the real world with proper test design and usage analysis.
- Performance test results should enable informed architecture and business decisions.
- Frequently, the analysis will reveal that, in order to completely understand the results of a particular test, additional metrics will need to be captured during subsequent test-execution cycles.
- Immediately share test results and make raw data available to your entire team.
- Talk to the consumers of the data to validate that the test achieved the desired results and that the data means what you think it means.
- Modify the test to get new, better or different information if the results do not represent what the test was defined to determine.
- Use current results to set priorities for the next test.
- Collecting metrics frequently produces very large volumes of data. Although it is tempting to reduce the amount of data, always exercise caution when using data reduction techniques because valuable data can be lost.
Input:
- Task execution results
- Performance acceptance criteria
- Risks, concerns and issues
Output:
- Results analysis
- Recommendations
- Reports


Thanks and Regards,
Prashant Vadher | QC Engineer

Introduction to JMeter: Load Testing using JMeter


JMeter is a load-testing tool used to test how a site behaves under load from multiple user requests. JMeter version 2.4 does not support recording HTTPS requests; if you are using JMeter 2.4, you can use the Badboy recording tool to capture HTTPS requests. The latest version of JMeter available at the time of writing, 2.8, supports HTTPS requests.

Operating System:

 
JMeter is a 100% Java application and should run correctly on any system that has a compliant Java implementation.

JMeter has been tested and works under:
• Unix (Solaris, Linux, Fedora etc)
• Windows (98, NT, XP, Vista, 7 etc)
• OpenVMS Alpha 7.3+

Install JMeter:

1. A JRE or JDK first needs to be installed, with the JAVA_HOME environment variable set.
2. Download the latest version of JMeter from http://jmeter.apache.org/download_jmeter.cgi
3. Extract the .zip or .tgz file into any directory.
4. Go to that path and open the extracted JMeter folder.
5. Run ./bin/jmeter from the command prompt (for Unix) or run bin/jmeter.bat (for Windows).
6. The JMeter screen will open with 2 main components:
    - Test Plan
    - Workbench
 

Recording in JMeter:

1. JMeter:
- Right click on Workbench component and go to Add >> Non-Test Elements >> HTTP Proxy Server.
- Right click on TestPlan component and go to Add >> Threads (Users) >> Thread Group.
- Right click on Thread Group component and go to Add >> Logic Controller >> Recording Controller.



2. Firefox settings:
- JMeter recording is supported in the Firefox browser, so open Firefox and go to Tools >> Options >> Network >> Settings >> Manual proxy configuration.

- Add “localhost” or “127.0.0.1” in the HTTP Proxy field and "8080" in the Port field. If that port is already in use, another port number can be used instead.


3. JMeter:
- In the HTTP Proxy Server screen, enter the same port number in the Port field that was entered in Firefox.
- From the Target Controller dropdown, select the option “Thread Group > Recording Controller”.
- In some cases, when we do not want to include page-load time for images, JavaScript files, etc., we can exclude them: in the “URL Patterns to Exclude” section, click the Add button and add exclusion patterns such as .*\.jpg (see the small regex check after this list).
- Now, from the HTTP Proxy Server screen, click the Start button to start recording.
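The exclusion patterns behave like regular expressions matched against the full request URL, which is why the leading .* is needed. The quick Java check below, using hypothetical URLs, illustrates which requests such a pattern would exclude.

import java.util.regex.Pattern;

public class ExcludePatternCheck {
    public static void main(String[] args) {
        // The pattern must cover the whole URL, so ".*" absorbs the protocol, host and path.
        Pattern exclude = Pattern.compile(".*\\.jpg");

        System.out.println(exclude.matcher("http://example.test/images/logo.jpg").matches()); // true  -> excluded
        System.out.println(exclude.matcher("http://example.test/index.html").matches());      // false -> recorded
    }
}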







4. Firefox: Open URL which needs to be recorded.

5. JMeter: Check that the recorded requests appear under the “Thread Group >> Recording Controller” component. Click the Stop button when you want to stop the recording.

Add Listeners in JMeter to check results of the recorded requests:

1. Right-click the Thread Group and go to Add >> Listener >> Aggregate Report. We can also add a Summary Report to check the details.
2. To verify whether requests are running correctly, add a “View Results Tree” listener. Errors can be checked by clicking each request in the View Results Tree section.
 

Use of Thread Group Properties to apply Load to the recorded requests:
    
1. Click on Thread group.
2. In the Thread Group screen, check the “Thread Properties” section.
3. We can add the number of users for the load test in the “Number of Threads (Users)” field.
4. To increase the load gradually, add the total time over which to apply the full load in the “Ramp-Up Period (in sec)” field. The Ramp-Up Period tells JMeter how long to take to "ramp up" to the full number of threads chosen. If the Number of Threads (Users) is 10 and the Ramp-Up Period is 100 seconds, then JMeter will take 100 seconds to get all 10 threads up and running; each thread will start 10 (100/10) seconds after the previous thread was begun. If there are 30 threads and a ramp-up period of 120 seconds, then each successive thread will be delayed by 4 seconds.
5. To run the script multiple times, we can add value in Loop count field.

Example 1: 
Number of Threads (Users) = 100
Ramp up period(in seconds) = 600
Loop Count = 1

A new thread (user) starts every 6 seconds (600/100), and each thread runs the recorded scenario once.

Example 2: 
Number of threads(users) = 100
Ramp up period(in seconds) = 600
Loop Count = 3

A new thread still starts every 6 seconds (600/100), but each thread runs the recorded scenario 3 times, so roughly three times as many requests reach the server in total.
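The arithmetic in these examples can be made explicit with a tiny helper; the class and method names below are illustrative only.

// Illustrates the ramp-up arithmetic described above, using the values from the examples.
public class RampUpCalculation {
    static double threadStartIntervalSeconds(int numberOfThreads, int rampUpSeconds) {
        return (double) rampUpSeconds / numberOfThreads;
    }

    public static void main(String[] args) {
        System.out.println(threadStartIntervalSeconds(100, 600)); // 6.0 - a new user starts every 6 seconds
        System.out.println(threadStartIntervalSeconds(30, 120));  // 4.0 - matches the 30-thread example above
    }
}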

Run the script:

From the menu, go to Run >> Start and the script will start executing.

Verify result in Reports:

As mentioned above, we can check the results of the script in the Aggregate Report, Summary Report, View Results Tree, etc., and we can also export results to a file by using “Save Response to a File”. The file is saved in JMeter's /bin folder under the file name you specify (for example, 1.html).











Thanks and Regards,

Prashant Vadher | QC Engineer

Distributed Load Testing in JMeter

Apache JMeter is an open-source desktop application designed to simulate a heavy load on a server, network or object in order to test its strength or to analyze overall performance under different load types. The reason for using more than one system for load testing is that a single system is limited in the number of requests, threads and users it can generate.

In JMeter this is achieved by creating a master-slave configuration. Distributed load testing is the process in which multiple systems are used to simulate the load of a large number of users. In a distributed setup there is one controller, called the master, and a number of slaves controlled by the master.

Distributed load testing can be tricky and may produce incorrect results if not configured correctly. Test results depend on the following factors:
  1. Network bandwidth.
  2. Configuration of the master and slave systems, including memory and processor speed.
  3. Configuration of the load-testing tool, such as the number of users running in parallel.
For distributed load testing we need to create a master-slave configuration using the following steps, where the master controls all the slaves and collects the test results.

Step 1: Points to remember
  1. Make sure that all the systems are running exactly the same version of JMeter. Mixing versions may not work correctly.
  2. As far as possible, also use the same version of Java on all systems. Using different versions of Java may work, but is best avoided.
  3. To make the setup work, the firewall needs to be turned off.
  4. All the systems need to be in the same subnet.
  5. If the test uses any data files, note that these are not sent across by the client, so make sure they are available in the appropriate directory on each server. If necessary you can define different values for properties by editing the user.properties or system.properties files on each server. These properties will be picked up when the server is started and may be used in the test plan to affect its behavior (e.g., connecting to a different remote server). Alternatively, use different content in any data files used by the test (e.g., if each server must use unique IDs, divide these between the data files).
Step 2: Configure Slave Systems
    1. Go to the JMeter/bin folder and open jmeter-server.bat in a text editor.
    2. Find “:setCP” and edit “START rmiregistry” to use the full path. Example: “START C:\j2sdk1.4.2\jre\bin\rmiregistry”
    3. Now execute jmeter-server.bat (jmeter-server on Unix).
    4. On Windows, you should see a DOS window appear with “jre\[version]\bin\rmiregistry.exe”. If this does not happen, it means either the environment settings are not right or there are multiple JREs installed on the system. [version] is the JRE version installed on the system.
Step 3: Configure Master System
    1. The master system will act as the console.
    2. Open Windows Explorer and go to the jmeter/bin directory.
    3. Open jmeter.properties in Notepad or WordPad.
    4. Find the property named "remote_hosts" and edit the line “remote_hosts=127.0.0.1”, adding the IP addresses of the slave systems where jmeter-server is running.
       For example, if the JMeter server is running on the slave systems 192.168.0.1, 192.168.0.2, 192.168.0.3, 192.168.0.4 and 192.168.0.5, the entry would look like this:
       remote_hosts=192.168.0.1,192.168.0.2,192.168.0.3,192.168.0.4,192.168.0.5

    5. Start JMeter on the master by executing the jmeter.bat file.
    6. Open the test plan you want to use.
    7. You will notice that the Run menu contains two new sub-menus: "Remote Start" and "Remote Stop". These menus list the slave systems that you set in the properties file.
    8. To execute the script on all slave systems, select the “Remote Start All” option from the Run menu.













    Thanks & Regards,

    Prashant Vadher | QC Engineer

    Crowdsource Testing

    What is Crowdsource Testing?

    Crowdsource testing is an emerging trend in software testing that draws on the benefits, effectiveness and efficiency of crowdsourcing and cloud platforms. It differs from traditional testing methods in that the testing is carried out by many different testers in different places, rather than by hired consultants and professionals. The software is tested on a diverse range of realistic platforms, which makes the testing more reliable, cost-effective and fast, and helps deliver a less buggy product.
    This method of testing is considered when the software is more user-centric, i.e., software whose success is determined by its user feedback and which has a diverse user base. It is frequently used for games and mobile applications, when experts who may be difficult to find in one place are required for specific testing, or when the company lacks the resources or time to carry out the testing internally. Crowdsourcing your software testing consists of delegating to a number of Internet users the task of testing your web or software project while it is in development, so that defects are found before release.
    Crowdsource testing companies provide the platform for the testing cycles. They then crowdsource the product to a community of testers, who register voluntarily to test the software.
    Testers are paid per bug, depending on the type of bug and its market price. The crowdsource testing team usually works in addition to the organization's own testing team, not as a replacement for it.

    Crowdsource testing vs Outsource testing 

    The main difference is that in crowdsource testing, the testers may belong to different workplaces, whereas in outsource testing the testers are from the single company or workplace to which the testing is outsourced. In crowdsource testing, people test the software voluntarily and may not be paid (if no bugs are discovered); outsource testers always get paid for their work.


    Advantages of Crowdsource testing:
    • Real insights from the real world, not just made-up test case results.
    • Rapid feedback.
    • With crowdsourcing, testers automatically test your interactive project against a variety of platforms.
    • It is cost-effective, as the product company pays only for the valid bugs reported.
    • The pool of testers is diverse, with variations in languages as well as locales. This helps in testing applications that depend on localization.
    • Testing is done by hundreds of people at the same time. Because a large number of testers test the software simultaneously, testing can be completed quickly, reducing the time to market.
    • Leads to better productivity. 
    Disadvantages of Crowdsource testing: 
    • Security and Confidentiality: When offering a project to a crowd for testing, it is exposed to a large number of Internet users worldwide.
    • If the project is not yet released, a large number of users are able to access it fully and discover its secrets.
    • Inconsistent quality and increased workload: The users that compose your crowd of testers are from different backgrounds, speak different languages and possess different levels of experience.
    • There may be a number of poorly written, duplicate or erroneous bug reports.









    Thanks and Regards,

    Prashant Vadher | QC Engineer

    How to test Flex application using Selenium RC?


    Testing Flex applications is difficult because the application's logic and behavior are encapsulated away from the browser. Selenium RC uses JavaScript to communicate with the browser. Flex's ExternalInterface provides a mechanism by which JavaScript can call an ActionScript function in a SWF file embedded in an HTML page. Therefore, we use FlexSelenium, a Selenium RC client extension that uses JavaScript as the medium between Selenium RC and the Flex application.
    You can also test a Flex application by using the Flex Monkium plugin in Selenium IDE. You need to compile your client application with sfapi.swc, automation_monkey.swc and the Flex libraries; this becomes your application under test. You can then record your test and convert it into any format you are comfortable with.
    Recently I got the chance to test a Flex application using Selenium. To provide Flex support to Selenium, you just have to add a few JAR files. For this, however, you need to rebuild your application with the library file (SeleniumFlexAPI.swc) provided by the Selenium-Flex API.

    Below are the steps to test flex application using Selenium RC.
    1. Rebuild your flex application with SeleniumFlexAPI.swc
      Download the “Selenium-Flex API” zip file and extract it. In Flex Builder, add SeleniumFlexAPI.swc to the /src folder, then build your application with -include-libraries SeleniumFlexAPI.swc as the additional compiler argument.


    2. Add JAR files in the project
      Download the “Flash Selenium Java client extension” and “Flex UI Selenium” JAR files. Now right-click the project name in Eclipse and select “Build Path >> Configure Build Path >> Libraries tab”. Add these JAR files using the “Add External JARs” button.

    3. Write the Selenium Script
      Before we write the script in Selenium RC we need to identify the elements of the Flex application. For this, use the FlashFirebug Firefox add-on (an extension of the Firebug add-on) to identify the elements.
       

      Example:

      selenium_flex.swf:




      Selenium Script:
      /*
       @author Software Testing Diary
      */


      package practice;
      import org.junit.After;
      import org.junit.Before;
      import org.junit.Test;
      import static org.junit.Assert.*;
      import com.thoughtworks.selenium.DefaultSelenium;
      import com.thoughtworks.selenium.FlexUISelenium;
      import com.thoughtworks.selenium.Selenium;

      public class flexSelenium {
          private final static String OPEN_URL = "http://www.softwaretestingdiary.com/";
          private final static String OPEN_PAGE = "2012/07/how-to-test-flex-application-using.html";
          private Selenium selenium;
          private FlexUISelenium flexUITester;
         
          @Before
          public void setUp() throws Exception
          {
              selenium = new DefaultSelenium("localhost", 4444, "*chrome",OPEN_URL);
              selenium.start();
              selenium.open(OPEN_PAGE);
              flexUITester = new FlexUISelenium(selenium, "selenium_flex.swf");
          }
          @Test
          public void test()
          {
              flexUITester.type("Software Testing Diary").at("myInput");
              flexUITester.click("myButton");
              assertEquals("Software Testing Diary", flexUITester.readFrom("myText"));
          }
          @After
          public void tearDown() throws Exception
          {
          selenium.stop();
          }
      }




      Thanks and Regards,
      Prashant Vadher | QC Engineer 
       



       

    What is Testopia and How to set access control in Testopia at Test Plan Level?


    What is Testopia?

    Testopia is a test case management extension for Bugzilla. This allows a single user experience and point of product management for both defect tracking and test case management. It is designed to be a generic tool for tracking test cases, allowing for testing organizations to integrate bug reporting with their test case run results.
    Bugzilla is one of the most popular open source issue tracking systems available. Testopia integrates with Bugzilla products, components, versions, and milestones to allow a single management interface for high level objects. Testopia allows users to attach bugs to test case run results for centralized management of the software engineering process.
    Testopia allows users to login to one tool and uses Bugzilla group permissions to limit access to modifying test objects.



    Test Plans:
    At the top of the Testopia hierarchy are test plans, so you need to create a test plan before you can do anything else in Testopia. Test plans are associated with a single product in Bugzilla. You can create multiple test plans for each product. The test plan serves as the storage point for all related test cases and test runs, and it acts as the dashboard for your testing. It also determines who has access to update test cases.



    Test Cases:
    Test cases are semi-independent in Testopia. Each test case can be associated with multiple test plans. Test cases are associated with one or more test plans and with zero or more test runs. Test cases can be divided into categories. You can define as many categories for your product as you like from the test plan page. Each product in Bugzilla is divided into components and you can apply multiple components to each test case, however each test case can only belong to one category at a time.



    Test Runs:
    Once you have defined a set of test cases, you are ready to run through those tests in a test run. Each run is associated with a single test plan and Environment. It contains any number of test cases from that plan. It contains a list of test cases to be examined and stores the results in the case-runs table.



    Test Case-run:
    The union of a test case and a test run. Each time a test case is included in a new test run, an entry is made for it in the test case-runs table. This captures whether the test case passed or failed in the given run. Each case-run should be associated with only one build for a given status.
    When you create a test run, records for each test case in that run are created. By default these take on the build and environment of the test run, however, it is possible to change these attributes on a particular case-run, essentially creating a new case-run for each combination.




    How to set access control in Testopia at Test Plan Level?

    When you first install Testopia, it will create a Bugzilla group called “Testers”. Members of this group have access to view and update all test plans and their associated objects, such as cases and runs. Membership in this group is required in order to create new test plans, clone test plans, and administer environments. If the “testopia-allow-group-member-deletes” parameter is on, members of this group will also have rights to delete any object in Testopia. Membership in this group is checked first and supersedes the access control lists for individual plans.

    Testopia checks which test plans your user has access to; remember that access control in Testopia is at the test plan level (the Permissions tab in a test plan). Testopia also checks that all the test plans the user has access to are associated with products that the user has access to. If not, it complains with a message such as "You are not authorized to edit product X, contact administrator."

    In addition to the Testers group, each test plan maintains its own access control list, which can be used to allow or deny access to test plans based on email domain or explicit inclusion. For a user that is not in the Testers group to access a test plan or any associated cases, runs, or case-runs, he or she must be included on the list, either by matching a regular expression or by explicit inclusion. To edit the access control list for a plan, navigate to the test plan and click the Permissions tab.



    User Regular Expression:
    Users with login names (email addresses) matching a supplied regular expression can be given rights to a particular test plan. The regular expression should be crafted with care to prevent unintentional access to the test plan by outsiders.

    For example :
    To grant access to your test plan by all users at SoftwareTestingDiary.com you would supply the following regular expression:
    ^.*@SoftwareTestingDiary\.com$
    To provide access to all users at gmail.com and SoftwareTestingDiary.com you would use:
    ^.*@(gmail\.com|SoftwareTestingDiary\.com)$
    To provide public access (all users) you would use:
    .*
    An empty regular expression does not match anything, so leaving this field blank means the test plan will rely solely on explicit membership. Once you have supplied the regular expression, you must select the access level.
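    A quick way to sanity-check such an expression before saving it is to run it through a regular-expression engine. The sketch below uses Java's regex classes and hypothetical login names; Testopia's own matching may differ in details such as case sensitivity.

    import java.util.regex.Pattern;

    public class PlanAccessRegexCheck {
        public static void main(String[] args) {
            // The first example expression from the text above.
            Pattern domainOnly = Pattern.compile("^.*@SoftwareTestingDiary\\.com$");

            System.out.println(domainOnly.matcher("tester@SoftwareTestingDiary.com").matches()); // true
            System.out.println(domainOnly.matcher("outsider@example.com").matches());            // false
        }
    }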

    Explicit Inclusion:
    If you do not wish to grant access to a whole group at once, you can add individual users by entering their Bugzilla login id in the field provided and clicking the Add User button. This allows the most fine grained control as to who can do what within your test plan. However, if you add a user that matches the regular expression they will have the greater of the two rights.

    Access Rights
    Users on the test plan access control lists can be granted rights to read, write, delete, and admin test plans and their associated objects.

    Read: Allows viewing rights to the plan and all test cases, test runs, and test case-runs associated with it. Test cases linked to more than one plan will be visible to users in both plans.
    Write: Implies Read. Allows rights to modify the plan and associated cases, runs, and case-runs. Test cases linked to more than one plan will not be writable unless the user has write rights in all plans.
    Delete: Implies Read and Write. Allows rights to delete the plan and associated cases, runs, and case-runs. Test cases linked to more than one plan will not be deletable unless the user has delete rights in all plans.
    Admin: Implies Read, Write, and Delete. Allows rights to modify the plan's access controls.








    Thanks and Regards,
    Prashant Vadher | QC Engineer

     