4.1 Tools Target Areas
Test automation tools support the full software development lifecycle, and there is likely a tool for nearly every aspect of software development. Tools are commonly categorized by the types of testing they can perform, and those test types can span multiple test levels (e.g., component, integration, system). The following are the broad categories of functionality that test tools provide.
By far the most common use of automation is to test application functionality. This can include new functionality added in a release or the retesting of existing functionality; in both cases, the goal is to use automation to verify that an application or system behaves as expected.
Applications use a variety of user interfaces. The most common today is the graphical user interface (GUI), which makes up a large proportion of existing interfaces; all modern computers, tablets, and smartphones use one (e.g., Windows, macOS, Android, iOS). Historically, interfaces to systems were crude and text based (e.g., mainframe or terminal interfaces). Some remain so, because a text-based interface imposes the very low overhead that some applications or underlying hardware require.
Interface (UI or GUI) testing tools must be able to recognize the interface, graphical or otherwise, and the fields or objects on that interface. Where the interface follows a standard implementation, the test tool can easily recognize and interact with those fields or objects.
For fully graphical interfaces there are a number of significant attributes that can be tested or verified for each object or control. For example, a simple text field where a user enters a name has many attributes: value (what data is in the field), focus (whether the cursor is currently in the field), enabled (whether the user can interact with the field), and so on.
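As a minimal sketch of checking those three attributes, the following uses Selenium WebDriver in Python; the page URL and element ID are assumptions made for illustration:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # Hypothetical page and element ID, for illustration only.
    driver = webdriver.Chrome()
    driver.get("https://example.test/registration")
    name_field = driver.find_element(By.ID, "name")

    # value: what data is currently in the field
    assert name_field.get_attribute("value") == ""

    # enabled: can the user interact with the field?
    assert name_field.is_enabled()

    # focus: after clicking, is this field the active element?
    name_field.click()
    assert driver.switch_to.active_element == name_field

    driver.quit()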
Application Programming Interface (API) testing is testing that does not involve a user interface, text or graphical. This is often referred to as client-less testing or non-GUI testing. It simply means that there is no direct user interface through which the tester interacts. However, data is still passed to the application, the application responds, and these interactions can be measured, verified, and reported on with automated test tools. Most applications that have a UI or GUI are also communicating at the API level.
With API testing, rather than interacting with an object or control, we interact with a function or service. In order to do so, we need to understand the structure of the data that the function or service is expected to accept and return. If we send the wrong data structure, will the service identify it as an error or do something unexpected? If we send data and receive no return data, what does that tell us? Effective verification requires a full understanding of what the function or service is supposed to do, as well as what it is not supposed to do.
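As a hedged illustration, the following Python sketch exercises a hypothetical REST endpoint, verifying both the expected response structure and the behavior when the wrong data structure is sent (the URL, endpoint, and field names are invented for this example):

    import requests

    BASE_URL = "https://api.example.test"  # hypothetical service

    # Happy path: send a well-formed request and verify the response structure.
    resp = requests.post(f"{BASE_URL}/orders", json={"item": "widget", "qty": 2})
    assert resp.status_code == 201
    body = resp.json()
    assert "order_id" in body and body["qty"] == 2

    # Negative path: send the wrong data structure and verify the service
    # identifies it as an error rather than doing something unexpected.
    bad = requests.post(f"{BASE_URL}/orders", json={"quantity": "two"})
    assert bad.status_code == 400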
Performance testing is not directly concerned with testing functionality, but rather with testing that functionality under load; hence it is categorized as non-functional testing. Performance testing is difficult, if not impossible, to do without test tools. In years past, rooms full of people and computers attempted to do performance testing, but the cost was high, the consistency was low, and the breadth of testing left much to be desired. Modern performance testing tools simulate those same people on those same computers, but now all of them can be contained within one powerful machine. For extremely large simulations with heavy loads, performance testing can be distributed across multiple computers.
Performance testing tools have three areas of functionality: a controller, user scripts, and reporting and analysis.
The controller function starts tests, ramps them up or down, and stops tests. The controller schedules the testing event and runs the operational profiles (often referred to as scenarios) to simulate the conditions of a system in use.
The user script is the recorded or programmed sequence of events, or operational profile, that mimics the way in which a user interacts with a system. Running the same user script concurrently is how the test tool simulates multiple users performing the same function. User scripts often contain timing information (commonly called wait or think-time statements) that regulates the speed of execution to something representative of a human user; otherwise, scripts would run far faster than real users interact with a system and would not accurately represent its performance. (A minimal user script sketch follows these three areas.)
Performance test tools need robust reporting and analysis capabilities in order to accurately convey what has transpired. Reporting will include transactional timing information (e.g., how long a search took to return) and aggregate data (e.g., all searches took no more than 3 seconds on average to complete). Reporting can also include system resource utilization (e.g., caching, CPU, queues), and analysis can help correlate how system resources are impacted by heavy transactional use. Cross-analysis reporting aids in comparing subsequent test runs in order to evaluate the effects of changes.
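As a minimal sketch of the user script concept, the following uses Locust, an open source Python load testing tool; the target host, path, and think-time values are assumptions for illustration:

    from locust import HttpUser, task, between

    class SearchUser(HttpUser):
        # Hypothetical system under test.
        host = "https://shop.example.test"

        # Think time: wait 1-3 seconds between tasks so the script runs at a
        # human-representative pace rather than at machine speed.
        wait_time = between(1, 3)

        @task
        def search(self):
            # One step of the operational profile: perform a search and let
            # the tool record the transaction timing for later reporting.
            self.client.get("/search", params={"q": "widgets"})

Locust's runner then plays the controller role described above, ramping the number of concurrent SearchUser instances up or down and aggregating the timing results for analysis.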
Security testing tools help analyze and test applications for possible security vulnerabilities. There are three areas of interest for security testing: static, dynamic, and interactive [ref: Gartner Magic Quadrant for Application Security Testing]. Static analysis refers to looking at the application code from the inside, without executing it. Dynamic analysis is performing testing against a running application. The security qualities identified by static and dynamic testing overlap, and interactive testing spans both. It is therefore advisable to begin security testing early in the software development lifecycle, so that code can be assessed as it is developed.
Static application security testing (SAST) refers to analyzing the underlying application code for vulnerabilities. This may include source code, byte code, or binaries. Because the code is visible, these tests are often referred to as "white box" tests. Code can be scanned before it is ever compiled. Examining code can help uncover vulnerabilities including SQL injection, buffer overflows, cross-site scripting, and cross-site request forgery.
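For example, a SAST tool would flag string-built SQL like the first query below. A self-contained sketch of the vulnerable pattern and its parameterized fix (the table, data, and payload are invented for illustration):

    import sqlite3

    # In-memory database with a minimal users table, for illustration.
    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.execute("CREATE TABLE users (name TEXT)")
    cur.execute("INSERT INTO users VALUES ('alice')")

    user_input = "' OR '1'='1"  # classic injection payload

    # Vulnerable: user input is concatenated into the SQL string, so the
    # payload rewrites the query and returns every row. A SAST scanner
    # would flag this line as a SQL injection risk.
    cur.execute("SELECT * FROM users WHERE name = '" + user_input + "'")
    print(cur.fetchall())  # [('alice',)] -- the filter was bypassed

    # Safe: a parameterized query keeps the input as data, not SQL.
    cur.execute("SELECT * FROM users WHERE name = ?", (user_input,))
    print(cur.fetchall())  # [] -- no user is literally named the payload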
Dynamic Application Security Testing (DAST) analyzes running applications for possible security flaws. These may include traditional client-server applications or web-based applications. DAST has no visibility into the underlying application code and is therefore referred to as "black box" testing.
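As a hedged sketch of one simple dynamic probe (the target URL and parameter are assumptions), the following sends a script payload to a running application and checks whether it is reflected unescaped, one indicator of a possible cross-site scripting flaw:

    import requests

    # Hypothetical running application under test.
    url = "https://app.example.test/search"
    payload = "<script>alert(1)</script>"

    resp = requests.get(url, params={"q": payload})

    # If the payload comes back verbatim (unescaped) in the HTML response,
    # the page may be vulnerable to reflected cross-site scripting.
    if payload in resp.text:
        print("Possible reflected XSS: payload echoed unescaped")
    else:
        print("Payload was not reflected verbatim")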
Interactive Application Security Testing (IAST) provides visibility of possible vulnerabilities simultaneously from within the application, often through an agent in the runtime environment, and externally via application interfaces. This can include penetration testing, such as scanning ports for vulnerabilities.
A set of complementary and overlapping vulnerabilities can be identified with SAST, DAST, and IAST tools at different phases of the software development lifecycle. However, automated tools cannot find all vulnerabilities, so be careful not to develop a false sense of security. A comprehensive guide to cyber security testing can be found in [ref: DoD Cybersecurity Test and Evaluation Guidebook, version 2.0].
Security testing is often organized into teams: a red team plays the role of the hacker, attempting to break into the application (penetration testing), while a blue team defends against those attacks. Automated "red team in a box" tools package common attack techniques. Additional references for web application security are available at www.owasp.org.
Test Management tools provide for collaboration and reporting on test events. These tools can be a one-stop repository for manual and automated tests, documentation, data files, and more. Test Management tools should provide a level of configuration management for the many test artifacts in use. Additionally, these tools should allow a test event to be classified by version so that every component used in a test can be traced back to its individual version and state. This is extremely important, as it is not uncommon to have to revert to an earlier release should testing of the current release find problems.
Test Management tools often include a defect management module that allows manual and automated tests to create defect entries. Defects can be assigned to owners, and alerts can be configured to inform owners of the stages a defect passes through (e.g., identification, remediation, re-test).
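As a purely hypothetical sketch (the tracker URL, endpoint, and field names are all invented for illustration, not any particular tool's API), an automated test could file a defect entry through a test management tool's REST interface when a check fails:

    import requests

    TRACKER_URL = "https://testmgmt.example.test/api/defects"  # hypothetical

    def file_defect(summary, details, owner):
        """Create a defect entry in the test management tool and assign an owner."""
        resp = requests.post(TRACKER_URL, json={
            "summary": summary,
            "details": details,
            "owner": owner,
            "stage": "identification",  # first stage of the defect lifecycle
        })
        resp.raise_for_status()
        return resp.json()["id"]

    # An automated check files a defect on failure instead of just logging it.
    expected, actual = 200, 500  # illustrative observed result
    if actual != expected:
        defect_id = file_defect(
            summary="Search endpoint returned 500",
            details=f"Expected HTTP {expected}, got {actual}",
            owner="qa-team",
        )
        print(f"Filed defect {defect_id}")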