
Different Functional Testing Types Explained in Detail

Posted on 13 February 2020 by Testsigma @testsigmainc

According to Wikipedia, “Functional testing is a quality assurance process and a type of black-box testing that bases its test cases on the specifications of the software component under test. Functions are tested by feeding them input and examining the output, and the internal program structure is rarely considered.”

Functional testing is a Black Box technique in which the output is validated against the input provided to the application. Every functionality of the application is tested against the business requirements, hence the name Functional Testing.

Steps involved in Functional Testing

The following steps are involved in the Functional Testing process:

1. Understand the User Requirements

The first step is to go through the business requirements and gain a thorough understanding of them. Once we have a good grasp of the functionality, we will be in a position to convert all the requirements into test cases.

2. Document a Test Plan

A good test plan is vital for any testing, and this holds true for functional testing too. According to ISTQB, a test plan is “a documentation describing the test objectives to be achieved and the means and the schedule for achieving them, organised to coordinate testing activities.”

3. Test Case creation

At this point in Functional Testing, we have knowledge of the requirements and a test plan in place, so we can start writing the test cases according to the requirements.

An important aspect during the execution of functional test cases is test data. We should identify the test input and corresponding expected test output while writing the test cases.

Let’s understand more about these terms:

  • Test Data: Data that the tester creates and inputs into the application to test whether a certain functionality of the application works as expected, e.g. login ID, password, name of an employee, etc.

  • Test Input: To test the application, we may need to provide some data or perform some action (e.g. a button click); all such inputs to the application are called Test Inputs.
  • Test Output: When the test input is provided, the application under test processes the data and produces output; this is the ‘actual test output’. While writing the test cases, we identify the output the application should produce for that test input according to the business requirements; this is the ‘expected test output’.
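
To make these terms concrete, below is a minimal sketch in Python for a hypothetical login feature. The login() function, the credentials and the messages are invented purely for illustration; the same structure applies to any functionality and tooling. The loop also previews steps 4 and 5, since it executes each case and compares the actual test output with the expected test output.

```python
# Test data, test input and expected test output for a hypothetical login feature.
# login() stands in for the application under test; all values are illustrative.

def login(username, password):
    # Placeholder for the real application behaviour.
    if (username, password) == ("jdoe", "s3cret!"):
        return "Welcome"
    return "Invalid credentials"

# Each case bundles the test data (inputs) with the expected test output.
test_cases = [
    {"username": "jdoe", "password": "s3cret!", "expected": "Welcome"},
    {"username": "jdoe", "password": "wrong",   "expected": "Invalid credentials"},
]

for case in test_cases:
    actual = login(case["username"], case["password"])       # actual test output
    result = "Pass" if actual == case["expected"] else "Fail"
    print(f"{case['username']} / {case['password']}: {result}")
```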

4. Execute the Test Cases

Next, we will run the test cases, providing the test input we have identified during the ‘Test Case creation’ step.

5. Validate results

As discussed in point 3, the actual test output is the result we actually receive from the application. While writing the test cases, we identify the ‘expected test output’, which is the output produced when the application behaves as expected.

The actual test output is validated against the expected test output, and the result (Pass/Fail) is documented.

6. Log defects and get them fixed

Any variation between the actual result and the expected result is logged as a defect, and the development team is informed so they can fix it.

Functional Testing Types

1) Unit Testing

i. The smallest functional and testable unit of code is tested during unit testing.

ii. Mostly performed by developers, since it is a White-Box testing technique.

iii. Performed during the earliest stages of development, it helps uncover defects in the initial development phases. This saves the higher cost of fixing defects during the later stages of the STLC.

iv. Techniques used are:

  • Branch Coverage– All the logical paths and conditions (i.e. True and False) are covered during testing. E.g. for an If-Then-Else statement in the code, both the Then (true) and the Else (false) branches must be executed.

  • Statement Coverage– All the statements present in the function or module should be traversed at least once during the testing.

  • Boundary Value Analysis– Test data is created for the boundary values, as well as for the values that lie just before and just after each boundary, and the test case is then run with all of these datasets. E.g. the day of the month can validly range from 1 to 31, so the valid boundary values are 1 and 31, and the test is also run for 0 and 32 to cover the invalid conditions (a short sketch follows the tools list below).

  • Decision Coverage– During execution of control structures like a “Do-While” loop or a “Case” statement, all decision paths are tested.

v. Tools used for Unit Testing: JUnit, Jtest, JMockit, NUnit, etc.
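
As an illustration of boundary value analysis (and, via the if/else, of branch coverage), here is a minimal sketch using Python's pytest; the same idea applies with JUnit, NUnit or similar tools. The is_valid_day() function is a stand-in invented for the unit under test.

```python
# Boundary value analysis for "day of month" using pytest.
# is_valid_day() is a placeholder for the real unit under test.
import pytest

def is_valid_day(day):
    # A day of the month is valid when 1 <= day <= 31.
    # The if/else gives two branches, so these cases also exercise branch coverage.
    if 1 <= day <= 31:
        return True
    return False

# Boundary values (1, 31) plus the values just outside them (0, 32).
@pytest.mark.parametrize("day, expected", [
    (0, False),   # just below the lower boundary
    (1, True),    # lower boundary
    (31, True),   # upper boundary
    (32, False),  # just above the upper boundary
])
def test_day_boundaries(day, expected):
    assert is_valid_day(day) == expected
```

Running pytest executes the test once per dataset, so both the valid boundaries and the invalid neighbouring values are covered.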

2) Integration Testing

i. Two or more unit-tested components of the software are integrated and tested to validate that the interaction between them is as expected.

ii. It verifies that the communication between the units (commands, data, DB calls, API calls, micro-service processing) happens as expected and that no unexpected behavior is observed during this integration.

iii. Types of Integration Testing

  • Incremental – One or more components are combined and tested; once they pass, more components are combined and tested. The process continues until the whole system has been successfully tested. There can be three approaches to Incremental Integration Testing:

    1. Top-Down Approach: Modules at the top level of the control flow (or of the system design) are tested first, and lower-level modules are integrated incrementally. If a low-level module is not yet available, a stub is used in its place (see the sketch after this list).

    2. Bottom-Up Approach: The reverse of the Top-Down approach; low-level modules are tested first and higher-level modules are added incrementally. If a high-level module is not yet available, a driver is used.

    3. Hybrid Approach: A combination of the Top-Down and Bottom-Up approaches; testing starts at both levels and converges at the middle level.

  • Big-Bang – All of the components are integrated and tested as a whole system, just like a big bang!
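
To illustrate how a stub supports the Top-Down approach, here is a minimal Python sketch. OrderService, PaymentGatewayStub and the amounts are invented for illustration only: the high-level order module is exercised while the real payment module, which is not yet available, is replaced with a stub that returns a canned response.

```python
# Top-down incremental integration sketch: a stub stands in for the
# not-yet-available low-level payment module.

class PaymentGatewayStub:
    """Stub replacing the missing low-level payment module."""
    def charge(self, amount):
        return {"status": "approved", "amount": amount}  # canned response

class OrderService:
    """High-level module under test; depends on a payment component."""
    def __init__(self, payment_gateway):
        self.payment_gateway = payment_gateway

    def place_order(self, amount):
        receipt = self.payment_gateway.charge(amount)
        return receipt["status"] == "approved"

def test_place_order_with_stubbed_payment():
    # Integration of OrderService with the (stubbed) payment interface.
    service = OrderService(PaymentGatewayStub())
    assert service.place_order(49.99) is True
```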

3) Interface Testing

i. As a part of integration testing, the correctness of data exchange, data transfer, messages, calls, and commands between two integrated components is tested.

ii. Communication between database, web-services, APIs or any external component and the application is tested during Interface Testing.

iii. There should not be any error or format mismatch during this data or command communication; if any such problem is encountered, it needs to be corrected.

iv. Interface testing is the testing of the communication between different interfaces, while Integration Testing is the testing of the integrated group of modules as a single unit.
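
As a sketch of such a format check, the snippet below compares the data returned by one component against the field names and types the other component expects. The employee fields and the sample payload are assumptions made purely for illustration.

```python
# Minimal interface/contract check: does the payload returned by the other
# component match the format this application expects?

EXPECTED_FIELDS = {"id": int, "name": str, "department": str}

def check_interface(payload):
    """Return a list of format mismatches between payload and the contract."""
    problems = []
    for field, expected_type in EXPECTED_FIELDS.items():
        if field not in payload:
            problems.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            problems.append(f"wrong type for {field}: {type(payload[field]).__name__}")
    return problems

# Sample response as it might come back from the other component.
sample_response = {"id": "1001", "name": "Jane Doe", "department": "QA"}
print(check_interface(sample_response))   # -> ["wrong type for id: str"]
```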

4) System Testing

i. All components of the system are combined and the system is tested for compliance and correctness against the requirement specifications (Functional or System).

ii. It is a Black-Box testing technique which validates the integrated system.

iii. It is performed before the User Acceptance Testing (UAT) in the STLC (Software Testing Life Cycle).

iv. System Testing is performed in an almost real-life environment and according to real-life usage.

5) Regression Testing

i. After developers make enhancements or code fixes, it becomes very important to run the regression test suite. Regression is run to ensure that these code changes have not hampered the existing working functionalities and that no new defects have been injected into the code.

ii. Regression test cases are the subset of existing Functional Tests, which cover the major functionalities of the system.

iii. Regression cases need to be updated, added and deleted according to the application changes.

iv. Regression test cases are the best candidates for test automation because they are run often and take time to execute.

v. Regression test cases to be run can be selected in the three ways below (a marker-based selection sketch follows the list):

  • Run the whole regression test suite
  • Select the high-priority test cases from the regression suite
  • Select cases from the regression suite that test the functionalities related to the code changes.
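
One way to make such a selection practical is to tag regression cases so the test runner can filter them. Below is a minimal sketch using pytest markers; the test names and the placeholder assertions are invented for illustration, and other frameworks offer equivalent mechanisms (e.g. JUnit tags, NUnit categories).

```python
# Tagging regression cases with a pytest marker so a subset can be selected
# at run time. Register the marker in pytest.ini to avoid warnings:
#
#   [pytest]
#   markers = regression: cases run after every code change
import pytest

@pytest.mark.regression
def test_login_with_valid_credentials():
    assert True  # placeholder for a real check of a major functionality

@pytest.mark.regression
def test_checkout_total_is_calculated():
    assert True  # placeholder

def test_one_off_exploratory_case():
    assert True  # not part of the regression suite

# Run only the regression subset:   pytest -m regression
# Run the whole suite:              pytest
```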

6) Smoke Testing

i. After development, when a new build is released, Smoke Testing is performed on the application to ensure that all end-to-end major functionalities work.

ii. Smoke testing is usually done for the builds created during the initial phase of development for an application, which are not yet stable.

iii. During testing, if any major functionality is not working as expected then that particular build is rejected. Developers need to fix the bugs and create a new build for further testing.

iv. After successful Smoke Testing, the application is ready for the next level of testing. 
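
In practice, a smoke check is often just a small script that exercises the most critical paths of a freshly deployed build and fails fast if any of them is down. Below is a minimal sketch using Python and the requests library; the base URL and the endpoint paths are assumptions and would need to be adapted to the application under test.

```python
# Minimal smoke-test sketch: hit a few major endpoints of a new build.
import sys
import requests

BASE_URL = "http://localhost:8080"            # assumed deployment of the new build
CRITICAL_PATHS = ["/health", "/login", "/search"]

def smoke_test():
    for path in CRITICAL_PATHS:
        response = requests.get(BASE_URL + path, timeout=5)
        if response.status_code != 200:
            print(f"SMOKE FAIL: {path} returned {response.status_code}")
            return False
        print(f"SMOKE OK:   {path}")
    return True

if __name__ == "__main__":
    sys.exit(0 if smoke_test() else 1)        # non-zero exit rejects the build
```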

7) Sanity Testing

i. Sanity Tests are selected from the Regression Test suite, covering major functionalities of the application.

ii. Sanity Testing is done on the new build created by developers for a relatively stable application.

iii. When an application successfully passes the Sanity Testing, it is ready for the next level of testing.

iv. It is easy to confuse Smoke Testing with Sanity Testing. Smoke Testing is performed on an initial build of a new application, whereas Sanity Testing is performed on the same application once it has gained stability after several releases.

 Differences between smoke testing, sanity testing and regression testing are mentioned in detail here.

8) Acceptance Testing

i. During Acceptance Testing, the acceptance of the application by the end user is tested. The aim of this testing is to make sure that the developed system fulfils all the requirements that were agreed upon when the business requirements were created.

ii. It is performed just after the System Testing and before the final release of the application in the real world.

iii. Acceptance testing becomes a criterion for the user to either accept or reject the system.

iv. It is a Black-Box testing technique because we are only interested in knowing the application’s readiness for the market and real users.

v. Types of Acceptance Testing

a) User Acceptance Testing

  • Alpha Testing- Performed at the developer’s site by skilled testers.
  • Beta Testing- Performed at the client site by real users.

b) Business Acceptance Testing

Business Acceptance Testing is done to ensure that the application is able to meet business requirements and goals.

c) Regulation Acceptance Testing

Regulation Acceptance Testing is done to ensure that the developed application does not violate any legal regulations put in place by the governing bodies.  

Conclusion

The importance of testing is clearly conveyed in the words of Robert Webb, CIO of Etihad: “I know that software testers can make the company more profitable, make it safer, and help it grow faster. If they make your testing faster and get your new apps out there, you can be more competitive. And if they can do that while lowering costs, that’s remarkable.”

How well the application functions based on customer requirements is covered in functional testing. Functional testing can undoubtedly be considered the most important type of all, because it deals directly with the customer's perspective and requirements.

