1. Difference between system, integration and E2E tests

End-to-end testing verifies that your software works correctly from the beginning to the end of a particular user flow. It replicates expected user behaviour and various usage scenarios to ensure that your software works as a whole. End-to-end testing uses a production-equivalent environment and data to simulate real-world situations, and may also involve the integrations your software has with external applications.

According to Atlassian, 'they're comparatively more expensive to perform and require regular maintenance when automated.' They recommend 'having a few key end-to-end tests and relying more on lower-level types of testing to be able to quickly identify breaking changes.'

A typical software project consists of multiple software units, usually coded by different developers. Integration testing logically combines those units and tests them as a group.

Essentially, integration testing verifies whether or not the individual modules or services that make up your application work well together. The purpose of this level of testing is to expose defects in the interaction between these software modules when they are integrated.
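As a minimal sketch of this idea, here are two hypothetical units (an inventory store and an order service, both invented for illustration) exercised together so that the test covers the interaction between them rather than either unit in isolation:

```python
# Hypothetical example: two independently developed units wired together,
# with the integration test checking the cross-module interaction.

class Inventory:
    def __init__(self, stock):
        self.stock = dict(stock)

    def reserve(self, item, qty):
        if self.stock.get(item, 0) < qty:
            raise ValueError(f"insufficient stock for {item}")
        self.stock[item] -= qty


class OrderService:
    def __init__(self, inventory):
        self.inventory = inventory  # the integration point under test

    def place_order(self, item, qty):
        self.inventory.reserve(item, qty)  # cross-module call
        return {"item": item, "qty": qty, "status": "confirmed"}


def test_order_reduces_stock():
    inventory = Inventory({"widget": 5})
    service = OrderService(inventory)
    order = service.place_order("widget", 2)
    assert order["status"] == "confirmed"
    assert inventory.stock["widget"] == 3  # the units cooperated correctly


test_order_reduces_stock()
```

A unit test would verify `Inventory.reserve` or `OrderService.place_order` alone (typically with mocks); the integration test above deliberately uses the real collaborator to expose defects in the hand-off between modules.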

Key differences

Both of these types of testing sit further up the Agile testing pyramid than unit or component testing. When testing your products, you'll have to decide which tests you want to run. The decision will depend on budget and on the aspect of your application that you're testing. The answer will change on a case-by-case basis, so it's best to know a bit about your choices and the differences between them.

Integration testing

  • Starts at the very early stages of development, so bugs are caught earlier, rather than later, in the cycle.
  • Easy to integrate with daily builds and easy to test in the development environment.
  • Tests run faster than end-to-end tests.
  • Aims to test how external systems work with internal modules, one by one.
  • Used to test a single process from third-party software that will be used in conjunction with your software.

End-to-end testing

  • Done when the product is almost ready for release.
  • May be impossible to perform until the product is nearing completion.
  • Tests run slower than integration tests.
  • Aims to test the user experience from start to finish.
  • Can cover either a single process executed from start to finish or several processes involving different applications.
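To make the end-to-end idea concrete, here is a self-contained sketch of a user flow (register, log in, check out) driven through in-memory fakes; all names are illustrative. A real end-to-end test would drive the deployed application through its UI or public API with a tool such as Selenium or Cypress, but the shape of the test, one journey from start to finish, is the same:

```python
# Sketch of an end-to-end user flow: register -> log in -> check out.
# FakeApp stands in for the deployed system; every step depends on the
# previous one, so a failure anywhere breaks the whole journey.

class FakeApp:
    def __init__(self):
        self.users, self.sessions, self.orders = {}, {}, []

    def register(self, email, password):
        self.users[email] = password

    def login(self, email, password):
        if self.users.get(email) != password:
            raise PermissionError("bad credentials")
        token = f"token-{email}"
        self.sessions[token] = email
        return token

    def checkout(self, token, item):
        email = self.sessions[token]  # fails if the user never logged in
        self.orders.append((email, item))
        return "confirmed"


def test_full_purchase_flow():
    app = FakeApp()
    app.register("a@example.com", "s3cret")
    token = app.login("a@example.com", "s3cret")
    assert app.checkout(token, "widget") == "confirmed"
    assert app.orders == [("a@example.com", "widget")]


test_full_purchase_flow()
```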

2. Difference between smoke, regression and sanity tests

What is smoke testing?

Smoke testing, also known as build verification testing or build acceptance testing, is a non-exhaustive testing type that ensures the basic and critical components of a single build work properly. For example, smoke tests on Gmail might cover functions such as composing and sending emails.

Smoke testing is performed in the initial phase of the software development life cycle (SDLC), upon receiving a fresh software build from developers. Its intent is to detect abnormalities in the core functionality before teams move on to more exhaustive testing. This leaves little room for major errors to ripple through and become harder to fix down the line.
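Echoing the email example above, a smoke suite is just a handful of fast, shallow checks on the critical paths of a fresh build. The sketch below uses a stand-in class and made-up method names purely for illustration:

```python
# Sketch of a smoke suite: quick, shallow checks on the critical
# functions of a new build, used as a gate before deeper testing.

class MailBuild:
    """Stand-in for the build under test (names are illustrative)."""

    def compose(self, to, body):
        return {"to": to, "body": body}

    def send(self, draft):
        return "sent" if draft.get("to") else "failed"


def smoke_suite(build):
    """Return True only if every critical function basically works."""
    draft = build.compose("user@example.com", "hello")
    checks = [
        draft["to"] == "user@example.com",  # compose works
        build.send(draft) == "sent",        # send works
    ]
    return all(checks)


assert smoke_suite(MailBuild())  # fail fast: reject the build otherwise
```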




What is sanity testing?

A subset of regression testing, sanity testing is performed to ascertain that no issues arise from new functionality, code changes, or bug fixes. Rather than examining the entire system, it focuses on the narrower areas touched by those changes.

Regression Testing


This testing is typically performed after unit and integration testing. It is the process of continuously testing to ensure that newly added functionality and features work correctly without affecting the existing functionality of the software. Regression testing is not a single test but a combination of various tests.

Regression testing verifies that bug fixes or changes in requirements do not affect other functionalities of the application. It is a strong candidate for automation and is usually performed after modifications have been made to the software build following requirement changes or bug fixes.

Once sanity testing of the changed functionality is completed, all the impacted features of the application require complete testing. This is called regression testing.

Whenever bug fixes are made to existing software, some test scenarios need to be executed to verify the fixes. In addition, the QA team has to check the areas impacted by the code changes. In regression testing, all of those test scenarios are executed to cover the related functionalities.
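The pattern above can be sketched in a few lines: after a fix, the suite re-runs the test for the reported defect and the tests for previously working behaviour. The `discount` function and its bug are invented for illustration:

```python
# Sketch: after a bug fix in `discount`, the regression suite re-runs
# a test for the fixed case *and* tests guarding existing behaviour.

def discount(price, percent):
    # bug fix: a percent over 100 used to produce a negative price
    percent = min(max(percent, 0), 100)
    return round(price * (1 - percent / 100), 2)


def test_bug_fix():                 # verifies the reported defect is gone
    assert discount(50.0, 150) == 0.0


def test_existing_behaviour():      # guards functionality the fix must not break
    assert discount(100.0, 10) == 90.0
    assert discount(80.0, 0) == 80.0


for test in (test_bug_fix, test_existing_behaviour):
    test()
```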


 

Differences Between Smoke Testing vs Sanity Testing vs Regression Testing

Smoke testing

  • Executed on initial/unstable builds.
  • Verifies the critical components of the build.
  • Covers essential end-to-end functionalities.
  • Can be employed by both testers and developers.
  • A subset of acceptance testing.
  • Done whenever there is a new build.

Sanity testing

  • Performed on stable builds.
  • Checks the rationality of new module additions or code changes.
  • Covers a few specific modules of the software.
  • Carried out by testers.
  • A subset of regression testing.
  • Carried out in a short time.

Regression testing

  • Performed on stable builds.
  • Tests the functionality of all areas impacted by a code change.
  • Extensively examines nearly all functionalities.
  • Mainly performed by testers.
  • A superset of sanity testing.
  • Usually performed after each update.
