Project:Testing concept

Definitions

From a bird's eye view, testing is a process that can be carried out top-down or bottom-up. Both these approaches are valid, and can of course be combined [Yourdon, 2006, chap. 23]:

Unit testing
Test small modules, such as individual classes, methods or functions. Each test should be independent of the others and should be carried out in a stand-alone fashion. In practice this means that data used for testing should be re-created before each test; a minimal sketch illustrating this follows after these definitions. If the test requires input from another module, then a mock-up module should be used. Synonyms: module testing, component testing.
Integration testing
Testing of groups of modules that have already been unit tested. The size of such a group should balance the importance of the components against the time available for testing: important components should be integrated in pairs or small units, while less important components can be bundled into larger assemblages. If the system is organized in a client-server fashion, or in layers, these provide a natural starting point for organizing the tests. Synonym: subsystem testing.
System testing
Testing of the combined components of the system as a whole. Once the individual modules have been unit tested and their interfaces have been submitted to integration testing, the system should be tested based on higher-level requirements. These include functional requirements, but also performance and security requirements.
Acceptance testing
End-users validate the system with their own use cases, ideally in their working environment.
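
To make the unit-testing definition above concrete, here is a minimal sketch using Python's built-in unittest module. The Counter class is a hypothetical stand-in, not portal code; the point is that setUp re-creates the test data before every test, so each test runs stand-alone.

import unittest

class Counter:
    """Hypothetical module under test; not part of the portal code."""
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1
        return self.value

class CounterTest(unittest.TestCase):
    def setUp(self):
        # Re-create the test data before every test, so the tests
        # stay independent and can run in any order.
        self.counter = Counter()

    def test_increment_once(self):
        self.assertEqual(1, self.counter.increment())

    def test_increment_twice(self):
        self.counter.increment()
        self.assertEqual(2, self.counter.increment())

Mock-up modules for dependencies are illustrated further down, in the dependency-injection example.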

Test plan

Multiple studies have shown that testing increases the quality of the software produced, but comes at a cost [Jeffries & Melnik, 2007]. While the former is desirable, the latter requires planning. We are developing using a "lightweight" Scrum framework, so we should incorporate testing into each iteration (note that Scrum itself does not provide any guidelines regarding testing).

A good start is to include, within reason, a testable criterion in each issue's definition of done. Sometimes a testable criterion follows from the issue description, but often an explicit criterion should be documented. For example, an issue like "Add service X to the stack" can simply be tested by verifying that the given service runs. However, an issue like "Increase the performance of service X" should include a quantifiable criterion, e.g. "Service X should perform no less than 1000 operations per minute" (better example required). Test criteria within issues are particularly suitable for defining unit tests. Unit tests should be carried out before committing to GitHub.
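
As an illustration of turning such a quantifiable criterion into an executable check, here is a minimal, hypothetical sketch using Python's unittest. perform_operation() is a placeholder for whatever one operation of service X actually is, and the measurement window is an arbitrary choice.

import time
import unittest

def perform_operation():
    # Placeholder for one unit of work in service X (hypothetical).
    sum(range(1000))

class ServiceXPerformanceTest(unittest.TestCase):
    def test_throughput(self):
        # Measure for a short, fixed window and extrapolate to one minute.
        start = time.monotonic()
        operations = 0
        while time.monotonic() - start < 6.0:
            perform_operation()
            operations += 1
        ops_per_minute = operations * 10
        self.assertGreaterEqual(ops_per_minute, 1000,
                                "Service X performed fewer than 1000 operations per minute.")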

As continuous integration will run the tests as part of each pull request (PR), each PR should ideally add tests to the code base. These new tests will then become part of an ever-expanding set of regression tests and help with software maintenance; when a component is later refactored, they will be invaluable. While unit tests can be carried out within submodules or individual development environments, testing in the context of a PR is ideal for carrying out integration and system tests. These tests should be discussed as part of each PR review.

Acceptance tests cannot be automated. Acceptance testing should be planned for each release. We should identify suitable testers: domain-savvy individuals, ideally not involved in the development of the portal. While we cannot do acceptance testing ourselves, we are definitely responsible for planning these tests and for making sure the releases are validated comprehensively and on time. Such a plan would identify which aspects of the software carry the highest risk, document the use cases that should be tested (i.e. those included in the release), and allow for feedback into the next iteration. I would propose to plan prototype releases months in advance and to give testers access to the release prototype together with a checklist of what to test.

Unit test implementation

Write testable code

Independently of how tests are written, the code should be written in a modular fashion that decouples dependencies between components. Best practices include:

  • Functions and methods should do one thing and one thing only. A good rule of thumb is that a function or method should never be longer than your screen.
  • Define interfaces for subsystems, so that they can be replaced by mock-up objects during testing.
  • Use software patterns to decouple dependencies between classes. For example, apply dependency injection:

don't write

class A:
    def my_function(self):
        b = B()  # bad: A cannot be tested independently of B
        b.another_function()

class B:
    def another_function(self):
        ...  # do something

write instead

class A:
    def __init__(self, b):
        self.b = b  # good: B can be replaced by a mock-up test object
    def my_function(self):
        self.b.another_function()

class B:
    def another_function(self):
        ...  # do something
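
With the dependency injected, A can be tested in isolation. A minimal sketch, assuming the classes A and B from the example above and Python's built-in unittest.mock:

import unittest
from unittest.mock import MagicMock

class ATest(unittest.TestCase):
    def test_my_function_calls_b(self):
        mock_b = MagicMock()   # mock-up object standing in for B
        a = A(mock_b)          # inject the mock instead of a real B
        a.my_function()
        mock_b.another_function.assert_called_once()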

As a side effect, such patterns generally result in a less complex control flow within the application, so fewer tests yield higher test coverage [Bashir, 2010].

Some common cases of integration tests written in Selenium

New test classes can be added in the folder portal-compose/tests. All test classes in this folder are run automatically when calling `bash ./run_tests.sh` from the command-line on the host.

Testing that a service is running

The tests run in a container. The other containers in the stack are accessible from within the test container using the service names defined in docker-compose.

from MediawikiTest import MediawikiBase

class ServiceXXXTest(MediawikiBase):
    """Test that service XXX is properly installed."""

    def test1(self):
        """Test that service XXX is running."""
        status = self.getUrlStatusCode("http://name-of-the-service-as-in-docker-compose")
        self.assertEqual(200, status, "Problem loading service XXX.")


Testing that an extension is installed

Installed extensions are listed in the Special:Version page.

from MediawikiTest import MediawikiBase

class XXXExtensionsTest(MediawikiBase):
    """Test that extension XXX is properly installed."""

    def test1(self):
        """Check that extension XXX is listed in the Special:Version page."""
        version_url = "http://mardi-wikibase/wiki/Special:Version"
        self.loadURL(version_url)
        element = self.getElementById("bodyContent")
        self.assertIn('Name-of-the-extension', element.text, "Extension XXX not installed.")

Testing a feature that requires login

Not so common, but it might be useful: the MediawikiBase base test class has a login method. To use it, do:

from MediawikiTest import MediawikiBase

class FeatureXTest(MediawikiBase):
    """Test a feature that requires login."""

    def test1(self):
        ...
        self._login()  # note the underscore, as the method is protected
        ...

Please note that each test runs in its own thread, so login has to be called from within each test that requires it.

References

Bashir, O. (2010). Using Design Patterns to Manage Complexity. Overload Journal, 96.

Jeffries, R., & Melnik, G. (2007). Guest Editors' Introduction: TDD--The Art of Fearless Programming. IEEE Software, 24(3), 24-30.

Yourdon, E. (2006). Just enough structured analysis.