This is a proposal for more organized release testing. There are unanswered questions, but I think it is a good starting point for a discussion.
Ideally, parts of this process would be incorporated into bug-fixing and merge-request workflows in future iterations, making the whole procedure more streamlined. In the future, these change-focused tests would also be complemented by regression testing of key functionality.
There are several potential issues:
- Getting a relatively stable pool of testers.
- Priorities and key workflows/functionalities have not yet been defined. We may either need to prioritize the testing without them, or postpone some parts of the workflow until they are defined.
Goals:
- Start doing something
- Catch more bugs and regressions
- Try out the testing workflow
Release testing enhancement
We combine organized exploratory testing with unit tests for higher test coverage. The end result is a release with fewer regressions and higher code quality.
This is not meant to replace the beta surveys; rather, it should complement them. The beta surveys are great for engaging a large number of people - and possibly many different workflows - and getting some feedback on the version; on the downside, it is impossible to know the coverage of those tests or get precise results. More formal, organized testing will fill this gap. In addition, some of the test scenarios and charters can be reused in the beta survey.
Using exploratory testing keeps the fun in the task: the tester can creatively solve mysteries and help with the overall test design.
Roles needed:
- Testers - people who run the tests
- A test manager - someone to oversee the testing and read the reports

One person can be both a tester and a developer. What is important is that the person who wrote the code is not the same person who confirms it is working.
- We take all changes in branch krita/4.2 since 4.2.8 (~20.11.2019) and compile a list of changes to be tested.
- With a focus on the transform tool and other regressions: https://phabricator.kde.org/T12498
- Question: How do we compile this list, where do we store it, and who should do it?
- (A release testing Phabricator task for a version with the list of changes, with the individual test plans/charters as subtasks? Or some spreadsheet? Or should we wait for the TCMS?)
- (Note: We already prepare such lists for release notes and the beta survey.)
- We ensure that every change is covered by unit tests, functional tests, or a combination thereof, which validate the change and cover the issues it might introduce.
- The author (or another developer) of the change updates or adds necessary unit tests.
- The author (or another developer) of the change provides a list of what was changed, possible ways it could break, and information about the impact on the application and its users.
- Question: where should they enter it?
- A tester, in cooperation with the author (or another developer), develops a test plan. The output of this effort is either a prepared test charter for exploratory testing (or multiple charters, if the area is big and the testing can be divided into smaller chunks), a verification test with one or more scenarios, or a combination thereof.
- Templates for a test charter and verification test
- (Note: inspired by session-based exploratory testing and agile behavior-driven testing practices)
- (Note: for now these test cases may be stored in Phabricator, in the future we should have kiwi tcms)
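To make the unit-test step above concrete, here is a minimal sketch in Python. Note that this is purely illustrative: Krita's own unit tests are written in C++, and the `compose_scale` function here is invented for the example, not actual Krita code.

```python
import unittest

# Hypothetical function under test: suppose a change touched code
# that composes two uniform scale factors. (Invented for illustration.)
def compose_scale(s1, s2):
    """Compose two uniform scale factors into one."""
    return s1 * s2

class TestComposeScale(unittest.TestCase):
    def test_identity(self):
        # Scaling by 1.0 must leave the other factor unchanged.
        self.assertEqual(compose_scale(1.0, 2.5), 2.5)

    def test_commutativity(self):
        # Uniform scales commute; a failure here would point to an
        # ordering bug in the composition.
        self.assertEqual(compose_scale(2.0, 3.0), compose_scale(3.0, 2.0))

if __name__ == "__main__":
    unittest.main()
```

The point is that each behavioral property the change relies on gets its own small, named test, so a regression shows up as a specific failing test rather than a vague bug report.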
- Testers run the tests
- We can test as soon as a change hits Krita Plus; we have to test the beta release.
- We need to cover all the available platforms (Note: we need testers on Linux, Windows and macOS; Android in the near future)
- We retest as needed by development
- Testers report the findings
- Attach notes, test cases and other artifacts to the charters from exploratory testing
- File bugs
- (Note: ideally our future tools ease the reporting part)
Test charter template
Name: one-sentence description (similar to bug names or the first line of a VCS commit message); starts with ‘Explore:’
Scope: what is to be tested (e.g. ‘the transform tool’, or ‘creating new files’)
Additional information: information from developers
Priority: High (key functionality, or repeated regressions) | Normal
The tester then fills in the following when reporting their findings:
Tester: name/nick of the tester
Platform/Hardware:
Test notes: what was tested and how
Bugs/issues found: links to bugzilla
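For illustration, a filled-in charter (including the reported findings) might look like this; all details below are hypothetical:

```
Name: Explore: the transform tool after the recent fixes
Scope: the transform tool (all transform modes)
Additional information: the transform code was changed; undo/redo may be affected
Priority: High (repeated regressions)

Tester: examplenick
Platform/Hardware: Linux, x86_64 laptop
Test notes: tried all transform modes on several layer types; stressed undo/redo
Bugs/issues found: links to bugzilla
```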
Verification test template
Name: one-sentence description; starts with ‘Verify:’
Scenario: one or multiple test cases; can be either in BDD-style feature/scenario format (see https://automationpanda.com/2017/01/30/bdd-101-writing-good-gherkin/), or steps to reproduce - choose whichever fits the test better
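As an example of the BDD-style feature/scenario (Gherkin) format, a scenario might look like this; the feature and steps are invented for illustration:

```
Feature: Transform tool
  Scenario: Scaling a layer with the transform tool
    Given an open document with a single paint layer
    When the user scales the layer to 50% with the transform tool
    And applies the transformation
    Then the layer content is half its original size
```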
When a tester carries out the scenario(s), they use the following template:
Name:
Platform/Hardware:
Scenario 1:
  Result: passed/failed
  Details:
  Bugs/Issues found: links to bugzilla
Open questions:
- How do we measure success?
- We need to write a short guide for writing and running tests.