Proposal: Release testing workflow for next version
Status: Closed, Resolved (Public)


This is a proposal for more organized release testing. There are unanswered questions, but I think it is a good starting point for a discussion.

Ideally, parts of this process would be incorporated into the bug-fixing and merge-request workflows in future iterations, making the whole procedure more streamlined. In the future, these change-focused tests would also be complemented by regression testing of key functionality.

There are several potential issues:

  • Getting a relatively stable pool of testers.
  • The task of defining priorities and key workflows/functionality is not yet solved. We may either need to prioritize the testing without it, or postpone some parts of the workflow until it is solved.


Goals:

  • Start doing something
  • Catch more bugs and regressions
  • Try out the testing workflow

Release testing enhancement

We combine organized exploratory testing with unit tests for higher test coverage. The end result is a release with fewer regressions and higher code quality.

This is not meant to replace the beta surveys; it should rather complement them. The beta surveys are great for engaging a large number of people - and possibly many different workflows - and getting some feedback on the version; on the downside, it is impossible to know the coverage of the tests or get precise results. More formal, organized testing will fill this gap. In addition, some of the test scenarios and charters can be reused in the beta survey.

Using exploratory testing keeps the fun in the task - the tester can creatively solve mysteries and help with overall test design.

The team:

  • Developers
  • Testers
  • A test manager - someone to oversee the testing and read the reports

One person can be both a tester and a developer. What is important is that the person who wrote the code is not the same person who confirms it is working.

The workflow

  1. We take all changes in branch krita/4.2 since 4.2.8 (~20.11.2019) and compile a list of changes to be tested.
    1. With a focus on the transform tool and other regressions
    2. Question: How do we compile it and where do we store it? And who should do it?
      1. (A release testing Phabricator task for a version with the list of changes, with the individual test plans/charters as subtasks? Or some spreadsheet? Or should we wait for the TCMS?)
    3. (Note: We already prepare such lists for release notes and the beta survey.)
  2. We ensure that every change is covered either by unit tests or functional tests (or a combination thereof), which validate the change and cover possible issues the change brings about.
    1. The author of the change (or another developer) updates or adds the necessary unit tests.
    2. The author (or another developer) provides a list of the things they changed, the possible ways the change could break, and information about its impact on the application and users.
      1. Question: where should they enter it?
    3. A tester, in cooperation with the author (or another developer), develops a test plan. The output of this effort is either a prepared test charter for exploratory testing (or multiple charters, if the area is big and the testing can be divided into smaller chunks), a verification test with one or multiple scenarios, or a combination thereof.
      1. Templates for a test charter and verification test
      2. (Note: inspired by session-based exploratory testing and agile behavior-driven testing practices)
      3. (Note: for now these test cases may be stored in Phabricator, in the future we should have kiwi tcms)
  3. Testers run the tests
    1. We can test as soon as a change hits Krita Plus; we have to test the beta release
    2. We need to cover all the available platforms (Note: we need testers on Linux, Windows and OSX; Android in the near future)
    3. We retest as needed by development
  4. Testers report the findings
    1. Attach notes, test cases, and other artifacts to the charters from exploratory testing
    2. File bugs
    3. (Note: ideally, our future tools will ease the reporting part)

Test charter template

Name: one sentence description (similar to bug names or the first line of VCS commit message); starts with ‘Explore:’
Scope: what is to be tested (e.g. ‘the transform tool’ or ‘creating new files’)
Additional information: information from developers
Priority: High (key functionality, or repeated regressions) | Normal

The tester then fills in the following while reporting the finding:

Tester: name/nick of the tester
Test notes: what was tested and how
Bugs/issues found: links to bugzilla

Verification test template

Name: one sentence description; starts with ‘Verify:’
Scenario: one or multiple test cases; each can be written either in BDD-style feature/scenario format or as steps to reproduce - choose whichever better fits the test

When a tester carries out the scenario(s), they use the following template:


Scenario 1
Result: passed/failed

Bugs/Issues found: links to bugzilla
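Verification tests written in the BDD style above can, in principle, also be automated. The following is a minimal sketch in Python's unittest, using a hypothetical `Document` stand-in for the feature under test (in a real setup the test would drive Krita itself, e.g. through its scripting API); the scenario, class, and size values are illustrative assumptions, not part of the proposal:

```python
import unittest

# Hypothetical stand-in for the feature under test; a real test would
# drive the application itself rather than this stub.
class Document:
    def __init__(self, width, height):
        if width <= 0 or height <= 0:
            raise ValueError("dimensions must be positive")
        self.width, self.height = width, height

class VerifyNewFileCreation(unittest.TestCase):
    """Verify: creating a new file honours the requested canvas size."""

    def test_create_document_with_given_size(self):
        # Given a requested canvas size of 1920x1080
        width, height = 1920, 1080
        # When a new document is created
        doc = Document(width, height)
        # Then the document reports exactly that size
        self.assertEqual((doc.width, doc.height), (1920, 1080))

    def test_rejects_invalid_size(self):
        # Given an invalid (zero-width) canvas size,
        # when a new document is created, then it is rejected
        with self.assertRaises(ValueError):
            Document(0, 1080)
```

Run with `python -m unittest`; each test method maps to one scenario line in the result template (Result: passed/failed).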


Open questions:

  • How do we measure success?
  • We need to write a short guide for writing and running tests

amedonosova updated the task description. Jan 25 2020, 9:34 PM

For 1B/C, currently the way we prepare lists of changes is that @scottpetrovic, @rempt, or I go through the git log manually and pick out all the relevant commits.
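The manual git-log pass could be partly scripted. A minimal sketch in Python, under the assumption that a keyword filter over one-line commit summaries is a useful first cut; the keyword list, tag name, and output format here are hypothetical, not project conventions:

```python
#!/usr/bin/env python3
"""Sketch: compile a candidate change list from the git log (step 1)."""
import subprocess

# Placeholder keywords; adjust per release (e.g. for the transform-tool focus).
KEYWORDS = ("fix", "transform", "regression")

def pick_relevant(log_text, keywords=KEYWORDS):
    """Return (sha, subject) pairs whose subject mentions a keyword.

    Expects one-line-per-commit input as produced by `git log --oneline`.
    """
    picked = []
    for line in log_text.strip().splitlines():
        sha, _, subject = line.partition(" ")
        if any(k in subject.lower() for k in keywords):
            picked.append((sha, subject))
    return picked

def git_log_since(tag, branch="krita/4.2"):
    """Collect one-line commit summaries on the branch since the given tag."""
    out = subprocess.run(
        ["git", "log", "--oneline", f"{tag}..{branch}"],
        capture_output=True, text=True, check=True)
    return out.stdout

if __name__ == "__main__":
    # "v4.2.8" is an assumed tag name; check `git tag` for the real one.
    for sha, subject in pick_relevant(git_log_since("v4.2.8")):
        print(f"{sha}  {subject}")
```

A keyword filter will miss relevantly-worded commits and include noise, so it would only narrow the list for the manual review, not replace it.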

2B. I suspect that this part will actually require a process point between reviewers and developers to ensure that there is a test, or at least a short get-together to hash out some risks before a feature gets merged. I don't know where we should put this information. My gut says it belongs either in the design task or, if we need to ensure it can be found in the future, in the dox.
2C. I think we will need short meetings with the relevant people as part of the feature-creation process. On the upside, though, artists can actually help with this, too.

The templates look good to me, thank you for those!

amedonosova closed this task as Resolved. Wed, May 13, 8:44 PM

I am closing the task, as it is now outdated. I have tried the ideas and the better ones will resurface in the next iteration :)