QA Plan for 2020

Description

I have prepared a basic outline of a QA plan for this year, and I would appreciate some feedback/discussion (Is this what we want? Can we do that? Is something important missing?) before doing further work on the steps mentioned below.


In 2020, the plan is to build upon our foundation of already established practices. The main goal is to increase the quality of stable version releases. I propose the following steps to catch regressions early, increase code quality, and involve both developers and users in quality assurance (because quality is everybody's business):

  • Increase usage of static and dynamic analysis tools
    • scan the source code regularly with static analysis tools, solve the issues found
    • integrate static and dynamic analysis into our code review and CI pipeline (research needed; see the sketch after this list)
  • Extend coverage of unit tests
    • write missing unit tests while fixing bugs
    • solve the issue of image comparisons in tests
    • encourage contributors to submit tests for their changes in their merge requests
  • Evaluate beta testing effectiveness
  • Bootstrap a community-wide functional testing initiative (research in progress)
    1. create the test plan
    2. create a platform for organizing the testing
    3. invite the community to join the effort
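
To make the CI integration idea a bit more concrete, here is a rough sketch of a script that could run in a merge request pipeline: it analyzes only the files touched by the branch and fails the job if the analyzer reports anything. This is only an illustration of the approach; the choice of cppcheck, the target branch name, and the enabled checks are assumptions, not decisions.

```python
#!/usr/bin/env python3
"""Rough sketch: run cppcheck over the C++ files changed by a merge
request branch and fail if it reports anything. The target branch,
tool choice, and enabled checks are assumptions for illustration."""

import subprocess
import sys

TARGET_BRANCH = "origin/master"  # assumed merge target


def changed_cpp_files():
    """List changed C++ sources relative to the target branch."""
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{TARGET_BRANCH}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [f for f in out.splitlines() if f.endswith((".cpp", ".cc", ".h"))]


def main():
    files = changed_cpp_files()
    if not files:
        print("No C++ files changed, nothing to analyze.")
        return 0
    # --error-exitcode=1 makes cppcheck exit non-zero when it finds issues,
    # which is what fails the CI job.
    result = subprocess.run(
        ["cppcheck", "--enable=warning,performance", "--error-exitcode=1"] + files
    )
    return result.returncode


if __name__ == "__main__":
    sys.exit(main())
```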

I don't think Invent's CI subsystem can handle Krita, at all.

rempt added a subscriber: rempt. Jan 6 2020, 9:52 AM
  • The CI system is out of our hands, though we can ask for improvements. I do think that sysadmin is thinking of moving to GitLab CI, or at least improving the integration. What I would particularly like is a system where a commit that causes a unittest to regress or adds a new warning gets reverted automatically.
  • Last year I started doing Coverity builds again. I'd like to hand that off to someone else, though. It's not exactly time-consuming, but it's a bit fiddly.
  • We probably need to look into what open source/free software static analyzers are available.
  • Our unittest situation is dire, with 26 known broken tests and 14 failures on build.kde.org. And many of our "unit" tests are large integration tests that are not suitable for testing whether a change caused a regression.
  • We also need to take care of deprecated Qt API this year, preferably early on.
  • We need to be more careful about warnings.
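
On the warnings point: one low-tech way to notice whether a change adds new compiler warnings is to compare the warning count in the build log against a stored baseline. A rough sketch of the idea, purely for illustration (the log and baseline file names are made up):

```python
#!/usr/bin/env python3
"""Rough sketch: count compiler warnings in a build log and compare the
total against a stored baseline, failing if new warnings appeared.
The file names here are made up for illustration."""

import re
import sys
from pathlib import Path

WARNING_RE = re.compile(r"\bwarning:", re.IGNORECASE)


def count_warnings(log_path):
    """Count lines that look like GCC/Clang warnings in the build log."""
    return sum(1 for line in Path(log_path).read_text(errors="replace").splitlines()
               if WARNING_RE.search(line))


def main(log_file="build.log", baseline_file="warning-baseline.txt"):
    current = count_warnings(log_file)
    baseline_path = Path(baseline_file)
    baseline = int(baseline_path.read_text()) if baseline_path.exists() else current
    print(f"warnings: {current} (baseline: {baseline})")
    if current > baseline:
        print("New warnings introduced, please fix them before merging.")
        return 1
    # Update the baseline so improvements are locked in.
    baseline_path.write_text(str(current))
    return 0


if __name__ == "__main__":
    sys.exit(main(*sys.argv[1:]))
```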

Another thing to look into: the 'Saving issues', https://phabricator.kde.org/T11194 (there are several issues mentioned there; I think some of them are solved and others in progress).

I'm adding an idea from @tymond: in preparation for the beta survey, developers would provide a list of things that they changed and possible reasons it could break. This would give the beta testers ideas of where to focus while testing.

Another idea is to make the easier stuff from static analysis into junior jobs for potential new contributors.

rbreu added a subscriber: rbreu. Jan 8 2020, 7:43 PM

list of things that they changed and possible reasons it could break
Seconding this. For example: "To solve the bug in the transform tool, I also had to touch code shared by the move tool, so please also test that the move tool still works". Ideally, developers would make it a habit to document this in the tickets when they close them so the info could be easily collated.

Future dream: Testers could see during the testing which of those issues other people have already checked so that they can focus on untested stuff.

Generally, I'd also love it if the testing phase came with a clearly announced deadline. Not to put any pressure on when the release should happen, but to give the testers a guaranteed time window in which their efforts won't be for naught, and the ability to plan ahead. Plus, some people work better with timelines, and one could make a second announcement halfway through the testing phase to get more eyes on it.

It could also be a nice gesture to give the people an optional input field for a name in the survey, to be able to credit the testers in the release notes. Not sure if the survey lets you easily extract this data into a postable format? I'd be willing to hack a scraping Python script if needed. (I'm always willing to hack Python scripts for any other processes too ;-)
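
For illustration, something like the following might already do, assuming the survey results can be exported to CSV and that there is a column for the optional name (the file name and column title below are made up):

```python
#!/usr/bin/env python3
"""Rough sketch: pull the optional tester names out of an exported
survey CSV and print them as a comma-separated credits line.
The file name and column title are made up for illustration."""

import csv
import sys


def tester_names(csv_path, name_column="Name (optional)"):
    """Collect non-empty, de-duplicated names in order of appearance."""
    names = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            name = (row.get(name_column) or "").strip()
            if name and name not in names:
                names.append(name)
    return names


if __name__ == "__main__":
    print(", ".join(tester_names(sys.argv[1])))
```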

Hi, @rbreu, thanks for joining in!

Future dream: Testers could see during the testing which of those issues other people have already checked so that they can focus on untested stuff.

Your future dream is similar to mine :) For organizing the testing, I thought of using an actual test case management system (like https://kiwitcms.org/). There is much work to be done before that becomes reality. I have some notes on it, and there will be another task specific to it.

It could also be a nice gesture to give the people an optional input field for a name in the survey, to be able to credit the testers in the release notes. Not sure if the survey lets you easily extract this data into a postable format? I'd be willing to hack a scraping Python script if needed. (I'm always willing to hack Python scripts for any other processes too ;-)

That should be technically possible. I would just like to point out that if we do that, we need to be GDPR-compliant (especially by specifying how we would use the data and asking for explicit consent).

rempt added a comment. Jan 9 2020, 2:53 PM

I think the SurveyMonkey results can be exported as a spreadsheet.

kiwitcms looks very interesting, too!

Suggestion for static analysis tools/Coverity scans:

Those are usually issues that are very easy to fix, and Krita doesn't really have the kind of security-sensitive issues that would prevent people from outside the project from getting access to the results. On the other hand, beginner coders might be scared to even try to fix one little bug because (1) most of the bugs are actually quite complex, and (2) the code is just complex and scary for a newcomer. I would like to organize Coverity issue fixing like this:

  • someone posts sets of Coverity scan issues (let's say, 5 or 10 issues per set) publicly (they can be checked by someone beforehand for "security risks", if that is even needed in Krita...).
  • all issues from one set should be fixed in one MR (easier than getting a thousand MRs...)
  • all issues should be in separate commits (easier to revert later if they turn out to be faulty)

That should be nice for a newcomer who doesn't have too much time to spare but wants to help with something small :)
(I can tell from my own experience that I gave up on helping Krita through code during my uni years because, while I knew I could understand it after spending some time on it, the time required would just be too much. This kind of work is easy and nearly technical-only.)

Yes, that could work. Though even if it looks like Mechanical Turk work, there is often a bit of judgement needed...

Thanks, @tymond. If it's ok with you, I will summarize all the ideas for static analysis, both yours and mine, into a more concrete proposal that we can then refine together and implement.

rempt added a comment. Jan 20 2020, 1:42 PM

Note: we want to use https://kiwitcms.org/ to develop test cases.