Generating Jobs

The existence of jobs on Jenkins is the first, and perhaps most important, step in getting a build performed. It is from a job that the status of the last 25 builds can be viewed, a new build initiated, and logs and other results examined.

Jobs come into existence through a two-step process:

  1. Execution of helpers/gather-jobs.py
  2. Evaluation of dsl/kdeseed.groovy and dsl/metaseed.groovy

Gathering Jobs

The helpers/gather-jobs.py script is responsible for determining a complete listing of projects to build, on which platforms, and from which branches. Before it can begin this process, it needs to know two key pieces of information:

  1. What repositories exist and where they are placed in the tree structure
  2. For each repository, which branch should be built for each branch group

To determine what repositories exist, it reads in the YAML metadata from sysadmin/repo-metadata, which is the canonical source of information for all mainline repositories on KDE Infrastructure. This metadata also provides important information such as whether the repository is considered active, along with its place in the tree structure.
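
As a rough sketch of this step, the following Python snippet walks a checkout of sysadmin/repo-metadata and collects the repositories that are marked as active. The file layout and field names used here (metadata.yaml, repoactive, projectpath) are assumptions made for illustration and may not match the real schema exactly.

import os
import yaml

def gather_repositories(metadata_root):
    # Walk the repo-metadata checkout and collect active repositories,
    # keyed by their place in the tree structure.
    repositories = {}
    for dirpath, _dirnames, filenames in os.walk(metadata_root):
        if 'metadata.yaml' not in filenames:
            continue
        with open(os.path.join(dirpath, 'metadata.yaml')) as handle:
            metadata = yaml.safe_load(handle)
        # Skip anything not considered an active repository
        if not metadata.get('repoactive', False):
            continue
        repositories[metadata['projectpath']] = metadata
    return repositories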

For branches, the system consults a JSON format file called logical-module-structure in kde-build-metadata. This file contains a series of rules which specify the branch to be used for each branch group. Rules may use standard shell glob expressions to match paths within the tree structure, provided they can be interpreted by Python's fnmatch module. Where multiple rules match, the system uses the most specific rule for that particular repository (kde/workspace/* being more specific than kde/* for instance).
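
A minimal sketch of how such a rule lookup can work is shown below. The assumption that the rules live under a "groups" key, and the use of pattern length as the measure of specificity, are illustrative guesses; the real file and script may differ.

import fnmatch
import json

def resolve_branch(rules, repository_path, branch_group):
    # Find every rule whose glob pattern matches the repository's path
    matches = [pattern for pattern in rules
               if fnmatch.fnmatch(repository_path, pattern)]
    if not matches:
        return None
    # Treat the longest matching pattern as the most specific one,
    # so kde/workspace/* wins over kde/* for kde/workspace/kwin
    best = max(matches, key=len)
    return rules[best].get(branch_group)

with open('logical-module-structure') as handle:
    rules = json.load(handle)['groups']

print(resolve_branch(rules, 'kde/workspace/kwin', 'kf5-qt5'))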

Once this is complete, the list of Products is read from local-metadata/product-definitions.yaml. Each Product is then processed in turn, with its repository rules evaluated against the list of known repositories. For each match that is found, an entry is created for each platform and branch group combination, subject to:

  1. The repository being active and not blacklisted globally
  2. The repository not being blacklisted for that Platform
  3. A valid branch existing for that branch group and repository per the rules in logical-module-structure

The notification rules for that Product are also processed for each entry, and the addresses to be notified are stored in that entry.
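
A condensed sketch of this processing loop is shown below, reusing the resolve_branch helper from the earlier sketch. The keys assumed for the product definitions (includes, platforms, notifications) and the hard-coded list of branch groups are illustrative only; the real product-definitions.yaml schema may differ.

import fnmatch

BRANCH_GROUPS = ['kf5-qt5', 'stable-kf5-qt5']  # illustrative list only

def generate_entries(products, repositories, rules, global_blacklist=()):
    entries = []
    for product in products:
        for path, metadata in repositories.items():
            # The repository must match one of the Product's repository rules
            if not any(fnmatch.fnmatch(path, rule) for rule in product['includes']):
                continue
            # 1. Active (filtered earlier) and not blacklisted globally
            if path in global_blacklist:
                continue
            for platform in product['platforms']:
                # 2. Not blacklisted for this Platform
                if path in platform.get('blacklist', []):
                    continue
                for branch_group in BRANCH_GROUPS:
                    # 3. A valid branch must exist for this branch group
                    branch = resolve_branch(rules, path, branch_group)
                    if branch is None:
                        continue
                    entries.append({
                        'product': product['name'],
                        'name': metadata['name'],
                        'branchGroup': branch_group,
                        'branch': branch,
                        'platform': platform['name'],
                        'notifyByEmail': product.get('notifications', ''),
                    })
    return entries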

Example Entry for the DSL Scripts to Read:

{
   "product": "Frameworks",
   "notifyByEmail": "kde-frameworks-devel@kde.org, kde-dashboard@kde.org",
   "name": "solid",
   "environment": "production",
   "branchGroup": "kf5-qt5",
   "branch": "master",
   "platform": "XenialQt5.7",
   "description": "Solid",
   "repositoryUrl": "git://anongit.kde.org/solid",
   "browserUrl": "https://cgit.kde.org/solid.git"
}

DSL Evaluation

The two DSL files, dsl/kdeseed.groovy and dsl/metaseed.groovy, read the output from the helpers/gather-jobs.py script and perform the process of actually creating jobs and views in Jenkins itself. They contain no logic for determining which jobs should be created, relying entirely on that output for this information.

They are however responsible for selecting the correct Pipeline template for the job, which is read from pipeline-templates/. Each Platform has its own template in this directory, with the template filename matching the name of the platform. These Pipelines govern the actual build process itself and will be covered later. Jobs are created with names of the form $product $repository $branchGroup $platform.
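
The seed scripts themselves are written in Groovy for the Jenkins Job DSL plugin; the short Python sketch below only illustrates the naming and template selection conventions described above. The gathered-jobs.json filename is an assumption made for illustration.

import json

def job_name(entry):
    # Jobs are named "$product $repository $branchGroup $platform"
    return '{product} {name} {branchGroup} {platform}'.format(**entry)

def pipeline_template(entry):
    # Each Platform has a template in pipeline-templates/ named after it
    return 'pipeline-templates/{platform}'.format(**entry)

with open('gathered-jobs.json') as handle:
    for entry in json.load(handle):
        print(job_name(entry), '->', pipeline_template(entry))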

The other task the DSL scripts are responsible for is setting up a series of views to make the jobs easier to browse and work with. These views rely on jobs following the naming form described above.
