The tooling that currently supports api.kde.org is now very old, and as such suffers from a number of issues.
This means it is unlikely we could replicate the setup exactly on a new server should that become necessary, and that from time to time a project's documentation stops being generated correctly, a situation which is difficult (and in some cases impossible) to debug and then fix.
Currently it is based on a combination of several different inter-related systems. These are:
1. The original KDE 4.x era scripts, which in turn were inherited from KDE 3.x and 2.x.
2. Specialised KDE 4.x era tooling for QML-based projects (known as Doxyqml).
3. Support for projects using mkdocs (Sink and Kube).
4. Support for projects using Docbook (RKWard).
5. Frameworks era (KF5) tooling, known as KApiDox.
The majority of our projects use either 1) or 2) above; the exceptions are Frameworks and a couple of smaller projects, which use 5).
In addition to the issues noted above, generation is currently performed as a nightly cronjob. This has scalability limitations, and has the side effect that the process is a single monolithic piece, which further complicates debugging of the system (you have to wait overnight for a new run to see whether your fix solved the problem).
As such, we need to completely replace the system as it currently stands, something which can be broken down into three different components.
Part 1: The actual API Documentation generator.
This should run over an individual repository and generate a folder of HTML files and other artifacts representing that project's documentation, along with a single metadata file describing that folder (for use by Part 2).
Both the folder containing the actual documentation and the metadata file would be transferred to a web server to be served statically.
As we have previously been asked to provide QCH format files for people to download and use locally, generating these would also be the responsibility of this part of the process.
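The exact metadata format has not been decided yet. Purely as an illustration (the file name `metadata.json` and every field below are assumptions, not a fixed schema), the generator's final step might look something like this:

```python
import json
from pathlib import Path


def write_metadata(output_dir: str, project: dict) -> Path:
    """Write a metadata file describing a generated documentation folder.

    All field names here are illustrative assumptions, not a fixed schema.
    """
    out = Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)
    metadata = {
        "name": project["name"],                    # e.g. "KCoreAddons"
        "description": project["description"],
        "platforms": project.get("platforms", []),  # supported platforms
        "version": project["version"],              # branch or release documented
        "qch": project.get("qch"),                  # relative path to QCH file, if built
    }
    path = out / "metadata.json"
    path.write_text(json.dumps(metadata, indent=2))
    return path


# Example usage with hypothetical values:
write_metadata("build/kcoreaddons/5.68", {
    "name": "KCoreAddons",
    "description": "Addons to QtCore",
    "platforms": ["Linux", "Windows", "macOS"],
    "version": "5.68",
})
```

The key property is that the metadata file sits next to the generated HTML, so the frontend in Part 2 never has to inspect the documentation itself.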
Part 2: The Website Frontend.
This will scan those metadata files and use them to provide a list of available projects to choose from (with name, description, supported platforms, etc.) along with the various versions which are available.
This would need to be dynamic (so written in something like PHP) to avoid having to be regenerated every time a project updates its API documentation.
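The frontend itself would likely be PHP, but the scanning step it performs can be sketched in Python. This is an illustration only: the `metadata.json` name, the project/version folder layout, and the fields are all assumptions carried over from the Part 1 sketch above.

```python
import json
from pathlib import Path


def list_projects(docs_root: str) -> list:
    """Collect every metadata file under the documentation root.

    Assumes (illustratively) a layout of <root>/<project>/<version>/,
    with each version folder containing a metadata.json file.
    """
    projects = []
    for meta_path in sorted(Path(docs_root).glob("*/*/metadata.json")):
        meta = json.loads(meta_path.read_text())
        # Record where the statically served HTML for this version lives.
        meta["path"] = str(meta_path.parent)
        projects.append(meta)
    return projects
```

Because the frontend only reads these small files on request, uploading a new documentation run requires no regeneration step on the web server.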
Part 3: Arranges for the API Documentation to actually be generated and then uploaded to the server running the Website Frontend (Part 2).
This would be the glue that ties it all together. It would hold a list of enabled projects along with the branches to be covered, and from this would prepare a list of jobs to be provisioned and run periodically on the Binary Factory (for instance, when changes are detected in a project's repository).
It would spawn fresh Docker containers (containing nothing from any previous run), which would check out the code and any tooling, perform the documentation generation, and upload the relevant artifacts to the server hosting api.kde.org (the Website Frontend).
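One such job can be sketched as follows. The image name and the commands run inside the container (`generate-apidocs`, `upload-artifacts`) are hypothetical placeholders; the real definitions would live in the Binary Factory's Jenkins Pipelines.

```python
import subprocess


def build_docker_command(repo_url: str, branch: str,
                         image: str = "kdeorg/apidox-builder") -> list:
    """Assemble the docker invocation for one documentation job.

    The image name and the in-container commands are illustrative
    assumptions, not the real Binary Factory configuration.
    """
    steps = " && ".join([
        f"git clone --depth 1 --branch {branch} {repo_url} /src",
        "generate-apidocs /src /output",         # hypothetical Part 1 generator
        "upload-artifacts /output api.kde.org",  # hypothetical upload step
    ])
    # --rm ensures the container starts fresh and leaves nothing behind.
    return ["docker", "run", "--rm", image, "sh", "-c", steps]


def run_docs_job(repo_url: str, branch: str) -> None:
    """Execute one job; check=True makes a failed step fail the whole job."""
    subprocess.run(build_docker_command(repo_url, branch), check=True)
```

Because each run starts from the same clean image, a failing job is reproducible on any machine with Docker installed, which is what makes the debugging workflow described below possible.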
Benefits of this Approach
Debugging of the system with the new arrangement would be substantially easier, as developers would be able to pull down the relevant Docker image and run the necessary steps within it to generate the API Documentation.
Because the logs produced by the Binary Factory contain all the relevant commands, this would be reduced to an exercise of reading the logs and copy-pasting commands (alternatively, you could read the Jenkins Pipeline), after which a developer would be able to use a local browser to check the result. This would remove the need for access to the server which generates the documentation in the event of any problems.
Additionally, because we would be transferring responsibility for documentation generation to the Binary Factory, we would get an at-a-glance view and know very quickly should any breakage take place, rather than having to spot it within the current logs (which aren't regularly monitored).
Further, Doxygen is a utility which has in the past broken its template compatibility. By performing the generation process within Docker, we will be able to easily determine the exact dependencies of the API generation process and track their versions, making such breakages easier to track down in the future (and giving us more control over when we upgrade the Doxygen version within our environment).
Getting Started
Should someone be interested in getting this working, it should be relatively straightforward for them to start on Part 1 without needing any outside guidance.
Part 2 should be guided in part by Sysadmin and the Website team, to ensure it is able to be deployed on our systems at the end, and that it fits within the branding currently being used on KDE.org.
As we already have some tooling for handling project lists (for instance for Craft), it should be relatively straightforward to complete Part 3 with minimal work.