Use ccache for certain builds that take way too long for their own good.
As we are using multiple build nodes, ccache is a tricky thing to do well. The general concern is running out of disk space on the slaves, since builds take roughly twice as much disk space with a cache as without. On the other hand, build time can be improved by some 80-90% for the majority of builds.
While we can't do much about the disk space bit, we can deal with the distributed nature of things.
To mitigate the disk space problem, **jobs** should be forced onto a suitably fat node (master?). The builds that risk disk space issues are also the ones that benefit the most from a cache. Restricting the node a build can run on is easy to do and should be the way to deal with this.
As for the distributed slave problem:
- There will be a cache directory on master on the DO block volume.
- The cache will contain tar.gz'd ccache dirs, one per binary job. So there'll be `xenial_unstable_frameworks_kio.tar.gz` and `xenial_unstable_frameworks_ki18n.tar.gz`, and each will contain a fully qualified ccache dir for use via the env var `CCACHE_DIR`.
- Upon build start the slaves retrieve this cache tarball from the master. The way this should happen remains to be determined. Reverse SSH access to the jenkins user is a no-go, as that'd expose the entire build system to potentially compromised nodes. Ideally the master would actually push the tarball onto the slave; to my knowledge there is no Jenkins plugin for this though. We could somehow archive the tarball and unarchive it via Jenkins. Another alternative would be to run an rsyncd on the master and rsync on the slave (a rough sketch of this follows after the list).
- The cache should probably be unpacked into $WORKSPACE/ccache so the caches are isolated per job and get cleaned up along with the build artifacts.
- Inside the Docker container `CCACHE_DIR` is set accordingly (i.e. `/workspace/ccache`); see the container sketch after the list.
- Upon successful build the ccache dir is pruned (`ccache --cleanup`, or whatever the ccache command for dropping now-unused objects is), tar'd, gzip'd and pushed back to the master. Pushing might best be done via Jenkins job artifacts; reverse access is again a no-go, and since workspaces on the slaves are deleted when the bin_amd64 job ends, we can't easily have the master pull the tarball after that. (A repack sketch follows after the list.)
- If done via Jenkins job artifacts, the master job (i.e. `xenial_unstable_frameworks_kio` for `xenial_unstable_frameworks_kio_bin_amd64`) **moves** the tarball out of the Jenkins job's archive directory into its cache directory (sketched last below).
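
A minimal sketch of the slave-side fetch-and-unpack step, assuming the rsync route is chosen. The rsync module name (`ccache`), the master hostname (`master.example.org`) and the mapping from bin job name to tarball name (strip the `_bin_<arch>` suffix) are all placeholders, not decisions:

```python
#!/usr/bin/env python3
# Sketch: pull the per-job ccache tarball from the master via rsync and
# unpack it into $WORKSPACE/ccache before the build starts.
import os
import subprocess
import tarfile

job = os.environ["JOB_NAME"]            # e.g. xenial_unstable_frameworks_kio_bin_amd64
master_job = job.rsplit("_bin_", 1)[0]  # cache is keyed on the parent job name (assumption)
workspace = os.environ["WORKSPACE"]
tarball = f"{master_job}.tar.gz"

# Pull the tarball from a hypothetical rsync module "ccache" on the master.
# A missing cache (e.g. first build of a new job) is not fatal.
pull = subprocess.run(
    ["rsync", "-a", f"rsync://master.example.org/ccache/{tarball}", workspace]
)
if pull.returncode == 0:
    with tarfile.open(os.path.join(workspace, tarball)) as tar:
        tar.extractall(os.path.join(workspace, "ccache"))
```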
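
For the container side, a sketch of how `CCACHE_DIR` could be wired in; the bind-mount point (`/workspace`), the image name and the build entry point are assumptions:

```python
#!/usr/bin/env python3
# Sketch: start the build container with the workspace bind-mounted and
# CCACHE_DIR pointing at the unpacked cache inside the container.
import os
import subprocess

workspace = os.environ["WORKSPACE"]
subprocess.run([
    "docker", "run", "--rm",
    "-v", f"{workspace}:/workspace",   # assumed mount point inside the container
    "-e", "CCACHE_DIR=/workspace/ccache",
    "build-image",                     # hypothetical image name
    "/workspace/run-build.sh",         # hypothetical build entry point
], check=True)
```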
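
A sketch of the post-build prune-and-repack step, assuming `ccache --cleanup` is the right pruning command and that the job is configured to archive `*.tar.gz` from the workspace as an artifact (both assumptions):

```python
#!/usr/bin/env python3
# Sketch: after a successful build, prune the cache, repack it, and leave
# the tarball in the workspace so Jenkins can archive it as a job artifact.
import os
import subprocess
import tarfile

workspace = os.environ["WORKSPACE"]
master_job = os.environ["JOB_NAME"].rsplit("_bin_", 1)[0]
cache_dir = os.path.join(workspace, "ccache")

# Let ccache drop objects beyond its configured size/age limits.
subprocess.run(["ccache", "--cleanup"],
               env={**os.environ, "CCACHE_DIR": cache_dir},
               check=True)

# Repack the pruned cache under the parent job's name.
with tarfile.open(os.path.join(workspace, f"{master_job}.tar.gz"), "w:gz") as tar:
    tar.add(cache_dir, arcname="ccache")
```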
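
Finally, a sketch of the master-side move out of the bin job's artifact directory into the central cache directory; the Jenkins archive path and the block volume mount point are assumptions and depend on the actual layout:

```python
#!/usr/bin/env python3
# Sketch: master job moves the archived cache tarball out of the bin job's
# Jenkins artifact directory into the central cache directory.
import os
import shutil

JENKINS_HOME = os.environ.get("JENKINS_HOME", "/var/lib/jenkins")
CACHE_DIR = "/mnt/volume/ccache"  # assumed mount point of the DO block volume

job = os.environ["JOB_NAME"]      # e.g. xenial_unstable_frameworks_kio
tarball = f"{job}.tar.gz"

# Assumed artifact location; the exact path depends on the Jenkins layout.
archive = os.path.join(JENKINS_HOME, "jobs", f"{job}_bin_amd64",
                       "builds", "lastSuccessfulBuild", "archive", tarball)

# Move rather than copy so the artifact directory does not keep growing.
shutil.move(archive, os.path.join(CACHE_DIR, tarball))
```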