snap binary factorization
Closed, ResolvedPublic

Description

Hi!

Krita has a snapcraft.yaml in its repo and we'd like to have that binary factorized for the store. What info do you need and what kind of tooling do we need on top?

Builds ideally need either an LXD or multipass (a wrapper around qemu/kvm) installation on the build hosts. They'll also need snapd so as to get access to the latest snapcraft tool, which is itself shipped as a snap. For uploads to the store we need a place to store a config file with a secure auth key, or a way to send that to the build hosts.

Other than that everything happens inside LXD or multipass and doesn't really need anything special from the host.

sitter created this task. Jun 6 2019, 9:16 AM
Restricted Application added a subscriber: sysadmin. Jun 26 2019, 8:46 AM
bcooksley added a subscriber: bcooksley.

In terms of the final upload, ideally we would not be doing this on the actual nodes performing the builds. Not only for the security of the credentials themselves (which should be in as few places as possible) but also to ease deployment of the build nodes. (For Android builds, we have the undesirable state of affairs whereby the various signing keys have to be made available to those builds, which I want to move away from. Flatpak is similar, and I want to change that as well)

I assume the build process produces a file (or can be made to export a file) which can then be used for performing the upload elsewhere?

Correct. Upload can be done on a different host.

Thanks for confirming.

Okay, in order to do this we'll have to set up a bunch of stuff first, so this request will take a bit of time to sort out.

Could you please note all the steps required to perform the build, export the artifact for transfer to the uploading system, and perform the upload?

With regards to the upload system, it won't be able to run Snaps, as it will be an LXC container, so hopefully that doesn't cause an issue.

sitter added a subscriber: bshah. Jul 8 2019, 9:35 AM

@bshah actually built a prototype for GitLab a while ago, which should be fairly comprehensive, except for publishing:

https://invent.kde.org/sysadmin/ci-tooling/raw/master/invent/binary-snap.yml

As mentioned, there are two options for building: multipass or LXD. I'll explain both; the process is largely the same, LXD just needs more hand-holding. I'd like @jriddell to give some input as well though. The question is also which one is preferred from a sysadmin POV.

multipass

This is a wrapper around qemu and the upstream-preferred way of building: https://github.com/CanonicalLtd/multipass
The multipass tool is a bit like a container daemon, spinning up minimal images to do the build in. Its notable disadvantage is that it is full virtualization; at the same time I suppose that is also its biggest advantage (fully separated, and since multipass manages the image it is always clean and up to date). multipass doesn't need any setup from our side AFAIK, so all the lxc commands in the example I posted would be unnecessary there.

The actual process would be

  • clone repo
  • find snapcraft.yaml
  • cd to snapcraft.yaml
  • snapcraft
  • upon success there should be a *.snap file in the PWD
  • snapcraft push *.snap --release=edge pushes the file to the store and publishes it into the edge channel (the release channel for git builds and the like)

For push to work, a credentials file needs to be in $XDG_CONFIG_HOME/snapcraft/ (typically ~/.config/snapcraft/), created via snapcraft login.
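Put together, a minimal sketch of that flow as a shell script (the repository URL and snap name are illustrative, and it assumes a prior snapcraft login so the credentials file exists):

git clone https://invent.kde.org/graphics/krita.git          # illustrative repo
cd krita
cd "$(dirname "$(find . -name snapcraft.yaml | head -n1)")"  # cd to wherever snapcraft.yaml lives
snapcraft                                # drives multipass itself; leaves a *.snap in $PWD on success
snapcraft push ./*.snap --release=edge   # publish to the edge channel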

snapcraft itself takes full control of multipass operation, so we technically don't have to worry about anything in this setup. The host server, however, would need a working snapd, since both snapcraft and multipass would be used from their respective snaps.

LXD

With LXD, as shown in the example, we need to manually take care of spinning up the isolation environment: create an ephemeral container, push the working directory into the container, invoke snapcraft --destructive-mode inside it to build without multipass, and lastly pull the snap file out again.
Outside the container, the same snapcraft push dance as above is then used to push the file to the store.

The advantage of this is that snapd (which is still used to get snapcraft) runs inside the container, so the host doesn't need a working snapd as long as it can run LXD. It's also easier to inject the Neon repo to get access to recent Frameworks and Qt.
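A rough sketch of that dance (container name, image, and paths are assumptions; whether snapd runs happily inside the container depends on the LXD configuration):

lxc launch --ephemeral ubuntu:18.04 snap-build        # throwaway build container
lxc file push -r ./krita snap-build/root/             # push the working directory in
lxc exec snap-build -- sh -c '
  apt-get update && apt-get install -y snapd
  snap install snapcraft --classic
  cd /root/krita && snapcraft --destructive-mode      # build without nested multipass
'
SNAP_PATH=$(lxc exec snap-build -- sh -c 'ls /root/krita/*.snap' | head -n1)
lxc file pull "snap-build$SNAP_PATH" .                # pull the artifact back out
lxc delete --force snap-build
snapcraft push ./*.snap --release=edge                # done outside the container, as above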

Thanks for all of that detail.

In terms of compatibility for uploading, I don't suppose we know whether the version shipped with Ubuntu 18.04 will be able to keep doing that on a continuing basis?

So I guess at this point we need to weigh up the ease of integrating the Neon repositories, vs. the possibility that upstream will drop support for everything apart from Multipass.

Don't suppose we know much about their thinking on that?

In terms of which one is preferable, at first glance I'd say the LXD method is probably the better choice, simply due to it not using full virtualisation - thus leaving more disk space and memory available and reducing CPU overhead.

In terms of compatibility for uploading, I don't suppose we know whether the version shipped with Ubuntu 18.04 will be able to keep doing that on a continuing basis?

Well, I think™ it does for now, so we could just cross this bridge when we come to it. If it breaks I doubt they'd let it remain that way. That being said, 20.04 is around the corner and things may change there. It's not entirely unreasonable to think that they'd drop the snapcraft deb altogether, as snap distribution is just so much less hassle.

So I guess at this point we need to weigh up the ease of integrating the Neon repositories,

FWIW, long-term the repo injection should go away and be replaced by a more "integrated" solution one way or the other. Using multipass would only force our hand in solving this particular problem, while with LXD we can manually inject the repo until we and upstream can come up with a better approach for third party repos.

vs. the possibility that upstream will drop support for everything apart from Multipass.

Don't suppose we know much about their thinking on that?

There is no intention of dropping non-multipass support. In fact, I've made it very clear that we absolutely need destructive-mode, and as far as I know upstream has settled very well on the current system.

Thanks for that information.

I've been looking into scheduling this - we'll need to do some rearrangement on the CI worker nodes in order to be able to accommodate this, so it may be a little while before this gets actioned, as it will require rebuilds of some worker nodes.

[spam comment removed by sysadmin]

I had a play with setting up a separate signing machine for this tonight.

As I feared, Snap threw its toys:

snapsigner@capona:~$ snap login
Personal information is handled as per our privacy notice at
https://www.ubuntu.com/legal/dataprivacy/snap-store

Email address: snap-store@kde.org
Password of "snap-store@kde.org": 
error: system does not fully support snapd: apparmor detected but insufficient permissions to use
       it

This is caused, from what I can tell, by snapd advising clients that it is in degraded mode, and snap login not handling this. It's questionable in the first place why one needs to talk to a privileged, uid=0, process just to log in to the Snapcraft.io site though.

(As for why Apparmor is unusable, that's because our LXC containers operate in full user namespacing, so they can't even read the Apparmor configuration)

I'll have to come up with a different approach to the one I had been intending on using, which is annoying because the resource consumption of this should be very low.

FWIW, for signing we actually need snapcraft login, which shouldn't require snapd or apparmor or anything. snap login would be for a user to log in to see private store items and, I guess, make purchases or something; it should be entirely irrelevant to us from a build perspective.

I am so terribly confused when it comes to Snaps.

Fortunately it seems that snapcraft login does indeed not require anything to do with snapd (which makes me feel much happier) and works perfectly fine.

My plan of attack on this is to use Ange (which is a spinning-rust machine with a reasonably recent i7 CPU) as the first testbed for Snap building; however, it needs a reinstall - and for that to happen I need to be sure the stuff currently running in Joy (Flatpaks and Android repository publishing) can be moved elsewhere.

I've got the Flatpak stuff most of the way there now; once that is done I'll be able to commence with the rebuild of Ange.

For when I do reach that point, is there anything lighter than Krita available for me to build as a test to make sure the whole process works okay?

We now have the builder side of things sorted out (at least for a single machine doing the builds - we'll expand that out to more machines in the long run though).

Alas, snapcraft doesn't seem to want to do the right thing, as it wants to install a large number of dependencies at the host level (which doesn't sound right, and isn't something I'd like to do in any event). Thoughts?

packaging@ange:~/kblocks$ snapcraft
Installing build dependencies: aspell aspell-en cmake cmake-data cpp cpp-7 dictionaries-common docbook-xml docbook-xsl emacsen-common gcc gcc-7 gcc-7-base hunspell-en-us i965-va-driver kdoctools-dev kdoctools5 kinit kio kpackagelauncherqml kpackagetool5 kwayland-data kwayland-integration liba52-0.7.4 libaacs0 libaribb24-0 libasan4 libaspell15 libass9 libatomic1 libauthen-sasl-perl libavcodec57 libavformat57 libavutil55 libbasicusageenvironment1 libbdplus0 libbluray2 libc-dev-bin libc6-dev libcc1-0 libcddb2 libchromaprint1 libcilkrts5 libcrystalhd3 libdata-dump-perl libdbusmenu-qt5-2 libdc1394-22 libdca0 libdouble-conversion1 libdrm-amdgpu1 libdrm-dev libdrm-intel1 libdrm-nouveau2 libdrm-radeon1 libdvbpsi10 libdvdnav4 libdvdread4 libebml4v5 libegl-mesa0 libegl1 libegl1-mesa-dev libencode-locale-perl libevdev2 libfaad2 libfam0 libfile-basedir-perl libfile-desktopentry-perl libfile-listing-perl libfile-mimeinfo-perl libfont-afm-perl libfontenc1 libgbm1 libgcc-7-dev libgl1 libgl1-mesa-dev libgl1-mesa-dri libgl1-mesa-glx libglapi-mesa libgles1 libgles2 libgles2-mesa-dev libglu1-mesa libglu1-mesa-dev libglvnd-core-dev libglvnd-dev libglvnd0 libglx-mesa0 libglx0 libgme0 libgomp1 libgpgmepp6 libgroupsock8 libgsm1 libhfstospell9 libhtml-form-perl libhtml-format-perl libhtml-parser-perl libhtml-tagset-perl libhtml-tree-perl libhttp-cookies-perl libhttp-daemon-perl libhttp-date-perl libhttp-message-perl libhttp-negotiate-perl libhunspell-1.6-0 libice6 libinput-bin libinput10 libio-html-perl libio-socket-ssl-perl libipc-system-simple-perl libisl19 libitm1 libjsoncpp1 libkate1 libkf5archive-dev libkf5archive5 libkf5attica-dev libkf5attica5 libkf5auth-bin-dev libkf5auth-data libkf5auth-dev libkf5auth5 libkf5bookmarks-data libkf5bookmarks-dev libkf5bookmarks5 libkf5calendarevents5 libkf5codecs-data libkf5codecs-dev libkf5codecs5 libkf5completion-data libkf5completion-dev libkf5completion5 libkf5config-bin libkf5config-bin-dev libkf5config-data libkf5config-dev libkf5configcore5 libkf5configgui5 libkf5configwidgets-data libkf5configwidgets-dev libkf5configwidgets5 libkf5coreaddons-data libkf5coreaddons-dev libkf5coreaddons-dev-bin libkf5coreaddons5 libkf5crash5 libkf5dbusaddons-bin libkf5dbusaddons-data libkf5dbusaddons-dev libkf5dbusaddons5 libkf5declarative-data libkf5declarative-dev libkf5declarative5 libkf5dnssd-data libkf5dnssd5 libkf5doctools-dev libkf5doctools5 libkf5globalaccel-bin libkf5globalaccel-data libkf5globalaccel-dev libkf5globalaccel5 libkf5globalaccelprivate5 libkf5guiaddons-dev libkf5guiaddons5 libkf5i18n-data libkf5i18n-dev libkf5i18n5 libkf5iconthemes-bin libkf5iconthemes-data libkf5iconthemes-dev libkf5iconthemes5 libkf5idletime5 libkf5itemviews-data libkf5itemviews-dev libkf5itemviews5 libkf5jobwidgets-data libkf5jobwidgets-dev libkf5jobwidgets5 libkf5kdegames-data libkf5kdegames-dev libkf5kdegames7 libkf5kdegamesprivate1 libkf5kio-dev libkf5kiocore5 libkf5kiofilewidgets5 libkf5kiogui5 libkf5kiontlm5 libkf5kiowidgets5 libkf5kirigami2-5 libkf5newstuff-data libkf5newstuff5 libkf5newstuffcore5 libkf5notifications-data libkf5notifications5 libkf5package-data libkf5package-dev libkf5package5 libkf5quickaddons5 libkf5service-bin libkf5service-data libkf5service-dev libkf5service5 libkf5solid-dev libkf5solid5 libkf5solid5-data libkf5sonnet-dev libkf5sonnet-dev-bin libkf5sonnet5-data libkf5sonnetcore5 libkf5sonnetui5 libkf5textwidgets-data libkf5textwidgets-dev libkf5textwidgets5 libkf5wallet-bin libkf5wallet-data libkf5wallet5 libkf5waylandclient5 libkf5widgetsaddons-data 
libkf5widgetsaddons-dev libkf5widgetsaddons5 libkf5windowsystem-data libkf5windowsystem-dev libkf5windowsystem5 libkf5xmlgui-bin libkf5xmlgui-data libkf5xmlgui-dev libkf5xmlgui5 libkwalletbackend5-5 liblirc-client0 liblivemedia62 libllvm8 liblsan0 liblua5.2-0 liblwp-mediatypes-perl liblwp-protocol-https-perl libmad0 libmailtools-perl libmatroska6v5 libmicrodns0 libmpc3 libmpcdec6 libmpeg2-4 libmpfr6 libmpx2 libmtdev1 libmtp-common libmtp-runtime libmtp9 libnet-dbus-perl libnet-http-perl libnet-smtp-ssl-perl libnet-ssleay-perl libnfs11 libopenal-data libopenal1 libopengl0 libopenjp2-7 libopenmpt-modplug1 libopenmpt0 libphonon4qt5-4 libplacebo4 libpolkit-qt5-1-1 libpostproc54 libprotobuf-lite10 libproxy-tools libpthread-stubs0-dev libpulse-mainloop-glib0 libqt5concurrent5 libqt5core5a libqt5dbus5 libqt5gui5 libqt5network5 libqt5opengl5 libqt5opengl5-dev libqt5printsupport5 libqt5qml5 libqt5quick5 libqt5quickcontrols2-5 libqt5quickparticles5 libqt5quicktemplates2-5 libqt5quicktest5 libqt5quickwidgets5 libqt5script5 libqt5scripttools5 libqt5sql5 libqt5sql5-sqlite libqt5svg5 libqt5test5 libqt5texttospeech5 libqt5waylandclient5 libqt5waylandcompositor5 libqt5widgets5 libqt5x11extras5 libqt5xml5 libquadmath0 libresid-builder0c2a librhash0 libsdl-image1.2 libsecret-1-0 libsecret-common libsensors4 libshine3 libsidplay2 libsm6 libsnappy1v5 libsndio6.1 libsoxr0 libspeexdsp1 libssh-gcrypt-4 libssh2-1 libswresample2 libswscale4 libtie-ixhash-perl libtimedate-perl libtry-tiny-perl libtsan0 libubsan0 libupnp6 liburi-perl libusageenvironment3 libuv1 libva-drm2 libva-wayland2 libva-x11-2 libva2 libvdpau1 libvlc-bin libvlc5 libvlccore9 libvoikko1 libvorbisfile3 libvulkan1 libwacom-bin libwacom-common libwacom2 libwayland-bin libwayland-dev libwayland-server0 libwebp6 libwebpmux3 libwww-perl libwww-robotrules-perl libx11-dev libx11-doc libx11-protocol-perl libx11-xcb-dev libx11-xcb1 libx264-152 libx265-146 libxau-dev libxaw7 libxcb-dri2-0 libxcb-dri2-0-dev libxcb-dri3-0 libxcb-dri3-dev libxcb-glx0 libxcb-glx0-dev libxcb-icccm4 libxcb-image0 libxcb-keysyms1 libxcb-present-dev libxcb-present0 libxcb-randr0 libxcb-randr0-dev libxcb-render-util0 libxcb-render0-dev libxcb-shape0 libxcb-shape0-dev libxcb-sync-dev libxcb-sync1 libxcb-util1 libxcb-xfixes0 libxcb-xfixes0-dev libxcb-xinerama0 libxcb-xkb1 libxcb-xv0 libxcb1-dev libxdamage-dev libxdmcp-dev libxext-dev libxfixes-dev libxft2 libxkbcommon-x11-0 libxml-parser-perl libxml-twig-perl libxml-xpathengine-perl libxmu6 libxmuu1 libxpm4 libxshmfence-dev libxshmfence1 libxss1 libxt6 libxvidcore4 libxxf86dga1 libxxf86vm-dev libxxf86vm1 libzvbi-common libzvbi0 linux-libc-dev make manpages manpages-dev media-player-info mesa-common-dev mesa-va-drivers mesa-vdpau-drivers perl-openssl-defaults phonon4qt5 phonon4qt5-backend-vlc qml-module-org-kde-kirigami2 qml-module-org-kde-kquickcontrolsaddons qml-module-org-kde-newstuff qml-module-qtgraphicaleffects qml-module-qtqml-models2 qml-module-qtquick-controls2 qml-module-qtquick-templates2 qml-module-qtquick-window2 qml-module-qtquick2 qt5-gtk-platformtheme qt5-qmake qt5-qmake-bin qt5-qmltooling-plugins qtbase5-dev qtbase5-dev-tools qtchooser qtdeclarative5-dev qtscript5-dev qttranslations5-l10n qtwayland5 sgml-base sgml-data sonnet-plugins va-driver-all vdpau-driver-all vlc-data vlc-plugin-base vlc-plugin-video-output x11-utils x11-xserver-utils x11proto-core-dev x11proto-damage-dev x11proto-dev x11proto-fixes-dev x11proto-xext-dev x11proto-xf86vidmode-dev xdg-utils xml-core xorg-sgml-doctools xtrans-dev
[sudo] password for packaging:
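(A hedged aside for context: snapcraft of that era chose between building on the host and an isolated provider via a flag or an environment variable, so steering it away from the host looked roughly like this - whether that was viable on these hosts is what the rest of the thread wrestles with:)

snapcraft --use-lxd                               # build inside an LXD container instead
SNAPCRAFT_BUILD_ENVIRONMENT=multipass snapcraft   # or pick the provider via the environment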
bcooksley closed this task as Resolved. Nov 19 2019, 2:02 AM
bcooksley claimed this task.

I've now finished implementing this, at least as far as the actual Snap generation goes.
Once we know the Snaps being produced are correct and valid we can look at the automation of the Store uploads.

For the record, implementing this wasn't too fun, due in part to:

  1. Snap imposes limits on 'snapcraft' and 'multipass' even though it claims they're 'classic' applications and thus unconfined. This means that if you use anything other than a folder directly under /home/ as the home directory for a user, it will just give you an obscure permission error and fail. Following the documentation on this didn't work at all, so I ended up using a bind mount to keep it happy (sketched below).

Why it has a problem with /home being somewhere else, when it is happy with /var/snap/ being a symlink to elsewhere, I don't know.

Not impressed with Canonical's engineering on that point; they need to rewrite their Apparmor rules, or perhaps not dump a folder in $HOME for everyone that uses a Snap (as that is just dirty).
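For reference, a sketch of the bind-mount arrangement (the real paths on the builder are assumptions):

mkdir -p /home/packaging                          # stub directly under /home, which the
                                                  # apparmor rules tolerate
mount --bind /srv/packaging-home /home/packaging  # the real home lives elsewhere
# plus an /etc/fstab entry so it survives reboots:
# /srv/packaging-home  /home/packaging  none  bind  0  0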

  2. Multipass requires that the user running it have membership of the 'sudo' group, as that is the group owner of its socket under /run. I couldn't see or find any documentation on changing this, so I'm not that happy with having to use that group for this. Why they didn't use a dedicated group like every other virtualisation tool (KVM/libvirt and Docker always use their own separate groups) I don't know.

I ended up using visudo to strip the sudo group of its powers to actually use sudo, prior to granting the packaging user membership of that group (see the sketch below).
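A sketch of that arrangement ('packaging' is the build user seen in the logs above; the exact sudoers rule is an assumption):

# via visudo, comment out the stock rule so 'sudo' membership grants nothing:
#   %sudo  ALL=(ALL:ALL) ALL
# then add the build user to the now-toothless group, purely so it can reach
# multipassd's socket under /run:
usermod -a -G sudo packaging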

It also seems to keep the image it generated around on disk - we'll need to see if it shares this between all the various builds, and if it doesn't, build some cleanup logic of our own.

To get projects setup within this, there are a couple of boxes that need to be ticked first, namely:

  1. The Snap definition (snapcraft.yaml) file needs to reside somewhere within its primary Git repository (git.kde.org)
  2. The necessary entry needs to be added to snaps/enabled-projects.yaml in sysadmin/binary-factory-tooling

Once those two processes have been completed and the DSL Job Run has been run, the job should appear on the Binary Factory for execution.

Getting back onto this, there's currently no builder for it
"There are no nodes with the label ‘SnapAMD64’"
https://binary-factory.kde.org/job/KBlocks_snap/2/console

jriddell reopened this task as Open. Sep 2 2020, 12:13 PM

The builder side of this was removed and cleaned up due to a couple of issues that led me to determine that Multipass was unreliable and could not be trusted (not to mention it wasn't being used at all)

In particular, it did not clean up after itself, and attempting to run the correct commands to clean it up would result in multipassd crashing (segfault), an issue that the multipass command handled very poorly. (After removal I ended up having to rm -rf the directories it had used.)

Sorry, I've been away on holiday. If multipass isn't working then the alternative is to run it with LXD: snapcraft --use-lxd. Can we try that?

I'll see if we can schedule some time to test that; however, we currently have quite a bit on the go, so it may be a little while before we get to it:

  1. Deployment of Grafana for Telemetry analysis
  2. Deployment of Tirex with OSM data for KItinerary maps
  3. Transition to a new version of Mirrorbrain, including replacement of Milonia
  4. Finalisation of tooling to support generation of API Documentation via the Binary Factory
  5. Dealing with issues associated with Nextcloud's realtime editors
  6. Rollout of MyKDE to allow for Identity to be shutdown

Along with general routine maintenance and other queries we receive...

Any chance of some movement on this? It's blocking the All About the Apps goal, and it shouldn't be hard to add a --use-lxd flag to try it out (maybe I can do it and I just need a pointer on where to add it).

Also answers from a multipass developer:

Snap imposes limits on 'snapcraft' and 'multipass' even though it claims they're 'classic' applications and thus unconfined. This means that if you use anything other than a folder directly under /home/ as the home directory for a user, it will just give you an obscure permission error and fail. Following the documentation on this didn't work at all, so I ended up using a bind mount to keep it happy.

Multipass is actually a strict snap these days. Snapcraft is not,
indeed. And yes, due to how things are confined we can only read from
/home [1] and /mnt or /media [2] at the moment. Bind-mounting is a good
way around it.

[1] https://snapcraft.io/docs/home-interface
[2] https://snapcraft.io/docs/removable-media-interface

Multipass requires that the user running it have membership of the 'sudo' group, as that is the group owner of its socket under /run. I couldn't see or find any documentation on changing this, so I'm not that happy with having to use that group for this. Why they didn't use a dedicated group like every other virtualisation tool (KVM/libvirt and Docker always use their own separate groups) I don't know.

To use KVM we need the Multipass daemon to be privileged. Anyone with
access to the socket can mount arbitrary folders from the host and so
circumvent access restrictions. When we originally started, snapd did not
have facilities to run daemons under different users (or even to create
them in the first place). It does now [3]; we just never got the time
to use it.

We _could_ use a multipass group if that was pre-created, however.

[3] https://snapcraft.io/docs/system-usernames

It also seems to keep the image it generated around on disk - we'll need to see if it shares this between all the various builds, and if it doesn't build some cleanup logic of our own.

Correct, to avoid re-downloading things, Multipass will keep a copy of
the original image for as long as there isn't a newer one. The
build-specific VM image is also reused unless snapcraft clean is
issued, or Snapcraft, on its own, decides to recreate it.

In particular, it did not clean up after itself, and attempting to run the correct commands to clean it up would result in multipassd crashing (segfault), an issue that the multipass command handled very poorly. (After removal I ended up having to rm -rf the directories it had used.)

snap remove --purge multipass is a sure way to get rid of anything
related to a particular snap. What is the cleanup that you would expect
it to do on its own? We would love to hear about the segfault, steps to
reproduce would be awesome. We're not currently aware of any such issue.

The cleanup I was expecting was for the build-specific VM images.

This is quite important for us, as we're performing these builds on systems that only have ~500GB of NVMe storage available - and this has to be shared with many other things including:

  • The artifacts cache for our regular CI builds
  • The Docker images used for the regular CI builds
  • The Flatpak builder resources for those Binary Factory builds
  • Working space for both CI builds as well as Binary Factory builds, which, depending on the job, can demand a large amount of space

Therefore any kind of excessive material left behind - such as a build-specific VM image - is considered wasted space that needs to be reclaimed, especially given the number of Snaps we would likely end up producing and the multi-gigabyte size of the image in question.

With regards to the segmentation fault, while it was some time ago, looking through the history it seems the command it was bailing on was:

multipass delete snapcraft-kblocks

In terms of working on this - we'll see what sort of time we can find, however things are incredibly busy at the moment (with numerous BBB requests, our recent move to new maps.kde.org infrastructure for Itinerary, other requests from Neon including a change that requires a significant refactor and rework of our Git hooks, and the need for us to get our Ruby applications moved to a newer system so we can retire the older one - which is a mission in and of itself).

Sorry it looks like this got dropped by all of us.

Given that the Binary Factory/Jenkins is going away (to be replaced entirely by Gitlab CI - or should I really say CD), I'm thinking that the best path for this would be if we could do Snap builds in a Docker container.
Because containers are ephemeral, I wonder if the --destructive-mode flag coupled with Docker might be the best path forward?

That would save us needing to play with Multipass and co.
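A very rough sketch of that idea (the snapcore/snapcraft image existed on Docker Hub around this time, though its upkeep varied; the image choice and flags here are assumptions, not a tested recipe):

docker run --rm -v "$PWD":/build -w /build snapcore/snapcraft:stable \
  sh -c 'apt-get update && snapcraft --destructive-mode'
# the container is discarded afterwards, so "destructive" only ever means
# destructive to the container's own filesystem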

I spoke to the Snap devs, who don't recommend it: Snap needs systemd, and Docker doesn't work well with systemd, they say. It's also little-tested. Having said that, they believe the Visual Studio snap is built with Docker, so it's not impossible.

But for the Neon CI we use LXD, which is an alternative to the default multipass and seems to be more reliable at setting up virtual machines. We store the snapcraft.yaml files in https://invent.kde.org/packaging/snapcraft-kde-applications/-/tree/Neon/release now, and each job starts a cloud server if needed with this script:

echo "USER DATA"
apt update
apt install snapd -y
snap install snapcraft --classic --candidate
snap install lxd
snap install review-tools
usermod -a -G lxd jenkins-slave
echo  'jenkins-slave ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/99-jenkins
snap install lxd
sleep 5

Then it runs this to build, which sets some values and then runs snapcraft --use-lxd:
https://github.com/pangea-project/pangea-tooling/blob/master/nci/snapcraft.rb

and this to upload it, running snapcraft upload:
https://github.com/pangea-project/pangea-tooling/blob/master/nci/snap/publish.rb

I'm afraid the reality is that all our CI resources are intended to be managed via Docker (in the case of Linux anyway; the same applies to Windows), so doing it via Docker would be a hard requirement.

Note that Flatpaks can also be built in Docker these days (as AppImages always have been), and if memory serves Flatpak does something similar with containerisation, so it should be more than possible to do so.

Can we just use launchpad.net? It has a feature whereby you run snapcraft remote-build and it'll upload the snapcraft.yaml file, build the snap, and let you download the resulting snap binary.

See https://snapcraft.io/docs/remote-build for remote-build details:

Snapcraft remote-build
The snapcraft remote-build command offloads the snap build process to the Launchpad build farm, pushing the potentially multi-architecture snap back to your machine.

Remote build is a feature in Snapcraft (from Snapcraft 3.9+ onwards) that enables anyone to run a multi-architecture snap build process on remote servers using Launchpad.

With remote build, you can build snaps for hardware you don’t have access to and free up your local machine for other tasks.

Supported build architectures are: amd64, arm64, armhf, i386, ppc64el and s390x.
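A short sketch of what that flow looks like from a checkout (the project directory is illustrative; the first run walks through Launchpad authorisation interactively):

cd kblocks                                   # directory containing snapcraft.yaml
snapcraft remote-build --launchpad-accept-public-upload --build-on=amd64,arm64
ls ./*.snap                                  # the built snaps are fetched back here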

I see no reason why we couldn't do that - it looks like Ubuntu ships what is essentially the latest version of Snap/Snapcraft even in the LTS releases (a rather interesting exception to distribution policies there...), so running this shouldn't be too much drama.

jriddell added a comment (edited). Jul 29 2022, 12:40 PM

mfederle has been awesome and made some CI for it that seems to work, so I've merged it into:
https://invent.kde.org/packaging/snapcraft-kde-applications/-/blob/Neon/release/.gitlab-ci.yml
https://invent.kde.org/packaging/snapcraft-kde-applications/-/blob/Neon/release/Dockerfile

It needs some environment variables set for logging in to the Snap Store and Launchpad, but it seems I don't have the necessary permissions on https://invent.kde.org/packaging/snapcraft-kde-applications/

The snap-store@kde.org account seems sensible to use; this might need some coordination.

snapcraft login                                           # interactive login to the store
snapcraft export-login snap-file.text                     # write reusable credentials to a file
export SNAPCRAFT_STORE_CREDENTIALS=$(cat snap-file.text)  # the variable CI needs set

And for Launchpad:
  • create an account on launchpad.net with snap-store@kde.org
  • locally do a snapcraft remote-build with a local snapcraft.yaml file (this generates the credentials file)
  • set $LAUNCHPAD_CREDENTIALS to the contents of ~/.local/share/snapcraft/provider/launchpad/credentials
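A rough sketch of how a CI job could then consume those variables at upload time (the job layout and channel are assumptions; the real logic lives in the repository's .gitlab-ci.yml):

# snapcraft reads SNAPCRAFT_STORE_CREDENTIALS from the environment directly;
# Launchpad wants its credentials file recreated from the CI variable:
mkdir -p ~/.local/share/snapcraft/provider/launchpad
printf '%s' "$LAUNCHPAD_CREDENTIALS" > ~/.local/share/snapcraft/provider/launchpad/credentials

snapcraft remote-build --launchpad-accept-public-upload   # build on Launchpad
snapcraft upload ./*.snap --release=edge                  # channel choice is an assumption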

Is the intention for the Snap definitions to continue to live in a central repo (vs. say the actual app repo)?

Is the intention for the Snap definitions to continue to live in a central repo (vs. say the actual app repo)?

Yes

bcooksley closed this task as Resolved. Aug 2 2022, 7:44 PM

Following discussion with Jonathan on Matrix, the necessary environment variables are now set.

Unfortunately Gitlab is not able to mask the contents of LAUNCHPAD_CREDENTIALS due to the unusual format of that value, so it will be possible to echo it out as part of CI runs.
Please keep a close eye on changes to .gitlab-ci.yml.

With regards to the Docker image, this would preferably come from the kdeorg/* namespace - please submit an MR to sysadmin/ci-images, which is where these are built, targeting the name kdeorg/snap-builder (following the convention set by flatpak-builder).

It doesn't seem very happy with the LAUNCHPAD_CREDENTIALS contents for some reason. I twiddled it, so please add two variables with the relevant parts from within that file:
LAUNCHPAD_CREDENTIALS_ACCESS_TOKEN
LAUNCHPAD_CREDENTIALS_ACCESS_SECRET

I've now made those changes to the credentials jobs.

The snap builds are still getting stuck on launchpad credentials.
https://invent.kde.org/packaging/snapcraft-kde-applications/-/jobs/415906

It's working fine on my own fork
https://invent.kde.org/jriddell/snapcraft-kde-applications/-/jobs/415972

This might need some coordination to work out what's up, the code is the same.

It seems it needs a timeout longer than 1h
https://invent.kde.org/jriddell/snapcraft-kde-applications/-/jobs/415972
Is that possible? When I set it to 2h it seems to get ignored.

We have limits set up at the runner level to prevent people from occupying a builder with extremely long-running tasks and blocking others from using it.
When we initially thought of this I hadn't realised it was going to block and wait for the build to complete.

We'll need to set up something a bit different for this, as when Launchpad is busy I imagine the builds could wait several hours or more to complete...

I've now made those changes; however, they will only work on packaging/snapcraft-kde-applications.

While doing some examination of the setup I noted that you use a non-standard branching scheme in that repository, disregarding 'master' and using Neon/stable instead.

I've therefore made that the default branch and have unprotected the CI variables (as our standard rules for Gitlab repositories only mark 'master' and branches matching the *.* convention as protected - which usually covers release branches while leaving work branches and other dev branches alone). This isn't ideal, as it means the CI variables will be available to Merge Requests, so it would be nice if the repository could use master as all our other repositories do.