Saturday, May 4, 2024

Docker Desktop on Linux - Credential Store

Introduction

According to the documentation, in order to sign in to Docker Desktop on Linux, pass must be installed and configured. While I already use pass, I wanted to keep Docker Desktop's use of pass separate from my personal password store. For a while I used the PASSWORD_STORE_DIR environment variable to maintain separate password store locations, but I was not happy with that and started looking for alternative methods.
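
For anyone unfamiliar with that approach, pass reads the variable per invocation, so a second store can be kept alongside the default one. A minimal sketch (the directory and GPG key ID here are examples, not my actual values):

PASSWORD_STORE_DIR=$HOME/.docker-pass pass init <gpg-key-id>
PASSWORD_STORE_DIR=$HOME/.docker-pass pass ls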

Docker Desktop vs. Docker Engine

Just to level set, Docker Engine and its associated command line tools are what most Linux users are familiar with. The dockerd daemon runs on a Linux host and users interact with it via the docker CLI. Containers use the host system's kernel and sudo is required to execute docker commands unless the user is in the docker group or rootless docker is used. Docker Engine is open source and is now based on the Moby Project.

While Linux users could simply run Docker Engine, Windows and Mac users had to use Docker Desktop. Docker Desktop included the same command line tools, but also provided a UI and, more importantly, a Linux virtual machine (VM) to run Docker Engine and the containers it managed. Unlike Docker Engine, Docker Desktop is not open source, and in 2021 it changed from a free product to a paid one for organizations above a certain size.

In 2022, Docker Desktop was finally released for Linux. Just like Docker Desktop for Windows and Mac, it provides both a UI and a Linux VM for dockerd and the containers it runs. Why run these in a VM when they could run natively? One reason given is consistency: all three platforms (Linux, Windows, Mac) get the same UI and the same VM, and therefore the same kernel version. Another reason given is security: your containers no longer run on the host system, they run inside the VM. This also means neither sudo nor the docker group is required to execute docker commands.

Credential Store

When using Docker Engine alone, and without a credential helper, executing 'docker login' results in your Docker credentials being stored, essentially in plaintext, in ~/.docker/config.json. Installing a credential helper and pointing the docker CLI at it directs credentials to one of the available credential stores, such as pass or the D-Bus secret service.
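
For Docker Engine on its own, the helper is selected by the credsStore key in ~/.docker/config.json. A minimal sketch, assuming the corresponding docker-credential-* binary is already on the PATH, jq is installed, and the file already exists:

# The value is the helper binary name without the docker-credential- prefix.
jq '. + {credsStore: "secretservice"}' ~/.docker/config.json > /tmp/config.json \
  && mv /tmp/config.json ~/.docker/config.json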

The documentation for Docker Desktop is very clear that pass is used and must be set up before you can sign in. No alternatives are documented. Given that I did not want to continue using pass for this, I started looking to see if the question had come up before.

Docker Desktop with D-Bus Secret Service

I was unable to find any official documentation that would confirm or deny whether there was a way to use the D-Bus secret service. But I did come across a credential helper issue requesting a documentation update, based on the issue creator figuring out how to do this. I was able to use that issue to configure Docker Desktop to use the D-Bus secret service.

  • I downloaded the docker-credential-secretservice-* binary for my platform, installed it in /usr/local/bin (anywhere on the PATH would be fine), and made it executable.
  • In ~/.docker/desktop/settings.json I changed the credentialHelper value from "docker-credential-pass" to "docker-credential-secretservice" (a rough sketch of both steps follows this list).
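
Something like the following covers both steps. The release version and download URL are examples rather than a definitive recipe (check the docker-credential-helpers releases page for the current ones), and Docker Desktop should not be running while settings.json is edited:

# Assumption: use whatever the current release and matching platform binary are.
VERSION=v0.8.2
curl -fsSL -o docker-credential-secretservice \
  "https://github.com/docker/docker-credential-helpers/releases/download/${VERSION}/docker-credential-secretservice-${VERSION}.linux-amd64"
sudo install -m 0755 docker-credential-secretservice /usr/local/bin/
# Swap the credential helper Docker Desktop uses (requires jq).
jq '.credentialHelper = "docker-credential-secretservice"' \
  ~/.docker/desktop/settings.json > /tmp/settings.json \
  && mv /tmp/settings.json ~/.docker/desktop/settings.json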

That was it. When I signed into Docker Desktop, I was able to confirm that my credentials were not in pass; they were in my keyring, which I could view with Seahorse.
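
The helper itself can also be queried from the command line. This should print a JSON map of registry URL to username for whatever is held in the keyring:

docker-credential-secretservice list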

I do not know if the lack of official documentation for this means there is some sort of risk associated with this method, or if only one credential helper is supported simply to keep things straightforward; docker-credential-pass is part of the installation package. Regardless, it seems to be working just fine, and I like having credentials in the D-Bus secret service rather than in my pass store.


Monday, January 22, 2024

GNOME Boxes - Backup / Restore

Introduction

I finally started to play with GNOME Boxes in conjunction with getting a Red Hat Enterprise Linux (RHEL) individual developer subscription. Immediately I was curious about what it would take to back up and restore a virtual machine (VM) for a host reload, or to move a VM from one host to another. This post details my experience with this process.

GNOME Boxes

GNOME Boxes is a Graphical User Interface (GUI) for managing VMs on a Linux host. It aims to simplify the process of creating and running a VM. In my exploration I ran it on a Pop!_OS 22.04 host and a Fedora Workstation 39 host. It is installed by default on Fedora Workstation and has to be manually installed on Pop!_OS.

Pop!_OS is where I started playing around with it. I installed it via the Pop!_Shop, which offers two options: the Ubuntu .deb package or the Flatpak. I started with the .deb package.
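
If you prefer the command line over the Pop!_Shop, the same .deb package should be installable directly (package name as found in the Ubuntu / Pop!_OS repositories):

sudo apt install gnome-boxes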

Boxes offers to download an operating system (OS) image for you. Unfortunately, on Pop!_OS, RHEL was not one of the choices, so I simply downloaded the image myself. Later I did notice that on Fedora, and with the Flatpak install, RHEL was listed as an option.

RHEL

Red Hat provides a free RHEL subscription to individual developers, allowing up to 16 instances to be deployed. I opted for RHEL because I wanted an instance around for some personal development activities. The registration did not bother me, as I welcome the weekly Red Hat Developer emails. While not everything in them is relevant to me, I find a useful article here or there.

Backup

I had to do some searching to find out how to properly back up a VM image managed by Boxes. There isn't anything in the GUI that exports a VM and its metadata, but I found a couple of posts detailing the same information.

Although I ran the backup on Pop!_OS, the instructions are the same on Fedora and should be the same on any other distribution. The variation is with a Flatpak install, which I cover later.

Metadata

To export the metadata for a VM, you need its libvirt domain name. This does not appear to be available via Boxes, but it can be retrieved with the 'virsh' command. virsh is installed by default on Fedora; on Pop!_OS it needs to be installed via the 'libvirt-clients' package. To get the list of domain names, run:

virsh -c qemu:///session list --all

On my system, this was the output:

 Id   Name        State
----------------------------
 -    rhel9.3-2   shut off

In this case, 'rhel9.3-2' is my domain name. To export the metadata I ran:

virsh dumpxml rhel9.3-2 > rhel9.3-2.xml

I then copied the resulting XML to a USB stick for my backup.

Image

The image itself was easy to find. The images are all located here:

$HOME/.local/share/gnome-boxes/images

The filename of the image should match the domain name. Some posts stated there would be a file extension, but that was not the case for my image; it was simply called 'rhel9.3-2'. I copied this to my USB stick as a backup.
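
Put together, the backup amounted to something like the following sketch. '/media/usb' is a stand-in for wherever your USB stick is mounted, and '-c qemu:///session' is the same session connection Boxes uses:

DOMAIN=rhel9.3-2
# Export the VM metadata from the session libvirt instance.
virsh -c qemu:///session dumpxml "$DOMAIN" > "/media/usb/$DOMAIN.xml"
# Copy the disk image alongside it.
cp "$HOME/.local/share/gnome-boxes/images/$DOMAIN" /media/usb/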

Restore

While I did the backup on Pop!_OS, I did the restore on Fedora. Again, this describes the process when using the distribution package rather than the Flatpak install. After performing the two steps below, the VM showed up in Boxes and worked just fine.

Image

I simply copied my 'rhel9.3-2' image backup to:

$HOME/.local/share/gnome-boxes/images

Metadata

The metadata restore was also straightforward:

virsh define rhel9.3-2.xml
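
As with the backup, the restore can be expressed as a couple of commands. Again just a sketch, with '/media/usb' standing in for the backup location and the session connection given explicitly:

DOMAIN=rhel9.3-2
cp "/media/usb/$DOMAIN" "$HOME/.local/share/gnome-boxes/images/"
virsh -c qemu:///session define "/media/usb/$DOMAIN.xml"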

Flatpak Variations

A Flatpak is simply a distribution-independent application package that runs in a sandboxed environment. Boxes is available as a Flatpak. After my initial backup / restore test with distribution native packages, I installed the Boxes Flatpak on Pop!_OS. There were just a couple of things I had to adjust in the process.
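
Assuming the Flathub remote is already configured, the Flatpak can also be installed from the command line:

flatpak install flathub org.gnome.Boxes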

virsh

When running Boxes via Flatpak, the distribution native 'virsh' will not have access to your VM metadata. You need to run 'virsh' from within the Flatpak sandbox for Boxes. You can enter a shell for that environment via this command:

flatpak --command=bash run org.gnome.Boxes

Since it is a sandbox, you will not have access to your full host filesystem. Unless you restrict the permissions yourself, you will have read / write access to your home filesystem and can run the 'virsh' commands from above.
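
You can also skip the interactive shell and invoke virsh inside the sandbox directly. A sketch using the same commands as above; note the redirected XML lands in the host shell's current directory:

flatpak run --command=virsh org.gnome.Boxes -c qemu:///session list --all
flatpak run --command=virsh org.gnome.Boxes -c qemu:///session dumpxml rhel9.3-2 > rhel9.3-2.xml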

Image Location

Your VM images will still be stored in your home filesystem, but Flatpaks store their data in a different location. You will find your images here:

$HOME/.var/app/org.gnome.Boxes/data/gnome-boxes/images

XML

A backup / restore between two distribution native installs, or between two Flatpak installs, does not require any XML changes. But if one system uses a distribution native package and the other uses the Flatpak, the metadata XML requires two modifications before it can be restored. Please note that it was only two modifications for my use case; more advanced use cases may require additional changes.

Disk Image Location

The path to the VM image is in the XML. The difference between distribution native and Flatpak can be seen here:

<source file='/home/userid/.local/share/gnome-boxes/images/rhel9.3-2'/>

vs

<source file='/var/data/gnome-boxes/images/rhel9.3-2'/>

The path just needs to be updated for the desired target.

The reason the Flatpak image location looks different in the XML from the path I provided in the previous section is that the Flatpak sets up the sandbox environment with /var/data mapped to $HOME/.var/app/org.gnome.Boxes/data.

Emulator

The path to the hardware emulator is also in the XML. The difference between distribution native and Flatpak can be seen here:

<emulator>/usr/bin/qemu-system-x86_64</emulator>

vs

<emulator>/app/bin/qemu-system-x86_64</emulator>

Once again, the path just needs to be adjusted for the desired target.
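
Both edits can be made by hand, or with a quick sed sketch like this one for converting a native-package XML to the Flatpak layout ('userid' and the domain name come from the examples above; swap the two sides of each expression to go the other direction):

sed -i.bak \
  -e "s|/home/userid/.local/share/gnome-boxes/images|/var/data/gnome-boxes/images|" \
  -e "s|/usr/bin/qemu-system-x86_64|/app/bin/qemu-system-x86_64|" \
  rhel9.3-2.xml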

Conclusion

While Boxes doesn't provide a direct way of backing up and restoring your VM images, it doesn't take much to do so. Beyond testing out this process, I have yet to do anything useful with Boxes or RHEL, so I am not an expert by any means. But I hope this little bit of information will be useful.