Compare commits


6 commits

4 changed files with 197 additions and 4 deletions


@@ -1,7 +1,7 @@
---
title: "Building Proxmox Templates with Packer"
date: "2024-07-21T00:36:16Z"
# lastmod: 2024-06-12
lastmod: "2024-07-25T13:14:42Z"
description: "Using Packer and Vault to build VM templates for my Proxmox homelab."
featured: false
toc: true
@@ -19,7 +19,7 @@ tags:
I've been [using Proxmox](/ditching-vsphere-for-proxmox/) in my [homelab](/homelab/) for a while now, and I recently expanded the environment with two HP Elite Mini 800 G9 computers. It was time to start automating the process of building and maintaining my VM templates. I already had functional [Packer templates for VMware](https://github.com/jbowdre/packer-vsphere-templates) so I used that as a starting point for the [Proxmox builds](https://github.com/jbowdre/packer-proxmox-templates). So far, I've only ported over the Ubuntu builds; I'm telling myself I'll get the rest moved over after *finally* publishing this post.
Once I got the builds working locally, I explored how to automate them. I set up a GitHub Actions workflow and a rootless runner to perform the builds for me. I'll write up notes on that part of the process soon, but first, let's run through how I set up Packer. That will be plenty to chew on for now.
Once I got the builds working locally, I explored how to automate them. I set up a GitHub Actions workflow and a rootless runner to perform the builds for me. I wrote up some notes on that part of the process [here](/automate-packer-builds-github-actions/), but first, let's run through how I set up Packer. That will be plenty to chew on for now.
This post will cover a lot of the Packer implementation details but may gloss over some general setup steps; you'll need at least a passing familiarity with [Packer](https://www.packer.io/) and [Vault](https://www.vaultproject.io/) to take this on.
@@ -1555,8 +1555,8 @@ proxmox-iso.linux-server: output will be in this color. # [tl! .nocopy:6]
```
### Up Next...
Being able to generate a template on-demand is pretty cool, but the next stage of this project is to integrate it with a GitHub Actions workflow so that the templates can be built automatically on a schedule or as the configuration gets changed. But this post is long enough (and I've been poking at it for long enough) so that explanation will have to wait for another time.
Being able to generate a template on-demand is pretty cool, but the next stage of this project is to [integrate it with a GitHub Actions workflow](/automate-packer-builds-github-actions/) so that the templates can be built automatically on a schedule or as the configuration gets changed. But this post is long enough (and I've been poking at it for long enough) so that explanation will have to wait for another time.
(If you'd like a sneak peek of what's in store, take a self-guided tour of [the GitHub repo](https://github.com/jbowdre/packer-proxmox-templates).)
Stay tuned!
~~Stay tuned!~~ **It's here!** [Automate Packer Builds with GitHub Actions](/automate-packer-builds-github-actions/)

(Two binary image files added, 90 KiB and 51 KiB; contents not shown.)


@@ -0,0 +1,193 @@
---
title: "Taking Taildrive for a Testdrive"
date: "2024-07-29T23:48:29Z"
# lastmod: 2024-07-28
description: "A quick exploration of Taildrive, Tailscale's new(ish) feature to easily share directories with other machines on your tailnet without having to juggle authentication or network connectivity."
featured: false
toc: true
reply: true
categories: Tips
tags:
- linux
- tailscale
---
My little [homelab](/homelab) is a bit different from many others in that I don't have a SAN/NAS or other dedicated storage setup. That can make sharing files between systems a little tricky. I've used workarounds like [Tailscale Serve](/tailscale-ssh-serve-funnel/#tailscale-serve) for sharing files over HTTP or simply `scp`ing files around as needed, but none of those solutions are particularly elegant.
Last week, Tailscale announced [a new integration](https://tailscale.com/blog/controld) with [ControlD](https://controld.com/) to add advanced DNS filtering and security. While I was getting that set up on my tailnet, I stumbled across an option I hadn't previously noticed in the Tailscale CLI: the `tailscale drive` command:
> Share a directory with your tailnet
>
> USAGE
> `tailscale drive share <name> <path>`
> `tailscale drive rename <oldname> <newname>`
> `tailscale drive unshare <name>`
> `tailscale drive list`
>
> Taildrive allows you to share directories with other machines on your tailnet.
That sounded kind of neat - especially once I found the corresponding [Taildrive documentation](https://tailscale.com/kb/1369/taildrive) and started to get a better understanding of how this new(ish) feature works:
> Normally, maintaining a file server requires you to manage credentials and access rules separately from the connectivity layer. Taildrive offers a file server that unifies connectivity and access controls, allowing you to share directories directly from the Tailscale client. You can then use your tailnet policy file to define which members of your tailnet can access a particular shared directory, and even define specific read and write permissions.
>
> Beginning in version 1.64.0, the Tailscale client includes a WebDAV server that runs on `100.100.100.100:8080` while Tailscale is connected. Every directory that you share receives a globally-unique path consisting of the tailnet, the machine name, and the share name: `/tailnet/machine/share`.
>
> For example, if you shared a directory with the share name `docs` from the machine `mylaptop` on the tailnet `mydomain.com`, the share's path would be `/mydomain.com/mylaptop/docs`.
Oh yeah. That will be a huge simplification for how I share files within my tailnet.
I've now had a chance to get this implemented on my tailnet and thought I'd share some notes on how I did it.
### ACL Changes
My Tailscale policy relies heavily on [ACL tags](https://tailscale.com/kb/1068/acl-tags) to manage access between systems, especially for "headless" server systems that don't typically have users logged in to them. I don't necessarily want every system to be able to export a file share, so I decided to control that capability with a new `tag:share` tag. Before I could use that tag, though, I had to [add it to the ACL](https://tailscale.com/kb/1068/acl-tags#define-a-tag):
```json
{
  "groups": {
    "group:admins": ["user@example.com"],
  },
  "tagOwners": {
    "tag:share": ["group:admins"],
  },
  {...},
}
```
Next I needed to add the appropriate [node attributes](https://tailscale.com/kb/1337/acl-syntax#nodeattrs) to enable Taildrive sharing on devices with that tag and Taildrive access for all other systems:
```json
{
  "nodeAttrs": [
    {
      // devices with the share tag can share files with Taildrive
      "target": ["tag:share"],
      "attr": ["drive:share"],
    },
    {
      // all devices can access shares
      "target": ["*"],
      "attr": ["drive:access"],
    },
  ],
  {...},
}
```
And I created a pair of [Grants](https://tailscale.com/kb/1324/acl-grants) to give logged-in users read-write access and tagged devices read-only access:
```json
{
  "grants": [
    {
      // users get read-write access to shares
      "src": ["autogroup:member"],
      "dst": ["tag:share"],
      "app": {
        "tailscale.com/cap/drive": [{
          "shares": ["*"],
          "access": "rw"
        }]
      }
    },
    {
      // tagged devices get read-only access
      "src": ["autogroup:tagged"],
      "dst": ["tag:share"],
      "app": {
        "tailscale.com/cap/drive": [{
          "shares": ["*"],
          "access": "ro"
        }]
      }
    }
  ],
  {...},
}
```
That will let me create/manage files from the devices I regularly work on, and easily retrieve them as needed on the others.
Then I just used the Tailscale admin portal to add the new `tag:share` tag to my existing `files` node:
![The files node tagged with `tag:internal`, `tag:salt-minion`, and `tag:share`](files-tags.png)
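(If you'd rather skip the web portal, the tag can also be advertised from the node itself when bringing Tailscale up. Something like the command below should do it, with the caveat that `--advertise-tags` replaces the node's entire tag set and may require re-authenticating; the other tag names here are just the ones visible in the screenshot above.)
```shell
# request the full set of tags for this node, including the new tag:share
sudo tailscale up --advertise-tags=tag:internal,tag:salt-minion,tag:share # [tl! .cmd]
```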
### Exporting the Share
After making the required ACL changes, actually publishing the share was very straightforward. Per the [`tailscale drive --help` output](https://paste.jbowdre.lol/tailscale-drive), the syntax is:
```shell
tailscale drive share <name> <path> # [tl! .cmd]
```
I (somewhat confusingly) wanted to share a share named `share`, found at `/home/john/share` (I *might* be bad at naming things), so I used this to export it:
```shell
tailscale drive share share /home/john/share # [tl! .cmd]
```
And I could verify that `share` had, in fact, been shared with:
```shell
tailscale drive list # [tl! .cmd]
name  path             as   # [tl! .nocopy:2]
----- ---------------- ----
share /home/john/share john
```
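At this point, any other device on the tailnet should be able to reach the share through its own local WebDAV endpoint. A quick way to sanity-check that (before bothering with a proper mount) is to issue a standard WebDAV `PROPFIND` request against it with `curl`; the `example.com/files` portion below stands in for my actual tailnet and machine names, matching the `fstab` entry later in this post:
```shell
curl -s -X PROPFIND -H "Depth: 1" http://100.100.100.100:8080/example.com/files/share/ # [tl! .cmd]
```
If everything is wired up correctly, that should come back with a chunk of WebDAV XML describing the contents of the share.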
### Mounting the Share
In order to mount the share from the Debian [Linux development environment on my Chromebook](https://support.google.com/chromebook/answer/9145439), I first needed to install the `davfs2` package to add support for mounting WebDAV shares:
```shell
sudo apt update # [tl! .cmd:1]
sudo apt install davfs2
```
During the `davfs2` install, I was prompted to choose whether unprivileged users should be allowed to mount WebDAV resources. I was in a hurry and just accepted the default `<No>` response... before realizing that was probably a mistake (at least for this particular use case).
So I ran `sudo dpkg-reconfigure davfs2` to try again and this time made sure to select `<Yes>`:
![Should unprivileged users be allowed to mount WebDAV resources?](davfs-suid.png)
That should ensure that the share gets mounted with appropriate privileges (otherwise, all the files would be owned by `root` and that could pose some additional challenges).
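Depending on the distribution, unprivileged mounts may also require the user to be a member of the `davfs2` group (the package typically creates the group but doesn't add anyone to it), so it's worth checking that as well:
```shell
# add the current user to the davfs2 group (takes effect on next login)
sudo usermod -aG davfs2 $USER # [tl! .cmd]
```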
I also created a folder inside my home directory to use as a mountpoint:
```shell
mkdir ~/taildrive # [tl! .cmd]
```
I knew from the [Taildrive docs](https://tailscale.com/kb/1369/taildrive) that the WebDAV server would be running at `http://100.100.100.100:8080` and the share would be available at `/<tailnet>/<machine>/<share>`, so I added the following to my `/etc/fstab`:
```txt
http://100.100.100.100:8080/example.com/files/share /home/john/taildrive/ davfs user,rw,noauto 0 0
```
Then I ran `sudo systemctl daemon-reload` to make sure the system knew about the changes to the fstab.
Taildrive's WebDAV implementation doesn't require any additional authentication (that's handled automatically by Tailscale), but `davfs2` doesn't know that. So to keep it from prompting unnecessarily for credentials when attempting to mount the taildrive, I added this to the bottom of `~/.davfs2/secrets`, with empty strings taking the place of the username and password:
```txt
/home/john/taildrive "" ""
```
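`davfs2` can also be picky about the permissions on that secrets file and will complain if it's readable by anyone other than the owner, so it's worth locking it down:
```shell
chmod 600 ~/.davfs2/secrets # [tl! .cmd]
```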
After that, I could mount the share like so:
```shell
mount ~/taildrive # [tl! .cmd]
```
And verify that I could see the files being shared from `share` on `files`:
```shell
ls -l ~/taildrive # [tl! .cmd]
drwxr-xr-x - john 15 Feb 09:20 books # [tl! .nocopy:5]
drwxr-xr-x - john 22 Oct 2023 dist
drwx------ - john 28 Jul 15:10 lost+found
drwxr-xr-x - john 22 Nov 2023 media
drwxr-xr-x - john 16 Feb 2023 notes
.rw-r--r-- 18 john 10 Jan 2023 status
```
Neat, right?
I'd like to eventually get this set up so that [AutoFS](https://help.ubuntu.com/community/Autofs) can handle mounting the Taildrive WebDAV share on the fly. I know that [won't work within the containerized Linux environment on my Chromebook](https://www.chromium.org/chromium-os/developer-library/guides/containers/containers-and-vms/#can-i-mount-filesystems) but I think it *should* be possible on an actual Linux system. My initial efforts were unsuccessful though; I'll update this post if I figure it out.
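For reference, here's roughly the shape I'd expect that AutoFS configuration to take on a full Linux system. Consider it an untested sketch: the mountpoint, map file name, and `uid`/`gid` values are placeholders, and since `autofs` performs the mount as root, the empty-credentials line would need to live in `/etc/davfs2/secrets` rather than `~/.davfs2/secrets`.
```txt
# /etc/auto.master.d/taildrive.autofs (hypothetical)
/mnt/taildrive  /etc/auto.taildrive  --timeout=60

# /etc/auto.taildrive (hypothetical; note the escaped colons in the URL)
share  -fstype=davfs,rw,uid=1000,gid=1000  :http\://100.100.100.100\:8080/example.com/files/share
```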
In the meantime, this will be a more convenient way for me to share files between my Tailscale-connected systems.