publish draft

John Bowdre 2024-07-29 16:34:15 -05:00
parent 24ba586d0a
commit f8442b1ae6
2 changed files with 23 additions and 22 deletions


@@ -1,18 +1,17 @@
---
title: "Taking Taildrive for a Testdrive"
date: 2024-07-28
date: "2024-07-29T21:32:12Z"
# lastmod: 2024-07-28
draft: true
description: "This is a new post about..."
description: "A quick exploration of Taildrive, Tailscale's new(ish) feature to easily share directories with other machines on your tailnet without having to juggle authentication or network connectivity."
featured: false
toc: true
reply: true
categories: Tips # Backstage, ChromeOS, Code, Self-Hosting, VMware
categories: Tips
tags:
- linux
- tailscale
---
My little [homelab](/homelab) is perhaps a bit different from most in that I don't have any dedicated storage setup. This can sometimes make sharing files between systems a little bit tricky. I've used workarounds like [Tailscale Serve](/tailscale-ssh-serve-funnel/#tailscale-serve) for sharing files over HTTP or simply `scp`ing files around as needed, but none of those solutions are really very elegant.
My little [homelab](/homelab) is a bit different from many others in that I don't have a SAN/NAS or other dedicated storage setup. This can sometimes make sharing files between systems a little bit tricky. I've used workarounds like [Tailscale Serve](/tailscale-ssh-serve-funnel/#tailscale-serve) for sharing files over HTTP or simply `scp`ing files around as needed, but none of those solutions are really very elegant.
Last week, Tailscale announced [a new integration](https://tailscale.com/blog/controld) with [ControlD](https://controld.com/) to add advanced DNS filtering and security. While I was getting that set up on my tailnet, I stumbled across an option I hadn't previously noticed in the Tailscale CLI: the `tailscale drive` command:
@@ -36,10 +35,10 @@ That sounded kind of neat - especially once I found the corresponding [Taildrive
Oh yeah. That will be a huge simplification for how I share files within my tailnet.
I've finally had a chance to get this implemented and am pretty pleased with the results so far. Here's how I did it.
I've now had a chance to get this implemented on my tailnet and thought I'd share some notes on how I did it.
### ACL Changes
My Tailscale policy relies heavily on [ACL tags](https://tailscale.com/kb/1068/acl-tags) to manage access between systems, and especially for server nodes which don't typically have a user logging on to them directly. I don't necessarily want every system to be able to export a file share so I decided to control that capability with a new `tag:share` flag. Before I can use that tag, though, I need to [add it to the ACL](https://tailscale.com/kb/1068/acl-tags#define-a-tag):
My Tailscale policy relies heavily on [ACL tags](https://tailscale.com/kb/1068/acl-tags) to manage access between systems, especially for "headless" server systems which don't typically have users logged in to them. I don't necessarily want every system to be able to export a file share, so I decided to control that capability with a new `tag:share` tag. Before I could use that tag, though, I had to [add it to the ACL](https://tailscale.com/kb/1068/acl-tags#define-a-tag):
```json
{
@@ -53,7 +52,7 @@ My Tailscale policy relies heavily on [ACL tags](https://tailscale.com/kb/1068/a
}
```
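The hunk above elides the middle of that block; for reference, a tag definition of this shape lives under `tagOwners` in the policy file (the owner group shown here is an illustrative assumption, not necessarily the post's actual config):

```json
{
  "tagOwners": {
    // hypothetical owner; the diff doesn't show who owns tag:share
    "tag:share": ["autogroup:admin"]
  }
}
```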
Next I needed to add the appropriate [node attributes](https://tailscale.com/kb/1337/acl-syntax#nodeattrs) to enable Taildrive:
Next I needed to add the appropriate [node attributes](https://tailscale.com/kb/1337/acl-syntax#nodeattrs) to enable Taildrive sharing on devices with that tag and Taildrive access for all other systems:
```json
{
@@ -105,6 +104,8 @@ And I created a pair of [Grants](https://tailscale.com/kb/1324/acl-grants) to gi
}
```
That will let me create/manage files from the devices I regularly work on, and easily retrieve them as needed on the others.
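The diff only shows the edges of those grant blocks. Going by the Taildrive docs, a pair of grants of roughly this shape would accomplish that split (the `tag:user` source tag and the wildcard share names are illustrative assumptions):

```json
{
  "grants": [
    {
      // read/write from the devices I regularly work on (hypothetical tag)
      "src": ["tag:user"],
      "dst": ["tag:share"],
      "app": {
        "tailscale.com/cap/drive": [{"shares": ["*"], "access": "rw"}]
      }
    },
    {
      // read-only access for everything else on the tailnet
      "src": ["*"],
      "dst": ["tag:share"],
      "app": {
        "tailscale.com/cap/drive": [{"shares": ["*"], "access": "ro"}]
      }
    }
  ]
}
```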
Then I just used the Tailscale admin portal to add the new `tag:share` tag to my existing `files` node:
![The files node tagged with `tag:internal`, `tag:salt-minion`, and `tag:share`](files-tags.png)
@@ -116,7 +117,7 @@ After making the required ACL changes, actually publishing the share was very st
tailscale drive share <name> <path> # [tl! .cmd]
```
I (somewhat-confusingly) wanted to share a share named `share`, found at `/home/john/share`... so I used this to export it:
I (somewhat confusingly) wanted to share a share named `share`, found at `/home/john/share` (I *might* be bad at naming things), so I used this to export it:
```shell
tailscale drive share share /home/john/share # [tl! .cmd]
@@ -132,24 +133,20 @@ share /home/john/share john
```
### Mounting the Share
In order to mount the share from the Debian environment on my Chromebook, I first needed to install the `davfs2` package to add support for mounting WebDAV shares:
In order to mount the share from the Debian [Linux development environment on my Chromebook](https://support.google.com/chromebook/answer/9145439), I first needed to install the `davfs2` package to add support for mounting WebDAV shares:
```shell
sudo apt update # [tl! .cmd:1]
sudo apt install davfs2
```
I then added my user to the `davfs2` group so that I could mount WebDAV shares without needing `sudo` access:
During the install of `davfs2`, I got prompted for whether or not I wanted to allow unprivileged users to mount WebDAV resources. I was in a hurry and just selected the default `<No>` response... before realizing that was probably a mistake (at least for this particular use case).
```shell
sudo usermod -aG davfs2 $USER # [tl! .cmd]
```
So I ran `sudo dpkg-reconfigure davfs2` to try again and this time made sure to select `<Yes>`:
And used `newgrp` to activate that membership without needing to log out and back in again:
![Should unprivileged users be allowed to mount WebDAV resources?](davfs-suid.png)
```shell
newgrp davfs2 # [tl! .cmd]
```
That should ensure that the share gets mounted with appropriate privileges (otherwise, all the files would be owned by `root` and that could pose some additional challenges).
I also created a folder inside my home directory to use as a mountpoint:
@@ -160,15 +157,15 @@ mkdir ~/taildrive # [tl! .cmd]
I knew from the [Taildrive docs](https://tailscale.com/kb/1369/taildrive) that the WebDAV server would be running at `http://100.100.100.100:8080` and the share would be available at `/<tailnet>/<machine>/<share>`, so I added the following to my `/etc/fstab`:
```txt
http://100.100.100.100:8080/example.com/files/share /home/john/taildrive/ davfs user,rw,noauto 0 0
http://100.100.100.100:8080/example.com/files/share /home/john/taildrive/ davfs user,rw,noauto 0 0
```
Then I ran `sudo systemctl daemon-reload` to make sure the system knew about the changes to the fstab.
And to avoid being prompted for (unnecessary) credentials when attempting to mount the taildrive, I added this to the bottom of `~/.davfs2/secrets`:
Taildrive's WebDAV implementation doesn't require any additional authentication (that's handled automatically by Tailscale), but `davfs2` doesn't know that. So to keep it from prompting unnecessarily for credentials when attempting to mount the taildrive, I added this to the bottom of `~/.davfs2/secrets`, with empty strings taking the place of the username and password:
```txt
/home/john/taildrive john ""
/home/john/taildrive "" ""
```
After that, I could mount the share like so:
@@ -190,3 +187,7 @@ drwxr-xr-x - john 16 Feb 2023 notes
```
Neat, right?
I'd like to eventually get this set up so that [AutoFS](https://help.ubuntu.com/community/Autofs) can handle mounting the Taildrive WebDAV share on the fly. I know that [won't work within the containerized Linux environment on my Chromebook](https://www.chromium.org/chromium-os/developer-library/guides/containers/containers-and-vms/#can-i-mount-filesystems) but I think it *should* be possible on an actual Linux system. My initial efforts were unsuccessful though; I'll update this post if I figure it out.
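For anyone who wants to experiment along the same lines, an AutoFS map for a `davfs` mount would look something like this (the mountpoint, map file paths, and escaping are my assumptions; as noted above, I haven't actually gotten this working yet):

```txt
# /etc/auto.master.d/taildrive.autofs (hypothetical master entry)
/mnt/taildrive  /etc/auto.taildrive  --timeout=60

# /etc/auto.taildrive (colons in the URL must be escaped in autofs maps)
share  -fstype=davfs,rw  :http\://100.100.100.100\:8080/example.com/files/share
```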
In the meantime, though, this will be a more convenient way for me to share files between my Tailscale-connected systems.