add selfhosting tag

This commit is contained in:
John Bowdre 2023-09-11 14:53:22 -05:00
parent 248ec1b485
commit 07613c87e4
8 changed files with 49 additions and 41 deletions

View file

@ -44,6 +44,7 @@ tags:
- rest
- salt
- security
- selfhosting
- serverless
- shell
- tasker

View file

@ -9,10 +9,11 @@ tags:
- containers
- networking
- security
- selfhosting
title: AdGuard Home in Docker on Photon OS
---
I was recently introduced to [AdGuard Home](https://adguard.com/en/adguard-home/overview.html) by way of its very slick [Home Assistant Add-On](https://github.com/hassio-addons/addon-adguard-home/blob/main/adguard/DOCS.md). Compared to the relatively-complicated [Pi-hole](https://pi-hole.net/) setup that I had implemented several months back, AdGuard Home was *much* simpler to deploy (particularly since I basically just had to click the "Install" button from the Home Assistant add-ons manager). It also has a more modern UI with options arranged more logically (to me, at least), and it just feels easier to use overall. It worked great for a time... until my Home Assistant instance crashed, taking down AdGuard Home (and my internet access) with it. Maybe bundling these services isn't the best move.
I'd like to use AdGuard Home, but the system it runs on needs to be rock-solid. With that in mind, I thought it might be fun to instead run AdGuard Home in a Docker container on a VM running VMware's container-optimized [Photon OS](https://github.com/vmware/photon), primarily because I want an excuse to play more with Docker and Photon (but also the thing I just mentioned about stability). So here's what it took to get that running.
@ -56,7 +57,7 @@ DHCP=no
IPv6AcceptRA=no
```
I set the required permissions on my new network configuration file with `chmod 644 /etc/systemd/network/10-static-en.network` and then restarted `networkd` with `systemctl restart systemd-networkd`.
I then ran `networkctl` a couple of times until the `eth0` interface went fully green, and did an `ip a` to confirm that the address had been applied.
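Pulled together, that sequence looks something like this (a quick sketch of the same commands described above):

```shell
# set permissions on the config file and restart networkd to apply it
chmod 644 /etc/systemd/network/10-static-en.network
systemctl restart systemd-networkd

# watch for eth0 to go fully green, then confirm the address took effect
networkctl
ip a
```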
![Verifying networking](qOw7Ysj3O.png)
@ -95,7 +96,7 @@ systemctl restart systemd-resolved
```
### Deploy AdGuard Home container
Okay, now for the fun part.
I create a directory for AdGuard to live in, and then create a `docker-compose.yaml` therein:
```shell
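# Illustrative sketch only: the directory name, ports, and volume layout below are
# assumptions (they follow AdGuard Home's documented Docker setup), not the post's
# exact file, which isn't shown in this hunk.
mkdir ~/adguard && cd ~/adguard
cat << 'EOF' > docker-compose.yaml
version: "3"
services:
  adguardhome:
    image: adguard/adguardhome
    container_name: adguard
    restart: unless-stopped
    ports:
      - "53:53/tcp"      # DNS
      - "53:53/udp"      # DNS
      - "80:80/tcp"      # web UI
      - "3000:3000/tcp"  # initial setup wizard
    volumes:
      - ./workdir:/opt/adguardhome/work
      - ./confdir:/opt/adguardhome/conf
EOF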
@ -158,9 +159,9 @@ AdGuard Home ships with pretty sensible defaults so there's not really a huge ne
### Getting requests to AdGuard Home
Normally, you'd tell your Wifi router what DNS server you want to use, and it would relay that information to the connected DHCP clients. Google Wifi is a bit funny, in that it wants to function as a DNS proxy for the network. When you configure a custom DNS server for Google Wifi, it still tells the DHCP clients to send the requests to the router, and the router then forwards the queries on to the configured DNS server.
I already have Google Wifi set up to use my Windows DC (at `192.168.1.5`) for DNS. That lets me easily access systems on my internal `lab.bowdre.net` domain without having to manually configure DNS, and the DC forwards resolution requests it can't handle on to the upstream (internet) DNS servers.
To easily insert my AdGuard Home instance into the flow, I pop in to my Windows DC and configure the AdGuard Home address (`192.168.1.2`) as the primary DNS forwarder. The DC will continue to handle internal resolutions, and anything it can't handle will now get passed up the chain to AdGuard Home. And this also gives me a bit of a failsafe, in that queries will fail back to the previously-configured upstream DNS if AdGuard Home doesn't respond within a few seconds.
![Setting AdGuard Home as a forwarder](bw09OXG7f.png)

View file

@ -10,6 +10,7 @@ tags:
- cloud
- gcp
- security
- selfhosting
title: BitWarden password manager self-hosted on free Google Cloud instance
---

View file

@ -14,6 +14,7 @@ tags:
- automation
- networking
- security
- selfhosting
title: Cloud-hosted WireGuard VPN for remote homelab access
featured: false
---
@ -28,11 +29,11 @@ It's a pretty slick setup, if I do say so myself. Anyway, this post will discuss
### WireGuard Concepts, in Brief
WireGuard does things a bit differently from other VPN solutions I've used in the past. For starters, there aren't any user accounts to manage, and in fact users don't really come into the picture at all. WireGuard also doesn't really distinguish between _client_ and _server_; the devices on both ends of a tunnel connection are _peers_, and they use the same software package and very similar configurations. Each WireGuard peer is configured with a virtual network interface with a private IP address used for the tunnel network, and a configuration file tells it which tunnel IP(s) will be used by the other peer(s). Each peer has its own cryptographic _private_ key, and the other peers get a copy of the corresponding _public_ key added to their configuration so that all the peers can recognize each other and encrypt/decrypt traffic appropriately. This mapping of peer addresses to public keys facilitates what WireGuard calls [Cryptokey Routing](https://www.wireguard.com/#cryptokey-routing).
Once the peers are configured, all it takes is bringing up the WireGuard virtual interface on each peer to establish the tunnel and start passing secure traffic.
You can read a lot more fascinating details about how this all works back on the [WireGuard homepage](https://www.wireguard.com/#conceptual-overview) (and even more in this [protocol description](https://www.wireguard.com/protocol/)) but this at least covers the key points I needed to grok prior to a successful initial deployment.
For my hybrid cloud solution, I also leaned heavily upon [this write-up of a WireGuard Site-to-Site configuration](https://gist.github.com/insdavm/b1034635ab23b8839bf957aa406b5e39) for how to get traffic flowing between my on-site environment, cloud-hosted WireGuard server, and "Road Warrior" client devices, and drew from [this documentation on implementing WireGuard in GCP](https://github.com/agavrel/wireguard_google_cloud) as well. The [VyOS documentation for configuring the built-in WireGuard interface](https://docs.vyos.io/en/latest/configuration/interfaces/wireguard.html) was also quite helpful to me.
Okay, enough background; let's get this thing going.
@ -54,9 +55,9 @@ The other defaults are fine, but I'll hold off on clicking the friendly blue
![Instance creation advanced settings](20211028_instance_advanced_settings.png)
##### Network Configuration
Expanding the **Networking** section of the request form lets me add a new `wireguard` network tag, which will make it easier to target the instance with a firewall rule later. I also want to enable the _IP Forwarding_ option so that the instance will be able to do router-like things.
By default, the new instance will get assigned a public IP address that I can use to access it externally - but this address is _ephemeral_ so it will change periodically. Normally I'd overcome this by [using ddclient to manage its dynamic DNS record](/bitwarden-password-manager-self-hosted-on-free-google-cloud-instance#configure-dynamic-dns), but (looking ahead) [VyOS's WireGuard interface configuration](https://docs.vyos.io/en/latest/configuration/interfaces/wireguard.html#interface-configuration) unfortunately only supports connecting to an IP rather than a hostname. That means I'll need to reserve a _static_ IP address for my instance.
I can do that by clicking on the _Default_ network interface to expand the configuration. While I'm here, I'll first change the **Network Service Tier** from _Premium_ to _Standard_ to save a bit of money on network egress fees. _(This might be a good time to mention that while the compute instance itself is free, I will have to spend [about $3/mo for the public IP](https://cloud.google.com/vpc/network-pricing#:~:text=internal%20IP%20addresses.-,External%20IP%20address%20pricing,-You%20are%20charged), as well as [$0.085/GiB for internet egress via the Standard tier](https://cloud.google.com/vpc/network-pricing#:~:text=or%20Cloud%20Interconnect.-,Standard%20Tier%20pricing,-Egress%20pricing%20is) (versus [$0.12/GiB on the Premium tier](https://cloud.google.com/vpc/network-pricing#:~:text=Premium%20Tier%20pricing)). So not entirely free, but still pretty damn cheap for a cloud-hosted VPN that I control completely.)_
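For reference, roughly the same provisioning could be scripted with the `gcloud` CLI. This is just a hedged sketch; the resource names, region, zone, and machine type below are placeholders rather than values from this build:

```sh
# reserve a static external IP on the Standard network tier
gcloud compute addresses create wireguard-ip \
  --region=us-east1 --network-tier=STANDARD

# create the instance with IP forwarding enabled and the wireguard network tag
gcloud compute instances create wireguard \
  --zone=us-east1-b \
  --machine-type=e2-micro \
  --can-ip-forward \
  --tags=wireguard \
  --network-tier=STANDARD \
  --address=wireguard-ip
```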
@ -74,14 +75,14 @@ Okay, now that I've got my keys, I can click the **Add Item** button and paste i
![Security configuration](20211027_security_settings.png)
And that's it for the pre-deploy configuration! Time to hit **Create** to kick it off.
![Do it!](20211027_creation_time.png)
The instance creation will take a couple of minutes but I can go ahead and get the firewall sorted while I wait.
#### Firewall
Google Cloud's default firewall configuration will let me reach my new server via SSH without needing to configure anything, but I'll need to add a new rule to allow the WireGuard traffic. I do this by going to **VPC > Firewall** and clicking the button at the top to **[Create Firewall Rule](https://console.cloud.google.com/networking/firewalls/add)**. I give it a name (`allow-wireguard-ingress`), select the rule target by specifying the `wireguard` network tag I had added to the instance, and set the source range to `0.0.0.0/0`. I'm going to use the default WireGuard port so select the _udp:_ checkbox and enter `51820`.
![Firewall rule creation](20211027_firewall.png)
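The same rule could also be created from the CLI rather than the console; a hedged equivalent, using the name, port, source range, and tag described above:

```sh
gcloud compute firewall-rules create allow-wireguard-ingress \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=udp:51820 \
  --source-ranges=0.0.0.0/0 \
  --target-tags=wireguard
```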
@ -89,7 +90,7 @@ I'll click **Create** and move on.
#### WireGuard Server Setup
Once the **Compute Engine > Instances** [page](https://console.cloud.google.com/compute/instances) indicates that the instance is ready, I can make a note of the listed public IP and then log in via SSH:
```sh
ssh -i ~/.ssh/id_25519_wireguard {PUBLIC_IP}
```
@ -140,7 +141,7 @@ Then I can use the `wg genkey` command to generate the server's private key, sav
wg genkey | tee server.key | wg pubkey > server.pub
```
As I mentioned earlier, WireGuard will create a virtual network interface using an internal network to pass traffic between the WireGuard peers. By convention, that interface is `wg0` and it draws its configuration from a file in `/etc/wireguard` named `wg0.conf`. I could create a configuration file with a different name and thus wind up with a different interface name as well, but I'll stick with tradition to keep things easy to follow.
The format of the interface configuration file will need to look something like this:
```
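# Representative sketch of the general wg0.conf layout; the post's actual file
# isn't reproduced in this hunk, and the keys/addresses here are placeholders.
[Interface]
Address = 10.200.200.1/24
ListenPort = 51820
PrivateKey = {SERVER_PRIVATE_KEY}

[Peer]
PublicKey = {PEER_PUBLIC_KEY}
AllowedIPs = 10.200.200.2/32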
@ -236,7 +237,7 @@ save
I can check the status of WireGuard on VyOS (and view the public key!) like so:
```sh
$ show interfaces wireguard wg0 summary
interface: wg0
public key: {VYOS_PUBLIC_KEY}
private key: (hidden)
@ -259,7 +260,7 @@ PublicKey = {VYOS_PUBLIC_KEY}
AllowedIPs = 10.200.200.2/32, 192.168.1.0/24, 172.16.0.0/16
```
This time, I'm telling WireGuard that the new peer has IP `10.200.200.2` but that it should also get traffic destined for the `192.168.1.0/24` and `172.16.0.0/16` networks, my home and lab networks. Again, the `AllowedIPs` parameter is used for WireGuard's Cryptokey Routing so that it can keep track of which traffic goes to which peers (and which key to use for encryption).
After saving the file, I can either restart WireGuard by bringing the interface down and back up (`wg-quick down wg0 && wg-quick up wg0`), or I can reload it on the fly with:
```sh
@ -452,7 +453,7 @@ wg syncconf wg0 <(wg-quick strip wg0)
Back in the `clients/` directory, I can use `qrencode` to render the phone configuration file (keys and all!) as a QR code:
```sh
qrencode -t ansiutf8 < phone1.conf
```
![QR code config](20211028_qrcode_config.png)
@ -489,7 +490,7 @@ Profile: VPN on Strange WiFi
A1: Tasker Function [
Function: WireGuardSetTunnel(true,wireguard-gcp) ]
If [ %TRUSTED_WIFI !Set ]
Exit Task: DisconnectVPN
A1: Tasker Function [
Function: WireGuardSetTunnel(false,wireguard-gcp) ]
@ -499,5 +500,5 @@ Profile: VPN on Strange WiFi
_Automagic!_
#### Other Peers
Any additional peers that need to be added in the future will likely follow one of the above processes. The steps are always to generate the peer's key pair, use the private key to populate the `[Interface]` portion of the peer's config, configure the `[Peer]` section with the _public_ key, allowed IPs, and endpoint address of the peer it will be connecting to, and then to add the new peer's _public_ key and internal WireGuard IP to a new `[Peer]` section of the existing peer's config.
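For illustration, here's a minimal sketch of that flow for a hypothetical new laptop peer (the file names, tunnel IP, and allowed networks below are placeholders consistent with the examples above, not values from an actual deployment):

```sh
# 1. generate the new peer's keypair
wg genkey | tee laptop.key | wg pubkey > laptop.pub

# 2. the laptop's config gets the private key in [Interface] and the server's
#    public key, allowed IPs, and endpoint in [Peer]
cat <<EOF > laptop.conf
[Interface]
Address = 10.200.200.3/24
PrivateKey = $(cat laptop.key)

[Peer]
PublicKey = {SERVER_PUBLIC_KEY}
AllowedIPs = 10.200.200.0/24, 192.168.1.0/24, 172.16.0.0/16
Endpoint = {GCP_PUBLIC_IP}:51820
EOF

# 3. on the server, add a [Peer] block with the laptop's public key and its
#    tunnel IP (10.200.200.3/32), then reload the running config
wg syncconf wg0 <(wg-quick strip wg0)
```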

View file

@ -10,10 +10,11 @@ tags:
- cloud
- containers
- chat
- selfhosting
title: Federated Matrix Server (Synapse) on Oracle Cloud's Free Tier
---
I've heard a lot lately about how generous [Oracle Cloud's free tier](https://www.oracle.com/cloud/free/) is, particularly when [compared with the free offerings](https://github.com/cloudcommunity/Cloud-Service-Providers-Free-Tier-Overview) from other public cloud providers. Signing up for an account was fairly straight-forward, though I did have to wait a few hours for an actual human to call me on an actual telephone to verify my account. Once in, I thought it would be fun to try building my own [Matrix](https://matrix.org/) homeserver to really benefit from the network's decentralized-but-federated model for secure end-to-end encrypted communications.
There are two primary projects for Matrix homeservers: [Synapse](https://github.com/matrix-org/synapse/) and [Dendrite](https://github.com/matrix-org/dendrite). Dendrite is the newer, more efficient server, but it's not quite feature complete. I'll be using Synapse for my build to make sure that everything works right off the bat, and I will be running the server in a Docker container to make it (relatively) easy to replace if I feel more comfortable about Dendrite in the future.
@ -38,7 +39,7 @@ Now I can finally click the blue **Create Instance** button at the bottom of the
![Logged in!](5PD1H7b1O.png)
### DNS setup
According to [Oracle's docs](https://docs.oracle.com/en-us/iaas/Content/Network/Tasks/managingpublicIPs.htm), the public IP assigned to my instance is mine until I terminate the instance. It should even remain assigned if I stop or restart the instance, just as long as I don't delete the virtual NIC attached to it. So I'll skip the [`ddclient`-based dynamic DNS configuration I've used in the past](/bitwarden-password-manager-self-hosted-on-free-google-cloud-instance#configure-dynamic-dns) and instead go straight to my registrar's DNS management portal and create a new `A` record for `matrix.bowdre.net` with the instance's public IP.
While I'm managing DNS, it might be good to take a look at the requirements for [federating my new server](https://github.com/matrix-org/synapse/blob/master/docs/federate.md#setting-up-federation) with the other Matrix servers out there. I'd like for user identities on my server to be identified by the `bowdre.net` domain (`@user:bowdre.net`) rather than the full `matrix.bowdre.net` FQDN (`@user:matrix.bowdre.net` is kind of cumbersome). The standard way to do this is to leverage [`.well-known` delegation](https://github.com/matrix-org/synapse/blob/master/docs/delegate.md#well-known-delegation), where the URL at `http://bowdre.net/.well-known/matrix/server` would return a JSON structure telling other Matrix servers how to connect to mine:
```json
@ -47,7 +48,7 @@ While I'm managing DNS, it might be good to take a look at the requirements for
}
```
I don't *currently* have another server already handling requests to `bowdre.net`, so for now I'll add another `A` record with the same public IP address to my DNS configuration. Requests for both `bowdre.net` and `matrix.bowdre.net` will reach the same server instance, but those requests will be handled differently. More on that later.
An alternative to this `.well-known` delegation would be to use [`SRV` DNS record delegation](https://github.com/matrix-org/synapse/blob/master/docs/delegate.md#srv-dns-record-delegation) to accomplish the same thing. I'd create an `SRV` record for `_matrix._tcp.bowdre.net` with the data `0 10 8448 matrix.bowdre.net` (priority=`0`, weight=`10`, port=`8448`, target=`matrix.bowdre.net`) which would again let other Matrix servers know where to send the federation traffic for my server. This approach has an advantage of not needing to make any changes on the `bowdre.net` web server, but it would require the delegated `matrix.bowdre.net` server to *also* [return a valid certificate for `bowdre.net`](https://matrix.org/docs/spec/server_server/latest#:~:text=If%20the%20/.well-known%20request%20resulted,present%20a%20valid%20certificate%20for%20%3Chostname%3E.). Trying to get a Let's Encrypt certificate for a server name that doesn't resolve authoritatively in DNS sounds more complicated than I want to get into with this project, so I'll move forward with my plan to use the `.well-known` delegation instead.
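For reference, that alternative record would look something like this in zone-file terms (the TTL is arbitrary and just for illustration):

```
_matrix._tcp.bowdre.net. 3600 IN SRV 0 10 8448 matrix.bowdre.net.
```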
@ -68,17 +69,17 @@ The *Ingress Rules* section lists the existing inbound firewall exceptions, whic
I want this to apply to traffic from any source IP so I enter the CIDR `0.0.0.0/0`, and I enter the *Destination Port Range* as `80,443`. I also add a brief description and click **Add Ingress Rules**.
![Adding an ingress rule](2fbKJc5Y6.png)
Success! My new ingress rules appear at the bottom of the list.
![New rules added](s5Y0rycng.png)
That gets traffic from the internet and to my instance, but the OS is still going to drop the traffic at its own firewall. I'll need to work with `iptables` to change that. (You typically use `ufw` to manage firewalls more easily on Ubuntu, but it isn't included on this minimal image and seemed to butt heads with `iptables` when I tried adding it. I eventually decided it was better to just interact with `iptables` directly). I'll start by listing the existing rules on the `INPUT` chain:
```
$ sudo iptables -L INPUT --line-numbers
Chain INPUT (policy ACCEPT)
num target prot opt source destination
1 ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
2 ACCEPT icmp -- anywhere anywhere
3 ACCEPT all -- anywhere anywhere
4 ACCEPT udp -- anywhere anywhere udp spt:ntp
5 ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
6 REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
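# (The commands that add the new rules fall outside this hunk; a hypothetical
#  sketch, inserting them ahead of the REJECT rule at position 6:)
$ sudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 80 -j ACCEPT
$ sudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 443 -j ACCEPT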
@ -94,10 +95,10 @@ And then I'll confirm that the order is correct:
```
$ sudo iptables -L INPUT --line-numbers
Chain INPUT (policy ACCEPT)
num target prot opt source destination
1 ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
2 ACCEPT icmp -- anywhere anywhere
3 ACCEPT all -- anywhere anywhere
4 ACCEPT udp -- anywhere anywhere udp spt:ntp
5 ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
6 ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:https
@ -204,7 +205,7 @@ sudo apt update
sudo apt install caddy
```
Then I just need to put my configuration into the default `Caddyfile`, including the required `.well-known` delegation piece from earlier.
```
$ sudo vi /etc/caddy/Caddyfile
matrix.bowdre.net {
@ -253,7 +254,7 @@ Browsing to `https://matrix.bowdre.net` shows a blank page - but a valid and tru
The `.well-known` URL also returns the expected JSON:
![.well-known](6IRPHhr6u.png)
And trying to hit anything else at `https://bowdre.net` brings me right back here.
And again, the config to do all this (including getting valid certs for two server names!) is just 11 lines long. Caddy is seriously and magically cool.
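Those checks can also be run from a terminal; a quick supplemental sketch with plain `curl` (not a step from the original setup):

```
# fetch the delegation document and inspect the TLS handshake along the way
curl -v https://bowdre.net/.well-known/matrix/server
```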
@ -335,15 +336,15 @@ $ docker run -it --rm \
Unable to find image 'matrixdotorg/synapse:latest' locally
latest: Pulling from matrixdotorg/synapse
69692152171a: Pull complete
66a3c154490a: Pull complete
3e35bdfb65b2: Pull complete
f2c4c4355073: Pull complete
65d67526c337: Pull complete
5186d323ad7f: Pull complete
436afe4e6bba: Pull complete
c099b298f773: Pull complete
50b871f28549: Pull complete
Digest: sha256:5ccac6349f639367fcf79490ed5c2377f56039ceb622641d196574278ed99b74
Status: Downloaded newer image for matrixdotorg/synapse:latest
Creating log config /data/bowdre.net.log.config
@ -408,7 +409,7 @@ Now I can fire up my [Matrix client of choice](https://element.io/get-started),
### Wrap-up
And that's it! I now have my own Matrix server, and I can use my new account for secure chats with Matrix users on any other federated homeserver. It works really well for directly messaging other individuals, and also for participating in small group chats. The server *does* kind of fall on its face if I try to join a massively-populated (like 500+ users) room, but I'm not going to complain about that too much on a free-tier server.
All in, I'm pretty pleased with how this little project turned out, and I learned quite a bit along the way. I'm tremendously impressed by Caddy's power and simplicity, and I look forward to using it more in future projects.
### Update: Updating
After a while, it's probably a good idea to update both the Ubuntu server and the Synapse container running on it. Updating the server itself is as easy as:
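Presumably the standard Ubuntu package routine, something like:

```
# assumed commands; the post's exact steps aren't shown in this hunk
sudo apt update
sudo apt upgrade
```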

View file

@ -20,6 +20,7 @@ tags:
- docker
- cloud
- tailscale
- selfhosting
comment: true # Disable comment if false.
---
I recently started using [Obsidian](https://obsidian.md/) for keeping notes, tracking projects, and just generally organizing all the information that would otherwise pass into my brain and then fall out the other side. Unlike other similar solutions which operate entirely in *The Cloud*, Obsidian works with Markdown files stored in a local folder[^sync], which I find to be very attractive. Not only will this allow me to easily transfer my notes between apps if I find something I like better than Obsidian, but it also opens the door to using `git` to easily back up all this important information.

View file

@ -21,6 +21,7 @@ tags:
- docker
- containers
- chat
- selfhosting
comment: true # Disable comment if false.
---
**Non-technical users deserve private communications, too.**

View file

@ -21,6 +21,7 @@ tags:
- tailscale
- wireguard
- containers
- selfhosting
comment: true # Disable comment if false.
---
I've shared in the past about how I use [custom search engines in Chrome](/abusing-chromes-custom-search-engines-for-fun-and-profit/) as quick web shortcuts. And I may have mentioned [my love for Tailscale](/tags/tailscale/) a time or two as well. Well I recently learned of a way to combine these two passions: [Tailscale golink](https://github.com/tailscale/golink). The [golink announcement post on the Tailscale blog](https://tailscale.com/blog/golink/) offers a great overview of the service: