Mirror of https://github.com/jbowdre/runtimeterror.git, synced 2024-11-22 06:52:18 +00:00
Compare commits
No commits in common. "c3b39f26895085f9fbf7df64a7029d4e51291a0f" and "08c5efb655d6ff1f1f04eb5cf165f476c33a8fac" have entirely different histories.
c3b39f2689...08c5efb655
16 changed files with 24 additions and 134 deletions
@@ -1,7 +1,7 @@
---
title: "/homelab"
date: "2024-05-26T21:30:51Z"
lastmod: "2024-09-22T19:16:04Z"
lastmod: "2024-08-13T02:12:54Z"
aliases:
- playground
description: "The systems I use for fun and enrichment."
@@ -14,8 +14,6 @@ categories: slashes

Everything is connected to my [Tailscale](https://tailscale.com) tailnet, with a GitOps-managed ACL to allow access as needed. This lets me access and manage systems without really caring if they're local or remote. [Tailscale is magic](/secure-networking-made-simple-with-tailscale/).

The Docker containers are (generally) managed with [Portainer](https://www.portainer.io/).
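
For a rough sense of what that entails, Portainer itself can run as one more small Compose service; the sketch below is illustrative only, with the image tag, published port, and volume names assumed rather than pulled from this repo:

```yaml
# Illustrative sketch: running Portainer CE under Docker Compose (values are assumptions)
services:
  portainer:
    image: portainer/portainer-ce:latest
    container_name: portainer
    restart: unless-stopped
    ports:
      - 9443:9443                                   # Portainer's HTTPS web UI
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # lets Portainer manage the local Docker engine
      - portainer_data:/data                        # persists Portainer's own settings
volumes:
  portainer_data:
```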

### On Premise

**Proxmox VE 8 Cluster**
@@ -32,10 +30,12 @@ The Docker containers are (generally) managed with [Portainer](https://www.porta
- [Unifi USW Flex XG 10GbE Switch](https://store.ui.com/us/en/collections/unifi-switching-utility-10-gbps-ethernet/products/unifi-flex-xg)

The Proxmox cluster hosts a number of VMs and LXC containers:
- `doc`: Ubuntu 22.04 Docker host for various on-prem container workloads, served via [Tailscale Serve](/tailscale-ssh-serve-funnel/#tailscale-serve) / [Caddy + Tailscale](/caddy-tailscale-alternative-cloudflare-tunnel/):
- `doc`: Ubuntu 22.04 Docker host for various on-prem container workloads, served via [Tailscale Serve](/tailscale-ssh-serve-funnel/#tailscale-serve) / [Cloudflare Tunnel](/publish-services-cloudflare-tunnel/):
  - [Calibre Web](https://github.com/janeczku/calibre-web) for managing my ebooks
  - [Crowdsec](https://www.crowdsec.net/) log processor
  - [Cyberchef](https://github.com/gchq/CyberChef), the Cyber Swiss Army Knife
  - [Hashicorp Vault](https://www.vaultproject.io/) for secrets management
  - [Miniflux](https://miniflux.app/) feed reader
  - [RIPE Atlas Probe](https://www.ripe.net/analyse/internet-measurements/ripe-atlas/) for measuring internet connectivity
  - [SilverBullet](https://silverbullet.md), a web-based personal knowledge management system
  - [Tailscale Golink](https://github.com/tailscale/golink), a private shortlink service ([post](/tailscale-golink-private-shortlinks-tailnet/))
@@ -74,8 +74,9 @@ I like to know what's flying overhead, and I'm also feeding flight data to [flig
- `smp1`: Ubuntu 22.04 [SimpleX](/simplex/) server

**[Vultr](https://www.vultr.com)**
- `volly`: Ubuntu 22.04 Docker host for various workloads, served either through [Caddy](https://caddyserver.com/) or [Caddy + Tailscale](/caddy-tailscale-alternative-cloudflare-tunnel/):
- `volly`: Ubuntu 22.04 Docker host for various workloads, served either through [Caddy](https://caddyserver.com/) or [Cloudflare Tunnel](/publish-services-cloudflare-tunnel/):
  - [Agate](https://github.com/mbrubeck/agate) Gemini server ([post](/gemini-capsule-gempost-github-actions/))
  - [Crowdsec](https://www.crowdsec.net) security engine
  - [Kineto](https://github.com/beelux/kineto) Gemini-to-HTTP proxy ([post](/gemini-capsule-gempost-github-actions/))
  - [Linkding](https://github.com/sissbruecker/linkding) bookmark manager serving [links.bowdre.net](https://links.bowdre.net/bookmarks/shared)
  - [ntfy](https://ntfy.sh/) notification service ([post](/easy-push-notifications-with-ntfy/))
@@ -71,7 +71,7 @@ I can then go to Service Broker and drag the new fields onto the Custom Form can
![Service Broker form](unhgNySSzz.png)

### vRO workflow
Okay, so I've got the information I want to pass on to vCenter. Now I need to whip up a new workflow in vRO that will actually do that (after [telling vRO how to connect to the vCenter](/vra8-custom-provisioning-part-two/#interlude-connecting-vro-to-vcenter), of course). I'll want to call this after the VM has been provisioned, so I'll cleverly call the workflow "VM Post-Provisioning".
Okay, so I've got the information I want to pass on to vCenter. Now I need to whip up a new workflow in vRO that will actually do that (after [telling vRO how to connect to the vCenter](/vra8-custom-provisioning-part-two#interlude-connecting-vro-to-vcenter), of course). I'll want to call this after the VM has been provisioned, so I'll cleverly call the workflow "VM Post-Provisioning".
![Naming the new workflow](X9JhgWx8x.png)

The workflow will have a single input from vRA, `inputProperties` of type `Properties`.
@@ -44,7 +44,7 @@ Gateway=192.168.1.1
DNS=192.168.1.5
```

By the way, that `192.168.1.5` address is my Windows DC/DNS server that I use for [my homelab environment](/vmware-home-lab-on-intel-nuc-9/#basic-infrastructure). That's the DNS server that's configured on my Google Wifi router, and it will continue to handle resolution for local addresses.
By the way, that `192.168.1.5` address is my Windows DC/DNS server that I use for [my homelab environment](/vmware-home-lab-on-intel-nuc-9#basic-infrastructure). That's the DNS server that's configured on my Google Wifi router, and it will continue to handle resolution for local addresses.

I also disabled DHCP by setting `DHCP=no` in `/etc/systemd/network/99-dhcp-en.network`:
@@ -24,7 +24,7 @@ I figured I could combine the excellent [Reolink integration for Home Assistant]

### Alert on motion detection
{{% notice note "Ntfy Integration" %}}
Since manually configuring ntfy in Home Assistant via the [RESTful Notifications integration](/easy-push-notifications-with-ntfy/#notify-configuration), I found that a [ntfy-specific integration](https://github.com/ivanmihov/homeassistant-ntfy.sh) was available through the [Home Assistant Community Store](https://hacs.xyz/) addon. That setup is a bit more flexible so I've switched my setup to use it instead:
Since manually configuring ntfy in Home Assistant via the [RESTful Notifications integration](/easy-push-notifications-with-ntfy#notify-configuration), I found that a [ntfy-specific integration](https://github.com/ivanmihov/homeassistant-ntfy.sh) was available through the [Home Assistant Community Store](https://hacs.xyz/) addon. That setup is a bit more flexible so I've switched my setup to use it instead:
```yaml
# configuration.yaml
notify:
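  # A minimal illustration of the RESTful-notify setup described above; the
  # notifier name and topic URL are placeholders, not values from the original config.
  - name: ntfy
    platform: rest
    method: POST_JSON
    resource: https://ntfy.sh/example-topic
    message_param_name: message
    title_param_name: title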
@@ -1,111 +0,0 @@
---
title: "Caddy + Tailscale as an Alternative to Cloudflare Tunnel"
date: "2024-09-22T19:12:52Z"
# lastmod: 2024-09-22
description: "Combining the magic of Caddy and Tailscale to serve web content from my homelab - and declaring independence from Cloudflare in the process."
featured: false
toc: true
reply: true
categories: Self-Hosting
tags:
- caddy
- cloud
- containers
- docker
- networking
- selfhosting
- tailscale
---
Earlier this year, I [shared how I used Cloudflare Tunnel](/publish-services-cloudflare-tunnel/) to publish some self-hosted resources on the internet without needing to expose any part of my home network. Since then, I've [moved many resources to bunny.net](https://srsbsns.lol/i-just-hopped-to-bunnynet/) ([including this website](/further-down-the-bunny-hole/)). I left some domains at Cloudflare, primarily just to benefit from the convenience of Cloudflare Tunnel, but I wasn't thrilled about being so dependent upon a single company that controls so much of the internet.

However, a [post on Tailscale's blog this week](https://tailscale.com/blog/last-reverse-proxy-you-need) reminded me that there was another easy approach using solutions I'm already using heavily: [Caddy](/tags/caddy) and [Tailscale](/tags/tailscale). Caddy is a modern web server (that works great as a reverse proxy with automatic HTTPS), and Tailscale [makes secure networking simple](/secure-networking-made-simple-with-tailscale/). Combining the two allows me to securely serve web services without any messy firewall configurations.

So here's how I ditched Cloudflare Tunnel in favor of Caddy + Tailscale.

### Docker Compose config
To keep things simple, I'll deploy the [same speedtest app I used to demo Cloudflare Tunnel](https://runtimeterror.dev/publish-services-cloudflare-tunnel/#speedtest-demo) on a Docker host located in my [homelab](/homelab).

Here's a basic config to run [openspeedtest](https://github.com/openspeedtest/Docker-Image) on HTTP only (defaults to port `3000`):

```yaml
# torchlight! {"lineNumbers":true}
services:
  speedtest:
    image: openspeedtest/latest
    container_name: speedtest
    restart: unless-stopped
    ports:
      - 3000:3000
```

### A Tailscale sidecar
I can easily add [Tailscale in a sidecar container](/tailscale-serve-docker-compose-sidecar/) to make my new speedtest available within my tailnet:

```yaml
# torchlight! {"lineNumbers":true}
services:
  speedtest:
    image: openspeedtest/latest
    container_name: speedtest
    restart: unless-stopped
    ports: # [tl! --:1]
      - 3000:3000
    network_mode: service:tailscale # [tl! ++]

  tailscale: # [tl! ++:12]
    image: tailscale/tailscale:latest
    container_name: speedtest-tailscale
    restart: unless-stopped
    environment:
      TS_AUTHKEY: ${TS_AUTHKEY:?err}
      TS_HOSTNAME: ${TS_HOSTNAME:-ts-docker}
      TS_STATE_DIR: /var/lib/tailscale/
    volumes:
      - ./ts_data:/var/lib/tailscale/
```

Note that I no longer need to ask the host to expose port `3000` from the container; instead, I bridge the `speedtest` container's network with that of the `tailscale` container.

And I create a simple `.env` file with the secrets required for connecting to Tailscale using a [pre-authentication key](https://tailscale.com/kb/1085/auth-keys):

```shell
# torchlight! {"lineNumbers":true}
TS_AUTHKEY=tskey-auth-somestring-somelongerstring
TS_HOSTNAME=speedtest
```

After a quick `docker compose up -d` I can access my new speedtest at `http://speedtest.tailnet-name.ts.net:3000`. Next I just need to put it behind Caddy.

### Caddy configuration
I already have [Caddy](https://caddyserver.com/) running on a server in [Vultr](https://www.vultr.com/) ([referral link](https://www.vultr.com/?ref=9488431)) so I'll be using that to front my new speedtest server. I add a DNS record in Bunny for `speed.runtimeterror.dev` pointed to the server's public IP address, and then add a corresponding block to my `/etc/caddy/Caddyfile` configuration:

```text
speed.runtimeterror.dev {
    bind 192.0.2.1 # replace with server's public interface address
    reverse_proxy http://speedtest.tailnet-name.ts.net:3000
}
```

{{% notice note "Caddy binding" %}}
Since I'm already using Tailscale Serve for other services on this server, I use the `bind` directive to explicitly tell Caddy to listen on the server's public IP address. By default, it will try to listen on *all* interfaces, and that would conflict with `tailscaled`, which is already bound to the tailnet-internal IP.
{{% /notice %}}

The `reverse_proxy` directive points to speedtest's HTTP endpoint within my tailnet; all traffic between tailnet addresses is already encrypted, and I can just let Caddy obtain and serve the SSL certificate automagically.

Now I just need to reload the Caddyfile:

```shell
sudo caddy reload -c /etc/caddy/Caddyfile # [tl! .cmd]
INFO using config from file {"file": "/etc/caddy/Caddyfile"} # [tl! .nocopy:1]
INFO adapted config to JSON {"adapter": "caddyfile"}
```

And I can try out my speedtest at `https://speed.runtimeterror.dev`:

![OpenSpeedTest results showing a download speed of 194.1 Mbps, upload speed of 147.8 Mbps, and ping of 20 ms with 0.6 ms jitter. A graph displays connection speed over time.](speedtest.png)

*Not bad!*

### Conclusion
Combining the powers (and magic) of Caddy and Tailscale makes it easy to publicly serve content from private resources without compromising on security *or* extending vendor lock-in. This will dramatically simplify migrating the rest of my domains from Cloudflare to Bunny.
Binary file not shown. (Before: 222 KiB)
@@ -22,7 +22,7 @@ For a while now, I've been using an [OpenVPN Access Server](https://openvpn.net/

I found that solution in [WireGuard](https://www.wireguard.com/), which provides an extremely efficient secure tunnel implemented directly in the Linux kernel. It has a much smaller (and easier-to-audit) codebase, requires minimal configuration, and uses the latest crypto wizardry to securely connect multiple systems. It took me an hour or so of fumbling to get WireGuard deployed and configured on a fresh (and minimal) Ubuntu 20.04 VM running on my ESXi 7 homelab host, and I was pretty happy with the performance, stability, and resource usage of the new setup. That new VM idled at a full _tenth_ of the memory usage of my OpenVPN AS, and it only required a single port to be forwarded into my home network.

Of course, I soon realized that the setup could be _even better:_ I'm now running a WireGuard server on the Google Cloud free tier, and I've configured the [VyOS virtual router I use for my homelab stuff](/vmware-home-lab-on-intel-nuc-9/#networking) to connect to that cloud-hosted server to create a secure tunnel between the two without needing to punch any holes in my local network (or consume any additional resources). I can then connect my client devices to the WireGuard server in the cloud. From there, traffic intended for my home network gets relayed to the VyOS router, and internet-bound traffic leaves Google Cloud directly. So my self-managed VPN isn't just good for accessing my home lab remotely, but also more generally for encrypting traffic when on WiFi networks I don't control - allowing me to replace the paid ProtonVPN subscription I had been using for that purpose.
Of course, I soon realized that the setup could be _even better:_ I'm now running a WireGuard server on the Google Cloud free tier, and I've configured the [VyOS virtual router I use for my homelab stuff](/vmware-home-lab-on-intel-nuc-9#networking) to connect to that cloud-hosted server to create a secure tunnel between the two without needing to punch any holes in my local network (or consume any additional resources). I can then connect my client devices to the WireGuard server in the cloud. From there, traffic intended for my home network gets relayed to the VyOS router, and internet-bound traffic leaves Google Cloud directly. So my self-managed VPN isn't just good for accessing my home lab remotely, but also more generally for encrypting traffic when on WiFi networks I don't control - allowing me to replace the paid ProtonVPN subscription I had been using for that purpose.

It's a pretty slick setup, if I do say so myself. Anyway, this post will discuss how I implemented this, and what I learned along the way.
@@ -57,7 +57,7 @@ The other defaults are fine, but I'll be holding off on clicking the friendly blue
##### Network Configuration
Expanding the **Networking** section of the request form lets me add a new `wireguard` network tag, which will make it easier to target the instance with a firewall rule later. I also want to enable the _IP Forwarding_ option so that the instance will be able to do router-like things.

By default, the new instance will get assigned a public IP address that I can use to access it externally - but this address is _ephemeral_ so it will change periodically. Normally I'd overcome this by [using ddclient to manage its dynamic DNS record](/bitwarden-password-manager-self-hosted-on-free-google-cloud-instance/#configure-dynamic-dns), but (looking ahead) [VyOS's WireGuard interface configuration](https://docs.vyos.io/en/latest/configuration/interfaces/wireguard.html#interface-configuration) unfortunately only supports connecting to an IP rather than a hostname. That means I'll need to reserve a _static_ IP address for my instance.
By default, the new instance will get assigned a public IP address that I can use to access it externally - but this address is _ephemeral_ so it will change periodically. Normally I'd overcome this by [using ddclient to manage its dynamic DNS record](/bitwarden-password-manager-self-hosted-on-free-google-cloud-instance#configure-dynamic-dns), but (looking ahead) [VyOS's WireGuard interface configuration](https://docs.vyos.io/en/latest/configuration/interfaces/wireguard.html#interface-configuration) unfortunately only supports connecting to an IP rather than a hostname. That means I'll need to reserve a _static_ IP address for my instance.

I can do that by clicking on the _Default_ network interface to expand the configuration. While I'm here, I'll first change the **Network Service Tier** from _Premium_ to _Standard_ to save a bit of money on network egress fees. _(This might be a good time to mention that while the compute instance itself is free, I will have to spend [about $3/mo for the public IP](https://cloud.google.com/vpc/network-pricing#:~:text=internal%20IP%20addresses.-,External%20IP%20address%20pricing,-You%20are%20charged), as well as [$0.085/GiB for internet egress via the Standard tier](https://cloud.google.com/vpc/network-pricing#:~:text=or%20Cloud%20Interconnect.-,Standard%20Tier%20pricing,-Egress%20pricing%20is) (versus [$0.12/GiB on the Premium tier](https://cloud.google.com/vpc/network-pricing#:~:text=Premium%20Tier%20pricing)). So not entirely free, but still pretty damn cheap for a cloud-hosted VPN that I control completely.)_
@@ -487,7 +487,7 @@ Two quick pre-requisites first:
1. Open the WireGuard Android app, tap the three-dot menu button at the top right, expand the Advanced section, and enable the _Allow remote control apps_ option so that Tasker will be permitted to control WireGuard.
2. Exclude the WireGuard app from Android's battery optimization so that it doesn't have any problems running in the background. On (Pixel-flavored) Android 12, this can be done by going to **Settings > Apps > See all apps > WireGuard > Battery** and selecting the _Unrestricted_ option.

On to the Tasker config. The only changes will be in the [VPN on Strange Wifi](/auto-connect-to-protonvpn-on-untrusted-wifi-with-tasker/#vpn-on-strange-wifi) profile. I'll remove the OpenVPN-related actions from the Enter and Exit tasks and replace them with the built-in **Tasker > Tasker Function WireGuard Set Tunnel** action.
On to the Tasker config. The only changes will be in the [VPN on Strange Wifi](/auto-connect-to-protonvpn-on-untrusted-wifi-with-tasker#vpn-on-strange-wifi) profile. I'll remove the OpenVPN-related actions from the Enter and Exit tasks and replace them with the built-in **Tasker > Tasker Function WireGuard Set Tunnel** action.

For the Enter task, I'll set the tunnel status to `true` and specify the name of the tunnel as configured in the WireGuard app; the Exit task gets the status set to `false` to disable the tunnel. Both actions will be conditional upon the `%TRUSTED_WIFI` variable being unset.
![Tasker setup](20211028_tasker_setup.png)
@@ -89,7 +89,7 @@ Cool! Now I just need to do that same thing, but from vRealize Orchestrator. Fir

### Template changes
#### Cloud Template
Similar to the template changes I made for [optionally joining deployed servers to the Active Directory domain](/joining-vms-to-active-directory-in-site-specific-ous-with-vra8/#cloud-template), I'll just be adding a simple boolean checkbox to the `inputs` section of the template in Cloud Assembly:
Similar to the template changes I made for [optionally joining deployed servers to the Active Directory domain](/joining-vms-to-active-directory-in-site-specific-ous-with-vra8#cloud-template), I'll just be adding a simple boolean checkbox to the `inputs` section of the template in Cloud Assembly:
```yaml
formatVersion: 1
inputs:
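  # An illustrative sketch of the sort of boolean checkbox input described above;
  # the input name, title, and default value are assumptions rather than values
  # copied from the original template.
  adJoin:
    type: boolean
    title: 'Join to AD domain'
    default: true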
@@ -40,7 +40,7 @@ Now I can finally click the blue **Create Instance** button at the bottom of the
![Logged in!](5PD1H7b1O.png)

### DNS setup
According to [Oracle's docs](https://docs.oracle.com/en-us/iaas/Content/Network/Tasks/managingpublicIPs.htm), the public IP assigned to my instance is mine until I terminate the instance. It should even remain assigned if I stop or restart the instance, just as long as I don't delete the virtual NIC attached to it. So I'll skip the [`ddclient`-based dynamic DNS configuration I've used in the past](/bitwarden-password-manager-self-hosted-on-free-google-cloud-instance/#configure-dynamic-dns) and instead go straight to my registrar's DNS management portal and create a new `A` record for `matrix.bowdre.net` with the instance's public IP.
According to [Oracle's docs](https://docs.oracle.com/en-us/iaas/Content/Network/Tasks/managingpublicIPs.htm), the public IP assigned to my instance is mine until I terminate the instance. It should even remain assigned if I stop or restart the instance, just as long as I don't delete the virtual NIC attached to it. So I'll skip the [`ddclient`-based dynamic DNS configuration I've used in the past](/bitwarden-password-manager-self-hosted-on-free-google-cloud-instance#configure-dynamic-dns) and instead go straight to my registrar's DNS management portal and create a new `A` record for `matrix.bowdre.net` with the instance's public IP.

While I'm managing DNS, it might be good to take a look at the requirements for [federating my new server](https://github.com/matrix-org/synapse/blob/master/docs/federate.md#setting-up-federation) with the other Matrix servers out there. I'd like for user identities on my server to be identified by the `bowdre.net` domain (`@user:bowdre.net`) rather than the full `matrix.bowdre.net` FQDN (`@user:matrix.bowdre.net` is kind of cumbersome). The standard way to do this is to leverage [`.well-known` delegation](https://github.com/matrix-org/synapse/blob/master/docs/delegate.md#well-known-delegation), where the URL at `http://bowdre.net/.well-known/matrix/server` would return a JSON structure telling other Matrix servers how to connect to mine:
```json
@@ -52,7 +52,7 @@ I edited the apache config file to bind that new certificate on port 443, and to
```
After restarting apache, I verified that hitting `http://ipam.lab.bowdre.net` redirected me to `https://ipam.lab.bowdre.net`, and that the connection was secured with the shiny new certificate.

Remember how I've got a "Home" network as well as [several internal networks](/vmware-home-lab-on-intel-nuc-9/#networking) which only exist inside the lab environment? I dropped the phpIPAM instance on the Home network to make it easy to connect to, but it doesn't know how to talk to the internal networks where vRA will actually be deploying the VMs. So I added a static route to let it know that traffic to `172.16.0.0/16` would have to go through the Vyos router at `192.168.1.100`.
Remember how I've got a "Home" network as well as [several internal networks](/vmware-home-lab-on-intel-nuc-9#networking) which only exist inside the lab environment? I dropped the phpIPAM instance on the Home network to make it easy to connect to, but it doesn't know how to talk to the internal networks where vRA will actually be deploying the VMs. So I added a static route to let it know that traffic to `172.16.0.0/16` would have to go through the Vyos router at `192.168.1.100`.

This is Ubuntu, so I edited `/etc/netplan/99-netcfg-vmware.yaml` to add the `routes` section at the bottom:
```yaml
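# A sketch of the kind of routes addition described above; the interface name
# (ens160) is an assumption, while the destination and gateway come from the text.
network:
  ethernets:
    ens160:
      routes:
        - to: 172.16.0.0/16
          via: 192.168.1.100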
@@ -11,7 +11,7 @@ tags:
- vmware
title: Run scripts in guest OS with vRA ABX Actions
---
Thus far in my [vRealize Automation project](/categories/vmware), I've primarily been handing the payload over to vRealize Orchestrator to do the heavy lifting on the back end. This approach works really well for complex multi-part workflows (like when [generating unique hostnames](/vra8-custom-provisioning-part-two/#the-vro-workflow)), but it may be overkill for more linear tasks (such as just running some simple commands inside of a deployed guest OS). In this post, I'll explore how I use [vRA Action Based eXtensibility (ABX)](https://blogs.vmware.com/management/2020/09/vra-abx-flow.html) to do just that.
Thus far in my [vRealize Automation project](/categories/vmware), I've primarily been handing the payload over to vRealize Orchestrator to do the heavy lifting on the back end. This approach works really well for complex multi-part workflows (like when [generating unique hostnames](/vra8-custom-provisioning-part-two#the-vro-workflow)), but it may be overkill for more linear tasks (such as just running some simple commands inside of a deployed guest OS). In this post, I'll explore how I use [vRA Action Based eXtensibility (ABX)](https://blogs.vmware.com/management/2020/09/vra-abx-flow.html) to do just that.

### The Goal
My ABX action is going to use PowerCLI to perform a few steps inside a deployed guest OS (Windows-only for this demonstration):
@@ -69,9 +69,9 @@ resources:
In the Resources section of the cloud template, I'm going to add a few properties that will tell the ABX script how to connect to the appropriate vCenter and then the VM.
- `vCenter`: The vCenter server where the VM will be deployed, and thus the server which PowerCLI will authenticate against. In this case, I've only got one vCenter, but a larger environment might have multiples. Defining this in the cloud template makes it easy to select automagically if needed. (For instance, if I had a `bow-vcsa` and a `dre-vcsa` for my different sites, I could do something like `vCenter: '${input.site}-vcsa.lab.bowdre.net'` here.)
- `vCenterUser`: The username with rights to the VM in vCenter. Again, this doesn't have to be a static assignment.
- `templateUser`: This is the account that will be used by `Invoke-VmScript` to log in to the guest OS. My template will use the default `Administrator` account for non-domain systems, but the `lab\vra` service account on domain-joined systems (using the `adJoin` input I [set up earlier](/joining-vms-to-active-directory-in-site-specific-ous-with-vra8/#cloud-template)).
- `templateUser`: This is the account that will be used by `Invoke-VmScript` to log in to the guest OS. My template will use the default `Administrator` account for non-domain systems, but the `lab\vra` service account on domain-joined systems (using the `adJoin` input I [set up earlier](/joining-vms-to-active-directory-in-site-specific-ous-with-vra8#cloud-template)).

I'll also include the `adminsList` input from earlier so that it can get passed to ABX as well. And I'm going to add in an `adJoin` property (mapped to the [existing `input.adJoin`](/joining-vms-to-active-directory-in-site-specific-ous-with-vra8/#cloud-template)) so that I'll have that to work with later.
I'll also include the `adminsList` input from earlier so that it can get passed to ABX as well. And I'm going to add in an `adJoin` property (mapped to the [existing `input.adJoin`](/joining-vms-to-active-directory-in-site-specific-ous-with-vra8#cloud-template)) so that I'll have that to work with later.

```yaml
# torchlight! {"lineNumbers": true}
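# A rough sketch of the sort of properties block described above; the resource
# name and exact values are assumptions, not copied from the original template.
resources:
  Cloud_vSphere_Machine_1:
    type: Cloud.vSphere.Machine
    properties:
      vCenter: '${input.site}-vcsa.lab.bowdre.net'
      vCenterUser: vra@lab.bowdre.net
      templateUser: Administrator
      adminsList: '${input.adminsList}'
      adJoin: '${input.adJoin}'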
@@ -479,7 +479,7 @@ Before I can test the new action, I'll need to first add an extensibility subscr
I'll be using this to call my new `configureGuest` action - so I'll name the subscription `Configure Guest`. I tie it to the `Compute Post Provision` event, and bind my action:
![Creating the new subscription](20210903_new_subscription_1.png)

I do have another subscription on that event already, [`VM Post-Provisioning`](/adding-vm-notes-and-custom-attributes-with-vra8/#extensibility-subscription) which is used to modify the VM object with notes and custom attributes. I'd like to make sure that my work inside the guest happens after that other subscription is completed, so I'll enable blocking and give it a priority of `2`:
I do have another subscription on that event already, [`VM Post-Provisioning`](/adding-vm-notes-and-custom-attributes-with-vra8#extensibility-subscription) which is used to modify the VM object with notes and custom attributes. I'd like to make sure that my work inside the guest happens after that other subscription is completed, so I'll enable blocking and give it a priority of `2`:
![Adding blocking to Configure Guest](20210903_new_subscription_2.png)

After hitting the **Save** button, I go back to that other `VM Post-Provisioning` subscription, set it to enable blocking, and give it a priority of `1`:
@@ -43,7 +43,7 @@ miniflux.runtimeterror.dev {

*and so on...* You get the idea. This approach works well for services I want/need to be public, but it does require me to manage those DNS records and keep track of which app is on which port. That can be kind of tedious.

And I don't really need all of these services to be public. Not because they're particularly sensitive, but I just don't really have a reason to share my personal [Miniflux](https://github.com/miniflux/v2) or [CyberChef](https://github.com/gchq/CyberChef) instances with the world at large. Those would be great candidates to proxy with [Tailscale Serve](/tailscale-ssh-serve-funnel/#tailscale-serve) so they'd only be available on my tailnet. Of course, with that setup I'd then have to differentiate the services based on external port numbers since they'd all be served with the same hostname. That's not ideal either.
And I don't really need all of these services to be public. Not because they're particularly sensitive, but I just don't really have a reason to share my personal [Miniflux](https://github.com/miniflux/v2) or [CyberChef](https://github.com/gchq/CyberChef) instances with the world at large. Those would be great candidates to proxy with [Tailscale Serve](/tailscale-ssh-serve-funnel#tailscale-serve) so they'd only be available on my tailnet. Of course, with that setup I'd then have to differentiate the services based on external port numbers since they'd all be served with the same hostname. That's not ideal either.

```shell
sudo tailscale serve --bg --https 8443 8180 # [tl! .cmd]
@@ -12,7 +12,7 @@ title: vRA8 Automatic Deployment Naming - Another Take
toc: false
---

A [few days ago](/vra8-custom-provisioning-part-four/#automatic-deployment-naming), I shared how I combined a Service Broker Custom Form with a vRO action to automatically generate a unique and descriptive deployment name based on user inputs. That approach works *fine* but while testing some other components I realized that calling that action each time a user makes a selection isn't necessarily ideal. After a bit of experimentation, I settled on what I believe to be a better solution.
A [few days ago](/vra8-custom-provisioning-part-four#automatic-deployment-naming), I shared how I combined a Service Broker Custom Form with a vRO action to automatically generate a unique and descriptive deployment name based on user inputs. That approach works *fine* but while testing some other components I realized that calling that action each time a user makes a selection isn't necessarily ideal. After a bit of experimentation, I settled on what I believe to be a better solution.

Instead of setting the "Deployment Name" field to use an External Source (vRO), I'm going to configure it to use a Computed Value. This is a bit less flexible, but all the magic happens right there in the form without having to make an expensive vRO call.
![Computed Value option](Ivv0ia8oX.png)
@@ -85,7 +85,7 @@ The last step before testing is to click that *Enable* button to activate the cu
Cool! So it's dynamically generating the deployment name based on selections made on the form. Now that it works, I can go back to the custom form and set the "Deployment Name" field to be invisible just like the "Project" one.

### Per-site network selection
So far, vRA has been automatically placing VMs on networks based solely on [which networks are tagged as available](/vra8-custom-provisioning-part-one/#using-tags-for-resource-placement) for the selected site. I'd like to give my users a bit more control over which network their VMs get attached to, particularly as some networks may be set aside for different functions or have different firewall rules applied.
So far, vRA has been automatically placing VMs on networks based solely on [which networks are tagged as available](/vra8-custom-provisioning-part-one#using-tags-for-resource-placement) for the selected site. I'd like to give my users a bit more control over which network their VMs get attached to, particularly as some networks may be set aside for different functions or have different firewall rules applied.

As a quick recap, I've got five networks available for vRA, split across my two sites using tags:
@@ -28,7 +28,7 @@ Looking back, that's kind of a lot. I can see why I've been working on this for
In production, I'll want to be able to deploy to different compute clusters spanning multiple vCenters. That's a bit difficult to do on a single physical server, but I still wanted to be able to simulate that sort of dynamic resource selection. So for development and testing in my lab, I'll be using two sites - `BOW` and `DRE`. I ditched the complicated "just because I can" vSAN I'd built previously and instead spun up two single-host nested clusters, one for each of my sites:
![vCenter showing the BOW and DRE clusters](KUCwEgEhN.png)

Those hosts have one virtual NIC each on a standard switch connected to my home network, and a second NIC each connected to the ["isolated" internal lab network](/vmware-home-lab-on-intel-nuc-9/#networking) with all the VLANs for the guests to run on:
Those hosts have one virtual NIC each on a standard switch connected to my home network, and a second NIC each connected to the ["isolated" internal lab network](/vmware-home-lab-on-intel-nuc-9#networking) with all the VLANs for the guests to run on:
![dvSwitch showing attached hosts and dvPortGroups](y8vZEnWqR.png)

### vRA setup
@@ -17,7 +17,7 @@ Picking up after [Part Two](/vra8-custom-provisioning-part-two), I now have a pr

### Active Directory
#### Adding an AD endpoint
Remember how I [used the built-in vSphere plugin](/vra8-custom-provisioning-part-two/#interlude-connecting-vro-to-vcenter) to let vRO query my vCenter(s) for VMs with a specific name? And how that required first configuring the vCenter endpoint(s) in vRO? I'm going to take a very similar approach here.
Remember how I [used the built-in vSphere plugin](/vra8-custom-provisioning-part-two#interlude-connecting-vro-to-vcenter) to let vRO query my vCenter(s) for VMs with a specific name? And how that required first configuring the vCenter endpoint(s) in vRO? I'm going to take a very similar approach here.

So as before, I'll first need to run the preinstalled "Add an Active Directory server" workflow:
![Add an Active Directory server workflow](uUDJXtWKz.png)