diff --git a/content/posts/cloud-based-wireguard-vpn-remote-homelab-access/index.md b/content/posts/cloud-based-wireguard-vpn-remote-homelab-access/index.md index 794d8ef..17659b4 100644 --- a/content/posts/cloud-based-wireguard-vpn-remote-homelab-access/index.md +++ b/content/posts/cloud-based-wireguard-vpn-remote-homelab-access/index.md @@ -41,12 +41,12 @@ Okay, enough background; let's get this thing going. #### Instance Deployment I started by logging into my Google Cloud account at https://console.cloud.google.com, and proceeded to create a new project (named `wireguard`) to keep my WireGuard-related resources together. I then navigated to **Compute Engine** and [created a new instance](https://console.cloud.google.com/compute/instancesAdd) inside that project. The basic setup is: -| Attribute | Value | -| --- | --- | -| Name | `wireguard` | -| Region | `us-east1` (or whichever [free-tier-eligible region](https://cloud.google.com/free/docs/gcp-free-tier/#compute) is closest) | -| Machine Type | `e2-micro` | -| Boot Disk Size | 10 GB | +| Attribute | Value | +|-----------------|------------------| +| Name | `wireguard` | +| Region | `us-east1` | +| Machine Type | `e2-micro` | +| Boot Disk Size | 10 GB | | Boot Disk Image | Ubuntu 20.04 LTS | ![Instance creation](20211027_instance_creation.png) @@ -325,25 +325,25 @@ _Note: the version of the WireGuard app currently available on the Play Store (v Once it's installed, I open the app and click the "Plus" button to create a new tunnel, and select the _Create from scratch_ option. I click the circle-arrows icon at the right edge of the _Private key_ field, and that automatically generates this peer's private and public key pair. Simply clicking on the _Public key_ field will automatically copy the generated key to my clipboard, which will be useful for sharing it with the server. 
Otherwise I fill out the **Interface** section similarly to what I've done already: -| Parameter | Value | -| --- | --- | -| Name | `wireguard-gcp` | +| Parameter | Value | +|-------------|--------------------| +| Name | `wireguard-gcp` | | Private key | `{CB_PRIVATE_KEY}` | -| Public key | `{CB_PUBLIC_KEY}` | -| Addresses | `10.200.200.3/24` | -| Listen port | | -| DNS servers | `10.200.200.2` | -| MTU | | +| Public key | `{CB_PUBLIC_KEY}` | +| Addresses | `10.200.200.3/24` | +| Listen port | | +| DNS servers | `10.200.200.2` | +| MTU | | I then click the **Add Peer** button to tell this client about the peer it will be connecting to - the GCP-hosted instance: -| Parameter | Value | -| --- | --- | -| Public key | `{GCP_PUBLIC_KEY}` | -| Pre-shared key | | -| Persistent keepalive | | -| Endpoint | `{GCP_PUBLIC_IP}:51820` | -| Allowed IPs | `0.0.0.0/0` | +| Parameter | Value | +|----------------------|-------------------------| +| Public key | `{GCP_PUBLIC_KEY}` | +| Pre-shared key | | +| Persistent keepalive | | +| Endpoint | `{GCP_PUBLIC_IP}:51820` | +| Allowed IPs | `0.0.0.0/0` | I _shouldn't_ need the keepalive for the "Road Warrior" peers connecting to the GCP peer, but I can always set that later if I run into stability issues. diff --git a/content/posts/creating-static-records-in-microsoft-dns-from-vrealize-automation/index.md b/content/posts/creating-static-records-in-microsoft-dns-from-vrealize-automation/index.md index 3ff90b0..9674d19 100644 --- a/content/posts/creating-static-records-in-microsoft-dns-from-vrealize-automation/index.md +++ b/content/posts/creating-static-records-in-microsoft-dns-from-vrealize-automation/index.md @@ -259,13 +259,13 @@ I'll call it `dnsConfig` and put it in my `CustomProvisioning` folder. 
And then I create the following variables: -| Variable | Value | Type | -| --- | --- | --- | -| `sshHost` | `win02.lab.bowdre.net` | string | -| `sshUser` | `vra` | string | -| `sshPass` | `*****` | secureString | -| `dnsServer` | `[win01.lab.bowdre.net]` | Array/string | -| `supportedDomains` | `[lab.bowdre.net]` | Array/string | +| Variable | Value | Type | +|--------------------|--------------------------|--------------| +| `sshHost` | `win02.lab.bowdre.net` | string | +| `sshUser` | `vra` | string | +| `sshPass` | `*****` | secureString | +| `dnsServer` | `[win01.lab.bowdre.net]` | Array/string | +| `supportedDomains` | `[lab.bowdre.net]` | Array/string | `sshHost` is my new `win02` server that I'm going to connect to via SSH, and `sshUser` and `sshPass` should explain themselves. The `dnsServer` array will tell the script which DNS servers to try to create the record on; this will just be a single server in my lab, but I'm going to construct the script to support multiple servers in case one isn't reachable. And `supportedDomains` will be used to restrict where I'll be creating records; again, that's just a single domain in my lab, but I'm building this solution to account for the possibility where a VM might need to be deployed on a domain where I can't create a static record in this way, so I want it to fail elegantly.
diff --git a/content/posts/esxi-arm-on-quartz64/index.md b/content/posts/esxi-arm-on-quartz64/index.md index b65b640..86689f8 100644 --- a/content/posts/esxi-arm-on-quartz64/index.md +++ b/content/posts/esxi-arm-on-quartz64/index.md @@ -61,24 +61,23 @@ All that is to say that (as usual) I'll be embarking upon this project in Hard M ### Bill of Materials Let's start with the gear (hardware and software) I needed to make this work: -| Hardware | Purpose | -| --- | --- | -| [PINE64 Quartz64 Model-A 8GB Single Board Computer](https://pine64.com/product/quartz64-model-a-8gb-single-board-computer/) | kind of the whole point | -| [ROCKPro64 12V 5A US Power Supply](https://pine64.com/product/rockpro64-12v-5a-us-power-supply/) | provies power for the the SBC | -| [Serial Console “Woodpecker” Edition](https://pine64.com/product/serial-console-woodpecker-edition/) | allows for serial console access | -| [Google USB-C Adapter](https://www.amazon.com/dp/B071G6NLHJ/) | connects the console adapter to my Chromebook | -| [Sandisk 64GB Micro SD Memory Card](https://www.amazon.com/dp/B00M55C1I2) | only holds the firmware; a much smaller size would be fine | -| [Monoprice USB-C MicroSD Reader](https://www.amazon.com/dp/B00YQM8352/) | to write firmware to the SD card from my Chromebook | -| [Samsung MUF-256AB/AM FIT Plus 256GB USB 3.1 Drive](https://www.amazon.com/dp/B07D7Q41PM) | ESXi boot device and local VMFS datastore | -| ~~[Cable Matters 3 Port USB 3.0 Hub with Ethernet](https://www.amazon.com/gp/product/B01J6583NK)~~ | ~~for network connectivity and to host the above USB drive~~[^v1.10] | -| [3D-printed open enclosure for QUARTZ64](https://www.thingiverse.com/thing:5308499) | protect the board a little bit while allowing for plenty of passive airflow | +| Hardware | Purpose | +|--------------------------------------------------------|-----------------------------------------------------------------------------| +| [PINE64 Quartz64 Model-A 8GB Single Board 
Computer](https://pine64.com/product/quartz64-model-a-8gb-single-board-computer/) | kind of the whole point | +| [ROCKPro64 12V 5A US Power Supply](https://pine64.com/product/rockpro64-12v-5a-us-power-supply/) | provides power for the SBC | +| [Serial Console “Woodpecker” Edition](https://pine64.com/product/serial-console-woodpecker-edition/) | allows for serial console access | +| [Google USB-C Adapter](https://www.amazon.com/dp/B071G6NLHJ/) | connects the console adapter to my Chromebook | +| [Sandisk 64GB Micro SD Memory Card](https://www.amazon.com/dp/B00M55C1I2) | only holds the firmware; a much smaller size would be fine | +| [Monoprice USB-C MicroSD Reader](https://www.amazon.com/dp/B00YQM8352/) | to write firmware to the SD card from my Chromebook | +| [Samsung MUF-256AB/AM FIT Plus 256GB USB 3.1 Drive](https://www.amazon.com/dp/B07D7Q41PM) | ESXi boot device and local VMFS datastore | +| [3D-printed open enclosure for QUARTZ64](https://www.thingiverse.com/thing:5308499) | protect the board a little bit while allowing for plenty of passive airflow | -| Downloads | Purpose | -| --- | --- | -| [ESXi ARM Edition](https://customerconnect.vmware.com/downloads/get-download?downloadGroup=ESXI-ARM) (v1.10) | hypervisor | -| [Tianocore EDK II firmware for Quartz64](https://github.com/jaredmcneill/quartz64_uefi/releases) (2022-07-20) | firmare image | -| [Chromebook Recovery Utility](https://chrome.google.com/webstore/detail/chromebook-recovery-utili/pocpnlppkickgojjlmhdmidojbmbodfm) | easy way to write filesystem images to external media | -| [Beagle Term](https://chrome.google.com/webstore/detail/beagle-term/gkdofhllgfohlddimiiildbgoggdpoea) | for accessing the Quartz64 serial console | +| Downloads | Purpose | +|----------------------------------------------------------|-------------------------------------------------------| +| [ESXi ARM Edition](https://customerconnect.vmware.com/downloads/get-download?downloadGroup=ESXI-ARM) (v1.10) | hypervisor | +|
[Tianocore EDK II firmware for Quartz64](https://github.com/jaredmcneill/quartz64_uefi/releases) (2022-07-20) | firmware image | +| [Chromebook Recovery Utility](https://chrome.google.com/webstore/detail/chromebook-recovery-utili/pocpnlppkickgojjlmhdmidojbmbodfm) | easy way to write filesystem images to external media | +| [Beagle Term](https://chrome.google.com/webstore/detail/beagle-term/gkdofhllgfohlddimiiildbgoggdpoea) | for accessing the Quartz64 serial console | ### Preparation #### Firmware media @@ -110,10 +109,10 @@ Then it's time to write the image onto the USB drive: I'll need to use the Quartz64 serial console interface and ["Woodpecker" edition console USB adapter](https://pine64.com/product/serial-console-woodpecker-edition/) to interact with the board until I get ESXi installed and can connect to it with the web interface or SSH. The adapter comes with a short breakout cable, and I connect it thusly: | Quartz64 GPIO pin | Console adapter pin | Wire color | -| --- | --- | --- | -| 6 | `GND` | Brown | -| 8 | `RXD` | Red | -| 10 | `TXD` | Orange | +|-------------------|---------------------|------------| +| 6 | `GND` | Brown | +| 8 | `RXD` | Red | +| 10 | `TXD` | Orange | I leave the yellow wire dangling free on both ends since I don't need a `+V` connection for the console to work.
![Console connection](console_connection.jpg) @@ -122,14 +121,14 @@ To verify that I've got things working, I go ahead and pop the micro SD card con I'll need to use these settings for the connection (which are the defaults selected by Beagle Term): -| Setting | Value | -| -- | --- | -| Port | `/dev/ttyUSB0` | -| Bitrate | `115200` | -| Data Bit | `8 bit` | -| Parity | `none` | -| Stop Bit | `1` | -| Flow Control | `none` | +| Setting | Value | +|--------------|----------------| +| Port | `/dev/ttyUSB0` | +| Bitrate | `115200` | +| Data Bit | `8 bit` | +| Parity | `none` | +| Stop Bit | `1` | +| Flow Control | `none` | ![Beagle Term settings](beagle_term_settings.png) diff --git a/content/posts/free-serverless-url-shortener-google-cloud-run/index.md b/content/posts/free-serverless-url-shortener-google-cloud-run/index.md index 7280b01..5f0c250 100644 --- a/content/posts/free-serverless-url-shortener-google-cloud-run/index.md +++ b/content/posts/free-serverless-url-shortener-google-cloud-run/index.md @@ -78,11 +78,11 @@ I'm very pleased with how this quick little project turned out. Managing my shor And now I can hand out handy-dandy short links! 
-| Link | Description| -| --- | --- | -| [go.bowdre.net/coso](https://l.runtimeterror.dev/coso) | Follow me on CounterSocial | -| [go.bowdre.net/conedoge](https://l.runtimeterror.dev/conedoge) | 2014 Subaru BRZ autocross videos | +| Link | Description | +|---------------------------------|-----------------------------------------------------------| +| [go.bowdre.net/coso](https://l.runtimeterror.dev/coso) | Follow me on CounterSocial | +| [go.bowdre.net/conedoge](https://l.runtimeterror.dev/conedoge) | 2014 Subaru BRZ autocross videos | | [go.bowdre.net/cooltechshit](https://l.runtimeterror.dev/cooltechshit) | A collection of cool tech shit (references and resources) | -| [go.bowdre.net/stuffiuse](https://l.runtimeterror.dev/stuffiuse) | Things that I use (and think you should use too) | -| [go.bowdre.net/shorterer](https://l.runtimeterror.dev/shorterer) | This post! | +| [go.bowdre.net/stuffiuse](https://l.runtimeterror.dev/stuffiuse) | Things that I use (and think you should use too) | +| [go.bowdre.net/shorterer](https://l.runtimeterror.dev/shorterer) | This post! 
| diff --git a/content/posts/gitea-self-hosted-git-server/index.md b/content/posts/gitea-self-hosted-git-server/index.md index 8f526cc..f26b897 100644 --- a/content/posts/gitea-self-hosted-git-server/index.md +++ b/content/posts/gitea-self-hosted-git-server/index.md @@ -36,13 +36,13 @@ In this post, I'll describe what I did to get Gitea up and running on a tiny ARM ### Create the server I'll be deploying this on a cloud server with these specs: -| | | -| --- | --- | -| Shape | `VM.Standard.A1.Flex` | -| Image | Ubuntu 22.04 | -| CPU Count | 1 | -| Memory (GB) | 6 | -| Boot Volume (GB) | 50 | +| | | +|------------------|-----------------------| +| Shape | `VM.Standard.A1.Flex` | +| Image | Ubuntu 22.04 | +| CPU Count | 1 | +| Memory (GB) | 6 | +| Boot Volume (GB) | 50 | I've described the [process of creating a new instance on OCI in a past post](/federated-matrix-server-synapse-on-oracle-clouds-free-tier/#instance-creation) so I won't reiterate that here. The only gotcha this time is switching the shape to `VM.Standard.A1.Flex`; the [OCI free tier](https://docs.oracle.com/en-us/iaas/Content/FreeTier/freetier_topic-Always_Free_Resources.htm) allows two AMD Compute VMs (which I've already used up) as well as *up to four* ARM Ampere A1 instances[^free_ampere]. @@ -259,23 +259,23 @@ The format of PostgreSQL data changes with new releases, and that means that the {{% /notice %}} Let's go through the extra configs in a bit more detail: -| Variable setting | Purpose | -|:--- |:--- | -|`USER_UID=1003` | User ID of the `git` user on the container host | -|`USER_GID=1003` | GroupID of the `git` user on the container host | -|`GITEA____APP_NAME=Gitea` | Sets the title of the site. I shortened it from `Gitea: Git with a cup of tea` because that seems unnecessarily long. 
| -|`GITEA__log__MODE=file` | Enable logging | -|`GITEA__openid__ENABLE_OPENID_SIGNIN=false` | Disable signin through OpenID | -|`GITEA__other__SHOW_FOOTER_VERSION=false` | Anyone who hits the web interface doesn't need to know the version | -|`GITEA__repository__DEFAULT_PRIVATE=private` | All repos will default to private unless I explicitly override that | -|`GITEA__repository__DISABLE_HTTP_GIT=true` | Require that all Git operations occur over SSH | -|`GITEA__server__DOMAIN=git.bowdre.net` | Domain name of the server | -|`GITEA__server__SSH_DOMAIN=git.tadpole-jazz.ts.net` | Leverage Tailscale's [MagicDNS](https://tailscale.com/kb/1081/magicdns/) to tell clients how to SSH to the Tailscale internal IP | -|`GITEA__server__ROOT_URL=https://git.bowdre.net/` | Public-facing URL | -|`GITEA__server__LANDING_PAGE=explore` | Defaults to showing the "Explore" page (listing any public repos) instead of the "Home" page (which just tells about the Gitea project) | -|`GITEA__service__DISABLE_REGISTRATION=true` | New users will not be able to self-register for access; they will have to be manually added by the Administrator account that will be created during the initial setup | -|`GITEA__service_0X2E_explore__DISABLE_USERS_PAGE=true` | Don't allow browsing of user accounts | -|`GITEA__ui__DEFAULT_THEME=arc-green` | Default to the darker theme | +| Variable setting | Purpose | +|:-------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `USER_UID=1003` | User ID of the `git` user on the container host | +| `USER_GID=1003` | GroupID of the `git` user on the container host | +| `GITEA____APP_NAME=Gitea` | Sets the title of the site. I shortened it from `Gitea: Git with a cup of tea` because that seems unnecessarily long. 
| +| `GITEA__log__MODE=file` | Enable logging | +| `GITEA__openid__ENABLE_OPENID_SIGNIN=false` | Disable signin through OpenID | +| `GITEA__other__SHOW_FOOTER_VERSION=false` | Anyone who hits the web interface doesn't need to know the version | +| `GITEA__repository__DEFAULT_PRIVATE=private` | All repos will default to private unless I explicitly override that | +| `GITEA__repository__DISABLE_HTTP_GIT=true` | Require that all Git operations occur over SSH | +| `GITEA__server__DOMAIN=git.bowdre.net` | Domain name of the server | +| `GITEA__server__SSH_DOMAIN=git.tadpole-jazz.ts.net` | Leverage Tailscale's [MagicDNS](https://tailscale.com/kb/1081/magicdns/) to tell clients how to SSH to the Tailscale internal IP | +| `GITEA__server__ROOT_URL=https://git.bowdre.net/` | Public-facing URL | +| `GITEA__server__LANDING_PAGE=explore` | Defaults to showing the "Explore" page (listing any public repos) instead of the "Home" page (which just tells about the Gitea project) | +| `GITEA__service__DISABLE_REGISTRATION=true` | New users will not be able to self-register for access; they will have to be manually added by the Administrator account that will be created during the initial setup | +| `GITEA__service_0X2E_explore__DISABLE_USERS_PAGE=true` | Don't allow browsing of user accounts | +| `GITEA__ui__DEFAULT_THEME=arc-green` | Default to the darker theme | Beyond the environment variables, I also defined a few additional options to allow the SSH passthrough to function. 
Mounting the `git` user's SSH config directory into the container will ensure that user keys defined in Gitea will also be reflected outside of the container, and setting the container to listen on local port `2222` will allow it to receive the forwarded SSH connections: diff --git a/content/posts/joining-vms-to-active-directory-in-site-specific-ous-with-vra8/index.md b/content/posts/joining-vms-to-active-directory-in-site-specific-ous-with-vra8/index.md index c3e59fa..2fdab38 100644 --- a/content/posts/joining-vms-to-active-directory-in-site-specific-ous-with-vra8/index.md +++ b/content/posts/joining-vms-to-active-directory-in-site-specific-ous-with-vra8/index.md @@ -16,10 +16,10 @@ Connecting a deployed Windows VM to an Active Directory domain is pretty easy; j Fortunately, vRA 8 supports adding an Active Directory integration to handle staging computer objects in a designated OU. And vRA 8.3 even [introduced the ability](https://blogs.vmware.com/management/2021/02/whats-new-with-vrealize-automation-8-3-technical-overview.html#:~:text=New%20Active%20Directory%20Cloud%20Template%20Properties) to let blueprints override the relative DN path. That will be helpful in my case since I'll want the servers to be placed in different OUs depending on which site they get deployed to: -| **Site** | **OU** | -| --- | --- | -| `BOW` | `lab.bowdre.net/LAB/BOW/Computers/Servers` | -| `DRE` | `lab.bowre.net/LAB/DRE/Computers/Servers` | +| **Site** | **OU** | +|----------|--------------------------------------------| +| `BOW` | `lab.bowdre.net/LAB/BOW/Computers/Servers` | +| `DRE` | `lab.bowdre.net/LAB/DRE/Computers/Servers` | I didn't find a lot of documentation on how to make this work, though, so here's how I've implemented it in my lab (now running vRA 8.4.2).
diff --git a/content/posts/ldaps-authentication-tanzu-community-edition/index.md b/content/posts/ldaps-authentication-tanzu-community-edition/index.md index a5a360c..85d50f5 100644 --- a/content/posts/ldaps-authentication-tanzu-community-edition/index.md +++ b/content/posts/ldaps-authentication-tanzu-community-edition/index.md @@ -42,26 +42,26 @@ The [cluster deployment steps](/tanzu-community-edition-k8s-homelab/#management- ![Identity Management section](identity_management_1.png) **LDAPS Identity Management Source** -| Field | Value | Notes | -| --- | --- | ---- | -| LDAPS Endpoint | `win01.lab.bowdre.net:636` | LDAPS interface of my AD DC | -| BIND DN | `CN=LDAP Bind,OU=Users,OU=BOW,OU=LAB,DC=lab,DC=bowdre,DC=net` | DN of an account with LDAP read permissions | -| BIND Password | `*******` | Password for that account | +| Field | Value | Notes | +|----------------|---------------------------------------------------------------|---------------------------------------------| +| LDAPS Endpoint | `win01.lab.bowdre.net:636` | LDAPS interface of my AD DC | +| BIND DN | `CN=LDAP Bind,OU=Users,OU=BOW,OU=LAB,DC=lab,DC=bowdre,DC=net` | DN of an account with LDAP read permissions | +| BIND Password | `*******` | Password for that account | **User Search Attributes** -| Field | Value | Notes | -| --- | --- | --- | -| Base DN | `OU=LAB,DC=lab,DC=bowdre,DC=net` | DN for the top-level OU containing my users | -| Filter | `objectClass=(person)` | | -| Username | `sAMAccountName` | I want to auth as `john` rather than `john@lab.bowdre.net` (`userPrincipalName`) | +| Field | Value | Notes | +|----------|----------------------------------|----------------------------------------------------------------------------------| +| Base DN | `OU=LAB,DC=lab,DC=bowdre,DC=net` | DN for the top-level OU containing my users | +| Filter | `(objectClass=person)` | | +| Username | `sAMAccountName` | I want to auth as `john` rather than `john@lab.bowdre.net` (`userPrincipalName`) | **Group Search
Attributes** -| Field | Value | Notes | -| --- | --- | --- | -| Base DN | `OU=LAB,DC=lab,DC=bowdre,DC=net` | DN for OU containing my users | -| Filter | `(objectClass=group)` | | -| Name Attribute | `cn` | Common Name | -| User Attribute | `DN` | Distinguished Name (capitalization matters!) | +| Field | Value | Notes | +|-----------------|-----------------------------------|---------------------------------------------------------------| +| Base DN | `OU=LAB,DC=lab,DC=bowdre,DC=net` | DN for OU containing my users | +| Filter | `(objectClass=group)` | | +| Name Attribute | `cn` | Common Name | +| User Attribute | `DN` | Distinguished Name (capitalization matters!) | | Group Attribute | `member:1.2.840.113556.1.4.1941:` | Used to enumerate which groups a user is a member of[^member] | And I'll copy the contents of the base64-encoded CA certificate I downloaded earlier and paste them into the Root CA Certificate field. diff --git a/content/posts/snikket-private-xmpp-chat-on-oracle-cloud-free-tier/index.md b/content/posts/snikket-private-xmpp-chat-on-oracle-cloud-free-tier/index.md index 7311d63..09e09b8 100644 --- a/content/posts/snikket-private-xmpp-chat-on-oracle-cloud-free-tier/index.md +++ b/content/posts/snikket-private-xmpp-chat-on-oracle-cloud-free-tier/index.md @@ -47,15 +47,15 @@ A few days ago I migrated my original Snikket instance from Google Cloud (GCP) t ### Infrastructure setup You can refer to my notes from last time for details on how I [created the Ubuntu 20.04 VM](/federated-matrix-server-synapse-on-oracle-clouds-free-tier/#instance-creation) and [configured the firewall rules](/federated-matrix-server-synapse-on-oracle-clouds-free-tier/#firewall-configuration) both at the cloud infrastructure level as well as within the host using `iptables`. 
Snikket does need a few additional [firewall ports](https://github.com/snikket-im/snikket-server/blob/master/docs/advanced/firewall.md) beyond what was needed for my Matrix setup: -| Port(s) | Transport | Purpose | -| --- | --- | --- | -| `80, 443` | TCP | Web interface and group file sharing | -| `3478-3479` | TCP/UDP | Audio/Video data proxy negotiation and discovery ([STUN/TURN](https://www.twilio.com/docs/stun-turn/faq)) | -| `5349-5350` | TCP/UDP | Audio/Video data proxy negotiation and discovery (STUN/TURN over TLS) | -| `5000` | TCP | File transfer proxy | -| `5222` | TCP | Connections from clients | -| `5269` | TCP | Connections from other servers | -| `60000-60100`[^4] | UDP | Audio/Video data proxy (TURN data) | +| Port(s) | Transport | Purpose | +|-------------------|-----------|-----------------------------------------------------------------------| +| `80, 443` | TCP | Web interface and group file sharing | +| `3478-3479` | TCP/UDP | Audio/Video data proxy negotiation and discovery ([STUN/TURN](https://www.twilio.com/docs/stun-turn/faq)) | +| `5349-5350` | TCP/UDP | Audio/Video data proxy negotiation and discovery (STUN/TURN over TLS) | +| `5000` | TCP | File transfer proxy | +| `5222` | TCP | Connections from clients | +| `5269` | TCP | Connections from other servers | +| `60000-60100`[^4] | UDP | Audio/Video data proxy (TURN data) | As a gentle reminder, Oracle's `iptables` configuration inserts a `REJECT all` rule at the bottom of each chain. I needed to make sure that each of my `ALLOW` rules get inserted above that point. So I used `iptables -L INPUT --line-numbers` to identify which line held the `REJECT` rule, and then used `iptables -I INPUT [LINE_NUMBER] -m state --state NEW -p [PROTOCOL] --dport [PORT] -j ACCEPT` to insert the new rules above that point. ```shell @@ -165,10 +165,10 @@ sudo vi snikket.conf # [tl! 
.cmd] A basic config only needs two parameters: -| Parameter | Description | -| --- | --- | -| `SNIKKET_DOMAIN` | The fully-qualified domain name that clients will connect to | -| `SNIKKET_ADMIN_EMAIL` | An admin contact for the server | +| Parameter | Description | +|-----------------------|--------------------------------------------------------------| +| `SNIKKET_DOMAIN` | The fully-qualified domain name that clients will connect to | +| `SNIKKET_ADMIN_EMAIL` | An admin contact for the server | That's it. diff --git a/content/posts/tailscale-golink-private-shortlinks-tailnet/index.md b/content/posts/tailscale-golink-private-shortlinks-tailnet/index.md index 5c2fba1..39f7a61 100644 --- a/content/posts/tailscale-golink-private-shortlinks-tailnet/index.md +++ b/content/posts/tailscale-golink-private-shortlinks-tailnet/index.md @@ -133,15 +133,15 @@ Now if I just enter `go/vcenter` I will go to the vSphere UI, while if I enter s Some of my other golinks: -| Shortlink | Destination URL | Description | -| --- | --- | --- | -| `code` | `https://github.com/search?type=code&q=user:jbowdre{{with .Path}}+{{.}}{{end}}` | searches my code on Github | -| `ipam` | `https://ipam.lab.bowdre.net/{{with .Path}}tools/search/{{.}}{{end}}` | searches my lab phpIPAM instance | -| `pdb` | `https://www.protondb.com/{{with .Path}}search?q={{.}}{{end}}` | searches [protondb](https://www.protondb.com/), super-handy for checking game compatibility when [Tailscale is installed on a Steam Deck](https://tailscale.com/blog/steam-deck/) | -| `tailnet` | `https://login.tailscale.com/admin/machines?q={{.Path}}` | searches my Tailscale admin panel for a machine name | -| `sho` | `https://www.shodan.io/{{with .Path}}search?query={{.}}{{end}}` | searches Shodan for interesting internet-connected systems | -| `randpass` | `https://www.random.org/passwords/?num=1\u0026len=24\u0026format=plain\u0026rnd=new` | generates a random 24-character string suitable for use as a password (`curl`-friendly) | -| 
`wx` | `https://wttr.in/{{ .Path }}` | local weather report based on geolocation or weather for a designated city (`curl`-friendly) | +| Shortlink | Destination URL | Description | +|------------|--------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------| +| `code` | `https://github.com/search?type=code&q=user:jbowdre{{with .Path}}+{{.}}{{end}}` | searches my code on Github | +| `ipam` | `https://ipam.lab.bowdre.net/{{with .Path}}tools/search/{{.}}{{end}}` | searches my lab phpIPAM instance | +| `pdb` | `https://www.protondb.com/{{with .Path}}search?q={{.}}{{end}}` | searches protondb | +| `tailnet` | `https://login.tailscale.com/admin/machines?q={{.Path}}` | searches my Tailscale admin panel for a machine name | +| `sho` | `https://www.shodan.io/{{with .Path}}search?query={{.}}{{end}}` | searches Shodan for interesting internet-connected systems | +| `randpass` | `https://www.random.org/passwords/?num=1\u0026len=24\u0026format=plain\u0026rnd=new` | generates a random 24-character string suitable for use as a password (`curl`-friendly) | +| `wx` | `https://wttr.in/{{ .Path }}` | local weather report based on geolocation or weather for a designated city (`curl`-friendly) | #### Back up and restore You can browse to `go/.export` to see a JSON-formatted listing of all configured shortcuts - or, if you're clever, you could do something like `curl http://go/.export -o links.json` to download a copy. 
diff --git a/content/posts/tailscale-serve-docker-compose-sidecar/index.md b/content/posts/tailscale-serve-docker-compose-sidecar/index.md index 8c9abe1..4397cc5 100644 --- a/content/posts/tailscale-serve-docker-compose-sidecar/index.md +++ b/content/posts/tailscale-serve-docker-compose-sidecar/index.md @@ -213,15 +213,15 @@ TS_SERVE_PORT=8080 TS_FUNNEL=1 ``` -| Variable Name | Example | Description | -| --- | --- | --- | -| `TS_AUTHKEY` | `tskey-auth-somestring-somelongerstring` | used for unattended auth of the new node, get one [here](https://login.tailscale.com/admin/settings/keys) | -| `TS_HOSTNAME` | `tsdemo` | optional Tailscale hostname for the new node[^hostname] | -| `TS_STATE_DIR` | `/var/lib/tailscale/` | required directory for storing Tailscale state, this should be mounted to the container for persistence | -| `TS_TAILSCALED_EXTRA_ARGS` | `--verbose=1`[^verbose] | optional additional [flags](https://tailscale.com/kb/1278/tailscaled#flags-to-tailscaled) for `tailscaled` | -| `TS_EXTRA_ARGS` | `--ssh`[^ssh] | optional additional [flags](https://tailscale.com/kb/1241/tailscale-up) for `tailscale up` | -| `TS_SERVE_PORT` | `8080` | optional application port to expose with [Tailscale Serve](https://tailscale.com/kb/1312/serve) | -| `TS_FUNNEL` | `1` | if set (to anything), will proxy `TS_SERVE_PORT` **publicly** with [Tailscale Funnel](https://tailscale.com/kb/1223/funnel) | +| Variable Name | Example | Description | +|----------------------------|------------------------------------------|---------------------------------------------------------------------------------------------------------| +| `TS_AUTHKEY` | `tskey-auth-somestring-somelongerstring` | used for unattended auth of the new node, get one [here](https://login.tailscale.com/admin/settings/keys) | +| `TS_HOSTNAME` | `tsdemo` | optional Tailscale hostname for the new node[^hostname] | +| `TS_STATE_DIR` | `/var/lib/tailscale/` | required directory for storing Tailscale state, this should be 
mounted to the container for persistence | +| `TS_TAILSCALED_EXTRA_ARGS` | `--verbose=1`[^verbose] | optional additional [flags](https://tailscale.com/kb/1278/tailscaled#flags-to-tailscaled) for `tailscaled` | +| `TS_EXTRA_ARGS` | `--ssh`[^ssh] | optional additional [flags](https://tailscale.com/kb/1241/tailscale-up) for `tailscale up` | +| `TS_SERVE_PORT` | `8080` | optional application port to expose with [Tailscale Serve](https://tailscale.com/kb/1312/serve) | +| `TS_FUNNEL` | `1` | if set (to anything), will proxy `TS_SERVE_PORT` **publicly** with [Tailscale Funnel](https://tailscale.com/kb/1223/funnel) | [^hostname]: This hostname will determine the fully-qualified domain name where the resource will be served: `https://[hostname].[tailnet-name].ts.net`. So you'll want to make sure it's a good one for what you're trying to do. [^verbose]: Passing the `--verbose` flag to `tailscaled` increases the logging verbosity, which can be helpful if you need to troubleshoot. diff --git a/content/posts/tanzu-community-edition-k8s-homelab/index.md b/content/posts/tanzu-community-edition-k8s-homelab/index.md index ab740d3..c1c70dc 100644 --- a/content/posts/tanzu-community-edition-k8s-homelab/index.md +++ b/content/posts/tanzu-community-edition-k8s-homelab/index.md @@ -45,11 +45,11 @@ The Kubernetes node VMs will need to be attached to a network with a DHCP server I'll also need to set aside a few static IPs for this project. These will need to be routable and within the same subnet as the DHCP range, but excluded from that DHCP range. 
-| IP Address | Purpose | -| --- | --- | -| `192.168.1.60` | Control plane for Management cluster | -| `192.168.1.61` | Control plane for Workload cluster | -| `192.168.1.64 - 192.168.1.80` | IP range for Workload load balancer | +| IP Address | Purpose | +|-------------------------------|--------------------------------------| +| `192.168.1.60` | Control plane for Management cluster | +| `192.168.1.61` | Control plane for Workload cluster | +| `192.168.1.64 - 192.168.1.80` | IP range for Workload load balancer | ### Prerequisites diff --git a/content/posts/vmware-home-lab-on-intel-nuc-9/index.md b/content/posts/vmware-home-lab-on-intel-nuc-9/index.md index bc6f20a..fb19040 100644 --- a/content/posts/vmware-home-lab-on-intel-nuc-9/index.md +++ b/content/posts/vmware-home-lab-on-intel-nuc-9/index.md @@ -67,22 +67,22 @@ I've now got a fully-functioning VMware lab, complete with a physical hypervisor #### Overview My home network uses the generic `192.168.1.0/24` address space, with the internet router providing DHCP addresses in the range `.100-.250`. I'm using the range `192.168.1.2-.99` for statically-configured IPs, particularly those within my lab environment. Here are the addresses being used by the lab so far: -| IP Address | Hostname | Purpose | -| ---- | ---- | ---- | -| `192.168.1.1` | | Gateway | -| `192.168.1.5` | `win01` | AD DC, DNS | +| IP Address | Hostname | Purpose | +|----------------|-----------|--------------------| +| `192.168.1.1` | | Gateway | +| `192.168.1.5` | `win01` | AD DC, DNS | | `192.168.1.11` | `nuchost` | Physical ESXi host | -| `192.168.1.12` | `vcsa` | vCenter Server | +| `192.168.1.12` | `vcsa` | vCenter Server | Of course, not everything that I'm going to deploy in the lab will need to be accessible from outside the lab environment. This goes for obvious things like the vMotion and vSAN networks of the nested ESXi hosts, but it will also be useful to have internal networks that can be used by VMs provisioned by vRA.
So I'll be creating these networks: -| VLAN ID | Network | Purpose | -| ---- | ---- | ---- | -| 1610 | `172.16.10.0/24` | Management | -| 1620 | `172.16.20.0/24` | Servers-1 | -| 1630 | `172.16.30.0/24` | Servers-2 | -| 1698 | `172.16.98.0/24` | vSAN | -| 1699 | `172.16.99.0/24` | vMotion | +| VLAN ID | Network | Purpose | +|---------|------------------|------------| +| 1610 | `172.16.10.0/24` | Management | +| 1620 | `172.16.20.0/24` | Servers-1 | +| 1630 | `172.16.30.0/24` | Servers-2 | +| 1698 | `172.16.98.0/24` | vSAN | +| 1699 | `172.16.99.0/24` | vMotion | #### vSwitch1 I'll start by adding a second vSwitch to the physical host. It doesn't need a physical adapter assigned since this switch will be for internal traffic. I create two port groups: one tagged for the VLAN 1610 Management traffic, which will be useful for attaching VMs on the physical host to the internal network; and the second will use VLAN 4095 to pass all VLAN traffic to the nested ESXi hosts. And again, this vSwitch needs to have its security policy set to allow Promiscuous Mode and Forged Transmits. I also set the vSwitch to support an MTU of 9000 so I can use Jumbo Frames on the vMotion and vSAN networks. @@ -182,11 +182,11 @@ Satisfied with my work, I ran the `commit` and `save` commands. BOOM, this serve ### Nested vSAN Cluster Alright, it's time to start building up the nested environment. To start, I grabbed the latest [Nested ESXi Virtual Appliance .ova](https://williamlam.com/nested-virtualization/nested-esxi-virtual-appliance), courtesy of William Lam. 
I went ahead and created DNS records for the hosts I'd be deploying, and I mapped out what IPs would be used on each VLAN: -|Hostname|1610-Management|1698-vSAN|1699-vMotion| -|----|----|----|----| -|`esxi01.lab.bowdre.net`|`172.16.10.21`|`172.16.98.21`|`172.16.99.21`| -|`esxi02.lab.bowdre.net`|`172.16.10.22`|`172.16.98.22`|`172.16.99.22`| -|`esxi03.lab.bowdre.net`|`172.16.10.23`|`172.16.98.23`|`172.16.99.23`| +| Hostname | 1610-Management | 1698-vSAN | 1699-vMotion | +|-------------------------|-----------------|----------------|----------------| +| `esxi01.lab.bowdre.net` | `172.16.10.21` | `172.16.98.21` | `172.16.99.21` | +| `esxi02.lab.bowdre.net` | `172.16.10.22` | `172.16.98.22` | `172.16.99.22` | +| `esxi03.lab.bowdre.net` | `172.16.10.23` | `172.16.98.23` | `172.16.99.23` | Deploying the virtual appliances is just like any other "Deploy OVF Template" action. I placed the VMs on the `physical-cluster` compute resource, and selected to thin provision the VMDKs on the local datastore. I chose the "Isolated" VM network which uses VLAN 4095 to make all the internal VLANs available on a single portgroup. @@ -246,11 +246,11 @@ The [vRealize Easy Installer](https://docs.vmware.com/en/vRealize-Automation/8.2 Anyhoo, each of these VMs will need to be resolvable in DNS so I started by creating some A records: -|FQDN|IP| -|----|----| -|`lcm.lab.bowdre.net`|`192.168.1.40`| -|`idm.lab.bowdre.net`|`192.168.1.41`| -|`vra.lab.bowdre.net`|`192.168.1.42`| +| FQDN | IP | +|----------------------|----------------| +| `lcm.lab.bowdre.net` | `192.168.1.40` | +| `idm.lab.bowdre.net` | `192.168.1.41` | +| `vra.lab.bowdre.net` | `192.168.1.42` | I then attached the installer ISO to my Windows VM and ran through the installation from there. 
![vRealize Easy Installer](42n3aMim5.png) diff --git a/content/posts/vra8-custom-provisioning-part-four/index.md b/content/posts/vra8-custom-provisioning-part-four/index.md index 01711e7..3785103 100644 --- a/content/posts/vra8-custom-provisioning-part-four/index.md +++ b/content/posts/vra8-custom-provisioning-part-four/index.md @@ -89,33 +89,33 @@ So far, vRA has been automatically placing VMs on networks based solely on [whi As a quick recap, I've got five networks available for vRA, split across my two sites using tags: -|Name |Subnet |Site |Tags | -| --- | --- | --- | --- | -| d1620-Servers-1 | 172.16.20.0/24 | BOW | `net:bow` | -| d1630-Servers-2 | 172.16.30.0/24 | BOW | `net:bow` | -| d1640-Servers-3 | 172.16.40.0/24 | BOW | `net:bow` | -| d1650-Servers-4 | 172.16.50.0/24 | DRE | `net:dre` | -| d1660-Servers-5 | 172.16.60.0/24 | DRE | `net:dre` | +| Name | Subnet | Site | Tags | +|-----------------|----------------|------|-----------| +| d1620-Servers-1 | 172.16.20.0/24 | BOW | `net:bow` | +| d1630-Servers-2 | 172.16.30.0/24 | BOW | `net:bow` | +| d1640-Servers-3 | 172.16.40.0/24 | BOW | `net:bow` | +| d1650-Servers-4 | 172.16.50.0/24 | DRE | `net:dre` | +| d1660-Servers-5 | 172.16.60.0/24 | DRE | `net:dre` | I'm going to add additional tags to these networks to further define their purpose. 
-|Name |Purpose |Tags | -| --- | --- | --- | -| d1620-Servers-1 |Management | `net:bow`, `net:mgmt` | -| d1630-Servers-2 | Front-end | `net:bow`, `net:front` | -| d1640-Servers-3 | Back-end | `net:bow`, `net:back` | -| d1650-Servers-4 | Front-end | `net:dre`, `net:front` | -| d1660-Servers-5 | Back-end | `net:dre`, `net:back` | +| Name | Purpose | Tags | +|-----------------|------------|------------------------| +| d1620-Servers-1 | Management | `net:bow`, `net:mgmt` | +| d1630-Servers-2 | Front-end | `net:bow`, `net:front` | +| d1640-Servers-3 | Back-end | `net:bow`, `net:back` | +| d1650-Servers-4 | Front-end | `net:dre`, `net:front` | +| d1660-Servers-5 | Back-end | `net:dre`, `net:back` | I *could* just use those tags to let users pick the appropriate network, but I've found that a lot of times users don't know why they're picking a certain network, they just know the IP range they need to use. So I'll take it a step further and add a giant tag to include the Site, Purpose, and Subnet, and this is what will ultimately be presented to the users: -|Name |Tags | -| --- | --- | -| d1620-Servers-1 | `net:bow`, `net:mgmt`, `net:bow-mgmt-172.16.20.0` | +| Name | Tags | +|-----------------|-----------------------------------------------------| +| d1620-Servers-1 | `net:bow`, `net:mgmt`, `net:bow-mgmt-172.16.20.0` | | d1630-Servers-2 | `net:bow`, `net:front`, `net:bow-front-172.16.30.0` | -| d1640-Servers-3 | `net:bow`, `net:back`, `net:bow-back-172.16.40.0` | +| d1640-Servers-3 | `net:bow`, `net:back`, `net:bow-back-172.16.40.0` | | d1650-Servers-4 | `net:dre`, `net:front`, `net:dre-front-172.16.50.0` | -| d1660-Servers-5 | `net:dre`, `net:back`, `net:dre-back-172.16.60.0` | +| d1660-Servers-5 | `net:dre`, `net:back`, `net:dre-back-172.16.60.0` | ![Tagged networks](J_RG9JNPz.png) diff --git a/layouts/_default/single.gmi b/layouts/_default/single.gmi index 728e99d..1944a50 100644 --- a/layouts/_default/single.gmi +++ b/layouts/_default/single.gmi @@ -12,6 +12,7 @@ {{- 
$content := $content | replaceRE `(?m:^(?:\d+). (.+?)$)` "* $1" -}}{{/* convert ordered lists */}} {{- $content := $content | replaceRE `\n\[\^(.+?)\]:\s.*` "" -}}{{/* remove footnote definitions */}} {{- $content := $content | replaceRE `\[\^(.+?)\]` "" -}}{{/* remove footnote anchors */}} +{{- $content := $content | replaceRE `((?m:^(?:\|.*\|)+\n)+)` "```\n$1\n```\n" -}}{{/* convert markdown tables to plaintext ascii */}} {{- $content := $content | replaceRE "(?m:^`([^`]*)`$)" "```\n$1\n```\n" -}}{{/* convert single-line inline code to blocks */}} {{- $content := $content | replaceRE `\{\{%\snotice.*%\}\}` "<-- note -->" -}}{{/* convert hugo notices */}} {{- $content := $content | replaceRE `\{\{%\s/notice.*%\}\}` "<-- /note -->" -}}
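The new table-handling `replaceRE` rule is doing the interesting work here, so it's worth sanity-checking outside of Hugo. Below is a rough Python model of the same substitution (the function and variable names are my own, and Hugo's `replaceRE` uses Go's RE2 engine rather than Python's `re`, though this particular pattern behaves the same in both): it captures each run of consecutive `|...|` lines and wraps the whole run in a preformatted block, which is how the Gemini template ends up presenting markdown tables as plain text.

```python
import re

# Build the three-backtick fence marker without embedding a literal fence
# in this example.
FENCE = "`" * 3

# Equivalent of Hugo's `((?m:^(?:\|.*\|)+\n)+)`: one capture group spanning
# every consecutive line that starts and ends with a pipe.
TABLE_RUN = re.compile(r"((?:^\|.*\|\n)+)", re.MULTILINE)

def convert_tables(content: str) -> str:
    # \1 is the captured run of table lines; it already ends with a newline,
    # which is why a blank line appears before the closing fence, just as in
    # the Hugo replacement string.
    return TABLE_RUN.sub(FENCE + "\n\\1\n" + FENCE + "\n", content)

sample = (
    "Some intro text.\n"
    "| Name | Value |\n"
    "|------|-------|\n"
    "| foo  | bar   |\n"
    "More text.\n"
)

print(convert_tables(sample))
```

Running it against the sample shows the three table rows landing between an opening and closing fence while the surrounding prose lines are left untouched, which matches the ordering concern in the template: this rule has to run before the single-line-inline-code conversion so the fences it emits aren't mangled by later passes.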