commit 9219c6db30 (parent fb0e44ded3)
gemini: render markdown tables as ascii blocks

14 changed files with 184 additions and 184 deletions
@@ -42,9 +42,9 @@ Okay, enough background; let's get this thing going.
 I started by logging into my Google Cloud account at https://console.cloud.google.com, and proceeded to create a new project (named `wireguard`) to keep my WireGuard-related resources together. I then navigated to **Compute Engine** and [created a new instance](https://console.cloud.google.com/compute/instancesAdd) inside that project. The basic setup is:

 | Attribute | Value |
-| --- | --- |
+|-----------------|------------------|
 | Name | `wireguard` |
-| Region | `us-east1` (or whichever [free-tier-eligible region](https://cloud.google.com/free/docs/gcp-free-tier/#compute) is closest) |
+| Region | `us-east1` |
 | Machine Type | `e2-micro` |
 | Boot Disk Size | 10 GB |
 | Boot Disk Image | Ubuntu 20.04 LTS |
@@ -326,7 +326,7 @@ _Note: the version of the WireGuard app currently available on the Play Store (v
 Once it's installed, I open the app and click the "Plus" button to create a new tunnel, and select the _Create from scratch_ option. I click the circle-arrows icon at the right edge of the _Private key_ field, and that automatically generates this peer's private and public key pair. Simply clicking on the _Public key_ field will automatically copy the generated key to my clipboard, which will be useful for sharing it with the server. Otherwise I fill out the **Interface** section similarly to what I've done already:

 | Parameter | Value |
-| --- | --- |
+|-------------|--------------------|
 | Name | `wireguard-gcp` |
 | Private key | `{CB_PRIVATE_KEY}` |
 | Public key | `{CB_PUBLIC_KEY}` |
@@ -338,7 +338,7 @@ Once it's installed, I open the app and click the "Plus" button to create a new
 I then click the **Add Peer** button to tell this client about the peer it will be connecting to - the GCP-hosted instance:

 | Parameter | Value |
-| --- | --- |
+|----------------------|-------------------------|
 | Public key | `{GCP_PUBLIC_KEY}` |
 | Pre-shared key | |
 | Persistent keepalive | |
@@ -260,7 +260,7 @@ I'll call it `dnsConfig` and put it in my `CustomProvisioning` folder.
 And then I create the following variables:

 | Variable | Value | Type |
-| --- | --- | --- |
+|--------------------|--------------------------|--------------|
 | `sshHost` | `win02.lab.bowdre.net` | string |
 | `sshUser` | `vra` | string |
 | `sshPass` | `*****` | secureString |
@@ -62,19 +62,18 @@ All that is to say that (as usual) I'll be embarking upon this project in Hard M
 Let's start with the gear (hardware and software) I needed to make this work:

 | Hardware | Purpose |
-| --- | --- |
+|--------------------------------------------------------|-----------------------------------------------------------------------------|
 | [PINE64 Quartz64 Model-A 8GB Single Board Computer](https://pine64.com/product/quartz64-model-a-8gb-single-board-computer/) | kind of the whole point |
-| [ROCKPro64 12V 5A US Power Supply](https://pine64.com/product/rockpro64-12v-5a-us-power-supply/) | provies power for the the SBC |
+| [ROCKPro64 12V 5A US Power Supply](https://pine64.com/product/rockpro64-12v-5a-us-power-supply/) | provides power for the the SBC |
 | [Serial Console “Woodpecker” Edition](https://pine64.com/product/serial-console-woodpecker-edition/) | allows for serial console access |
 | [Google USB-C Adapter](https://www.amazon.com/dp/B071G6NLHJ/) | connects the console adapter to my Chromebook |
 | [Sandisk 64GB Micro SD Memory Card](https://www.amazon.com/dp/B00M55C1I2) | only holds the firmware; a much smaller size would be fine |
 | [Monoprice USB-C MicroSD Reader](https://www.amazon.com/dp/B00YQM8352/) | to write firmware to the SD card from my Chromebook |
 | [Samsung MUF-256AB/AM FIT Plus 256GB USB 3.1 Drive](https://www.amazon.com/dp/B07D7Q41PM) | ESXi boot device and local VMFS datastore |
-| ~~[Cable Matters 3 Port USB 3.0 Hub with Ethernet](https://www.amazon.com/gp/product/B01J6583NK)~~ | ~~for network connectivity and to host the above USB drive~~[^v1.10] |
 | [3D-printed open enclosure for QUARTZ64](https://www.thingiverse.com/thing:5308499) | protect the board a little bit while allowing for plenty of passive airflow |

 | Downloads | Purpose |
-| --- | --- |
+|----------------------------------------------------------|-------------------------------------------------------|
 | [ESXi ARM Edition](https://customerconnect.vmware.com/downloads/get-download?downloadGroup=ESXI-ARM) (v1.10) | hypervisor |
 | [Tianocore EDK II firmware for Quartz64](https://github.com/jaredmcneill/quartz64_uefi/releases) (2022-07-20) | firmare image |
 | [Chromebook Recovery Utility](https://chrome.google.com/webstore/detail/chromebook-recovery-utili/pocpnlppkickgojjlmhdmidojbmbodfm) | easy way to write filesystem images to external media |
@@ -110,7 +109,7 @@ Then it's time to write the image onto the USB drive:
 I'll need to use the Quartz64 serial console interface and ["Woodpecker" edition console USB adapter](https://pine64.com/product/serial-console-woodpecker-edition/) to interact with the board until I get ESXi installed and can connect to it with the web interface or SSH. The adapter comes with a short breakout cable, and I connect it thusly:

 | Quartz64 GPIO pin | Console adapter pin | Wire color |
-| --- | --- | --- |
+|-------------------|---------------------|------------|
 | 6 | `GND` | Brown |
 | 8 | `RXD` | Red |
 | 10 | `TXD` | Orange |
@@ -123,7 +122,7 @@ To verify that I've got things working, I go ahead and pop the micro SD card con
 I'll need to use these settings for the connection (which are the defaults selected by Beagle Term):

 | Setting | Value |
-| -- | --- |
+|--------------|----------------|
 | Port | `/dev/ttyUSB0` |
 | Bitrate | `115200` |
 | Data Bit | `8 bit` |
@@ -78,8 +78,8 @@ I'm very pleased with how this quick little project turned out. Managing my shor

 And now I can hand out handy-dandy short links!

-| Link | Description|
-| --- | --- |
+| Link | Description |
+|---------------------------------|-----------------------------------------------------------|
 | [go.bowdre.net/coso](https://l.runtimeterror.dev/coso) | Follow me on CounterSocial |
 | [go.bowdre.net/conedoge](https://l.runtimeterror.dev/conedoge) | 2014 Subaru BRZ autocross videos |
 | [go.bowdre.net/cooltechshit](https://l.runtimeterror.dev/cooltechshit) | A collection of cool tech shit (references and resources) |
@@ -37,7 +37,7 @@ In this post, I'll describe what I did to get Gitea up and running on a tiny ARM
 I'll be deploying this on a cloud server with these specs:

 | | |
-| --- | --- |
+|------------------|-----------------------|
 | Shape | `VM.Standard.A1.Flex` |
 | Image | Ubuntu 22.04 |
 | CPU Count | 1 |
@@ -260,22 +260,22 @@ The format of PostgreSQL data changes with new releases, and that means that the

 Let's go through the extra configs in a bit more detail:
 | Variable setting | Purpose |
-|:--- |:--- |
-|`USER_UID=1003` | User ID of the `git` user on the container host |
-|`USER_GID=1003` | GroupID of the `git` user on the container host |
-|`GITEA____APP_NAME=Gitea` | Sets the title of the site. I shortened it from `Gitea: Git with a cup of tea` because that seems unnecessarily long. |
-|`GITEA__log__MODE=file` | Enable logging |
-|`GITEA__openid__ENABLE_OPENID_SIGNIN=false` | Disable signin through OpenID |
-|`GITEA__other__SHOW_FOOTER_VERSION=false` | Anyone who hits the web interface doesn't need to know the version |
-|`GITEA__repository__DEFAULT_PRIVATE=private` | All repos will default to private unless I explicitly override that |
-|`GITEA__repository__DISABLE_HTTP_GIT=true` | Require that all Git operations occur over SSH |
-|`GITEA__server__DOMAIN=git.bowdre.net` | Domain name of the server |
-|`GITEA__server__SSH_DOMAIN=git.tadpole-jazz.ts.net` | Leverage Tailscale's [MagicDNS](https://tailscale.com/kb/1081/magicdns/) to tell clients how to SSH to the Tailscale internal IP |
-|`GITEA__server__ROOT_URL=https://git.bowdre.net/` | Public-facing URL |
-|`GITEA__server__LANDING_PAGE=explore` | Defaults to showing the "Explore" page (listing any public repos) instead of the "Home" page (which just tells about the Gitea project) |
-|`GITEA__service__DISABLE_REGISTRATION=true` | New users will not be able to self-register for access; they will have to be manually added by the Administrator account that will be created during the initial setup |
-|`GITEA__service_0X2E_explore__DISABLE_USERS_PAGE=true` | Don't allow browsing of user accounts |
-|`GITEA__ui__DEFAULT_THEME=arc-green` | Default to the darker theme |
+|:-------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `USER_UID=1003` | User ID of the `git` user on the container host |
+| `USER_GID=1003` | GroupID of the `git` user on the container host |
+| `GITEA____APP_NAME=Gitea` | Sets the title of the site. I shortened it from `Gitea: Git with a cup of tea` because that seems unnecessarily long. |
+| `GITEA__log__MODE=file` | Enable logging |
+| `GITEA__openid__ENABLE_OPENID_SIGNIN=false` | Disable signin through OpenID |
+| `GITEA__other__SHOW_FOOTER_VERSION=false` | Anyone who hits the web interface doesn't need to know the version |
+| `GITEA__repository__DEFAULT_PRIVATE=private` | All repos will default to private unless I explicitly override that |
+| `GITEA__repository__DISABLE_HTTP_GIT=true` | Require that all Git operations occur over SSH |
+| `GITEA__server__DOMAIN=git.bowdre.net` | Domain name of the server |
+| `GITEA__server__SSH_DOMAIN=git.tadpole-jazz.ts.net` | Leverage Tailscale's [MagicDNS](https://tailscale.com/kb/1081/magicdns/) to tell clients how to SSH to the Tailscale internal IP |
+| `GITEA__server__ROOT_URL=https://git.bowdre.net/` | Public-facing URL |
+| `GITEA__server__LANDING_PAGE=explore` | Defaults to showing the "Explore" page (listing any public repos) instead of the "Home" page (which just tells about the Gitea project) |
+| `GITEA__service__DISABLE_REGISTRATION=true` | New users will not be able to self-register for access; they will have to be manually added by the Administrator account that will be created during the initial setup |
+| `GITEA__service_0X2E_explore__DISABLE_USERS_PAGE=true` | Don't allow browsing of user accounts |
+| `GITEA__ui__DEFAULT_THEME=arc-green` | Default to the darker theme |

 Beyond the environment variables, I also defined a few additional options to allow the SSH passthrough to function. Mounting the `git` user's SSH config directory into the container will ensure that user keys defined in Gitea will also be reflected outside of the container, and setting the container to listen on local port `2222` will allow it to receive the forwarded SSH connections:
@@ -17,7 +17,7 @@ Connecting a deployed Windows VM to an Active Directory domain is pretty easy; j
 Fortunately, vRA 8 supports adding an Active Directory integration to handle staging computer objects in a designated OU. And vRA 8.3 even [introduced the ability](https://blogs.vmware.com/management/2021/02/whats-new-with-vrealize-automation-8-3-technical-overview.html#:~:text=New%20Active%20Directory%20Cloud%20Template%20Properties) to let blueprints override the relative DN path. That will be helpful in my case since I'll want the servers to be placed in different OUs depending on which site they get deployed to:

 | **Site** | **OU** |
-| --- | --- |
+|----------|--------------------------------------------|
 | `BOW` | `lab.bowdre.net/LAB/BOW/Computers/Servers` |
 | `DRE` | `lab.bowre.net/LAB/DRE/Computers/Servers` |
@@ -43,21 +43,21 @@ The [cluster deployment steps](/tanzu-community-edition-k8s-homelab/#management-

 **LDAPS Identity Management Source**
 | Field | Value | Notes |
-| --- | --- | ---- |
+|----------------|---------------------------------------------------------------|---------------------------------------------|
 | LDAPS Endpoint | `win01.lab.bowdre.net:636` | LDAPS interface of my AD DC |
 | BIND DN | `CN=LDAP Bind,OU=Users,OU=BOW,OU=LAB,DC=lab,DC=bowdre,DC=net` | DN of an account with LDAP read permissions |
 | BIND Password | `*******` | Password for that account |

 **User Search Attributes**
 | Field | Value | Notes |
-| --- | --- | --- |
+|----------|----------------------------------|----------------------------------------------------------------------------------|
 | Base DN | `OU=LAB,DC=lab,DC=bowdre,DC=net` | DN for the top-level OU containing my users |
 | Filter | `objectClass=(person)` | |
 | Username | `sAMAccountName` | I want to auth as `john` rather than `john@lab.bowdre.net` (`userPrincipalName`) |

 **Group Search Attributes**
 | Field | Value | Notes |
-| --- | --- | --- |
+|-----------------|-----------------------------------|---------------------------------------------------------------|
 | Base DN | `OU=LAB,DC=lab,DC=bowdre,DC=net` | DN for OU containing my users |
 | Filter | `(objectClass=group)` | |
 | Name Attribute | `cn` | Common Name |
@@ -48,7 +48,7 @@ A few days ago I migrated my original Snikket instance from Google Cloud (GCP) t
 You can refer to my notes from last time for details on how I [created the Ubuntu 20.04 VM](/federated-matrix-server-synapse-on-oracle-clouds-free-tier/#instance-creation) and [configured the firewall rules](/federated-matrix-server-synapse-on-oracle-clouds-free-tier/#firewall-configuration) both at the cloud infrastructure level as well as within the host using `iptables`. Snikket does need a few additional [firewall ports](https://github.com/snikket-im/snikket-server/blob/master/docs/advanced/firewall.md) beyond what was needed for my Matrix setup:

 | Port(s) | Transport | Purpose |
-| --- | --- | --- |
+|-------------------|-----------|-----------------------------------------------------------------------|
 | `80, 443` | TCP | Web interface and group file sharing |
 | `3478-3479` | TCP/UDP | Audio/Video data proxy negotiation and discovery ([STUN/TURN](https://www.twilio.com/docs/stun-turn/faq)) |
 | `5349-5350` | TCP/UDP | Audio/Video data proxy negotiation and discovery (STUN/TURN over TLS) |
@@ -166,7 +166,7 @@ sudo vi snikket.conf # [tl! .cmd]
 A basic config only needs two parameters:

 | Parameter | Description |
-| --- | --- |
+|-----------------------|--------------------------------------------------------------|
 | `SNIKKET_DOMAIN` | The fully-qualified domain name that clients will connect to |
 | `SNIKKET_ADMIN_EMAIL` | An admin contact for the server |
@@ -134,10 +134,10 @@ Now if I just enter `go/vcenter` I will go to the vSphere UI, while if I enter s
 Some of my other golinks:

 | Shortlink | Destination URL | Description |
-| --- | --- | --- |
+|------------|--------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------|
 | `code` | `https://github.com/search?type=code&q=user:jbowdre{{with .Path}}+{{.}}{{end}}` | searches my code on Github |
 | `ipam` | `https://ipam.lab.bowdre.net/{{with .Path}}tools/search/{{.}}{{end}}` | searches my lab phpIPAM instance |
-| `pdb` | `https://www.protondb.com/{{with .Path}}search?q={{.}}{{end}}` | searches [protondb](https://www.protondb.com/), super-handy for checking game compatibility when [Tailscale is installed on a Steam Deck](https://tailscale.com/blog/steam-deck/) |
+| `pdb` | `https://www.protondb.com/{{with .Path}}search?q={{.}}{{end}}` | searches protondb |
 | `tailnet` | `https://login.tailscale.com/admin/machines?q={{.Path}}` | searches my Tailscale admin panel for a machine name |
 | `sho` | `https://www.shodan.io/{{with .Path}}search?query={{.}}{{end}}` | searches Shodan for interesting internet-connected systems |
 | `randpass` | `https://www.random.org/passwords/?num=1\u0026len=24\u0026format=plain\u0026rnd=new` | generates a random 24-character string suitable for use as a password (`curl`-friendly) |
@@ -214,7 +214,7 @@ TS_FUNNEL=1
 ```

 | Variable Name | Example | Description |
-| --- | --- | --- |
+|----------------------------|------------------------------------------|---------------------------------------------------------------------------------------------------------|
 | `TS_AUTHKEY` | `tskey-auth-somestring-somelongerstring` | used for unattended auth of the new node, get one [here](https://login.tailscale.com/admin/settings/keys) |
 | `TS_HOSTNAME` | `tsdemo` | optional Tailscale hostname for the new node[^hostname] |
 | `TS_STATE_DIR` | `/var/lib/tailscale/` | required directory for storing Tailscale state, this should be mounted to the container for persistence |
@@ -46,7 +46,7 @@ The Kubernetes node VMs will need to be attached to a network with a DHCP server
 I'll also need to set aside a few static IPs for this project. These will need to be routable and within the same subnet as the DHCP range, but excluded from that DHCP range.

 | IP Address | Purpose |
-| --- | --- |
+|-------------------------------|--------------------------------------|
 | `192.168.1.60` | Control plane for Management cluster |
 | `192.168.1.61` | Control plane for Workload cluster |
 | `192.168.1.64 - 192.168.1.80` | IP range for Workload load balancer |
@@ -68,7 +68,7 @@ I've now got a fully-functioning VMware lab, complete with a physical hypervisor
 My home network uses the generic `192.168.1.0/24` address space, with internet router providing DHCP addresses in the range `.100-.250`. I'm using the range `192.168.1.2-.99` for statically-configured IPs, particularly those within my lab environment. Here are the addresses being used by the lab so far:

 | IP Address | Hostname | Purpose |
-| ---- | ---- | ---- |
+|----------------|-----------|--------------------|
 | `192.168.1.1` | | Gateway |
 | `192.168.1.5` | `win01` | AD DC, DNS |
 | `192.168.1.11` | `nuchost` | Physical ESXi host |
@@ -77,7 +77,7 @@ My home network uses the generic `192.168.1.0/24` address space, with internet r
 Of course, not everything that I'm going to deploy in the lab will need to be accessible from outside the lab environment. This goes for obvious things like the vMotion and vSAN networks of the nested ESXi hosts, but it will also be useful to have internal networks that can be used by VMs provisioned by vRA. So I'll be creating these networks:

 | VLAN ID | Network | Purpose |
-| ---- | ---- | ---- |
+|---------|------------------|------------|
 | 1610 | `172.16.10.0/24` | Management |
 | 1620 | `172.16.20.0/24` | Servers-1 |
 | 1630 | `172.16.30.0/24` | Servers-2 |
@@ -182,11 +182,11 @@ Satisfied with my work, I ran the `commit` and `save` commands. BOOM, this serve
 ### Nested vSAN Cluster
 Alright, it's time to start building up the nested environment. To start, I grabbed the latest [Nested ESXi Virtual Appliance .ova](https://williamlam.com/nested-virtualization/nested-esxi-virtual-appliance), courtesy of William Lam. I went ahead and created DNS records for the hosts I'd be deploying, and I mapped out what IPs would be used on each VLAN:

-|Hostname|1610-Management|1698-vSAN|1699-vMotion|
-|----|----|----|----|
-|`esxi01.lab.bowdre.net`|`172.16.10.21`|`172.16.98.21`|`172.16.99.21`|
-|`esxi02.lab.bowdre.net`|`172.16.10.22`|`172.16.98.22`|`172.16.99.22`|
-|`esxi03.lab.bowdre.net`|`172.16.10.23`|`172.16.98.23`|`172.16.99.23`|
+| Hostname | 1610-Management | 1698-vSAN | 1699-vMotion |
+|-------------------------|-----------------|----------------|----------------|
+| `esxi01.lab.bowdre.net` | `172.16.10.21` | `172.16.98.21` | `172.16.99.21` |
+| `esxi02.lab.bowdre.net` | `172.16.10.22` | `172.16.98.22` | `172.16.99.22` |
+| `esxi03.lab.bowdre.net` | `172.16.10.23` | `172.16.98.23` | `172.16.99.23` |

 Deploying the virtual appliances is just like any other "Deploy OVF Template" action. I placed the VMs on the `physical-cluster` compute resource, and selected to thin provision the VMDKs on the local datastore. I chose the "Isolated" VM network which uses VLAN 4095 to make all the internal VLANs available on a single portgroup.
@@ -246,11 +246,11 @@ The [vRealize Easy Installer](https://docs.vmware.com/en/vRealize-Automation/8.2

 Anyhoo, each of these VMs will need to be resolvable in DNS so I started by creating some A records:

-|FQDN|IP|
-|----|----|
-|`lcm.lab.bowdre.net`|`192.168.1.40`|
-|`idm.lab.bowdre.net`|`192.168.1.41`|
-|`vra.lab.bowdre.net`|`192.168.1.42`|
+| FQDN | IP |
+|----------------------|----------------|
+| `lcm.lab.bowdre.net` | `192.168.1.40` |
+| `idm.lab.bowdre.net` | `192.168.1.41` |
+| `vra.lab.bowdre.net` | `192.168.1.42` |

 I then attached the installer ISO to my Windows VM and ran through the installation from there.
 ![vRealize Easy Installer](42n3aMim5.png)
@@ -89,8 +89,8 @@ So far, vRA has been automatically placing VMs on networks based solely on [whi

 As a quick recap, I've got five networks available for vRA, split across my two sites using tags:

-|Name |Subnet |Site |Tags |
-| --- | --- | --- | --- |
+| Name | Subnet | Site | Tags |
+|-----------------|----------------|------|-----------|
 | d1620-Servers-1 | 172.16.20.0/24 | BOW | `net:bow` |
 | d1630-Servers-2 | 172.16.30.0/24 | BOW | `net:bow` |
 | d1640-Servers-3 | 172.16.40.0/24 | BOW | `net:bow` |
@@ -99,9 +99,9 @@ As a quick recap, I've got five networks available for vRA, split across my two

 I'm going to add additional tags to these networks to further define their purpose.

-|Name |Purpose |Tags |
-| --- | --- | --- |
-| d1620-Servers-1 |Management | `net:bow`, `net:mgmt` |
+| Name | Purpose | Tags |
+|-----------------|------------|------------------------|
+| d1620-Servers-1 | Management | `net:bow`, `net:mgmt` |
 | d1630-Servers-2 | Front-end | `net:bow`, `net:front` |
 | d1640-Servers-3 | Back-end | `net:bow`, `net:back` |
 | d1650-Servers-4 | Front-end | `net:dre`, `net:front` |
|
||||||
|
|
||||||
I *could* just use those tags to let users pick the appropriate network, but I've found that a lot of times users don't know why they're picking a certain network, they just know the IP range they need to use. So I'll take it a step further and add a giant tag to include the Site, Purpose, and Subnet, and this is what will ultimately be presented to the users:
|
I *could* just use those tags to let users pick the appropriate network, but I've found that a lot of times users don't know why they're picking a certain network, they just know the IP range they need to use. So I'll take it a step further and add a giant tag to include the Site, Purpose, and Subnet, and this is what will ultimately be presented to the users:
|
||||||
|
|
||||||
|Name |Tags |
|
| Name | Tags |
|
||||||
| --- | --- |
|
|-----------------|-----------------------------------------------------|
|
||||||
| d1620-Servers-1 | `net:bow`, `net:mgmt`, `net:bow-mgmt-172.16.20.0` |
|
| d1620-Servers-1 | `net:bow`, `net:mgmt`, `net:bow-mgmt-172.16.20.0` |
|
||||||
| d1630-Servers-2 | `net:bow`, `net:front`, `net:bow-front-172.16.30.0` |
|
| d1630-Servers-2 | `net:bow`, `net:front`, `net:bow-front-172.16.30.0` |
|
||||||
| d1640-Servers-3 | `net:bow`, `net:back`, `net:bow-back-172.16.40.0` |
|
| d1640-Servers-3 | `net:bow`, `net:back`, `net:bow-back-172.16.40.0` |
|
||||||
|
|
|
@@ -12,6 +12,7 @@
 {{- $content := $content | replaceRE `(?m:^(?:\d+). (.+?)$)` "* $1" -}}{{/* convert ordered lists */}}
 {{- $content := $content | replaceRE `\n\[\^(.+?)\]:\s.*` "" -}}{{/* remove footnote definitions */}}
 {{- $content := $content | replaceRE `\[\^(.+?)\]` "" -}}{{/* remove footnote anchors */}}
+{{- $content := $content | replaceRE `((?m:^(?:\|.*\|)+\n)+)` "```\n$1\n```\n" -}}{{/* convert markdown tables to plaintext ascii */}}
 {{- $content := $content | replaceRE "(?m:^`([^`]*)`$)" "```\n$1\n```\n" -}}{{/* convert single-line inline code to blocks */}}
 {{- $content := $content | replaceRE `\{\{%\snotice.*%\}\}` "<-- note -->" -}}{{/* convert hugo notices */}}
 {{- $content := $content | replaceRE `\{\{%\s/notice.*%\}\}` "<-- /note -->" -}}
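For context on what that added template line does: any run of consecutive markdown table rows gets wrapped in a fenced block so the Gemini output treats it as preformatted ASCII, which is why the content files above pad their tables to fixed column widths. Hugo's `replaceRE` is backed by Go's `regexp` package, so here is a minimal standalone Go sketch of the same transformation using the exact pattern and replacement from the commit; the sample `content` string and the `tableBlock` name are made up purely for illustration:

````go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Hypothetical input: a fragment of post content containing a markdown table.
	content := "The basic setup is:\n\n" +
		"| Attribute | Value |\n" +
		"|-----------------|------------------|\n" +
		"| Name | `wireguard` |\n\n" +
		"More prose after the table.\n"

	// Same pattern as the template: one or more consecutive lines that
	// start and end with a pipe character.
	tableBlock := regexp.MustCompile(`((?m:^(?:\|.*\|)+\n)+)`)

	// Same replacement as the template: wrap the matched rows in a ``` fence
	// so they come through as a preformatted block.
	fmt.Print(tableBlock.ReplaceAllString(content, "```\n$1\n```\n"))
}
````

Running it prints the original text with the three table rows enclosed in a fenced block, which a gemtext renderer then displays verbatim as an ASCII table.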