update posts for new code block stuff

John Bowdre 2023-10-16 16:48:55 -05:00
parent e82a0ad937
commit 741d4fd588
53 changed files with 1173 additions and 1132 deletions

View file

@ -30,21 +30,24 @@ I settled on using [FreeCAD](https://www.freecadweb.org/) for parametric modelin
#### FreeCAD
Installing FreeCAD is as easy as:
```shell
$ sudo apt update
$ sudo apt install freecad
```command
sudo apt update
sudo apt install freecad
```
But launching `/usr/bin/freecad` caused me some weird graphical defects which rendered the application unusable. I found that I needed to pass the `LIBGL_DRI3_DISABLE=1` environment variable to eliminate these glitches:
```shell
$ env 'LIBGL_DRI3_DISABLE=1' /usr/bin/freecad &
```command
env 'LIBGL_DRI3_DISABLE=1' /usr/bin/freecad &
```
To avoid having to type that every time I wished to launch the app, I inserted this line at the bottom of my `~/.bashrc` file:
```shell
```command
alias freecad="env 'LIBGL_DRI3_DISABLE=1' /usr/bin/freecad &"
```
To be able to start FreeCAD from the Chrome OS launcher with that environment variable intact, edit it into the `Exec` line of the `/usr/share/applications/freecad.desktop` file:
```shell
$ sudo vi /usr/share/applications/freecad.desktop
```command
sudo vi /usr/share/applications/freecad.desktop
```
```cfg {linenos=true}
[Desktop Entry]
Version=1.0
Name=FreeCAD
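# (illustrative only - the stock file continues with more keys; the relevant edit
# is adding the variable to the Exec line, which ends up looking something like:)
# Exec=env LIBGL_DRI3_DISABLE=1 /usr/bin/freecad %F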
@ -72,17 +75,17 @@ Now that you've got a model, be sure to [export it as an STL mesh](https://wiki.
Cura isn't available from the default repos so you'll need to download the AppImage from https://github.com/Ultimaker/Cura/releases/tag/4.7.1. You can do this in Chrome and then use the built-in File app to move the file into your 'My Files > Linux Files' directory. Feel free to put it in a subfolder if you want to keep things organized - I stash all my AppImages in `~/Applications/`.
To be able to actually execute the AppImage you'll need to adjust the permissions with 'chmod +x':
```shell
$ chmod +x ~/Applications/Ultimaker_Cura-4.7.1.AppImage
```command
chmod +x ~/Applications/Ultimaker_Cura-4.7.1.AppImage
```
You can then start up the app by calling the file directly:
```shell
$ ~/Applications/Ultimaker_Cura-4.7.1.AppImage &
```command
~/Applications/Ultimaker_Cura-4.7.1.AppImage &
```
AppImages don't automatically appear in the Chrome OS launcher, so you'll need to create a `.desktop` file for the app. You can do this manually if you want, but I found it a lot easier to leverage `menulibre`:
```shell
$ sudo apt update && sudo apt install menulibre
$ menulibre
```command
sudo apt update && sudo apt install menulibre
menulibre
```
Just plug in the relevant details (you can grab the appropriate icon [here](https://github.com/Ultimaker/Cura/blob/master/icons/cura-128.png)), hit the filing cabinet Save icon, and you should then be able to search for Cura from the Chrome OS launcher.
![Using menulibre to create the launcher shortcut](VTISYOKHO.png)
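If you'd rather take the manual route instead of using `menulibre`, a user-level `.desktop` file gets the job done too. Here's a rough sketch - the file name, icon location, and AppImage path are assumptions, so adjust them to match wherever you stashed things:
```command-session
mkdir -p ~/.local/share/applications
cat > ~/.local/share/applications/cura.desktop << EOF
[Desktop Entry]
Type=Application
Name=Ultimaker Cura
# paths below are assumptions - point them at your actual AppImage and icon
Exec=/home/$USER/Applications/Ultimaker_Cura-4.7.1.AppImage
Icon=/home/$USER/Applications/cura-128.png
Categories=Graphics;
EOF
```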

View file

@ -1,45 +0,0 @@
---
title: "Accessing a Tanzu Community Edition Kubernetes Cluster from a new device" # Title of the blog post.
date: 2022-02-01T10:58:57-06:00 # Date of post creation.
# lastmod: 2022-02-01T10:58:57-06:00 # Date when last modified
description: "The Tanzu Community Edition documentation does a great job of explaining how to authenticate to a newly-deployed cluster at the tail end of the installation steps, but how do you log in from another system?" # Description used for search engine.
featured: false # Sets if post is a featured post, making appear on the home page side bar.
draft: true # Sets whether to render this page. Draft of true will not be rendered.
toc: false # Controls if a table of contents should be generated for first-level links automatically.
usePageBundles: true
# menu: main
# featureImage: "file.png" # Sets featured image on blog post.
# featureImageAlt: 'Description of image' # Alternative text for featured image.
# featureImageCap: 'This is the featured image.' # Caption (optional).
# thumbnail: "thumbnail.png" # Sets thumbnail image appearing inside card on homepage.
# shareImage: "share.png" # Designate a separate image for social media sharing.
codeLineNumbers: false # Override global value for showing of line numbers within code block.
series: Tips
tags:
- vmware
- kubernetes
- tanzu
comment: true # Disable comment if false.
---
When I [recently set up my Tanzu Community Edition environment](/tanzu-community-edition-k8s-homelab/), I did so from a Linux VM since I knew that my Chromebook Linux environment wouldn't support the `kind` bootstrap cluster used for the deployment. But now I'd like to be able to connect to the cluster directly using the `tanzu` and `kubectl` CLI tools. How do I get the appropriate cluster configuration over to my Chromebook?
The Tanzu CLI actually makes that pretty easy. I just run these commands on my Linux VM to export the `kubeconfig` of my management (`tce-mgmt`) and workload (`tce-work`) clusters to a pair of files:
```shell
tanzu management-cluster kubeconfig get --admin --export-file tce-mgmt-kubeconfig.yaml
tanzu cluster kubeconfig get tce-work --admin --export-file tce-work-kubeconfig.yaml
```
I could then use `scp` to pull the files from the VM into my local Linux environment. I then needed to [install `kubectl`](/tanzu-community-edition-k8s-homelab/#kubectl-binary) and the [`tanzu` CLI](/tanzu-community-edition-k8s-homelab/#tanzu-cli) (making sure to also [enable shell auto-completion](/enable-tanzu-cli-auto-completion-bash-zsh/) along the way!), and I could import the configurations locally:
```shell
tanzu login --kubeconfig tce-mgmt-kubeconfig.yaml --context tce-mgmt-admin@tce-mgmt --name tce-mgmt
✔ successfully logged in to management cluster using the kubeconfig tce-mgmt
tanzu login --kubeconfig tce-work-kubeconfig.yaml --context tce-work-admin@tce-work --name tce-work
✔ successfully logged in to management cluster using the kubeconfig tce-work
```

View file

@ -21,7 +21,7 @@ I'll start this by adding a few new inputs to the cloud template in Cloud Assemb
I'm using a basic regex on the `poc_email` field to make sure that the user's input is *probably* a valid email address in the format `[some string]@[some string].[some string]`.
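Just to illustrate the general idea (this is a generic sketch, not necessarily the exact pattern used in the template), a check for that format can be as simple as `.+@.+\..+`:
```command-session
echo 'john@example.com' | grep -E '^.+@.+\..+$'
john@example.com
```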
```yaml
```yaml {linenos=true}
inputs:
[...]
description:
@ -50,7 +50,7 @@ I'll also need to add these to the `resources` section of the template so that t
I'm actually going to combine the `poc_name` and `poc_email` fields into a single `poc` string.
```yaml
```yaml {linenos=true}
resources:
Cloud_vSphere_Machine_1:
type: Cloud.vSphere.Machine
@ -80,7 +80,7 @@ The first thing this workflow needs to do is parse `inputProperties (Properties)
![Get VM Object action](5ATk99aPW.png)
The script for this task is fairly straightforward:
```js
```js {linenos=true}
// JavaScript: Get VM Object
// Inputs: inputProperties (Properties)
// Outputs: vm (VC:VirtualMachine)
@ -99,7 +99,7 @@ The first part of the script creates a new VM config spec, inserts the descripti
The second part uses a built-in action to set the `Point of Contact` and `Ticket` custom attributes accordingly.
```js
```js {linenos=true}
// Javascript: Set Notes
// Inputs: vm (VC:VirtualMachine), inputProperties (Properties)
// Outputs: None

View file

@ -34,7 +34,7 @@ Once the VM is created, I power it on and hop into the web console. The default
### Configure Networking
My next step was to configure a static IP address by creating `/etc/systemd/network/10-static-en.network` and entering the following contents:
```conf
```cfg {linenos=true}
[Match]
Name=eth0
@ -48,7 +48,7 @@ By the way, that `192.168.1.5` address is my Windows DC/DNS server that I use fo
I also disabled DHCP by setting `DHCP=no` in `/etc/systemd/network/99-dhcp-en.network`:
```conf
```cfg {linenos=true}
[Match]
Name=e*
@ -70,26 +70,26 @@ Now that I'm in, I run `tdnf update` to make sure the VM is fully up to date.
### Install docker-compose
Photon OS ships with Docker preinstalled, but I need to install `docker-compose` on my own to simplify container deployment. Per the [install instructions](https://docs.docker.com/compose/install/#install-compose), I run:
```shell
```commandroot
curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
```
And then verify that it works:
```shell
root@adguard [ ~]# docker-compose --version
```commandroot-session
docker-compose --version
docker-compose version 1.29.2, build 5becea4c
```
I'll also want to enable and start Docker:
```shell
```commandroot
systemctl enable docker
systemctl start docker
```
### Disable DNSStubListener
By default, the `resolved` daemon is listening on `127.0.0.53:53` and will prevent docker from binding to that port. Fortunately it's [pretty easy](https://github.com/pi-hole/docker-pi-hole#installing-on-ubuntu) to disable the `DNSStubListener` and free up the port:
```shell
```commandroot
sed -r -i.orig 's/#?DNSStubListener=yes/DNSStubListener=no/g' /etc/systemd/resolved.conf
rm /etc/resolv.conf && ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf
systemctl restart systemd-resolved
@ -99,14 +99,14 @@ systemctl restart systemd-resolved
Okay, now for the fun part.
I create a directory for AdGuard to live in, and then create a `docker-compose.yaml` therein:
```shell
```commandroot
mkdir ~/adguard
cd ~/adguard
vi docker-compose.yaml
```
And I define the container:
```yaml
```yaml {linenos=true}
version: "3"
services:
@ -133,8 +133,8 @@ services:
Then I can fire it up with `docker-compose up --detach`:
```shell
root@adguard [ ~/adguard ]# docker-compose up --detach
```commandroot-session
docker-compose up --detach
Creating network "adguard_default" with the default driver
Pulling adguard (adguard/adguardhome:latest)...
latest: Pulling from adguard/adguardhome

View file

@ -29,7 +29,7 @@ I found a great script [here](https://github.com/alpacacode/Homebrewn-Scripts/bl
When I cobbled together this script I was primarily targeting the Enterprise Linux (RHEL, CentOS) systems that I work with in my environment, and those happened to have MBR partition tables. This script would need to be modified a bit to work with GPT partitions like you might find on Ubuntu.
{{% /notice %}}
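For the GPT case, a rough sketch of the equivalent expansion might lean on `growpart` (from `cloud-guest-utils`) instead - note that the device, volume group, and filesystem names below are just placeholders:
```command
sudo growpart /dev/sda 3                                      # grow the partition to fill the disk
sudo pvresize /dev/sda3                                       # grow the LVM physical volume
sudo lvextend -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv  # grow the logical volume
sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv              # grow the ext4 filesystem
```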
```shell
```shell {linenos=true}
#!/bin/bash
# This will attempt to automatically detect the LVM logical volume where / is mounted and then
# expand the underlying physical partition, LVM physical volume, LVM volume group, LVM logical

View file

@ -40,8 +40,11 @@ When I originally wrote this post back in September 2018, the containerized BitW
1. Log in to the [Google Domains admin portal](https://domains.google.com/registrar) and [create a new Dynamic DNS record](https://domains.google.com/registrar). This will provide a username and password specific to that record.
2. Log in to the GCE instance and run `sudo apt-get update` followed by `sudo apt-get install ddclient`. Part of the install process prompts you to configure things... just accept the defaults and move on.
3. Edit the `ddclient` config file to look like this, substituting the username, password, and FQDN from Google Domains:
```shell
$ sudo vi /etc/ddclient.conf
```command
sudo vim /etc/ddclient.conf
```
```cfg {linenos=true,hl_lines=["10-12"]}
# Configuration file for ddclient generated by debconf
#
# /etc/ddclient.conf
@ -57,7 +60,7 @@ $ sudo vi /etc/ddclient.conf
```
4. `sudo vi /etc/default/ddclient` and make sure that `run_daemon="true"`:
```shell
```cfg {linenos=true,hl_lines=16}
# Configuration for ddclient scripts
# generated from debconf on Sat Sep 8 21:58:02 UTC 2018
#
@ -80,21 +83,21 @@ run_daemon="true"
daemon_interval="300"
```
5. Restart the `ddclient` service - twice for good measure (daemon mode only gets activated on the second go *because reasons*):
```shell
$ sudo systemctl restart ddclient
$ sudo systemctl restart ddclient
```command
sudo systemctl restart ddclient
sudo systemctl restart ddclient
```
6. After a few moments, refresh the Google Domains page to verify that your instance's external IP address is showing up on the new DDNS record.
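For an optional sanity check from the command line, a quick `dig` against the record should return the instance's external IP as well (substituting your actual FQDN):
```command
dig +short [FQDN]
```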
### Install Docker
*Steps taken from [here](https://docs.docker.com/install/linux/docker-ce/debian/).*
1. Update `apt` package index:
```shell
$ sudo apt-get update
```command
sudo apt-get update
```
2. Install package management prereqs:
```shell
$ sudo apt-get install \
```command-session
sudo apt-get install \
apt-transport-https \
ca-certificates \
curl \
@ -102,47 +105,47 @@ $ sudo apt-get install \
software-properties-common
```
3. Add Docker GPG key:
```shell
$ curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
```command
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
```
4. Add the Docker repo:
```shell
$ sudo add-apt-repository \
```command-session
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/debian \
$(lsb_release -cs) \
stable"
```
5. Update apt index again:
```shell
$ sudo apt-get update
```command
sudo apt-get update
```
6. Install Docker:
```shell
$ sudo apt-get install docker-ce
```command
sudo apt-get install docker-ce
```
### Install Certbot and generate SSL cert
*Steps taken from [here](https://certbot.eff.org/instructions?ws=other&os=debianbuster).*
1. Install Certbot:
```shell
$ sudo apt-get install certbot
```command
sudo apt-get install certbot
```
2. Generate certificate:
```shell
$ sudo certbot certonly --standalone -d [FQDN]
```command
sudo certbot certonly --standalone -d [FQDN]
```
3. Create a directory to store the new certificates and copy them there:
```shell
$ sudo mkdir -p /ssl/keys/
$ sudo cp -p /etc/letsencrypt/live/[FQDN]/fullchain.pem /ssl/keys/
$ sudo cp -p /etc/letsencrypt/live/[FQDN]/privkey.pem /ssl/keys/
```command
sudo mkdir -p /ssl/keys/
sudo cp -p /etc/letsencrypt/live/[FQDN]/fullchain.pem /ssl/keys/
sudo cp -p /etc/letsencrypt/live/[FQDN]/privkey.pem /ssl/keys/
```
### Set up vaultwarden
*Using the container image available [here](https://github.com/dani-garcia/vaultwarden).*
1. Let's just get it up and running first:
```shell
$ sudo docker run -d --name vaultwarden \
```command-session
sudo docker run -d --name vaultwarden \
-e ROCKET_TLS={certs='"/ssl/fullchain.pem", key="/ssl/privkey.pem"}' \
-e ROCKET_PORT='8000' \
-v /ssl/keys/:/ssl/ \
@ -154,9 +157,9 @@ $ sudo docker run -d --name vaultwarden \
2. At this point you should be able to point your web browser at `https://[FQDN]` and see the BitWarden login screen. Click on the Create button and set up a new account. Log in, look around, add some passwords, etc. Everything should basically work just fine.
3. Unless you want to host passwords for all of the Internet you'll probably want to disable signups at some point by adding the `env` option `SIGNUPS_ALLOWED=false`. And you'll need to set `DOMAIN=https://[FQDN]` if you want to use U2F authentication:
```shell
$ sudo docker stop vaultwarden
$ sudo docker rm vaultwarden
$ sudo docker run -d --name vaultwarden \
sudo docker stop vaultwarden
sudo docker rm vaultwarden
sudo docker run -d --name vaultwarden \
-e ROCKET_TLS={certs='"/ssl/fullchain.pem",key="/ssl/privkey.pem"'} \
-e ROCKET_PORT='8000' \
-e SIGNUPS_ALLOWED=false \
@ -170,16 +173,19 @@ $ sudo docker run -d --name vaultwarden \
### Install vaultwarden as a service
*So we don't have to keep manually firing this thing off.*
1. Create a script to stop, remove, update, and (re)start the `vaultwarden` container:
1. Create a script at `/usr/local/bin/start-vaultwarden.sh` to stop, remove, update, and (re)start the `vaultwarden` container:
```command
sudo vim /usr/local/bin/start-vaultwarden.sh
```
```shell
$ sudo vi /usr/local/bin/start-vaultwarden.sh
#!/bin/bash
#!/bin/bash
docker stop vaultwarden
docker rm vaultwarden
docker pull vaultwarden/server
docker stop vaultwarden
docker rm vaultwarden
docker pull vaultwarden/server
docker run -d --name vaultwarden \
docker run -d --name vaultwarden \
-e ROCKET_TLS={certs='"/ssl/fullchain.pem",key="/ssl/privkey.pem"'} \
-e ROCKET_PORT='8000' \
-e SIGNUPS_ALLOWED=false \
@ -189,11 +195,17 @@ $ sudo vi /usr/local/bin/start-vaultwarden.sh
-v /icon_cache/ \
-p 0.0.0.0:443:8000 \
vaultwarden/server:latest
$ sudo chmod 744 /usr/local/bin/start-vaultwarden.sh
```
```command
sudo chmod 744 /usr/local/bin/start-vaultwarden.sh
```
2. And add it as a `systemd` service:
```shell
$ sudo vi /etc/systemd/system/vaultwarden.service
```command
sudo vim /etc/systemd/system/vaultwarden.service
```
```cfg
[Unit]
Description=BitWarden container
Requires=docker.service
@ -206,13 +218,19 @@ $ sudo vi /etc/systemd/system/vaultwarden.service
[Install]
WantedBy=default.target
$ sudo chmod 644 /etc/systemd/system/vaultwarden.service
```
```command
sudo chmod 644 /etc/systemd/system/vaultwarden.service
```
3. Try it out:
```shell
$ sudo systemctl start vaultwarden
$ sudo systemctl status vaultwarden
● bitwarden.service - BitWarden container
```command
sudo systemctl start vaultwarden
```
```command-session
sudo systemctl status vaultwarden
● bitwarden.service - BitWarden container
Loaded: loaded (/etc/systemd/system/vaultwarden.service; enabled; vendor preset: enabled)
Active: deactivating (stop) since Sun 2018-09-09 03:43:20 UTC; 1s ago
Process: 13104 ExecStart=/usr/local/bin/bitwarden-start.sh (code=exited, status=0/SUCCESS)
@ -224,8 +242,8 @@ $ sudo systemctl status vaultwarden
└─control
└─13229 /usr/bin/docker stop vaultwarden
Sep 09 03:43:20 vaultwarden vaultwarden-start.sh[13104]: Status: Image is up to date for vaultwarden/server:latest
Sep 09 03:43:20 vaultwarden vaultwarden-start.sh[13104]: ace64ca5294eee7e21be764ea1af9e328e944658b4335ce8721b99a33061d645
Sep 09 03:43:20 vaultwarden vaultwarden-start.sh[13104]: Status: Image is up to date for vaultwarden/server:latest
Sep 09 03:43:20 vaultwarden vaultwarden-start.sh[13104]: ace64ca5294eee7e21be764ea1af9e328e944658b4335ce8721b99a33061d645
```
### Conclusion

View file

@ -96,7 +96,7 @@ I'm also going to head in to **Administration > IP Related Management > Sections
### Script time
Well that's enough prep work; now it's time for the Python3 [script](https://github.com/jbowdre/misc-scripts/blob/main/Python/phpipam-bulk-import.py):
```python
```python {linenos=true}
# The latest version of this script can be found on Github:
# https://github.com/jbowdre/misc-scripts/blob/main/Python/phpipam-bulk-import.py
@ -478,7 +478,7 @@ if __name__ == "__main__":
```
I'll run it and provide the path to the network export CSV file:
```bash
```command
python3 phpipam-bulk-import.py ~/networks.csv
```
@ -570,7 +570,7 @@ So now phpIPAM knows about the vSphere networks I care about, and it can keep tr
... but I haven't actually *deployed* an agent yet. I'll do that by following the same basic steps [described here](/tanzu-community-edition-k8s-homelab/#phpipam-agent) to spin up my `phpipam-agent` on Kubernetes, and I'll plug in that automagically-generated code for the `IPAM_AGENT_KEY` environment variable:
```yaml
```yaml {linenos=true}
---
apiVersion: apps/v1
kind: Deployment

View file

@ -24,21 +24,23 @@ comment: true # Disable comment if false.
It's super handy when a Linux config file is loaded with comments to tell you precisely how to configure the thing, but all those comments can really get in the way when you're trying to review the current configuration.
Next time, instead of scrolling through page after page of lengthy embedded explanations, just use:
```shell
```command
egrep -v "^\s*(#|$)" $filename
```
For added usefulness, I alias this command to `ccat` (which my brain interprets as "commentless cat") in [my `~/.zshrc`](https://github.com/jbowdre/dotfiles/blob/main/zsh/.zshrc):
```shell
```command
alias ccat='egrep -v "^\s*(#|$)"'
```
Now instead of viewing all 75 lines of a [mostly-default Vagrantfile](/create-vms-chromebook-hashicorp-vagrant), I just see the 7 that matter:
```shell
; wc -l Vagrantfile
```command-session
wc -l Vagrantfile
75 Vagrantfile
```
; ccat Vagrantfile
```command-session
ccat Vagrantfile
Vagrant.configure("2") do |config|
config.vm.box = "oopsme/windows11-22h2"
config.vm.provider :libvirt do |libvirt|
@ -46,8 +48,10 @@ Vagrant.configure("2") do |config|
libvirt.memory = 4096
end
end
```
; ccat Vagrantfile | wc -l
```command-session
ccat Vagrantfile | wc -l
7
```

View file

@ -67,7 +67,7 @@ Anyway, after switching to the cheaper Standard tier I can click on the **Extern
##### Security Configuration
The **Security** section lets me go ahead and upload an SSH public key that I can then use for logging into the instance once it's running. Of course, that means I'll first need to generate a key pair for this purpose:
```sh
```command
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_wireguard
```
@ -90,24 +90,24 @@ I'll click **Create** and move on.
#### WireGuard Server Setup
Once the **Compute Engine > Instances** [page](https://console.cloud.google.com/compute/instances) indicates that the instance is ready, I can make a note of the listed public IP and then log in via SSH:
```sh
```command
ssh -i ~/.ssh/id_ed25519_wireguard {PUBLIC_IP}
```
##### Preparation
And, as always, I'll first make sure the OS is fully updated before doing anything else:
```sh
```command
sudo apt update
sudo apt upgrade
```
Then I'll install `ufw` to easily manage the host firewall, `qrencode` to make it easier to generate configs for mobile clients, `openresolv` to avoid [this issue](https://superuser.com/questions/1500691/usr-bin-wg-quick-line-31-resolvconf-command-not-found-wireguard-debian/1500896), and `wireguard` to, um, guard the wires:
```sh
```command
sudo apt install ufw qrencode openresolv wireguard
```
Configuring the host firewall with `ufw` is very straightforward:
```sh
```shell
# First, SSH:
sudo ufw allow 22/tcp
# and WireGuard:
@ -117,34 +117,36 @@ sudo ufw enable
```
The last preparatory step is to enable packet forwarding in the kernel so that the instance will be able to route traffic between the remote clients and my home network (once I get to that point). I can configure that on-the-fly with:
```sh
```command
sudo sysctl -w net.ipv4.ip_forward=1
```
To make it permanent, I'll edit `/etc/sysctl.conf` and uncomment the same line:
```sh
$ sudo vi /etc/sysctl.conf
```command
sudo vi /etc/sysctl.conf
```
```cfg
# Uncomment the next line to enable packet forwarding for IPv4
net.ipv4.ip_forward=1
```
##### WireGuard Interface Config
I'll switch to the root user, move into the `/etc/wireguard` directory, and issue `umask 077` so that the files I'm about to create will have a very limited permission set (to be accessible by root, and _only_ root):
```sh
```command
sudo -i
cd /etc/wireguard
umask 077
```
Then I can use the `wg genkey` command to generate the server's private key, save it to a file called `server.key`, pass it through `wg pubkey` to generate the corresponding public key, and save that to `server.pub`:
```sh
```command
wg genkey | tee server.key | wg pubkey > server.pub
```
As I mentioned earlier, WireGuard will create a virtual network interface using an internal network to pass traffic between the WireGuard peers. By convention, that interface is `wg0` and it draws its configuration from a file in `/etc/wireguard` named `wg0.conf`. I could create a configuration file with a different name and thus wind up with a different interface name as well, but I'll stick with tradition to keep things easy to follow.
The format of the interface configuration file will need to look something like this:
```
```cfg
[Interface] # this section defines the local WireGuard interface
Address = # CIDR-format IP address of the virtual WireGuard interface
ListenPort = # WireGuard listens on this port for incoming traffic (randomized if not specified)
@ -162,7 +164,7 @@ AllowedIPs = # which IPs will be routed to this peer
There will be a single `[Interface]` section in each peer's configuration file, but they may include multiple `[Peer]` sections. For my config, I'll use the `10.200.200.0/24` network for WireGuard, and let this server be `10.200.200.1`, the VyOS router in my home lab `10.200.200.2`, and I'll assign IPs to the other peers from there. I found a note that Google Cloud uses an MTU size of `1460` bytes so that's what I'll set on this end. I'm going to configure WireGuard to use the VyOS router as the DNS server, and I'll specify my internal `lab.bowdre.net` search domain. Finally, I'll leverage the `PostUp` and `PostDown` directives to enable and disable NAT so that the server will be able to forward traffic between networks for me.
So here's the start of my GCP WireGuard server's `/etc/wireguard/wg0.conf`:
```sh
```cfg
# /etc/wireguard/wg0.conf
[Interface]
Address = 10.200.200.1/24
@ -175,20 +177,23 @@ PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING
```
I don't have any other peers ready to add to this config yet, but I can go ahead and bring up the interface all the same. I'm going to use the `wg-quick` wrapper instead of calling `wg` directly since it simplifies a bit of the configuration, but first I'll need to enable the `wg-quick@{INTERFACE}` service so that it will run automatically at startup:
```sh
```command
systemctl enable wg-quick@wg0
systemctl start wg-quick@wg0
```
I can now bring up the interface with `wg-quick up wg0` and check the status with `wg show`:
```
root@wireguard:~# wg-quick up wg0
```commandroot-session
wg-quick up wg0
[#] ip link add wg0 type wireguard
[#] wg setconf wg0 /dev/fd/63
[#] ip -4 address add 10.200.200.1/24 dev wg0
[#] ip link set mtu 1460 up dev wg0
[#] resolvconf -a wg0 -m 0 -x
[#] iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o ens4 -j MASQUERADE; ip6tables -A FORWARD -i wg0 -j ACCEPT; ip6tables -t nat -A POSTROUTING -o ens4 -j MASQUERADE
```
```commandroot-session
wg show
interface: wg0
public key: {GCP_PUBLIC_KEY}
@ -200,13 +205,13 @@ I'll come back here once I've got a peer config to add.
### Configure VyOS Router as WireGuard Peer
Comparatively, configuring WireGuard on VyOS is a bit more direct. I'll start by entering configuration mode and generating and binding a key pair for this interface:
```sh
```commandroot
configure
run generate pki wireguard key-pair install interface wg0
```
And then I'll configure the rest of the options needed for the interface:
```sh
```commandroot
set interfaces wireguard wg0 address '10.200.200.2/24'
set interfaces wireguard wg0 description 'VPN to GCP'
set interfaces wireguard wg0 peer wireguard-gcp address '{GCP_PUBLIC_IP}'
@ -219,25 +224,25 @@ set interfaces wireguard wg0 peer wireguard-gcp public-key '{GCP_PUBLIC_KEY}'
Note that this time I'm allowing all IPs (`0.0.0.0/0`) so that this WireGuard interface will pass traffic intended for any destination (whether it's local, remote, or on the Internet). And I'm specifying a [25-second `persistent-keepalive` interval](https://www.wireguard.com/quickstart/#nat-and-firewall-traversal-persistence) to help ensure that this NAT-ed tunnel stays up even when it's not actively passing traffic - after all, I'll need the GCP-hosted peer to be able to initiate the connection so I can access the home network remotely.
While I'm at it, I'll also add a static route to ensure traffic for the WireGuard tunnel finds the right interface:
```sh
```commandroot
set protocols static route 10.200.200.0/24 interface wg0
```
And I'll add the new `wg0` interface as a listening address for the VyOS DNS forwarder:
```sh
```commandroot
set service dns forwarding listen-address '10.200.200.2'
```
I can use the `compare` command to verify the changes I've made, and then apply and save the updated config:
```sh
```commandroot
compare
commit
save
```
I can check the status of WireGuard on VyOS (and view the public key!) like so:
```sh
$ show interfaces wireguard wg0 summary
```commandroot-session
show interfaces wireguard wg0 summary
interface: wg0
public key: {VYOS_PUBLIC_KEY}
private key: (hidden)
@ -253,7 +258,7 @@ peer: {GCP_PUBLIC_KEY}
See? That part was much easier to set up! But it doesn't look like it's actually passing traffic yet... because while the VyOS peer has been configured with the GCP peer's public key, the GCP peer doesn't know anything about the VyOS peer yet.
So I'll copy `{VYOS_PUBLIC_KEY}` and SSH back to the GCP instance to finish that configuration. Once I'm there, I can edit `/etc/wireguard/wg0.conf` as root and add in a new `[Peer]` section at the bottom, like this:
```
```cfg
[Peer]
# VyOS
PublicKey = {VYOS_PUBLIC_KEY}
@ -263,7 +268,7 @@ AllowedIPs = 10.200.200.2/32, 192.168.1.0/24, 172.16.0.0/16
This time, I'm telling WireGuard that the new peer has IP `10.200.200.2` but that it should also get traffic destined for the `192.168.1.0/24` and `172.16.0.0/16` networks, my home and lab networks. Again, the `AllowedIPs` parameter is used for WireGuard's Cryptokey Routing so that it can keep track of which traffic goes to which peers (and which key to use for encryption).
After saving the file, I can either restart WireGuard by bringing the interface down and back up (`wg-quick down wg0 && wg-quick up wg0`), or I can reload it on the fly with:
```sh
```command
sudo -i
wg syncconf wg0 <(wg-quick strip wg0)
```
@ -271,8 +276,8 @@ wg syncconf wg0 <(wg-quick strip wg0)
(I can't just use `wg syncconf wg0` directly since `/etc/wireguard/wg0.conf` includes the `PostUp`/`PostDown` commands which can only be parsed by the `wg-quick` wrapper, so I'm using `wg-quick strip {INTERFACE}` to grab the contents of the config file, remove the problematic bits, and then pass what's left to the `wg syncconf {INTERFACE}` command to update the current running config.)
Now I can check the status of WireGuard on the GCP end:
```sh
root@wireguard:~# wg show
```commandroot-session
wg show
interface: wg0
public key: {GCP_PUBLIC_KEY}
private key: (hidden)
@ -286,16 +291,18 @@ peer: {VYOS_PUBLIC_KEY}
```
Hey, we're passing traffic now! And I can verify that I can ping stuff on my home and lab networks from the GCP instance:
```sh
john@wireguard:~$ ping -c 1 192.168.1.5
```command-session
ping -c 1 192.168.1.5
PING 192.168.1.5 (192.168.1.5) 56(84) bytes of data.
64 bytes from 192.168.1.5: icmp_seq=1 ttl=127 time=35.6 ms
--- 192.168.1.5 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 35.598/35.598/35.598/0.000 ms
```
john@wireguard:~$ ping -c 1 172.16.10.1
```command-session
ping -c 1 172.16.10.1
PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
64 bytes from 172.16.10.1: icmp_seq=1 ttl=64 time=35.3 ms
@ -340,14 +347,17 @@ I _shouldn't_ need the keepalive for the "Road Warrior" peers connecting to the
Now I can go ahead and save this configuration, but before I try (and fail) to connect I first need to tell the cloud-hosted peer about the Chromebook. So I fire up an SSH session to my GCP instance, become root, and edit the WireGuard configuration to add a new `[Peer]` section.
```sh
```command
sudo -i
```
```commandroot
vi /etc/wireguard/wg0.conf
```
Here's the new section that I'll add to the bottom of the config:
```sh
```cfg
[Peer]
# Chromebook
PublicKey = {CB_PUBLIC_KEY}
@ -357,7 +367,7 @@ AllowedIPs = 10.200.200.3/32
This one is acting as a single-node endpoint (rather than an entryway into other networks like the VyOS peer) so setting `AllowedIPs` to only the peer's IP makes sure that WireGuard will only send it traffic specifically intended for this peer.
So my complete `/etc/wireguard/wg0.conf` looks like this so far:
```sh
```cfg
# /etc/wireguard/wg0.conf
[Interface]
Address = 10.200.200.1/24
@ -380,14 +390,14 @@ AllowedIPs = 10.200.200.3/32
```
Now to save the file and reload the WireGuard configuration again:
```sh
```commandroot
wg syncconf wg0 <(wg-quick strip wg0)
```
At this point I can activate the connection in the WireGuard Android app, wait a few seconds, and check with `wg show` to confirm that the tunnel has been established successfully:
```sh
root@wireguard:~# wg show
```commandroot-session
wg show
interface: wg0
public key: {GCP_PUBLIC_KEY}
private key: (hidden)
@ -413,20 +423,23 @@ And I can even access my homelab when not at home!
Being able to copy-and-paste the required public keys between the WireGuard app and the SSH session to the GCP instance made it relatively easy to set up the Chromebook, but things could be a bit trickier on a phone without that kind of access. So instead I will create the phone's configuration on the WireGuard server in the cloud, render that config file as a QR code, and simply scan that through the phone's WireGuard app to import the settings.
I'll start by SSHing to the GCP instance, elevating to root, setting the restrictive `umask` again, and creating a new folder to store client configurations.
```sh
```command
sudo -i
```
```commandroot
umask 077
mkdir /etc/wireguard/clients
cd /etc/wireguard/clients
```
As before, I'll use the built-in `wg` commands to generate the private and public key pair:
```sh
```command
wg genkey | tee phone1.key | wg pubkey > phone1.pub
```
I can then use those keys to assemble the config for the phone:
```sh
```cfg
# /etc/wireguard/clients/phone1.conf
[Interface]
PrivateKey = {PHONE1_PRIVATE_KEY}
@ -440,19 +453,19 @@ Endpoint = {GCP_PUBLIC_IP}:51820
```
I'll also add the interface address and corresponding public key to a new `[Peer]` section of `/etc/wireguard/wg0.conf`:
```sh
```cfg
[Peer]
PublicKey = {PHONE1_PUBLIC_KEY}
AllowedIPs = 10.200.200.4/32
```
And reload the WireGuard config:
```sh
```commandroot
wg syncconf wg0 <(wg-quick strip wg0)
```
Back in the `clients/` directory, I can use `qrencode` to render the phone configuration file (keys and all!) as a QR code:
```sh
```commandroot
qrencode -t ansiutf8 < phone1.conf
```
![QR code config](20211028_qrcode_config.png)
@ -465,7 +478,7 @@ I can even access my vSphere lab environment - not that it offers a great mobile
Before moving on too much further, though, I'm going to clean up the keys and client config file that I generated on the GCP instance. It's not great hygiene to keep a private key stored on the same system it's used to access.
```sh
```commandroot
rm -f /etc/wireguard/clients/*
```

View file

@ -31,9 +31,8 @@ It took a bit of fumbling, but this article describes what it took to get a Vagr
### Install the prerequisites
There are a few packages which need to be installed before we can move on to the Vagrant-specific stuff. It's quite possible that these are already on your system... but if they *aren't* already present you'll have a bad problem[^problem].
```shell
sudo apt update
sudo apt install \
```command-session
sudo apt update && sudo apt install \
build-essential \
gpg \
lsb-release \
@ -43,41 +42,41 @@ sudo apt install \
[^problem]: and [will not go to space today](https://xkcd.com/1133/).
I'll be configuring Vagrant to use [`libvirt`](https://libvirt.org/) to interface with the [Kernel Virtual Machine (KVM)](https://www.linux-kvm.org/page/Main_Page) virtualization solution (rather than something like VirtualBox that would bring more overhead) so I'll need to install some packages for that as well:
```shell
```command
sudo apt install virt-manager libvirt-dev
```
And to avoid having to `sudo` each time I interact with `libvirt` I'll add myself to that group:
```shell
```command
sudo gpasswd -a $USER libvirt ; newgrp libvirt
```
And to avoid [this issue](https://github.com/virt-manager/virt-manager/issues/333) I'll make a tweak to the `qemu.conf` file:
```shell
```command
echo "remember_owner = 0" | sudo tee -a /etc/libvirt/qemu.conf
sudo systemctl restart libvirtd
```
I'm also going to use `rsync` to share a [synced folder](https://developer.hashicorp.com/vagrant/docs/synced-folders/basic_usage) between the host and the VM guest so I'll need to make sure that's installed too:
```shell
```command
sudo apt install rsync
```
### Install Vagrant
With that out of the way, I'm ready to move on to the business of installing Vagrant. I'll start by adding the HashiCorp repository:
```shell
```command
wget -O- https://apt.releases.hashicorp.com/gpg | gpg --dearmor | sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
```
I'll then install the Vagrant package:
```shell
```command
sudo apt update
sudo apt install vagrant
```
I also need to install the [`vagrant-libvirt` plugin](https://github.com/vagrant-libvirt/vagrant-libvirt) so that Vagrant will know how to interact with `libvirt`:
```shell
```command
vagrant plugin install vagrant-libvirt
```
@ -87,13 +86,13 @@ Now I can get to the business of creating my first VM with Vagrant!
Vagrant VMs are distributed as Boxes, and I can browse some published Boxes at [app.vagrantup.com/boxes/search?provider=libvirt](https://app.vagrantup.com/boxes/search?provider=libvirt) (applying the `provider=libvirt` filter so that I only see Boxes which will run on my chosen virtualization provider). For my first VM, I'll go with something light and simple: [`generic/alpine38`](https://app.vagrantup.com/generic/boxes/alpine38).
So I'll create a new folder to contain the Vagrant configuration:
```shell
```command
mkdir vagrant-alpine
cd vagrant-alpine
```
And since I'm referencing a Vagrant Box which is published on Vagrant Cloud, downloading the config is as simple as:
```shell
```command
vagrant init generic/alpine38
```
@ -106,7 +105,7 @@ the comments in the Vagrantfile as well as documentation on
```
Before I `vagrant up` the joint, I do need to make a quick tweak to the default Vagrantfile, which is what tells Vagrant how to configure the VM. By default, Vagrant will try to create a synced folder using NFS and will throw a nasty error when that (inevitably[^inevitable]) fails. So I'll open up the Vagrantfile to review and edit it:
```shell
```command
vim Vagrantfile
```
@ -135,8 +134,8 @@ end
```
With that, I'm ready to fire up this VM with `vagrant up`! Vagrant will look inside `Vagrantfile` to see the config, pull down the `generic/alpine38` Box from Vagrant Cloud, boot the VM, configure it so I can SSH in to it, and mount the synced folder:
```shell
; vagrant up
```command-session
vagrant up
Bringing machine 'default' up with 'libvirt' provider...
==> default: Box 'generic/alpine38' could not be found. Attempting to find and install...
default: Box Provider: libvirt
@ -161,8 +160,8 @@ Bringing machine 'default' up with 'libvirt' provider...
```
And then I can use `vagrant ssh` to log in to the new VM:
```shell
; vagrant ssh
```command-session
vagrant ssh
alpine38:~$ cat /etc/os-release
NAME="Alpine Linux"
ID=alpine
@ -173,19 +172,19 @@ BUG_REPORT_URL="http://bugs.alpinelinux.org"
```
I can also verify that the synced folder came through as expected:
```shell
alpine38:~$ ls -l /vagrant
```command-session
ls -l /vagrant
total 4
-rw-r--r-- 1 vagrant vagrant 3117 Feb 20 15:51 Vagrantfile
```
Once I'm finished poking at this VM, shutting it down is as easy as:
```shell
```command
vagrant halt
```
And if I want to clean up and remove all traces of the VM, that's just:
```shell
```command
vagrant destroy
```
@ -201,7 +200,7 @@ Windows 11 makes for a pretty hefty VM which will require significant storage sp
{{% /notice %}}
Again, I'll create a new folder to hold the Vagrant configuration and do a `vagrant init`:
```shell
```command
mkdir vagrant-win11
cd vagrant-win11
vagrant init oopsme/windows11-22h2
@ -221,22 +220,22 @@ end
[^ram]: Note here that `libvirt.memory` is specified in MB. Windows 11 boots happily with 4096 MB of RAM.... and somewhat less so with just 4 MB. *Ask me how I know...*
Now it's time to bring it up. This one's going to take A While as it syncs the ~12GB Box first.
```shell
```command
vagrant up
```
Eventually it should spit out that lovely **Machine booted and ready!** message and I can log in! I *can* do a `vagrant ssh` again to gain a shell in the Windows environment, but I'll probably want to interact with those sweet sweet graphics. That takes a little bit more effort.
First, I'll use `virsh -c qemu:///system list` to see the running VM(s):
```shell
; virsh -c qemu:///system list
```command-session
virsh -c qemu:///system list
Id Name State
---------------------------------------
10 vagrant-win11_default running
```
Then I can tell `virt-viewer` that I'd like to attach a session there:
```shell
```command
virt-viewer -c qemu:///system -a vagrant-win11_default
```

View file

@ -59,7 +59,7 @@ Set-Service -Name sshd -StartupType Automatic -Status Running
#### A quick test
At this point, I can log in to the server via SSH and confirm that I can create and delete records in my DNS zone:
```powershell
$ ssh vra@win02.lab.bowdre.net
ssh vra@win02.lab.bowdre.net
vra@win02.lab.bowdre.net's password:
Windows PowerShell
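# (Illustrative sketch only - the zone matches this lab, but the record name and IP are made up:)
PS C:\Users\vra> Add-DnsServerResourceRecordA -ZoneName 'lab.bowdre.net' -Name 'testy' -IPv4Address '172.16.30.99'
PS C:\Users\vra> Remove-DnsServerResourceRecord -ZoneName 'lab.bowdre.net' -RRType 'A' -Name 'testy' -Force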
@ -111,7 +111,7 @@ resources:
```
So here's the complete cloud template that I've been working on:
```yaml
```yaml {linenos=true}
formatVersion: 1
inputs:
site:
@ -280,7 +280,7 @@ Now we're ready for the good part: inserting a new scriptable task into the work
![Task inputs](20210809_task_inputs.png)
And here's the JavaScript for the task:
```js
```js {linenos=true}
// JavaScript: Create DNS Record task
// Inputs: inputProperties (Properties), dnsServers (Array/string), sshHost (string), sshUser (string), sshPass (secureString), supportedDomains (Array/string)
// Outputs: None
@ -341,7 +341,7 @@ The schema will include a single scriptable task:
And it's going to be *pretty damn similar* to the other one:
```js
```js {linenos=true}
// JavaScript: Delete DNS Record task
// Inputs: inputProperties (Properties), dnsServers (Array/string), sshHost (string), sshUser (string), sshPass (secureString), supportedDomains (Array/string)
// Outputs: None
@ -396,8 +396,8 @@ Once the deployment completes, I go back into vRO, find the most recent item in
![Workflow success!](20210813_workflow_success.png)
And I can run a quick query to make sure that name actually resolves:
```shell
dig +short bow-ttst-xxx023.lab.bowdre.net A
```command-session
dig +short bow-ttst-xxx023.lab.bowdre.net A
172.16.30.10
```
@ -410,8 +410,8 @@ Again, I'll check the **Workflow Runs** in vRO to see that the deprovisioning ta
![VM Deprovisioning workflow](20210813_workflow_deletion.png)
And I can `dig` a little more to make sure the name doesn't resolve anymore:
```shell
dig +short bow-ttst-xxx023.lab.bowdre.net A
```command-session
dig +short bow-ttst-xxx023.lab.bowdre.net A
```

View file

@ -42,13 +42,13 @@ I'm going to use the [Docker setup](https://docs.ntfy.sh/install/#docker) on a s
#### Ntfy in Docker
So I'll start by creating a new directory at `/opt/ntfy/` to hold the goods, and create a compose config.
```shell
$ sudo mkdir -p /opt/ntfy
$ sudo vim /opt/ntfy/docker-compose.yml
```command
sudo mkdir -p /opt/ntfy
sudo vim /opt/ntfy/docker-compose.yml
```
`/opt/ntfy/docker-compose.yml`:
```yaml
```yaml {linenos=true}
# /opt/ntfy/docker-compose.yml
version: "2.3"
services:
@ -78,8 +78,8 @@ This config will create/mount folders in the working directory to store the ntfy
I can go ahead and bring it up:
```shell
$ sudo docker-compose up -d
```command-session
sudo docker-compose up -d
Creating network "ntfy_default" with the default driver
Pulling ntfy (binwiederhier/ntfy:)...
latest: Pulling from binwiederhier/ntfy
@ -92,8 +92,8 @@ Creating ntfy ... done
#### Caddy Reverse Proxy
I'll also want to add [the following](https://docs.ntfy.sh/config/#nginxapache2caddy) to my Caddy config:
`/etc/caddy/Caddyfile`:
```
```caddyfile {linenos=true}
# /etc/caddy/Caddyfile
ntfy.runtimeterror.dev, http://ntfy.runtimeterror.dev {
reverse_proxy localhost:2586
@ -109,8 +109,8 @@ ntfy.runtimeterror.dev, http://ntfy.runtimeterror.dev {
```
And I'll restart Caddy to apply the config:
```shell
$ sudo systemctl restart caddy
```command
sudo systemctl restart caddy
```
Now I can point my browser to `https://ntfy.runtimeterror.dev` and see the web interface:
@ -121,8 +121,8 @@ I can subscribe to a new topic:
![Subscribing to a public topic](subscribe_public_topic.png)
And publish a message to it:
```shell
$ curl -d "Hi" https://ntfy.runtimeterror.dev/testy
```command-session
curl -d "Hi" https://ntfy.runtimeterror.dev/testy
{"id":"80bUl6cKwgBP","time":1694981305,"expires":1695024505,"event":"message","topic":"testy","message":"Hi"}
```
@ -134,16 +134,16 @@ Which will then show up as a notification in my browser:
So now I've got my own ntfy server, and I've verified that it works for unauthenticated notifications. I don't really want to operate *anything* on the internet without requiring authentication, though, so I'm going to configure ntfy to prevent unauthenticated reads and writes.
I'll start by creating a `server.yml` config file which will be mounted into the container. This config will specify where to store the user database and switch the default ACL to `deny-all`:
`/opt/ntfy/etc/ntfy/server.yml`:
```yaml
# /opt/ntfy/etc/ntfy/server.yml
auth-file: "/var/lib/ntfy/user.db"
auth-default-access: "deny-all"
base-url: "https://ntfy.runtimeterror.dev"
```
I can then restart the container and try again to subscribe to the same (or any other) topic:
```shell
$ sudo docker-compose down && sudo docker-compose up -d
```command
sudo docker-compose down && sudo docker-compose up -d
```
@ -151,31 +151,35 @@ Now I get prompted to log in:
![Login prompt](login_required.png)
I'll need to use the ntfy CLI to create/manage entries in the user DB, and that means first grabbing a shell inside the container:
```shell
$ sudo docker exec -it ntfy /bin/sh
```command
sudo docker exec -it ntfy /bin/sh
```
For now, I'm going to create three users: one as an administrator, one as a "writer", and one as a "reader". I'll be prompted for a password for each:
```shell
$ ntfy user add --role=admin administrator
```command-session
ntfy user add --role=admin administrator
user administrator added with role admin
$ ntfy user add writer
```
```command-session
ntfy user add writer
user writer added with role user
$ ntfy user add reader
```
```command-session
ntfy user add reader
user reader added with role user
```
The admin user has global read+write access, but right now the other two can't do anything. Let's make it so that `writer` can write to all topics, and `reader` can read from all topics:
```shell
$ ntfy access writer '*' write
$ ntfy access reader '*' read
```command
ntfy access writer '*' write
ntfy access reader '*' read
```
I could lock these down further by selecting specific topic names instead of `'*'` but this will do fine for now.
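For example, restricting `writer` to just the `server_alerts` topic (which comes into play later) would look something like:
```command
ntfy access writer server_alerts write
```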
Let's go ahead and verify the access as well:
```shell
$ ntfy access
```command-session
ntfy access
user administrator (role: admin, tier: none)
- read-write access to all topics (admin role)
user reader (role: user, tier: none)
@ -188,14 +192,14 @@ user * (role: anonymous, tier: none)
```
While I'm at it, I also want to configure an access token to be used with the `writer` account. I'll be able to use that instead of username+password when publishing messages.
```shell
$ ntfy token add writer
```command-session
ntfy token add writer
token tk_mm8o6cwxmox11wrnh8miehtivxk7m created for user writer, never expires
```
I can go back to the web, subscribe to the `testy` topic again using the `reader` credentials, and then test sending an authenticated notification with `curl`:
```shell
$ curl -H "Authorization: Bearer tk_mm8o6cwxmox11wrnh8miehtivxk7m" \
```command-session
curl -H "Authorization: Bearer tk_mm8o6cwxmox11wrnh8miehtivxk7m" \
-d "Once more, with auth!" \
https://ntfy.runtimeterror.dev/testy
{"id":"0dmX9emtehHe","time":1694987274,"expires":1695030474,"event":"message","topic":"testy","message":"Once more, with auth!"}
@ -227,9 +231,9 @@ curl \
Note that I'm using a new topic name now: `server_alerts`. Topics are automatically created when messages are posted to them. I just need to make sure to subscribe to the topic in the web UI (or mobile app) so that I can receive these notifications.
Okay, now let's make it executable and then give it a quick test:
```shell
$ chmod +x /usr/local/bin/ntfy_push.sh
$ /usr/local/bin/ntfy_push.sh "Script Test" "This is a test from the magic script I just wrote."
```command
chmod +x /usr/local/bin/ntfy_push.sh
/usr/local/bin/ntfy_push.sh "Script Test" "This is a test from the magic script I just wrote."
```
![Script test](script_test.png)
@ -248,14 +252,14 @@ MESSAGE="System boot complete"
```
And this one should be executable as well:
```shell
$ chmod +x /usr/local/bin/ntfy_boot_complete.sh
```command
chmod +x /usr/local/bin/ntfy_boot_complete.sh
```
##### Service Definition
Finally I can create and register the service definition so that the script will run at each system boot.
`/etc/systemd/system/ntfy_boot_complete.service`:
```
```cfg
[Unit]
After=network.target
@ -266,7 +270,7 @@ ExecStart=/usr/local/bin/ntfy_boot_complete.sh
WantedBy=default.target
```
```shell
```command
sudo systemctl daemon-reload
sudo systemctl enable --now ntfy_boot_complete.service
```
@ -285,8 +289,8 @@ Enabling ntfy as a notification handler is pretty straight-forward, and it will
##### Notify Configuration
I'll add ntfy to Home Assistant by using the [RESTful Notifications](https://www.home-assistant.io/integrations/notify.rest/) integration. For that, I just need to update my instance's `configuration.yaml` to configure the connection.
`configuration.yaml`:
```yaml
```yaml {linenos=true}
# configuration.yaml
notify:
- name: ntfy
platform: rest
@ -302,6 +306,7 @@ notify:
The `Authorization` line references a secret stored in `secrets.yaml`:
```yaml
# secrets.yaml
ntfy_token: Bearer tk_mm8o6cwxmox11wrnh8miehtivxk7m
```

View file

@ -51,13 +51,13 @@ Running `tanzu completion --help` will tell you what's needed, and you can just
```
So to get the completions to load automatically whenever you start a `bash` shell, run:
```shell
```command
tanzu completion bash > $HOME/.tanzu/completion.bash.inc
printf "\n# Tanzu shell completion\nsource '$HOME/.tanzu/completion.bash.inc'\n" >> $HOME/.bash_profile
```
For a `zsh` shell, it's:
```shell
```command
echo "autoload -U compinit; compinit" >> ~/.zshrc
tanzu completion zsh > "${fpath[1]}/_tanzu"
```

View file

@ -85,7 +85,7 @@ Let's start with the gear (hardware and software) I needed to make this work:
The very first task is to write the required firmware image (download [here](https://github.com/jaredmcneill/quartz64_uefi/releases)) to a micro SD card. I used a 64GB card that I had lying around but you could easily get by with a *much* smaller one; the firmware image is tiny, and the card can't be used for storing anything else. Since I'm doing this on a Chromebook, I'll be using the [Chromebook Recovery Utility (CRU)](https://chrome.google.com/webstore/detail/chromebook-recovery-utili/pocpnlppkickgojjlmhdmidojbmbodfm) for writing the images to external storage as described [in another post](/burn-an-iso-to-usb-with-the-chromebook-recovery-utility/).
After downloading [`QUARTZ64_EFI.img.gz`](https://github.com/jaredmcneill/quartz64_uefi/releases/download/2022-07-20/QUARTZ64_EFI.img.gz), I need to get it into a format recognized by CRU and, in this case, that means extracting the gzipped archive and then compressing the `.img` file into a standard `.zip`:
```
```command
gunzip QUARTZ64_EFI.img.gz
zip QUARTZ64_EFI.img.zip QUARTZ64_EFI.img
```
@ -98,7 +98,7 @@ I can then write it to the micro SD card by opening CRU, clicking on the gear ic
I'll also need to prepare the ESXi installation media (download [here](https://customerconnect.vmware.com/downloads/get-download?downloadGroup=ESXI-ARM)). For that, I'll be using a 256GB USB drive. Due to the limited storage options on the Quartz64, I'll be installing ESXi onto the same drive I use to boot the installer so, in this case, the more storage the better. By default, ESXi 7.0 will consume up to 128GB for the new `ESX-OSData` partition; whatever is leftover will be made available as a VMFS datastore. That could be problematic given the unavailable/flaky USB support of the Quartz64. (While you *can* install ESXi onto a smaller drive, down to about ~20GB, the lack of additional storage on this hardware makes it pretty important to take advantage of as much space as you can.)
In any case, to make the downloaded `VMware-VMvisor-Installer-7.0-20133114.aarch64.iso` writeable with CRU all I need to do is add `.bin` to the end of the filename:
```
```command
mv VMware-VMvisor-Installer-7.0-20133114.aarch64.iso{,.bin}
```
@ -201,12 +201,12 @@ As I mentioned earlier, my initial goal is to deploy a Tailscale node on my new
#### Deploying Photon OS
VMware provides Photon in a few different formats, as described on the [download page](https://github.com/vmware/photon/wiki/Downloading-Photon-OS). I'm going to use the "OVA with virtual hardware v13 arm64" version so I'll kick off that download of `photon_uefi.ova`. I'm actually going to download that file straight to my `deb01` Linux VM:
```shell
```command
wget https://packages.vmware.com/photon/4.0/Rev2/ova/photon_uefi.ova
```
and then spawn a quick Python web server to share it out:
```shell
python3 -m http.server
```command-session
python3 -m http.server
Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...
```
@ -232,12 +232,12 @@ The default password for Photon's `root` user is `changeme`. You'll be forced to
![First login, and the requisite password change](first_login.png)
Now that I'm in, I'll set the hostname appropriately:
```bash
```commandroot
hostnamectl set-hostname pho01
```
For now, the VM pulled an IP from DHCP but I would like to configure that statically instead. To do that, I'll create a new interface file:
```bash
```commandroot-session
cat > /etc/systemd/network/10-static-en.network << "EOF"
[Match]
@ -251,7 +251,8 @@ DHCP = no
IPForward = yes
EOF
```
```commandroot
chmod 644 /etc/systemd/network/10-static-en.network
systemctl restart systemd-networkd
```
@ -259,21 +260,23 @@ systemctl restart systemd-networkd
I'm including `IPForward = yes` to [enable IP forwarding](https://tailscale.com/kb/1104/enable-ip-forwarding/) for Tailscale.
With networking sorted, it's probably a good idea to check for and apply any available updates:
```bash
```commandroot
tdnf update -y
```
I'll also go ahead and create a normal user account (with sudo privileges) for me to use:
```bash
```commandroot
useradd -G wheel -m john
passwd john
```
Now I can use SSH to connect to the VM and ditch the web console:
```bash
ssh pho01.lab.bowdre.net
```command-session
ssh pho01.lab.bowdre.net
Password:
john@pho01 [ ~ ]$ sudo whoami
```
```command-session
sudo whoami
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:
@ -292,43 +295,44 @@ Looking good! I'll now move on to the justification[^justification] for this ent
#### Installing Tailscale
If I *weren't* doing this on hard mode, I could use Tailscale's [install script](https://tailscale.com/download) like I do on every other Linux system. Hard mode is what I do though, and the installer doesn't directly support Photon OS. I'll instead consult the [manual install instructions](https://tailscale.com/download/linux/static) which tell me to download the appropriate binaries from [https://pkgs.tailscale.com/stable/#static](https://pkgs.tailscale.com/stable/#static). So I'll grab the link for the latest `arm64` build and pull that down to the VM:
```bash
```command
curl https://pkgs.tailscale.com/stable/tailscale_1.22.2_arm64.tgz --output tailscale_arm64.tgz
```
Then I can unpack it:
```bash
```command
sudo tdnf install tar
tar xvf tailscale_arm64.tgz
cd tailscale_1.22.2_arm64/
```
So I've got the `tailscale` and `tailscaled` binaries as well as some sample service configs in the `systemd` directory:
```bash
john@pho01 [ ~/tailscale_1.22.2_arm64 ]$
.:
```command-session
ls
total 32288
drwxr-x--- 2 john users 4096 Mar 18 02:44 systemd
-rwxr-x--- 1 john users 12187139 Mar 18 02:44 tailscale
-rwxr-x--- 1 john users 20866538 Mar 18 02:44 tailscaled
./systemd:
```
```command-session
ls ./systemd
total 8
-rw-r----- 1 john users 287 Mar 18 02:44 tailscaled.defaults
-rw-r----- 1 john users 674 Mar 18 02:44 tailscaled.service
```
Dealing with the binaries is straight-forward. I'll drop them into `/usr/bin/` and `/usr/sbin/` (respectively) and set the file permissions:
```bash
```command
sudo install -m 755 tailscale /usr/bin/
sudo install -m 755 tailscaled /usr/sbin/
```
Then I'll descend to the `systemd` folder and see what's up:
```bash
john@pho01 [ ~/tailscale_1.22.2_arm64/ ]$ cd systemd/
john@pho01 [ ~/tailscale_1.22.2_arm64/systemd ]$ cat tailscaled.defaults
```command
cd systemd/
```
```command-session
cat tailscaled.defaults
# Set the port to listen on for incoming VPN packets.
# Remote nodes will automatically be informed about the new port number,
# but you might want to configure this in order to set external firewall
@ -337,8 +341,9 @@ PORT="41641"
# Extra flags you might want to pass to tailscaled.
FLAGS=""
john@pho01 [ ~/tailscale_1.22.2_arm64/systemd ]$ cat tailscaled.service
```
```command-session
cat tailscaled.service
[Unit]
Description=Tailscale node agent
Documentation=https://tailscale.com/kb/
@ -366,23 +371,23 @@ WantedBy=multi-user.target
```
`tailscaled.defaults` contains the default configuration that will be referenced by the service, and `tailscaled.service` tells me that it expects to find it at `/etc/defaults/tailscaled`. So I'll copy it there and set the perms:
```bash
```command
sudo install -m 644 tailscaled.defaults /etc/defaults/tailscaled
```
`tailscaled.service` will get dropped in `/usr/lib/systemd/system/`:
```bash
```command
sudo install -m 644 tailscaled.service /usr/lib/systemd/system/
```
Then I'll enable the service and start it:
```bash
```command
sudo systemctl enable tailscaled.service
sudo systemctl start tailscaled.service
```
And finally log in to Tailscale, including my `tag:home` tag for [ACL purposes](/secure-networking-made-simple-with-tailscale/#acls) and a route advertisement for my home network so that my other Tailscale nodes can use this one to access other devices as well:
```bash
```command
sudo tailscale up --advertise-tags "tag:home" --advertise-routes "192.168.1.0/24"
```
@ -408,7 +413,6 @@ Now I can remotely access the VM (and thus my homelab!) from any of my other Tai
### Conclusion
I actually received the Quartz64 waay back on March 2nd, and it's taken me until this week to get all the pieces in place and working the way I wanted.
{{< tweet user="johndotbowdre" id="1499194756148125701" >}}
As is so often the case, a lot of time and effort would have been saved if I had RTFM'd[^rtfm] before diving into the deep end. I definitely hadn't anticipated all the limitations that would come with the Quartz64 SBC before ordering mine. Now that it's done, though, I'm pretty pleased with the setup, and I feel like I learned quite a bit along the way. I keep reminding myself that this is still a very new hardware platform. I'm excited to see how things improve with future development efforts.
View file
@ -74,8 +74,8 @@ Success! My new ingress rules appear at the bottom of the list.
![New rules added](s5Y0rycng.png)
That gets traffic from the internet and to my instance, but the OS is still going to drop the traffic at its own firewall. I'll need to work with `iptables` to change that. (You typically use `ufw` to manage firewalls more easily on Ubuntu, but it isn't included on this minimal image and seemed to butt heads with `iptables` when I tried adding it. I eventually decided it was better to just interact with `iptables` directly). I'll start by listing the existing rules on the `INPUT` chain:
```
$ sudo iptables -L INPUT --line-numbers
```command-session
sudo iptables -L INPUT --line-numbers
Chain INPUT (policy ACCEPT)
num target prot opt source destination
1 ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
@ -87,14 +87,14 @@ num target prot opt source destination
```
Note the `REJECT all` statement at line `6`. I'll need to insert my new `ACCEPT` rules for ports `80` and `443` above that implicit deny all:
```
```command
sudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 80 -j ACCEPT
sudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 443 -j ACCEPT
```
And then I'll confirm that the order is correct:
```
$ sudo iptables -L INPUT --line-numbers
```command-session
sudo iptables -L INPUT --line-numbers
Chain INPUT (policy ACCEPT)
num target prot opt source destination
1 ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
@ -108,8 +108,8 @@ num target prot opt source destination
```
I can use `nmap` running from my local Linux environment to confirm that I can now reach those ports on the VM. (They're still "closed" since nothing is listening on the ports yet, but the connections aren't being rejected.)
```
$ nmap -Pn matrix.bowdre.net
```command-session
nmap -Pn matrix.bowdre.net
Starting Nmap 7.70 ( https://nmap.org ) at 2021-06-27 12:49 CDT
Nmap scan report for matrix.bowdre.net(150.136.6.180)
Host is up (0.086s latency).
@ -126,15 +126,15 @@ Nmap done: 1 IP address (1 host up) scanned in 8.44 seconds
Cool! Before I move on, I'll be sure to make the rules persistent so they'll be re-applied whenever `iptables` starts up:
```
$ sudo netfilter-persistent save
```command-session
sudo netfilter-persistent save
run-parts: executing /usr/share/netfilter-persistent/plugins.d/15-ip4tables save
run-parts: executing /usr/share/netfilter-persistent/plugins.d/25-ip6tables save
```
### Reverse proxy setup
I had initially planned on using `certbot` to generate Let's Encrypt certificates, and then reference the certs as needed from an `nginx` or Apache reverse proxy configuration. While researching how the [proxy would need to be configured to front Synapse](https://github.com/matrix-org/synapse/blob/master/docs/reverse_proxy.md), I found this sample `nginx` configuration:
```conf
```nginx {linenos=true}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
@ -159,7 +159,7 @@ server {
```
And this sample Apache one:
```conf
```apache {linenos=true}
<VirtualHost *:443>
SSLEngine on
ServerName matrix.example.com
@ -185,7 +185,7 @@ And this sample Apache one:
```
I also found this sample config for another web server called [Caddy](https://caddyserver.com):
```
```caddy {linenos=true}
matrix.example.com {
reverse_proxy /_matrix/* http://localhost:8008
reverse_proxy /_synapse/client/* http://localhost:8008
@ -198,7 +198,7 @@ example.com:8448 {
One of these looks much simpler than the other two. I'd never heard of Caddy so I did some quick digging, and I found that it would actually [handle the certificates entirely automatically](https://caddyserver.com/docs/automatic-https) - in addition to having a much easier config. [Installing Caddy](https://caddyserver.com/docs/install#debian-ubuntu-raspbian) wasn't too bad, either:
```sh
```command
sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo apt-key add -
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
@ -207,8 +207,8 @@ sudo apt install caddy
```
Then I just need to put my configuration into the default `Caddyfile`, including the required `.well-known` delegation piece from earlier.
```
$ sudo vi /etc/caddy/Caddyfile
```caddy {linenos=true}
# /etc/caddy/Caddyfile
matrix.bowdre.net {
reverse_proxy /_matrix/* http://localhost:8008
reverse_proxy /_synapse/client/* http://localhost:8008
@ -228,15 +228,15 @@ I set up the `bowdre.net` section to return the appropriate JSON string to tell
(I wouldn't need that section at all if I were using a separate web server for `bowdre.net`; instead, I'd basically just add that `respond /.well-known/matrix/server` line to that other server's config.)
Now to enable the `caddy` service, start it, and restart it so that it loads the new config:
```
```command
sudo systemctl enable caddy
sudo systemctl start caddy
sudo systemctl restart caddy
```
If I repeat my `nmap` scan from earlier, I'll see that the HTTP and HTTPS ports are now open. The server still isn't actually serving anything on those ports yet, but at least it's listening.
```
$ nmap -Pn matrix.bowdre.net
```command-session
nmap -Pn matrix.bowdre.net
Starting Nmap 7.70 ( https://nmap.org ) at 2021-06-27 13:44 CDT
Nmap scan report for matrix.bowdre.net (150.136.6.180)
Host is up (0.034s latency).
@ -265,56 +265,58 @@ Okay, let's actually serve something up now.
#### Docker setup
Before I can get on with [deploying Synapse in Docker](https://hub.docker.com/r/matrixdotorg/synapse), I first need to [install Docker](https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository) on the system:
```sh
```command-session
sudo apt-get install \
apt-transport-https \
ca-certificates \
curl \
gnupg \
lsb-release
```
```command
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
```
```command-session
echo \
"deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
```
```command
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io
```
I'll also [install Docker Compose](https://docs.docker.com/compose/install/#install-compose):
```sh
```command
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
```
And I'll add my `ubuntu` user to the `docker` group so that I won't have to run every docker command with `sudo`:
```
```command
sudo usermod -G docker -a ubuntu
```
I'll log out and back in so that the membership change takes effect, and then test both `docker` and `docker-compose` to make sure they're working:
```
$ docker --version
```command-session
docker --version
Docker version 20.10.7, build f0df350
$ docker-compose --version
```
```command-session
docker-compose --version
docker-compose version 1.29.2, build 5becea4c
```
#### Synapse setup
Now I'll make a place for the Synapse installation to live, including a `data` folder that will be mounted into the container:
```
```command
sudo mkdir -p /opt/matrix/synapse/data
cd /opt/matrix/synapse
```
And then I'll create the compose file to define the deployment:
```yaml
$ sudo vi docker-compose.yml
```yaml {linenos=true}
# /opt/matrix/synapse/docker-compose.yaml
services:
synapse:
container_name: "synapse"
@ -328,8 +330,8 @@ services:
Before I can fire this up, I'll need to generate an initial configuration as [described in the documentation](https://hub.docker.com/r/matrixdotorg/synapse). Here I'll specify the server name that I'd like other Matrix servers to know mine by (`bowdre.net`):
```sh
$ docker run -it --rm \
```command-session
docker run -it --rm \
-v "/opt/matrix/synapse/data:/data" \
-e SYNAPSE_SERVER_NAME=bowdre.net \
-e SYNAPSE_REPORT_STATS=yes \
@ -373,15 +375,15 @@ so that I can create a user account without fumbling with the CLI. I'll be sure
There are a bunch of other useful configurations that can be made here, but these will do to get things going for now.
Time to start it up:
```
$ docker-compose up -d
```command-session
docker-compose up -d
Creating network "synapse_default" with the default driver
Creating synapse ... done
```
And use `docker ps` to confirm that it's running:
```
$ docker ps
```command-session
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
573612ec5735 matrixdotorg/synapse "/start.py" 25 seconds ago Up 23 seconds (healthy) 8009/tcp, 127.0.0.1:8008->8008/tcp, 8448/tcp synapse
```
@ -400,6 +402,7 @@ And I can view the JSON report at the bottom of the page to confirm that it's co
"m.server": "matrix.bowdre.net:443",
"CacheExpiresAt": 0
},
}
```
Now I can fire up my [Matrix client of choice](https://element.io/get-started), specify my homeserver using its full FQDN, and [register](https://app.element.io/#/register) a new user account:
@ -414,15 +417,13 @@ All in, I'm pretty pleased with how this little project turned out, and I learne
### Update: Updating
After a while, it's probably a good idea to update both the Ubuntu server and the Synapse container running on it. Updating the server itself is as easy as:
```sh
```command
sudo apt update
sudo apt upgrade
# And, if needed:
sudo reboot
```
Here's what I do to update the container:
```sh
```bash
# Move to the working directory
cd /opt/matrix/synapse
# Pull a new version of the synapse image
View file
@ -14,32 +14,32 @@ I found myself with a sudden need for parsing a Linux server's logs to figure ou
### Find IP-ish strings
This will get you all occurrences of things which look vaguely like IPv4 addresses:
```shell
```command
grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' ACCESS_LOG.TXT
```
(It's not a perfect IP address regex since it would match things like `987.654.321.555` but it's close enough for my needs.)
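If those almost-but-not-quite matches ever become a problem, Python's standard `ipaddress` module can separate the real addresses from the impostors. Here's a rough sketch (the script name and the idea of piping the `grep` output into it are just for illustration):
```python
# validate_ips.py (hypothetical): keep only strings that parse as real IPv4 addresses.
# Usage idea: grep -o -E '...' ACCESS_LOG.TXT | python3 validate_ips.py
import ipaddress
import sys

for line in sys.stdin:
    candidate = line.strip()
    try:
        ipaddress.IPv4Address(candidate)  # raises ValueError for things like 987.654.321.555
        print(candidate)
    except ValueError:
        pass  # not a valid IPv4 address, so drop it
```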
### Filter out `localhost`
The log likely includes a LOT of traffic to/from `127.0.0.1`, so let's toss out `localhost` by piping through `grep -v "127.0.0.1"` (`-v` will do an inverse match - only return results which *don't* match the given expression):
```shell
```command
grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' ACCESS_LOG.TXT | grep -v "127.0.0.1"
```
### Count up the duplicates
Now we need to know how many times each IP shows up in the log. We can do that by passing the output through `uniq -c` (`uniq` will filter for unique entries, and the `-c` flag will return a count of how many times each result appears):
```shell
```command
grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' ACCESS_LOG.TXT | grep -v "127.0.0.1" | uniq -c
```
### Sort the results
We can use `sort` to sort the results. `-n` tells it to sort based on numeric rather than character values, and `-r` reverses the list so that the larger numbers appear at the top:
```shell
```command
grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' ACCESS_LOG.TXT | grep -v "127.0.0.1" | uniq -c | sort -n -r
```
### Top 5
And, finally, let's use `head -n 5` to only get the first five results:
```shell
```command
grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' ACCESS_LOG.TXT | grep -v "127.0.0.1" | uniq -c | sort -n -r | head -n 5
```
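As an aside, the same tally can be sketched out in a few lines of Python against the same `ACCESS_LOG.TXT`, with the bonus that `collections.Counter` counts duplicates even when they aren't adjacent in the file:
```python
# Rough Python equivalent: top 5 non-localhost IPs in ACCESS_LOG.TXT
import re
from collections import Counter

ip_pattern = re.compile(r'[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}')

with open('ACCESS_LOG.TXT') as log:
    hits = Counter(
        ip for line in log
        for ip in ip_pattern.findall(line)
        if ip != '127.0.0.1'
    )

for ip, count in hits.most_common(5):
    print(f'{count:>7} {ip}')
```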
@ -47,7 +47,7 @@ grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' ACCESS_LOG.TXT | gre
You know how old log files get rotated and compressed into files like `logname.1.gz`? I *very* recently learned that there are versions of the standard Linux text manipulation tools which can work directly on compressed log files, without having to first extract the files. I'd been doing things the hard way for years - no longer, now that I know about `zcat`, `zdiff`, `zgrep`, and `zless`!
So let's use a `for` loop to iterate through 20 of those compressed logs, and use `date -r [filename]` to get the timestamp for each log as we go:
```bash
```command
for i in {1..20}; do date -r ACCESS_LOG.$i.gz; zgrep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' ACCESS_LOG.log.$i.gz | grep -v "127.0.0.1" | uniq -c | sort -n -r | head -n 5; done
```
Nice!
View file
@ -39,8 +39,8 @@ ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verif
```
Further, attempting to pull down that URL with `curl` also failed:
```sh
root@ssc [ ~ ]# curl https://vra.lab.bowdre.net/csp/gateway/am/api/auth/discovery
```commandroot-session
curl https://vra.lab.bowdre.net/csp/gateway/am/api/auth/discovery
curl: (60) SSL certificate problem: self signed certificate in certificate chain
More details here: https://curl.se/docs/sslcerts.html
@ -61,21 +61,21 @@ So here's what I did to get things working in my homelab:
![Exporting the self-signed CA cert](20211105_export_selfsigned_ca.png)
2. Open the file in a text editor, and copy the contents into a new file on the SSC appliance. I used `~/vra.crt`.
3. Append the certificate to the end of the system `ca-bundle.crt`:
```sh
```commandroot
cat <vra.crt >> /etc/pki/tls/certs/ca-bundle.crt
```
4. Test that I can now `curl` from vRA without a certificate error:
```sh
root@ssc [ ~ ]# curl https://vra.lab.bowdre.net/csp/gateway/am/api/auth/discovery
```commandroot-session
curl https://vra.lab.bowdre.net/csp/gateway/am/api/auth/discovery
{"timestamp":1636139143260,"type":"CLIENT_ERROR","status":"400 BAD_REQUEST","error":"Bad Request","serverMessage":"400 BAD_REQUEST \"Required String parameter 'state' is not present\""}
```
5. Edit `/usr/lib/systemd/system/raas.service` to update the service definition so it will look to the `ca-bundle.crt` file by adding
```
```cfg
Environment=REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt
```
above the `ExecStart` line:
```sh
root@ssc [ ~ ]# cat /usr/lib/systemd/system/raas.service
```cfg {linenos=true,hl_lines=16}
# /usr/lib/systemd/system/raas.service
[Unit]
Description=The SaltStack Enterprise API Server
After=network.target
@ -97,7 +97,7 @@ TimeoutStopSec=90
WantedBy=multi-user.target
```
6. Stop and restart the `raas` service:
```sh
```command
systemctl daemon-reload
systemctl stop raas
systemctl start raas
@ -110,7 +110,7 @@ systemctl start raas
The steps for doing this at work with an enterprise CA were pretty similar, with just slightly-different steps 1 and 2:
1. Access the enterprise CA and download the CA chain, which came in `.p7b` format.
2. Use `openssl` to extract the individual certificates:
```sh
```command
openssl pkcs7 -inform PEM -outform PEM -in enterprise-ca-chain.p7b -print_certs > enterprise-ca-chain.pem
```
Copy it to the SSC appliance, and then pick up with Step 3 above.
View file
@ -82,10 +82,9 @@ And now I can hand out handy-dandy short links!
| Link | Description|
| --- | --- |
| [go.bowdre.net/ghia](https://go.bowdre.net/ghia) | 1974 VW Karmann Ghia project |
| [go.bowdre.net/coso](https://go.bowdre.net/coso) | Follow me on CounterSocial |
| [go.bowdre.net/conedoge](https://go.bowdre.net/conedoge) | 2014 Subaru BRZ autocross videos |
| [go.bowdre.net/matrix](https://go.bowdre.net/matrix) | Chat with me on Matrix |
| [go.bowdre.net/twits](https://go.bowdre.net/twits) | Follow me on Twitter |
| [go.bowdre.net/stadia](https://go.bowdre.net/stadia) | Game with me on Stadia |
| [go.bowdre.net/cooltechshit](https://go.bowdre.net/cooltechshit) | A collection of cool tech shit (references and resources) |
| [go.bowdre.net/stuffiuse](https://go.bowdre.net/stuffiuse) | Things that I use (and think you should use too) |
| [go.bowdre.net/shorterer](https://go.bowdre.net/shorterer) | This post! |
View file
@ -44,7 +44,7 @@ After hitting **Execute**, the Swagger UI will populate the *Responses* section
![curl request format](login_controller_3.png)
So I could easily replicate this using the `curl` utility by just copying and pasting the following into a shell:
```shell
```command-session
curl -X 'POST' \
'https://vra.lab.bowdre.net/csp/gateway/am/api/login' \
-H 'accept: */*' \
@ -175,7 +175,7 @@ As you can see, Swagger can really help to jump-start the exploration of a new A
[HTTPie](https://httpie.io/) is a handy command-line utility optimized for interacting with web APIs. This will make things easier as I dig deeper.
Installing the [Debian package](https://httpie.io/docs/cli/debian-and-ubuntu) is a piece of ~~cake~~ _pie_[^pie]:
```shell
```command
curl -SsL https://packages.httpie.io/deb/KEY.gpg | sudo apt-key add -
sudo curl -SsL -o /etc/apt/sources.list.d/httpie.list https://packages.httpie.io/deb/httpie.list
sudo apt update
@ -183,8 +183,8 @@ sudo apt install httpie
```
Once installed, running `http` will give me a quick overview of how to use this new tool:
```shell {hl_lines=[3]}
; http
```command-session
http
usage:
http [METHOD] URL [REQUEST_ITEM ...]
@ -198,12 +198,12 @@ HTTPie cleverly interprets anything passed after the URL as a [request item](htt
> Each request item is simply a key/value pair separated with the following characters: `:` (headers), `=` (data field, e.g., JSON, form), `:=` (raw data field), `==` (query parameters), `@` (file upload).
So my earlier request for an authentication token becomes:
```shell
```command
https POST vra.lab.bowdre.net/csp/gateway/am/api/login username='vra' password='********' domain='lab.bowdre.net'
```
{{% notice tip "Working with Self-Signed Certificates" %}}
If your vRA endpoint is using a self-signed or otherwise untrusted certificate, pass the HTTPie option `--verify=no` to ignore certificate errors:
```
```command
https --verify=no POST [URL] [REQUEST_ITEMS]
```
{{% /notice %}}
@ -216,12 +216,12 @@ Running that will return a bunch of interesting headers but I'm mainly intereste
```
There's the auth token[^token] that I'll need for subsequent requests. I'll store that in a variable so that it's easier to wield:
```shell
```command
token=eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6IjI4NDY0MjAzMzA2NDQwMTQ2NDQifQ.eyJpc3MiOiJDTj1QcmVsdWRlIElkZW50aXR5IFNlcnZpY2UsT1U9Q01CVSxPPVZNd2FyZSxMPVNvZmlhLFNUPVNvZmlhLEM9QkciLCJpYXQiOjE2NTQwMjQw[...]HBOQQwEepXTNAaTv9gWMKwvPzktmKWyJFmC64FGomRyRyWiJMkLy3xmvYQERwxaDj_15-ErjC6F3c2mV1qIqES2oZbEpjxar16ZVSPshIaOoWRXe5uZB21tkuwVMgZuuwgmpliG_JBa1Y6Oh0FZBbI7o0ERro9qOW-s2npz4Csv5FwcXt0fa4esbXXIKINjqZMh9NDDb23bUabSag
```
So now if I want to find out which images have been configured in vRA, I can ask:
```shell
```command
https GET vra.lab.bowdre.net/iaas/api/images "Authorization: Bearer $token"
```
{{% notice note "Request Items" %}}
@ -229,7 +229,7 @@ Remember from above that HTTPie will automatically insert key/value pairs separa
{{% /notice %}}
And I'll get back some headers followed by a JSON object detailing the defined image mappings broken up by region:
```json {hl_lines=[11,14,37,40,53,56]}
```json {linenos=true,hl_lines=[11,14,37,40,53,56]}
{
"content": [
{
@ -376,7 +376,7 @@ I'll head into **Library > Actions** to create a new action inside my `com.virtu
| `configurationName` | `string` | Name of Configuration |
| `variableName` | `string` | Name of desired variable inside Configuration |
```javascript
```javascript {linenos=true}
/*
JavaScript: getConfigValue action
Inputs: path (string), configurationName (string), variableName (string)
@ -396,7 +396,7 @@ Next, I'll create another action in my `com.virtuallypotato.utility` module whic
![vraLogin action](vraLogin_action.png)
```javascript
```javascript {linenos=true}
/*
JavaScript: vraLogin action
Inputs: none
@ -428,7 +428,7 @@ I like to clean up after myself so I'm also going to create a `vraLogout` action
|:--- |:--- |:--- |
| `token` | `string` | Auth token of the session to destroy |
```javascript
```javascript {linenos=true}
/*
JavaScript: vraLogout action
Inputs: token (string)
@ -458,7 +458,7 @@ My final "utility" action for this effort will run in between `vraLogin` and `vr
|`uri`|`string`|Path to API controller (`/iaas/api/flavor-profiles`)|
|`content`|`string`|Any additional data to pass with the request|
```javascript
```javascript {linenos=true}
/*
JavaScript: vraExecute action
Inputs: token (string), method (string), uri (string), content (string)
@ -496,7 +496,7 @@ This action will:
Other actions wanting to interact with the vRA REST API will follow the same basic formula, though with some more logic and capability baked in.
Anyway, here's my first swing:
```JavaScript
```JavaScript {linenos=true}
/*
JavaScript: vraTester action
Inputs: none
@ -513,7 +513,7 @@ Pretty simple, right? Let's see if it works:
![vraTester action](vraTester_action.png)
It did! Though that result is a bit hard to parse visually, so I'm going to prettify it a bit:
```json {hl_lines=[17,35,56,74]}
```json {linenos=true,hl_lines=[17,35,56,74]}
[
{
"tags": [],
@ -609,7 +609,7 @@ This action will basically just repeat the call that I tested above in `vraTeste
![vraGetZones action](vraGetZones_action.png)
```javascript
```javascript {linenos=true}
/*
JavaScript: vraGetZones action
Inputs: none
@ -639,7 +639,7 @@ Oh, and the whole thing is wrapped in a conditional so that the code only execut
|:--- |:--- |:--- |
| `zoneName` | `string` | The name of the Zone selected in the request form |
```javascript
```javascript {linenos=true}
/* JavaScript: vraGetImages action
Inputs: zoneName (string)
Return type: array/string
@ -708,7 +708,7 @@ Next I'll repeat the same steps to create a new `image` input. This time, though
![Binding the input](image_input.png)
The full code for my template now looks like this:
```yaml
```yaml {linenos=true}
formatVersion: 1
inputs:
zoneName:
View file
@ -50,7 +50,7 @@ I've described the [process of creating a new instance on OCI in a past post](/f
### Prepare the server
Once the server's up and running, I go through the usual steps of applying any available updates:
```bash
```command
sudo apt update
sudo apt upgrade
```
@ -58,12 +58,12 @@ sudo apt upgrade
#### Install Tailscale
And then I'll install Tailscale using their handy-dandy bootstrap script:
```bash
```command
curl -fsSL https://tailscale.com/install.sh | sh
```
When I bring up the Tailscale interface, I'll use the `--advertise-tags` flag to identify the server with an [ACL tag](https://tailscale.com/kb/1068/acl-tags/). ([Within my tailnet](/secure-networking-made-simple-with-tailscale/#acls)[^tailnet], all of my other clients are able to connect to devices bearing the `cloud` tag but `cloud` servers can only reach back to other devices for performing DNS lookups.)
```bash
```command
sudo tailscale up --advertise-tags "tag:cloud"
```
@ -72,12 +72,16 @@ sudo tailscale up --advertise-tags "tag:cloud"
#### Install Docker
Next I install Docker and `docker-compose`:
```bash
```command
sudo apt install ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
```
```command-session
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
```
```command
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io docker-compose docker-compose-plugin
```
@ -85,8 +89,8 @@ sudo apt install docker-ce docker-ce-cli containerd.io docker-compose docker-com
#### Configure firewall
This server automatically had an iptables firewall rule configured to permit SSH access. For Gitea, I'll also need to configure HTTP/HTTPS access. [As before](/federated-matrix-server-synapse-on-oracle-clouds-free-tier/#firewall-configuration), I need to be mindful of the explicit `REJECT all` rule at the bottom of the `INPUT` chain:
```bash
$ sudo iptables -L INPUT --line-numbers
```command-session
sudo iptables -L INPUT --line-numbers
Chain INPUT (policy ACCEPT)
num target prot opt source destination
1 ts-input all -- anywhere anywhere
@ -99,15 +103,15 @@ num target prot opt source destination
```
So I'll insert the new rules at line 6:
```bash
```command
sudo iptables -L INPUT --line-numbers
sudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 80 -j ACCEPT
sudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 443 -j ACCEPT
```
And confirm that it did what I wanted it to:
```bash
$ sudo iptables -L INPUT --line-numbers
```command-session
sudo iptables -L INPUT --line-numbers
Chain INPUT (policy ACCEPT)
num target prot opt source destination
1 ts-input all -- anywhere anywhere
@ -122,8 +126,8 @@ num target prot opt source destination
```
That looks good, so let's save the new rules:
```bash
$ sudo netfilter-persistent save
```command-session
sudo netfilter-persistent save
run-parts: executing /usr/share/netfilter-persistent/plugins.d/15-ip4tables save
run-parts: executing /usr/share/netfilter-persistent/plugins.d/25-ip6tables save
```
@ -139,19 +143,19 @@ I'm now ready to move on with installing Gitea itself.
I'll start with creating a `git` user. This account will be set as the owner of the data volume used by the Gitea container, but will also (perhaps more importantly) facilitate [SSH passthrough](https://docs.gitea.io/en-us/install-with-docker/#ssh-container-passthrough) into the container for secure git operations.
Here's where I create the account and also generate what will become the SSH key used by the git server:
```bash
```command
sudo useradd -s /bin/bash -m git
sudo -u git ssh-keygen -t ecdsa -C "Gitea Host Key"
```
The `git` user's SSH public key gets added as-is directly to that user's `authorized_keys` file:
```bash
```command
sudo -u git cat /home/git/.ssh/id_ecdsa.pub | sudo -u git tee -a /home/git/.ssh/authorized_keys
sudo -u git chmod 600 /home/git/.ssh/authorized_keys
```
When other users add their SSH public keys into Gitea's web UI, those will get added to `authorized_keys` with a little something extra: an alternate command to perform git actions instead of just SSH ones:
```
```cfg
command="/usr/local/bin/gitea --config=/data/gitea/conf/app.ini serv key-1",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty <user pubkey>
```
@ -160,11 +164,13 @@ No users have added their keys to Gitea just yet so if you look at `/home/git/.s
{{% /notice %}}
So I'll go ahead and create that extra command:
```bash
```command-session
cat <<"EOF" | sudo tee /usr/local/bin/gitea
#!/bin/sh
ssh -p 2222 -o StrictHostKeyChecking=no git@127.0.0.1 "SSH_ORIGINAL_COMMAND=\"$SSH_ORIGINAL_COMMAND\" $0 $@"
EOF
```
```command
sudo chmod +x /usr/local/bin/gitea
```
@ -174,26 +180,26 @@ So when I use a `git` command to interact with the server via SSH, the commands
That takes care of most of the prep work, so now I'm ready to create the `docker-compose.yaml` file which will tell Docker how to host Gitea.
I'm going to place this in `/opt/gitea`:
```bash
```command
sudo mkdir -p /opt/gitea
cd /opt/gitea
```
And I want to be sure that my new `git` user owns the `./data` directory which will be where the git contents get stored:
```bash
```command
sudo mkdir data
sudo chown git:git -R data
```
Now to create the file:
```bash
```command
sudo vi docker-compose.yaml
```
The basic contents of the file came from the [Gitea documentation for Installation with Docker](https://docs.gitea.io/en-us/install-with-docker/), but I also included some (highlighted) additional environment variables based on the [Configuration Cheat Sheet](https://docs.gitea.io/en-us/config-cheat-sheet/):
`docker-compose.yaml`:
```yaml {hl_lines=["12-13","19-31",38,43]}
```yaml {linenos=true,hl_lines=["12-13","19-31",38,43]}
version: "3"
networks:
@ -292,7 +298,7 @@ With the config in place, I'm ready to fire it up:
#### Start containers
Starting Gitea is as simple as
```bash
```command
sudo docker-compose up -d
```
which will spawn both the Gitea server as well as a `postgres` database to back it.
@ -305,7 +311,7 @@ I've [written before](/federated-matrix-server-synapse-on-oracle-clouds-free-tie
#### Install Caddy
So exactly how simple does Caddy make this? Well let's start with installing Caddy on the system:
```bash
```command
sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
@ -315,12 +321,12 @@ sudo apt install caddy
#### Configure Caddy
Configuring Caddy is as simple as creating a Caddyfile:
```bash
```command
sudo vi /etc/caddy/Caddyfile
```
Within that file, I tell it which fully-qualified domain name(s) I'd like it to respond to (and manage SSL certificates for), as well as that I'd like it to function as a reverse proxy and send the incoming traffic to the same port `3000` that's used by the Docker container:
```
```caddy
git.bowdre.net {
reverse_proxy localhost:3000
}
@ -330,7 +336,7 @@ That's it. I don't need to worry about headers or ACME configurations or anythin
#### Start Caddy
All that's left at this point is to start up Caddy:
```bash
```command
sudo systemctl enable caddy
sudo systemctl start caddy
sudo systemctl restart caddy
@ -357,25 +363,26 @@ And then I can log out and log back in with my new non-admin identity!
#### Add SSH public key
Associating a public key with my new Gitea account will allow me to easily authenticate my pushes from the command line. I can create a new SSH public/private keypair by following [GitHub's instructions](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent):
```shell
```command
ssh-keygen -t ed25519 -C "user@example.com"
```
I'll view the contents of the public key - and go ahead and copy the output for future use:
```
; cat ~/.ssh/id_ed25519.pub
```command-session
cat ~/.ssh/id_ed25519.pub
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF5ExSsQfr6pAFBEZ7yx0oljSnpnOixvp8DS26STcx2J user@example.com
```
Back in the Gitea UI, I'll click the user menu up top and select **Settings**, then the *SSH / GPG Keys* tab, and click the **Add Key** button:
![User menu](user_menu.png)
![Adding a public key](add_key.png)
I can give the key a name and then paste in that public key, and then click the lower **Add Key** button to insert the new key.
To verify that the SSH passthrough magic I [configured earlier](#prepare-git-user) is working, I can take a look at `git`'s `authorized_keys` file:
```shell{hl_lines=3}
; sudo tail -2 /home/git/.ssh/authorized_keys
```command-session
sudo tail -2 /home/git/.ssh/authorized_keys
# gitea public key
command="/usr/local/bin/gitea --config=/data/gitea/conf/app.ini serv key-3",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty,no-user-rc,restrict ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF5ExSsQfr6pAFBEZ7yx0oljSnpnOixvp8DS26STcx2J user@example.com
```
@ -388,7 +395,7 @@ I'm already limiting this server's exposure by blocking inbound SSH (except for
[Fail2ban](https://www.fail2ban.org/wiki/index.php/Main_Page) can help with that by monitoring log files for repeated authentication failures and then creating firewall rules to block the offender.
Installing Fail2ban is simple:
```shell
```command
sudo apt update
sudo apt install fail2ban
```
@ -404,22 +411,22 @@ Specifically, I'll want to watch `log/gitea.log` for messages like the following
```
So let's create that filter:
```shell
```command
sudo vi /etc/fail2ban/filter.d/gitea.conf
```
`/etc/fail2ban/filter.d/gitea.conf`:
```
```cfg
# /etc/fail2ban/filter.d/gitea.conf
[Definition]
failregex = .*(Failed authentication attempt|invalid credentials).* from <HOST>
ignoreregex =
```
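Before handing that pattern to Fail2ban, I can sanity-check it with a quick Python sketch. (Fail2ban substitutes its own host-matching group for the `<HOST>` tag; the simple IPv4 group and the made-up log line below are only stand-ins for illustration.)
```python
# Rough test of the failregex from /etc/fail2ban/filter.d/gitea.conf
import re

failregex = r'.*(Failed authentication attempt|invalid credentials).* from <HOST>'
# Fail2ban expands <HOST> itself; swap in a basic IPv4 group just for this test.
pattern = re.compile(failregex.replace('<HOST>', r'(?P<host>\d{1,3}(?:\.\d{1,3}){3})'))

# Hypothetical example of a failed Gitea login entry:
sample = '2022/07/17 21:52:26 ...SignInPost() [I] Failed authentication attempt for jane from 203.0.113.42:50914'

match = pattern.search(sample)
if match:
    print('would ban:', match.group('host'))  # would ban: 203.0.113.42
```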
Next I create the jail, which tells Fail2ban what to do:
```shell
```command
sudo vi /etc/fail2ban/jail.d/gitea.conf
```
`/etc/fail2ban/jail.d/gitea.conf`:
```
```cfg
# /etc/fail2ban/jail.d/gitea.conf
[gitea]
enabled = true
filter = gitea
@ -433,14 +440,14 @@ action = iptables-allports
This configures Fail2ban to watch the log file (`logpath`) inside the data volume mounted to the Gitea container for messages which match the pattern I just configured (`gitea`). If a system fails to log in 5 times (`maxretry`) within 1 hour (`findtime`, in seconds) then the offending IP will be banned for 1 day (`bantime`, in seconds).
Then I just need to enable and start Fail2ban:
```shell
```command
sudo systemctl enable fail2ban
sudo systemctl start fail2ban
```
To verify that it's working, I can deliberately fail to log in to the web interface and watch `/var/log/fail2ban.log`:
```shell
; sudo tail -f /var/log/fail2ban.log
```command-session
sudo tail -f /var/log/fail2ban.log
2022-07-17 21:52:26,978 fail2ban.filter [36042]: INFO [gitea] Found ${MY_HOME_IP}| - 2022-07-17 21:52:26
```
@ -470,10 +477,10 @@ The real point of this whole exercise was to sync my Obsidian vault to a Git ser
Once it's created, the new-but-empty repository gives me instructions on how I can interact with it. Note that the SSH address uses the special `git.tadpole-jazz.ts.net` Tailscale domain name which is only accessible within my tailnet.
![Emtpy repository](empty_repo.png)
![Empty repository](empty_repo.png)
Now I can follow the instructions to initialize my local Obsidian vault (stored at `~/obsidian-vault/`) as a git repository and perform my initial push to Gitea:
```shell
```command
cd ~/obsidian-vault/
git init
git add .
View file
@ -23,13 +23,13 @@ If you'd just like to import a working phpIPAM integration into your environment
Before even worrying about the SDK, I needed to [get a phpIPAM instance ready](https://phpipam.net/documents/installation/). I started with a small (1vCPU/1GB RAM/16GB HDD) VM attached to my "Home" network (`192.168.1.0/24`). I installed Ubuntu 20.04.1 LTS, and then used [this guide](https://computingforgeeks.com/install-and-configure-phpipam-on-ubuntu-debian-linux/) to install phpIPAM.
Once phpIPAM was running and accessible via the web interface, I then used `openssl` to generate a self-signed certificate to be used for the SSL API connection:
```shell
```command
sudo mkdir /etc/apache2/certificate
cd /etc/apache2/certificate/
sudo openssl req -new -newkey rsa:4096 -x509 -sha256 -days 365 -nodes -out apache-certificate.crt -keyout apache.key
```
I edited the apache config file to bind that new certificate on port 443, and to redirect requests on port 80 to port 443:
```xml
```apache {linenos=true}
<VirtualHost *:80>
ServerName ipam.lab.bowdre.net
Redirect permanent / https://ipam.lab.bowdre.net
@ -54,7 +54,8 @@ After restarting apache, I verified that hitting `http://ipam.lab.bowdre.net` re
Remember how I've got a "Home" network as well as [several internal networks](/vmware-home-lab-on-intel-nuc-9#networking) which only exist inside the lab environment? I dropped the phpIPAM instance on the Home network to make it easy to connect to, but it doesn't know how to talk to the internal networks where vRA will actually be deploying the VMs. So I added a static route to let it know that traffic to `172.16.0.0/16` would have to go through the Vyos router at `192.168.1.100`.
This is Ubuntu, so I edited `/etc/netplan/99-netcfg-vmware.yaml` to add the `routes` section at the bottom:
```yaml
```yaml {linenos=true,hl_lines="17-20"}
# /etc/netplan/99-netcfg-vmware.yaml
network:
version: 2
renderer: networkd
@ -76,13 +77,17 @@ network:
metric: 100
```
I then ran `sudo netplan apply` so the change would take immediate effect and confirmed the route was working by pinging the vCenter's interface on the `172.16.10.0/24` network:
```command
sudo netplan apply
```
john@ipam:~$ sudo netplan apply
john@ipam:~$ ip route
```command-session
ip route
default via 192.168.1.1 dev ens160 proto static
172.16.0.0/16 via 192.168.1.100 dev ens160 proto static metric 100
192.168.1.0/24 dev ens160 proto kernel scope link src 192.168.1.14
john@ipam:~$ ping 172.16.10.12
```
```command-session
ping 172.16.10.12
PING 172.16.10.12 (172.16.10.12) 56(84) bytes of data.
64 bytes from 172.16.10.12: icmp_seq=1 ttl=64 time=0.282 ms
64 bytes from 172.16.10.12: icmp_seq=2 ttl=64 time=0.256 ms
@ -94,7 +99,7 @@ rtt min/avg/max/mdev = 0.241/0.259/0.282/0.016 ms
```
Now would also be a good time to go ahead and enable cron jobs so that phpIPAM will automatically scan its defined subnets for changes in IP availability and device status. phpIPAM includes a pair of scripts in `INSTALL_DIR/functions/scripts/`: one for discovering new hosts, and the other for checking the status of previously discovered hosts. So I ran `sudo crontab -e` to edit root's crontab and pasted in these two lines to call both scripts every 15 minutes:
```
```cron
*/15 * * * * /usr/bin/php /var/www/html/phpipam/functions/scripts/discoveryCheck.php
*/15 * * * * /usr/bin/php /var/www/html/phpipam/functions/scripts/pingCheck.php
```
@ -200,7 +205,7 @@ Now that I know how to talk to phpIPAM via its REST API, it's time to figure out
I downloaded the SDK from [here](https://code.vmware.com/web/sdk/1.1.0/vmware-vrealize-automation-third-party-ipam-sdk). It's got a pretty good [README](https://github.com/jbowdre/phpIPAM-for-vRA8/blob/main/README_VMware.md) which describes the requirements (Java 8+, Maven 3, Python3, Docker, internet access) as well as how to build the package. I also consulted [this white paper](https://docs.vmware.com/en/vRealize-Automation/8.2/ipam_integration_contract_reqs.pdf) which describes the inputs provided by vRA and the outputs expected from the IPAM integration.
The README tells you to extract the .zip and make a simple modification to the `pom.xml` file to "brand" the integration:
```xml
```xml {linenos=true,hl_lines="2-4"}
<properties>
<provider.name>phpIPAM</provider.name>
<provider.description>phpIPAM integration for vRA</provider.description>
@ -216,7 +221,7 @@ The README tells you to extract the .zip and make a simple modification to the `
You can then kick off the build with `mvn package -PcollectDependencies -Duser.id=${UID}`, which will (eventually) spit out `./target/phpIPAM.zip`. You can then [import the package to vRA](https://docs.vmware.com/en/vRealize-Automation/8.3/Using-and-Managing-Cloud-Assembly/GUID-410899CA-1B02-4507-96AD-DFE622D2DD47.html) and test it against the `httpbin.org` hostname to validate that the build process works correctly.
You'll notice that the form includes fields for Username, Password, and Hostname; we'll also need to specify the API app ID. This can be done by editing `./src/main/resources/endpoint-schema.json`. I added an `apiAppId` field:
```json
```json {linenos=true,hl_lines=[12,38]}
{
"layout":{
"pages":[
@ -316,7 +321,7 @@ Example payload:
```
The `do_validate_endpoint` function has a handy comment letting us know that's where we'll drop in our code:
```python
```python {linenos=true}
def do_validate_endpoint(self, auth_credentials, cert):
# Your implemention goes here
@ -327,7 +332,7 @@ def do_validate_endpoint(self, auth_credentials, cert):
response = requests.get("https://" + self.inputs["endpointProperties"]["hostName"], verify=cert, auth=(username, password))
```
The example code gives us a nice start at how we'll get our inputs from vRA. So let's expand that a bit:
```python
```python {linenos=true}
def do_validate_endpoint(self, auth_credentials, cert):
# Build variables
username = auth_credentials["privateKeyId"]
@ -336,19 +341,19 @@ def do_validate_endpoint(self, auth_credentials, cert):
apiAppId = self.inputs["endpointProperties"]["apiAppId"]
```
As before, we'll construct the "base" URI by inserting the `hostname` and `apiAppId`, and we'll combine the `username` and `password` into our `auth` variable:
```python
```python {linenos=true}
uri = f'https://{hostname}/api/{apiAppId}/'
auth = (username, password)
```
I realized that I'd be needing to do the same authentication steps for each one of these operations, so I created a new `auth_session()` function to do the heavy lifting. Other operations will also need to return the authorization token but for this run we really just need to know whether the authentication was successful, which we can do by checking `req.status_code`.
```python
```python {linenos=true}
def auth_session(uri, auth, cert):
auth_uri = f'{uri}/user/'
req = requests.post(auth_uri, auth=auth, verify=cert)
return req
```
And we'll call that function from `do_validate_endpoint()`:
```python
```python {linenos=true}
# Test auth connection
try:
response = auth_session(uri, auth, cert)
@ -367,7 +372,7 @@ After completing each operation, run `mvn package -PcollectDependencies -Duser.i
Confirm that everything worked correctly by hopping over to the **Extensibility** tab, selecting **Action Runs** on the left, and changing the **User Runs** filter to say *Integration Runs*.
![Extensibility action runs](e4PTJxfqH.png)
Select the newest `phpIPAM_ValidateEndpoint` action and make sure it has a happy green *Completed* status. You can also review the Inputs to make sure they look like what you expected:
```json
```json {linenos=true}
{
"__metadata": {
"headers": {
@ -394,7 +399,7 @@ That's one operation in the bank!
### Step 6: 'Get IP Ranges' action
So vRA can authenticate against phpIPAM; next, let's actually query to get a list of available IP ranges. This happens in `./src/main/python/get_ip_ranges/source.py`. We'll start by pulling over our `auth_session()` function and flesh it out a bit more to return the authorization token:
```python
```python {linenos=true}
def auth_session(uri, auth, cert):
auth_uri = f'{uri}/user/'
req = requests.post(auth_uri, auth=auth, verify=cert)
@ -404,7 +409,7 @@ def auth_session(uri, auth, cert):
return token
```
We'll then modify `do_get_ip_ranges()` with our needed variables, and then call `auth_session()` to get the necessary token:
```python
```python {linenos=true}
def do_get_ip_ranges(self, auth_credentials, cert):
# Build variables
username = auth_credentials["privateKeyId"]
@ -418,7 +423,7 @@ def do_get_ip_ranges(self, auth_credentials, cert):
token = auth_session(uri, auth, cert)
```
We can then query for the list of subnets, just like we did earlier:
```python
```python {linenos=true}
# Request list of subnets
subnet_uri = f'{uri}/subnets/'
ipRanges = []
@ -429,7 +434,7 @@ I decided to add the extra `filter_by=isPool&filter_value=1` argument to the que
{{% notice note "Update" %}}
I now filter for networks identified by the designated custom field like so:
```python
```python {linenos=true}
# Request list of subnets
subnet_uri = f'{uri}/subnets/'
if enableFilter == "true":
@ -447,7 +452,7 @@ I now filter for networks identified by the designated custom field like so:
Now is a good time to consult [that white paper](https://docs.vmware.com/en/VMware-Cloud-services/1.0/ipam_integration_contract_reqs.pdf) to confirm what fields I'll need to return to vRA. That lets me know that I'll need to return `ipRanges` which is a list of `IpRange` objects. `IpRange` requires `id`, `name`, `startIPAddress`, `endIPAddress`, `ipVersion`, and `subnetPrefixLength` properties. It can also accept `description`, `gatewayAddress`, and `dnsServerAddresses` properties, among others. Some of these properties are returned directly by the phpIPAM API, but others will need to be computed on the fly.
For instance, these are pretty direct matches:
```python
```python {linenos=true}
ipRange['id'] = str(subnet['id'])
ipRange['description'] = str(subnet['description'])
ipRange['subnetPrefixLength'] = str(subnet['mask'])
@ -458,32 +463,32 @@ ipRange['name'] = f"{str(subnet['subnet'])}/{str(subnet['mask'])}"
```
Working with IP addresses in Python can be greatly simplified by use of the `ipaddress` module, so I added an `import ipaddress` statement near the top of the file. I also added it to `requirements.txt` to make sure it gets picked up by the Maven build. I can then use that to figure out the IP version as well as computing reasonable start and end IP addresses:
```python
```python {linenos=true}
network = ipaddress.ip_network(str(subnet['subnet']) + '/' + str(subnet['mask']))
ipRange['ipVersion'] = 'IPv' + str(network.version)
ipRange['startIPAddress'] = str(network[1])
ipRange['endIPAddress'] = str(network[-2])
```
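For a concrete sense of what those index lookups return, here's a tiny sketch using a made-up /24 (indexing into the network skips the network and broadcast addresses):
```python
# Illustration only: network[1] and network[-2] for a hypothetical /24
import ipaddress

network = ipaddress.ip_network('172.16.20.0/24')
print(network[1])   # 172.16.20.1   (first usable host)
print(network[-2])  # 172.16.20.254 (last usable host; network[-1] would be the broadcast)
```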
I'd like to try to get the DNS servers from phpIPAM if they're defined, but I also don't want the whole thing to puke if a subnet doesn't have any. phpIPAM returns the DNS servers as a semicolon-delimited string; I need them to look like a Python list:
```python
```python {linenos=true}
try:
ipRange['dnsServerAddresses'] = [server.strip() for server in str(subnet['nameservers']['namesrv1']).split(';')]
except:
ipRange['dnsServerAddresses'] = []
```
I can also nest another API request to find which address is marked as the gateway for a given subnet:
```python
```python {linenos=true}
gw_req = requests.get(f"{subnet_uri}/{subnet['id']}/addresses/?filter_by=is_gateway&filter_value=1", headers=token, verify=cert)
if gw_req.status_code == 200:
gateway = gw_req.json()['data'][0]['ip']
ipRange['gatewayAddress'] = gateway
```
And then I merge each of these `ipRange` objects into the `ipRanges` list which will be returned to vRA:
```python
```python {linenos=true}
ipRanges.append(ipRange)
```
After rearranging a bit and tossing in some logging, here's what I've got:
```python
```python {linenos=true}
for subnet in subnets:
ipRange = {}
ipRange['id'] = str(subnet['id'])
@ -539,7 +544,7 @@ Next, we need to figure out how to allocate an IP.
### Step 7: 'Allocate IP' action
I think we've got a rhythm going now. So we'll dive into `./src/main/python/allocate_ip/source.py`, create our `auth_session()` function, and add our variables to the `do_allocate_ip()` function. I also created a new `bundle` object to hold the `uri`, `token`, and `cert` items so that I don't have to keep typing those over and over and over.
```python
```python {linenos=true}
def auth_session(uri, auth, cert):
auth_uri = f'{uri}/user/'
req = requests.post(auth_uri, auth=auth, verify=cert)
@ -566,7 +571,7 @@ def do_allocate_ip(self, auth_credentials, cert):
}
```
I left the remainder of `do_allocate_ip()` intact but modified its calls to other functions so that my new `bundle` would be included:
```python
```python {linenos=true}
allocation_result = []
try:
resource = self.inputs["resourceInfo"]
@ -581,7 +586,7 @@ except Exception as e:
raise e
```
I also added `bundle` to the `allocate()` function:
```python
```python {linenos=true}
def allocate(resource, allocation, context, endpoint, bundle):
last_error = None
@ -598,7 +603,7 @@ def allocate(resource, allocation, context, endpoint, bundle):
raise last_error
```
The heavy lifting is actually handled in `allocate_in_range()`. Right now, my implementation only supports doing a single allocation so I added an escape in case someone asks to do something crazy like allocate *2* IPs. I then set up my variables:
```python
```python {linenos=true}
def allocate_in_range(range_id, resource, allocation, context, endpoint, bundle):
if int(allocation['size']) ==1:
vmName = resource['name']
@ -612,7 +617,7 @@ def allocate_in_range(range_id, resource, allocation, context, endpoint, bundle)
raise Exception("Not implemented")
```
I construct a `payload` that will be passed to the phpIPAM API when an IP gets allocated to a VM:
```python
```python {linenos=true}
payload = {
'hostname': vmName,
'description': f'Reserved by vRA for {owner} at {datetime.now()}'
@ -621,13 +626,13 @@ payload = {
That timestamp will be handy when reviewing the reservations from the phpIPAM side of things. Be sure to add an appropriate `import datetime` statement at the top of this file, and include `datetime` in `requirements.txt`.
So now we'll construct the URI and post the allocation request to phpIPAM. We tell it which `range_id` to use and it will return the first available IP.
```python
```python {linenos=true}
allocate_uri = f'{uri}/addresses/first_free/{str(range_id)}/'
allocate_req = requests.post(allocate_uri, data=payload, headers=token, verify=cert)
allocate_req = allocate_req.json()
```
Per the white paper, we'll need to return `ipAllocationId`, `ipAddresses`, `ipRangeId`, and `ipVersion` to vRA in an `AllocationResult`. Once again, I'll leverage the `ipaddress` module for figuring the version (and, once again, I'll add it as an import and to the `requirements.txt` file).
```python
```python {linenos=true}
if allocate_req['success']:
version = ipaddress.ip_address(allocate_req['data']).version
result = {
@ -643,7 +648,7 @@ else:
return result
```
I also implemented a hasty `rollback()` in case something goes wrong and we need to undo the allocation:
```python
```python {linenos=true}
def rollback(allocation_result, bundle):
uri = bundle['uri']
token = bundle['token']
@ -671,7 +676,7 @@ Almost done!
### Step 8: 'Deallocate IP' action
The last step is to remove the IP allocation when a vRA deployment gets destroyed. It starts just like the `allocate_ip` action with our `auth_session()` function and variable initialization:
```python
```python {linenos=true}
def auth_session(uri, auth, cert):
auth_uri = f'{uri}/user/'
req = requests.post(auth_uri, auth=auth, verify=cert)
@ -707,7 +712,7 @@ def do_deallocate_ip(self, auth_credentials, cert):
}
```
And the `deallocate()` function is basically a prettier version of the `rollback()` function from the `allocate_ip` action:
```python
```python {linenos=true}
def deallocate(resource, deallocation, bundle):
uri = bundle['uri']
token = bundle['token']
@ -731,7 +736,7 @@ You can review the full code [here](https://github.com/jbowdre/phpIPAM-for-vRA8/
[2021-02-22 01:36:29,476] [INFO] - Deallocating ip 172.16.40.3 from range 12
```
And the Outputs section of the Details tab will show:
```json
```json {linenos=true}
{
"ipDeallocations": [
{
View file
@ -52,7 +52,7 @@ Now to reference these specs from a cloud template...
### Cloud template
I want to make sure that users requesting a deployment are able to pick whether or not a system should be joined to the domain, so I'm going to add that as an input option on the template:
```yaml
```yaml {linenos=true}
inputs:
[...]
adJoin:
@ -66,7 +66,7 @@ This new `adJoin` input is a boolean so it will appear on the request form as a
In the `resources` section of the template, I'll set a new property called `ignoreActiveDirectory` to be the inverse of the `adJoin` input; that will tell the AD integration not to do anything if the box to join the VM to the domain is unchecked. I'll also use `activeDirectory: relativeDN` to insert the appropriate site code into the DN where the computer object will be created. And, finally, I'll reference the `customizationSpec` and use [cloud template conditional syntax](https://docs.vmware.com/en/vRealize-Automation/8.4/Using-and-Managing-Cloud-Assembly/GUID-12F0BC64-6391-4E5F-AA48-C5959024F3EB.html#conditions-4) to apply the correct spec based on whether it's a domain or workgroup deployment. (These conditionals take the pattern `'${conditional-expression ? true-value : false-value}'`).
```yaml
```yaml {linenos=true}
resources:
Cloud_vSphere_Machine_1:
type: Cloud.vSphere.Machine
@ -81,7 +81,7 @@ resources:
Here's the current cloud template in its entirety:
```yaml
```yaml {linenos=true}
formatVersion: 1
inputs:
site:
View file
@ -54,7 +54,7 @@ Sounds pretty cool, right? I'm not going to go too deep into "how to Packer" in
## Prerequisites
### Install Packer
Before being able to *use* Packer, you have to install it. On Debian/Ubuntu Linux, this process consists of adding the HashiCorp GPG key and software repository, and then simply installing the package:
```shell
```command
curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
sudo apt-get update && sudo apt-get install packer
@ -113,7 +113,7 @@ Let's quickly run through that build process, and then I'll back up and examine
### `ubuntu-k8s.pkr.hcl`
#### `packer` block
The first block in the file tells Packer about the minimum version requirements for Packer as well as the external plugins used for the build:
```
``` {linenos=true}
// BLOCK: packer
// The Packer configuration.
packer {
@ -134,7 +134,7 @@ As I mentioned above, I'll be using the official [`vsphere` plugin](https://gith
#### `data` block
This section would be used for loading information from various data sources, but I'm only using it for the `sshkey` plugin (as mentioned above).
```text
``` {linenos=true}
// BLOCK: data
// Defines data sources.
data "sshkey" "install" {
@ -147,7 +147,7 @@ This will generate an ECDSA keypair, and the public key will include the identif
#### `locals` block
Locals are a type of Packer variable which aren't explicitly declared in the `variables.pkr.hcl` file. They only exist within the context of a single build (hence the "local" name). Typical Packer variables are static and don't support string manipulation; locals, however, do support expressions that can be used to change their value on the fly. This makes them very useful when you need to combine variables into a single string or concatenate lists of SSH public keys (such as in the highlighted lines):
```text {hl_lines=[10,17]}
```text {linenos=true,hl_lines=[10,17]}
// BLOCK: locals
// Defines local variables.
locals {
@ -182,7 +182,7 @@ The `source` block tells the `vsphere-iso` builder how to connect to vSphere, wh
You'll notice that most of this is just mapping user-defined variables (with the `var.` prefix) to properties used by `vsphere-iso`:
```text
```text {linenos=true}
// BLOCK: source
// Defines the builder configuration blocks.
source "vsphere-iso" "ubuntu-k8s" {
@ -284,7 +284,7 @@ source "vsphere-iso" "ubuntu-k8s" {
#### `build` block
This block brings everything together and executes the build. It calls the `source.vsphere-iso.ubuntu-k8s` block defined above, and also ties in a `file` and a few `shell` provisioners. `file` provisioners are used to copy files (like SSL CA certificates) into the VM, while the `shell` provisioners run commands and execute scripts. Those will be handy for the post-deployment configuration tasks, like updating and installing packages.
```text
```text {linenos=true}
// BLOCK: build
// Defines the builders to run, provisioners, and post-processors.
build {
@ -323,7 +323,7 @@ Before looking at the build-specific variable definitions, let's take a quick lo
Most of these carry descriptions with them so I won't restate them outside of the code block here:
```text
```text {linenos=true}
/*
DESCRIPTION:
Ubuntu Server 20.04 LTS variables using the Packer Builder for VMware vSphere (vsphere-iso).
@ -724,7 +724,7 @@ The full `variables.pkr.hcl` can be viewed [here](https://github.com/jbowdre/vsp
Packer automatically knows to load variables defined in files ending in `*.auto.pkrvars.hcl`. Storing the variable values separately from the declarations in `variables.pkr.hcl` makes it easier to protect sensitive values.
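One way to take advantage of that separation (assuming these Packer files are tracked in a git repo, which isn't shown here) would be to keep the value files out of version control entirely:
```command
echo '*.auto.pkrvars.hcl' >> .gitignore
```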
So I'll start by telling Packer what credentials to use for connecting to vSphere, and what vSphere resources to deploy to:
```text
```text {linenos=true}
/*
DESCRIPTION:
Ubuntu Server 20.04 LTS Kubernetes node variables used by the Packer Plugin for VMware vSphere (vsphere-iso).
@ -745,7 +745,7 @@ vsphere_folder = "_Templates"
```
I'll then describe the properties of the VM itself:
```text
```text {linenos=true}
// Guest Operating System Settings
vm_guest_os_language = "en_US"
vm_guest_os_keyboard = "us"
@ -771,7 +771,7 @@ common_remove_cdrom = true
```
Then I'll configure Packer to convert the VM to a template once the build is finished:
```text
```text {linenos=true}
// Template and Content Library Settings
common_template_conversion = true
common_content_library_name = null
@ -786,7 +786,7 @@ common_ovf_export_path = ""
```
Next, I'll tell it where to find the Ubuntu 20.04 ISO I downloaded and placed on a datastore, along with the SHA256 checksum to confirm its integrity:
```text
```text {linenos=true}
// Removable Media Settings
common_iso_datastore = "nuchost-local"
iso_url = null
@ -797,7 +797,7 @@ iso_checksum_value = "5035be37a7e9abbdc09f0d257f3e33416c1a0fb322ba860d42d74
```
And then I'll specify the VM's boot device order, as well as the boot command that will be used for loading the `cloud-init` configuration into the Ubuntu installer:
```text
```text {linenos=true}
// Boot Settings
vm_boot_order = "disk,cdrom"
vm_boot_wait = "4s"
@ -814,7 +814,7 @@ vm_boot_command = [
Once the installer is booted and running, Packer will wait until the VM is available via SSH and then use these credentials to log in. (How will it be able to log in with those creds? We'll take a look at the `cloud-init` configuration in just a minute...)
```text
```text {linenos=true}
// Communicator Settings
communicator_port = 22
communicator_timeout = "20m"
@ -832,7 +832,7 @@ ssh_keys = [
Finally, I'll create two lists of scripts that will be run on the VM once the OS install is complete. The `post_install_scripts` will be run immediately after the operating system installation. The `update-packages.sh` script will cause a reboot, and then the set of `pre_final_scripts` will do some cleanup and prepare the VM to be converted to a template.
The last bit of this file also designates the desired version of Kubernetes to be installed.
```text
```text {linenos=true}
// Provisioner Settings
post_install_scripts = [
"scripts/wait-for-cloud-init.sh",
@ -864,7 +864,7 @@ Okay, so we've covered the Packer framework that creates the VM; now let's take
See the bits that look `${ like_this }`? Those place-holders will take input from the [`locals` block of `ubuntu-k8s.pkr.hcl`](#locals-block) mentioned above. So that's how all the OS properties will get set, including the hostname, locale, LVM partition layout, username, password, and SSH keys.
```yaml
```yaml {linenos=true}
#cloud-config
autoinstall:
version: 1
@ -1068,7 +1068,7 @@ You can find all of the scripts [here](https://github.com/jbowdre/vsphere-k8s/tr
#### `wait-for-cloud-init.sh`
This simply holds up the process until the `/var/lib/cloud/instance/boot-finished` file has been created, signifying the completion of the `cloud-init` process:
```shell
```shell {linenos=true}
#!/bin/bash -eu
echo '>> Waiting for cloud-init...'
while [ ! -f /var/lib/cloud/instance/boot-finished ]; do
@ -1078,7 +1078,7 @@ done
#### `cleanup-subiquity.sh`
Next I clean up any network configs that may have been created during the install process:
```shell
```shell {linenos=true}
#!/bin/bash -eu
if [ -f /etc/cloud/cloud.cfg.d/99-installer.cfg ]; then
sudo rm /etc/cloud/cloud.cfg.d/99-installer.cfg
@ -1093,7 +1093,7 @@ fi
#### `install-ca-certs.sh`
The [`file` provisioner](#build-block) mentioned above helpfully copied my custom CA certs to the `/tmp/certs/` folder on the VM; this script will install them into the certificate store:
```shell
```shell {linenos=true}
#!/bin/bash -eu
echo '>> Installing custom certificates...'
sudo cp /tmp/certs/* /usr/local/share/ca-certificates/
@ -1106,7 +1106,7 @@ sudo /usr/sbin/update-ca-certificates
#### `disable-multipathd.sh`
This disables `multipathd`:
```shell
```shell {linenos=true}
#!/bin/bash -eu
sudo systemctl disable multipathd
echo 'Disabling multipathd'
@ -1114,7 +1114,7 @@ echo 'Disabling multipathd'
#### `disable-release-upgrade-motd.sh`
And this one disables the release upgrade notices that would otherwise be displayed upon each login:
```shell
```shell {linenos=true}
#!/bin/bash -eu
echo '>> Disabling release update MOTD...'
sudo chmod -x /etc/update-motd.d/91-release-upgrade
@ -1122,7 +1122,7 @@ sudo chmod -x /etc/update-motd.d/91-release-upgrade
#### `persist-cloud-init-net.sh`
I want to make sure that this VM keeps the same IP address following the reboot that will come in a few minutes, so I'll set a quick `cloud-init` option to help make sure that happens:
```shell
```shell {linenos=true}
#!/bin/sh -eu
echo '>> Preserving network settings...'
echo 'manual_cache_clean: True' | sudo tee -a /etc/cloud/cloud.cfg
@ -1131,7 +1131,7 @@ echo 'manual_cache_clean: True' | sudo tee -a /etc/cloud/cloud.cfg
#### `configure-sshd.sh`
Then I just set a few options for the `sshd` configuration, like disabling root login:
```shell
```shell {linenos=true}
#!/bin/bash -eu
echo '>> Configuring SSH'
sudo sed -i 's/.*PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
@ -1143,7 +1143,7 @@ sudo sed -i 's/.*PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/
This script is a little longer and takes care of all the Kubernetes-specific settings and packages that will need to be installed on the VM.
First I enable the required `overlay` and `br_netfilter` modules:
```shell
```shell {linenos=true}
#!/bin/bash -eu
echo ">> Installing Kubernetes components..."
@ -1159,7 +1159,7 @@ sudo modprobe br_netfilter
```
Then I'll make some networking tweaks to enable forwarding and bridging:
```shell
```shell {linenos=true}
# Configure networking
echo ".. configure networking"
cat << EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
@ -1172,7 +1172,7 @@ sudo sysctl --system
```
Next, set up `containerd` as the container runtime:
```shell
```shell {linenos=true}
# Setup containerd
echo ".. setup containerd"
sudo apt-get update && sudo apt-get install -y containerd apt-transport-https jq
@ -1182,7 +1182,7 @@ sudo systemctl restart containerd
```
Then disable swap:
```shell
```shell {linenos=true}
# Disable swap
echo ".. disable swap"
sudo sed -i '/[[:space:]]swap[[:space:]]/ s/^\(.*\)$/#\1/g' /etc/fstab
@ -1190,7 +1190,7 @@ sudo swapoff -a
```
Next I'll install the Kubernetes components and (crucially) `apt-mark hold` them so they won't be automatically upgraded without it being a coordinated change:
```shell
```shell {linenos=true}
# Install Kubernetes
echo ".. install kubernetes version ${KUBEVERSION}"
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
@ -1201,7 +1201,7 @@ sudo apt-mark hold kubelet kubeadm kubectl
#### `update-packages.sh`
Lastly, I'll be sure to update all installed packages (excepting the Kubernetes ones, of course), and then perform a reboot to make sure that any new kernel modules get loaded:
```shell
```shell {linenos=true}
#!/bin/bash -eu
echo '>> Checking for and installing updates...'
sudo apt-get update && sudo apt-get -y upgrade
@ -1214,7 +1214,7 @@ After the reboot, all that's left are some cleanup tasks to get the VM ready to
#### `cleanup-cloud-init.sh`
I'll start with cleaning up the `cloud-init` state:
```shell
```shell {linenos=true}
#!/bin/bash -eu
echo '>> Cleaning up cloud-init state...'
sudo cloud-init clean -l
@ -1222,7 +1222,7 @@ sudo cloud-init clean -l
#### `enable-vmware-customization.sh`
And then (re)enable the ability for VMware to customize the guest successfully:
```shell
```shell {linenos=true}
#!/bin/bash -eu
echo '>> Enabling legacy VMware Guest Customization...'
echo 'disable_vmware_customization: true' | sudo tee -a /etc/cloud/cloud.cfg
@ -1231,7 +1231,7 @@ sudo vmware-toolbox-cmd config set deployPkg enable-custom-scripts true
#### `zero-disk.sh`
I'll also execute this handy script to free up unused space on the virtual disk. It works by creating a file which completely fills up the disk, and then deleting that file:
```shell
```shell {linenos=true}
#!/bin/bash -eu
echo '>> Zeroing free space to reduce disk size'
sudo sh -c 'dd if=/dev/zero of=/EMPTY bs=1M || true; sync; sleep 1; sync'
@ -1240,7 +1240,7 @@ sudo sh -c 'rm -f /EMPTY; sync; sleep 1; sync'
#### `generalize.sh`
Lastly, let's do a final run of cleaning up logs, temporary files, and unique identifiers that don't need to exist in a template. This script will also remove the SSH key with the `packer_key` identifier since that won't be needed anymore.
```shell
```shell {linenos=true}
#!/bin/bash -eu
# Prepare a VM to become a template.
@ -1293,7 +1293,7 @@ sudo rm -f /root/.bash_history
### Kick out the jams (or at least the build)
Now that all the ducks are nicely lined up, let's give them some marching orders and see what happens. All I have to do is open a terminal session to the folder containing the `.pkr.hcl` files, and then run the Packer build command:
```shell
```command
packer build -on-error=abort -force .
```

View file

@ -113,7 +113,7 @@ LDAP_GROUP_SEARCH_USER_ATTRIBUTE: DN
#### Deploying the cluster
That's the only thing I need to manually edit so now I can go ahead and create the cluster with:
```
```command
tanzu management-cluster create tce-mgmt -f tce-mgmt-deploy.yaml
```
@ -136,11 +136,12 @@ Some addons might be getting installed! Check their status by running the follow
```
I obediently follow the instructions to switch to the correct context and verify that the addons are all running:
```bash
kubectl config use-context tce-mgmt-admin@tce-mgmt
```command-session
kubectl config use-context tce-mgmt-admin@tce-mgmt
Switched to context "tce-mgmt-admin@tce-mgmt".
kubectl get apps -A
```
```command-session
kubectl get apps -A
NAMESPACE NAME DESCRIPTION SINCE-DEPLOY AGE
tkg-system antrea Reconcile succeeded 5m2s 11m
tkg-system metrics-server Reconcile succeeded 39s 11m
@ -158,21 +159,25 @@ I've got a TCE cluster now but it's not quite ready for me to authenticate with
#### Load Balancer deployment
The [guide I'm following from the TCE site](https://tanzucommunityedition.io/docs/latest/vsphere-ldap-config/) assumes that I'm using NSX-ALB in my environment, but I'm not. So, [as before](/tanzu-community-edition-k8s-homelab/#deploying-kube-vip-as-a-load-balancer), I'll need to deploy [Scott Rosenberg's `kube-vip` Carvel package](https://github.com/vrabbi/tkgm-customizations):
```bash
```command
git clone https://github.com/vrabbi/tkgm-customizations.git
cd tkgm-customizations/carvel-packages/kube-vip-package
kubectl apply -n tanzu-package-repo-global -f metadata.yml
kubectl apply -n tanzu-package-repo-global -f package.yaml
```
```command-session
cat << EOF > values.yaml
vip_range: 192.168.1.64-192.168.1.70
EOF
```
```command
tanzu package install kubevip -p kubevip.terasky.com -v 0.3.9 -f values.yaml
```
#### Modifying services to use the Load Balancer
With the load balancer in place, I can follow the TCE instruction to modify the Pinniped and Dex services to switch from the `NodePort` type to the `LoadBalancer` type so they can be easily accessed from outside of the cluster. This process starts by creating a file called `pinniped-supervisor-svc-overlay.yaml` and pasting in the following overlay manifest:
```yaml
```yaml {linenos=true}
#@ load("@ytt:overlay", "overlay")
#@overlay/match by=overlay.subset({"kind": "Service", "metadata": {"name": "pinniped-supervisor", "namespace": "pinniped-supervisor"}})
---
@ -203,8 +208,8 @@ spec:
```
This overlay will need to be inserted into the `pinniped-addon` secret which means that the contents need to be converted to a base64-encoded string:
```bash
base64 -w 0 pinniped-supervisor-svc-overlay.yaml
```command-session
base64 -w 0 pinniped-supervisor-svc-overlay.yaml
I0AgbG9hZCgi[...]==
```
{{% notice note "Avoid newlines" %}}
@ -212,14 +217,14 @@ The `-w 0` / `--wrap=0` argument tells `base64` to *not* wrap the encoded lines
{{% /notice %}}
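Before patching anything with that string, a quick round-trip can confirm the encoding is clean (just a sanity-check sketch using the same overlay file):
```command
base64 -w 0 pinniped-supervisor-svc-overlay.yaml | base64 -d | diff - pinniped-supervisor-svc-overlay.yaml
```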
I'll copy the resulting base64 string (which is much longer than the truncated form I'm using here), and paste it into the following command to patch the secret (which will be named after the management cluster name so replace the `tce-mgmt` part as appropriate):
```bash
kubectl -n tkg-system patch secret tce-mgmt-pinniped-addon -p '{"data": {"overlays.yaml": "I0AgbG9hZCgi[...]=="}}'
```command-session
kubectl -n tkg-system patch secret tce-mgmt-pinniped-addon -p '{"data": {"overlays.yaml": "I0AgbG9hZCgi[...]=="}}'
secret/tce-mgmt-pinniped-addon patched
```
I can watch as the `pinniped-supervisor` and `dexsvc` services get updated with the new service type:
```bash
kubectl get svc -A -w
```command-session
kubectl get svc -A -w
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
pinniped-supervisor pinniped-supervisor NodePort 100.65.185.82 <none> 443:31234/TCP
tanzu-system-auth dexsvc NodePort 100.70.238.106 <none> 5556:30167/TCP
@ -231,11 +236,13 @@ tanzu-system-auth dexsvc LoadBalancer 100.70.238.106
```
I'll also need to restart the `pinniped-post-deploy-job` job to account for the changes I just made; that's accomplished by simply deleting the existing job. After a few minutes a new job will be spawned automagically. I'll just watch for the new job to be created:
```bash
kubectl -n pinniped-supervisor delete jobs pinniped-post-deploy-job
```command-session
kubectl -n pinniped-supervisor delete jobs pinniped-post-deploy-job
job.batch "pinniped-post-deploy-job" deleted
```
kubectl get jobs -A -w
```command-session
kubectl get jobs -A -w
NAMESPACE NAME COMPLETIONS DURATION AGE
pinniped-supervisor pinniped-post-deploy-job 0/1 0s
pinniped-supervisor pinniped-post-deploy-job 0/1 0s
@ -247,7 +254,7 @@ pinniped-supervisor pinniped-post-deploy-job 1/1 9s 9s
Right now, I've got all the necessary components to support LDAPS authentication with my TCE management cluster but I haven't done anything yet to actually define who should have what level of access. To do that, I'll create a `ClusterRoleBinding`.
I'll toss this into a file I'll call `tanzu-admins-crb.yaml`:
```yaml
```yaml {linenos=true}
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
@ -267,23 +274,24 @@ I have a group in Active Directory called `Tanzu-Admins` which contains a group
Once applied, users within that group will be granted the `cluster-admin` role[^roles].
Let's do it:
```bash
kubectl apply -f tanzu-admins-crb.yaml
```command-session
kubectl apply -f tanzu-admins-crb.yaml
clusterrolebinding.rbac.authorization.k8s.io/tanzu-admins created
```
Thus far, I've been using the default administrator context to interact with the cluster. Now it's time to switch to the non-admin context:
```bash
tanzu management-cluster kubeconfig get
```command-session
tanzu management-cluster kubeconfig get
You can now access the cluster by running 'kubectl config use-context tanzu-cli-tce-mgmt@tce-mgmt'
kubectl config use-context tanzu-cli-tce-mgmt@tce-mgmt
```
```command-session
kubectl config use-context tanzu-cli-tce-mgmt@tce-mgmt
Switched to context "tanzu-cli-tce-mgmt@tce-mgmt".
```
After assuming the non-admin context, the next time I try to interact with the cluster it should kick off the LDAPS authentication process. It won't look like anything is happening in the terminal:
```bash
kubectl get nodes
```command-session
kubectl get nodes
```
@ -294,8 +302,8 @@ Doing so successfully will yield:
![Dex login success!](dex_login_success.png)
And the `kubectl` command will return the expected details:
```bash
kubectl get nodes
```command-session
kubectl get nodes
NAME STATUS ROLES AGE VERSION
tce-mgmt-control-plane-v8l8r Ready control-plane,master 29h v1.21.5+vmware.1
tce-mgmt-md-0-847db9ddc-5bwjs Ready <none> 28h v1.21.5+vmware.1
@ -318,8 +326,8 @@ Other users hoping to work with a Tanzu Community Edition cluster will also need
At this point, I've only configured authentication for the management cluster - not the workload cluster. The TCE community docs cover what's needed to make this configuration available in the workload cluster as well [here](https://tanzucommunityedition.io/docs/latest/vsphere-ldap-config/#configuration-steps-on-the-workload-cluster). [As before](/tanzu-community-edition-k8s-homelab/#workload-cluster), I created the deployment YAML for the workload cluster by copying the management cluster's deployment YAML and changing the `CLUSTER_NAME` and `VSPHERE_CONTROL_PLANE_ENDPOINT` values accordingly. This time I also deleted all of the `LDAP_*` and `OIDC_*` lines, but made sure to preserve the `IDENTITY_MANAGEMENT_TYPE: ldap` one.
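That copy-and-trim step might look roughly like this (a sketch only - `<new-endpoint-ip>` is a placeholder, and `sed` is just one way to make the edits):
```command
cp tce-mgmt-deploy.yaml tce-work-deploy.yaml
sed -i -e 's/^CLUSTER_NAME:.*/CLUSTER_NAME: tce-work/' -e 's/^VSPHERE_CONTROL_PLANE_ENDPOINT:.*/VSPHERE_CONTROL_PLANE_ENDPOINT: <new-endpoint-ip>/' -e '/^LDAP_/d' -e '/^OIDC_/d' tce-work-deploy.yaml
```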
I was then able to deploy the workload cluster with:
```bash
tanzu cluster create --file tce-work-deploy.yaml
```command-session
tanzu cluster create --file tce-work-deploy.yaml
Validating configuration...
Creating workload cluster 'tce-work'...
Waiting for cluster to be initialized...
@ -333,30 +341,33 @@ Workload cluster 'tce-work' created
```
Access the admin context:
```bash
tanzu cluster kubeconfig get --admin tce-work
```command-session
tanzu cluster kubeconfig get --admin tce-work
Credentials of cluster 'tce-work' have been saved
You can now access the cluster by running 'kubectl config use-context tce-work-admin@tce-work'
kubectl config use-context tce-work-admin@tce-work
```
```command-session
kubectl config use-context tce-work-admin@tce-work
Switched to context "tce-work-admin@tce-work".
```
Apply the same ClusterRoleBinding from before[^crb]:
```bash
kubectl apply -f tanzu-admins-crb.yaml
```command-session
kubectl apply -f tanzu-admins-crb.yaml
clusterrolebinding.rbac.authorization.k8s.io/tanzu-admins created
```
And finally switch to the non-admin context and log in with my AD account:
```bash
tanzu cluster kubeconfig get tce-work
```command-session
tanzu cluster kubeconfig get tce-work
You can now access the cluster by running 'kubectl config use-context tanzu-cli-tce-work@tce-work'
kubectl config use-context tanzu-cli-tce-work@tce-work
```
```command-session
kubectl config use-context tanzu-cli-tce-work@tce-work
Switched to context "tanzu-cli-tce-work@tce-work".
kubectl get nodes
```
```command-session
kubectl get nodes
NAME STATUS ROLES AGE VERSION
tce-work-control-plane-zts6r Ready control-plane,master 12m v1.21.5+vmware.1
tce-work-md-0-bcfdc4d79-vn9xb Ready <none> 11m v1.21.5+vmware.1
@ -376,8 +387,8 @@ It took me quite a bit of trial and error to get this far and (being a k8s novic
#### Checking and modifying `dex` configuration
I had a lot of trouble figuring out how to correctly format the `member:1.2.840.113556.1.4.1941:` attribute in the LDAPS config so that it wouldn't get split into multiple attributes due to the trailing colon - and it took me forever to discover that was even the issue. What eventually did the trick for me was learning that I could look at (and modify!) the configuration for the `dex` app with:
```bash
kubectl -n tanzu-system-auth edit configmaps dex
```command-session
kubectl -n tanzu-system-auth edit configmaps dex
[...]
groupSearch:
baseDN: OU=LAB,DC=lab,DC=bowdre,DC=net
@ -396,12 +407,13 @@ This let me make changes on the fly until I got a working configuration and then
#### Reviewing `dex` logs
Authentication attempts (at least on the LDAPS side of things) will show up in the logs for the `dex` pod running in the `tanzu-system-auth` namespace. This is a great place to look to see if the user isn't being found, credentials are invalid, or the groups aren't being enumerated correctly:
```bash
kubectl -n tanzu-system-auth get pods
```command-session
kubectl -n tanzu-system-auth get pods
NAME READY STATUS RESTARTS AGE
dex-7bf4f5d4d9-k4jfl 1/1 Running 0 40h
kubectl -n tanzu-system-auth logs dex-7bf4f5d4d9-k4jfl
```
```command-session
kubectl -n tanzu-system-auth logs dex-7bf4f5d4d9-k4jfl
# no such user
{"level":"info","msg":"performing ldap search OU=LAB,DC=lab,DC=bowdre,DC=net sub (\u0026(objectClass=person)(sAMAccountName=johnny))","time":"2022-03-06T22:29:57Z"}
{"level":"error","msg":"ldap: no results returned for filter: \"(\u0026(objectClass=person)(sAMAccountName=johnny))\"","time":"2022-03-06T22:29:57Z"}
@ -420,7 +432,7 @@ dex-7bf4f5d4d9-k4jfl 1/1 Running 0 40h
I couldn't figure out an elegant way to log out so that I could try authenticating as a different user, but I did discover that information about authenticated sessions gets stored in `~/.config/tanzu/pinniped/sessions.yaml`. The sessions expire after a while, but until that happens I'm able to keep on interacting with `kubectl` - and I'm not given an option to re-authenticate even if I wanted to.
So in lieu of a handy logout option, I was able to remove the cached sessions by deleting the file:
```bash
```command
rm ~/.config/tanzu/pinniped/sessions.yaml
```

View file

@ -24,7 +24,7 @@ comment: true # Disable comment if false.
When I [set up my Tanzu Community Edition environment](/tanzu-community-edition-k8s-homelab/), I did so from a Linux VM since the containerized Linux environment on my Chromebook doesn't support the `kind` bootstrap cluster used for the deployment. But now that the Kubernetes cluster is up and running, I'd like to be able to connect to it directly without the aid of a jumpbox. How do I get the appropriate cluster configuration over to my Chromebook?
The Tanzu CLI actually makes that pretty easy - once I figured out the appropriate incantation. I just needed to use the `tanzu management-cluster kubeconfig get` command on my Linux VM to export the `kubeconfig` of my management (`tce-mgmt`) cluster to a file:
```shell
```command
tanzu management-cluster kubeconfig get --admin --export-file tce-mgmt-kubeconfig.yaml
```
@ -32,8 +32,8 @@ I then used `scp` to pull the file from the VM into my local Linux environment,
Now I'm ready to import the configuration locally with `tanzu login` on my Chromebook:
```shell
tanzu login --kubeconfig ~/projects/tanzu-homelab/tanzu-setup/tce-mgmt-kubeconfig.yaml --context tce-mgmt-admin@tce-mgmt --name tce-mgmt
```command-session
tanzu login --kubeconfig ~/projects/tanzu-homelab/tanzu-setup/tce-mgmt-kubeconfig.yaml --context tce-mgmt-admin@tce-mgmt --name tce-mgmt
✔ successfully logged in to management cluster using the kubeconfig tce-mgmt
```
@ -42,12 +42,13 @@ Pass in the full path to the exported kubeconfig file. This will help the Tanzu
{{% /notice %}}
Even though that's just importing the management cluster it actually grants access to both the management and workload clusters:
```shell
tanzu cluster list
```command-session
tanzu cluster list
NAME NAMESPACE STATUS CONTROLPLANE WORKERS KUBERNETES ROLES PLAN
tce-work default running 1/1 1/1 v1.21.2+vmware.1 <none> dev
tanzu cluster get tce-work
```
```command-session
tanzu cluster get tce-work
NAME NAMESPACE STATUS CONTROLPLANE WORKERS KUBERNETES ROLES
tce-work default running 1/1 1/1 v1.21.2+vmware.1 <none>
@ -62,8 +63,9 @@ NAME READY SEVERITY RE
└─Workers
└─MachineDeployment/tce-work-md-0
└─Machine/tce-work-md-0-687444b744-crc9q True 24h
tanzu management-cluster get
```
```command-session
tanzu management-cluster get
NAME NAMESPACE STATUS CONTROLPLANE WORKERS KUBERNETES ROLES
tce-mgmt tkg-system running 1/1 1/1 v1.21.2+vmware.1 management
@ -90,24 +92,26 @@ Providers:
```
And I can then tell `kubectl` about the two clusters:
```shell
tanzu management-cluster kubeconfig get tce-mgmt --admin
```command-session
tanzu management-cluster kubeconfig get tce-mgmt --admin
Credentials of cluster 'tce-mgmt' have been saved
You can now access the cluster by running 'kubectl config use-context tce-mgmt-admin@tce-mgmt'
tanzu cluster kubeconfig get tce-work --admin
```
```command-session
tanzu cluster kubeconfig get tce-work --admin
Credentials of cluster 'tce-work' have been saved
You can now access the cluster by running 'kubectl config use-context tce-work-admin@tce-work'
```
And sure enough, there are my contexts:
```shell
kubectl config get-contexts
```command-session
kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
tce-mgmt-admin@tce-mgmt tce-mgmt tce-mgmt-admin
* tce-work-admin@tce-work tce-work tce-work-admin
kubectl get nodes -o wide
```
```command-session
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
tce-work-control-plane-vc2pb Ready control-plane,master 23h v1.21.2+vmware.1 192.168.1.132 192.168.1.132 VMware Photon OS/Linux 4.19.198-1.ph3 containerd://1.4.6
tce-work-md-0-687444b744-crc9q Ready <none> 23h v1.21.2+vmware.1 192.168.1.133 192.168.1.133 VMware Photon OS/Linux 4.19.198-1.ph3 containerd://1.4.6

View file

@ -17,7 +17,7 @@ I can, and here's how I do it.
### The Script
The following PowerShell script will let you define a list of vCenters to be accessed, securely store your credentials for each vCenter, log in to every vCenter with a single command, and also close the connections when they're no longer needed. It's also a great starting point for any other custom functions you'd like to incorporate into your PowerCLI sessions.
```powershell
```powershell {linenos=true}
# PowerCLI_Custom_Functions.ps1
# Usage:
# 0) Edit $vCenterList to reference the vCenters in your environment.

View file

@ -28,7 +28,7 @@ Now that VMware [has released](https://blogs.vmware.com/vsphere/2022/01/announci
I start off by heading to [tenable.com/products/nessus/nessus-essentials](https://www.tenable.com/products/nessus/nessus-essentials) to register for a (free!) license key which will let me scan up to 16 hosts. I'll receive the key and download link in an email, but I'm not actually going to use that link to download the Nessus binary. I've got this shiny-and-new [Tanzu Community Edition Kubernetes cluster](/tanzu-community-edition-k8s-homelab/) that could use some more real workloads so I'll instead opt for the [Docker version](https://hub.docker.com/r/tenableofficial/nessus).
Tenable provides an [example `docker-compose.yml`](https://community.tenable.com/s/article/Deploy-Nessus-docker-image-with-docker-compose) to make it easy to get started:
```yaml
```yaml {linenos=true}
version: '3.1'
services:
@ -46,7 +46,7 @@ services:
```
I can use that knowledge to craft something I can deploy on Kubernetes:
```yaml
```yaml {linenos=true}
apiVersion: v1
kind: Service
metadata:
@ -95,15 +95,15 @@ spec:
Note that I'm configuring the `LoadBalancer` to listen on port `443` and route traffic to the pod on port `8834` so that I don't have to remember to enter an oddball port number when I want to connect to the web interface.
And now I can just apply the file:
```bash
kubectl apply -f nessus.yaml
```command-session
kubectl apply -f nessus.yaml
service/nessus created
deployment.apps/nessus created
```
I'll give it a moment or two to deploy and then check on the service to figure out what IP I need to use to connect:
```bash
kubectl get svc/nessus
```command-session
kubectl get svc/nessus
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nessus LoadBalancer 100.67.16.51 192.168.1.79 443:31260/TCP 57s
```

View file

@ -28,7 +28,7 @@ I've got a [`Connect-vCenters` function](/logging-in-to-multiple-vcenter-servers
What I came up with is using `Get-Datacenter` to enumerate each virtual datacenter, and then list the VMs matching my query within:
```powershell
```powershell {linenos=true}
$linuxVms = foreach( $datacenter in ( Get-Datacenter )) {
Get-Datacenter $datacenter | Get-VM | Where { $_.ExtensionData.Config.GuestFullName -notmatch "win" -and $_.Name -notmatch "vcls" } | `
Select @{ N="Datacenter";E={ $datacenter.Name }},

View file

@ -23,7 +23,7 @@ comment: true # Disable comment if false.
We've been working lately to use [HashiCorp Packer](https://www.packer.io/) to standardize and automate our VM template builds, and we found a need to pull in all of the contents of a specific directory on an internal web server. This would be pretty simple for Linux systems using `wget -r`, but we needed to find another solution for our Windows builds.
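For reference, the Linux-side approach I'm alluding to would be something along these lines (the local output path is arbitrary; `-np` and `-nd` keep `wget` from climbing up the tree or recreating the nested folders):
```command
wget -r -np -nd -P /tmp/files https://win01.lab.bowdre.net/stuff/files/
```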
A coworker and I cobbled together a quick PowerShell solution which will download the files within a specified web URL to a designated directory (without recreating the nested folder structure):
```powershell
```powershell {linenos=true}
$outputdir = 'C:\Scripts\Download\'
$url = 'https://win01.lab.bowdre.net/stuff/files/'

View file

@ -20,17 +20,17 @@ Take these steps when you need to snapshot linked vCenters to avoid breaking rep
1. Open an SSH session to *all* the vCenters within the SSO domain.
2. Log in and enter `shell` to access the shell on each vCenter.
3. Verify that replication is healthy by running `/usr/lib/vmware-vmdir/bin/vdcrepadmin -f showpartnerstatus -h localhost -u administrator -w [SSO_ADMIN_PASSWORD]` on each vCenter. You want to ensure that each host shows as available to all other hosts and reports the message `Partner is 0 changes behind.`:
```shell
root@vcsa [ ~ ]# /usr/lib/vmware-vmdir/bin/vdcrepadmin -f showpartnerstatus -h localhost -u administrator -w $ssoPass
```commandroot-session
/usr/lib/vmware-vmdir/bin/vdcrepadmin -f showpartnerstatus -h localhost -u administrator -w $ssoPass
Partner: vcsa2.lab.bowdre.net
Host available: Yes
Status available: Yes
My last change number: 9346
Partner has seen my change number: 9346
Partner is 0 changes behind.
root@vcsa2 [ ~ ]# /usr/lib/vmware-vmdir/bin/vdcrepadmin -f showpartnerstatus -h localhost -u administrator -w $ssoPass
```
```commandroot-session
/usr/lib/vmware-vmdir/bin/vdcrepadmin -f showpartnerstatus -h localhost -u administrator -w $ssoPass
Partner: vcsa.lab.bowdre.net
Host available: Yes
Status available: Yes
@ -40,13 +40,8 @@ Take these steps when you need to snapshot linked vCenters to avoid breaking rep
```
4. Stop `vmdird` on each vCenter by running `/bin/service-control --stop vmdird`:
```shell
root@vcsa [ ~ ]# /bin/service-control --stop vmdird
Operation not cancellable. Please wait for it to finish...
Performing stop operation on service vmdird...
Successfully stopped service vmdird
root@vcsa2 [ ~ ]# /bin/service-control --stop vmdird
```commandroot-session
/bin/service-control --stop vmdird
Operation not cancellable. Please wait for it to finish...
Performing stop operation on service vmdird...
Successfully stopped service vmdird
@ -54,13 +49,8 @@ Take these steps when you need to snapshot linked vCenters to avoid breaking rep
5. Snapshot the vCenter appliance VMs.
6. Start replication on each server again with `/bin/service-control --start vmdird`:
```shell
root@vcsa [ ~ ]# /bin/service-control --start vmdird
Operation not cancellable. Please wait for it to finish...
Performing start operation on service vmdird...
Successfully started service vmdird
root@vcsa2 [ ~ ]# /bin/service-control --start vmdird
```commandroot-session
/bin/service-control --start vmdird
Operation not cancellable. Please wait for it to finish...
Performing start operation on service vmdird...
Successfully started service vmdird

View file

@ -37,7 +37,7 @@ So yeah. That's, uh, *not great.*
If you've got any **Windows Server 2022** VMs with **[Secure Boot](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.security.doc/GUID-898217D4-689D-4EB5-866C-888353FE241C.html)** enabled on **ESXi 6.7/7.x**, you'll want to make sure they *do not* get **KB5022842** until this problem is resolved.
I put together a quick PowerCLI query to help identify impacted VMs in my environment:
```powershell
```powershell {linenos=true}
$secureBoot2022VMs = foreach($datacenter in (Get-Datacenter)) {
$datacenter | Get-VM |
Where-Object {$_.Guest.OsFullName -Match 'Microsoft Windows Server 2022' -And $_.ExtensionData.Config.BootOptions.EfiSecureBootEnabled} |

View file

@ -18,7 +18,7 @@ The Jekyll theme I'm using ([Minimal Mistakes](https://github.com/mmistakes/mini
![Posts by category](20210724-posts-by-category.png)
It's a start, though, so I took a few minutes to check out how it's being generated. The category archive page lives at [`_pages/category-archive.md`](https://raw.githubusercontent.com/mmistakes/mm-github-pages-starter/master/_pages/category-archive.md):
```markdown
```markdown {linenos=true}
---
title: "Posts by Category"
layout: categories
@ -30,7 +30,7 @@ author_profile: true
The `title` indicates what's going to be written in bold text at the top of the page, the `permalink` says that it will be accessible at `http://localhost/categories/`, and the nice little `author_profile` sidebar will appear on the left.
This page then calls the `categories` layout, which is defined in [`_layouts/categories.html`](https://github.com/mmistakes/minimal-mistakes/blob/master/_layouts/categories.html):
```liquid
```liquid {linenos=true}
{% raw %}---
layout: archive
---
@ -81,7 +81,7 @@ I wanted my solution to preserve the formatting that's used by the theme elsewhe
### Defining a new layout
I create a new file called `_layouts/series.html` which will define how these new series pages get rendered. It starts out just like the default `categories.html` one:
```liquid
```liquid {linenos=true}
{% raw %}---
layout: archive
---
@ -95,7 +95,7 @@ That `{{ content }}` block will let me define text to appear above the list of a
```
I'll be including two custom variables in the [Front Matter](https://jekyllrb.com/docs/front-matter/) for my category pages: `tag` to specify what category to filter on, and `sort_order` which will be set to `reverse` if I want the older posts up top. I'll be able to access these in the layout as `page.tag` and `page.sort_order`, respectively. So I'll go ahead and grab all the posts which are categorized with `page.tag`, and then decide whether the posts will get sorted normally or in reverse:
```liquid
```liquid {linenos=true}
{% raw %}{% assign posts = site.categories[page.tag] %}
{% if page.sort_order == 'reverse' %}
{% assign posts = posts | reverse %}
@ -103,7 +103,7 @@ I'll be including two custom variables in the [Front Matter](https://jekyllrb.co
```
And then I'll loop through each post (in either normal or reverse order) and insert them into the rendered page:
```liquid
```liquid {linenos=true}
{% raw %}<div class="entries-{{ entries_layout }}">
{% for post in posts %}
{% include archive-single.html type=entries_layout %}
@ -112,7 +112,7 @@ And then I'll loop through each post (in either normal or reverse order) and ins
```
Putting it all together now, here's my new `_layouts/series.html` file:
```liquid
```liquid {linenos=true}
{% raw %}---
layout: archive
---
@ -133,7 +133,7 @@ layout: archive
### Series pages
Since I can't use a plugin to automatically generate pages for each series, I'll have to do it manually. Fortunately this is pretty easy, and I've got a limited number of categories/series to worry about. I started by making a new `_pages/series-vra8.md` and setting it up thusly:
```markdown
```markdown {linenos=true}
{% raw %}---
title: "Adventures in vRealize Automation 8"
layout: series
@ -154,7 +154,7 @@ Check it out [here](/series/vra8):
![vRA8 series](20210724-vra8-series.png)
The other series pages will be basically the same, just without the reverse sort directive. Here's `_pages/series-tips.md`:
```markdown
```markdown {linenos=true}
{% raw %}---
title: "Tips & Tricks"
layout: series
@ -171,7 +171,7 @@ header:
### Changing the category permalink
Just in case someone wants to look at all the post series in one place, I'll be keeping the existing category archive page around, but I'll want it to be found at `/series/` instead of `/categories/`. I'll start with going into the `_config.yml` file and changing the `category_archive` path:
```yaml
```yaml {linenos=true}
category_archive:
type: liquid
# path: /categories/
@ -182,7 +182,7 @@ tag_archive:
```
I'll also rename `_pages/category-archive.md` to `_pages/series-archive.md` and update its title and permalink:
```markdown
```markdown {linenos=true}
{% raw %}---
title: "Posts by Series"
layout: categories
@ -198,7 +198,7 @@ The bottom of each post has a section which lists the tags and categories to whi
That *works* but I'd rather it reference the fancy new pages I created. Tracking down where to make that change was a bit of a journey.
I started with the [`_layouts/single.html`](https://github.com/mmistakes/minimal-mistakes/blob/master/_layouts/single.html) file which is the layout I'm using for individual posts. This bit near the end gave me the clue I needed:
```liquid
```liquid {linenos=true}
{% raw %} <footer class="page__meta">
{% if site.data.ui-text[site.locale].meta_label %}
<h4 class="page__meta-title">{{ site.data.ui-text[site.locale].meta_label }}</h4>
@ -209,7 +209,7 @@ I started with the [`_layouts/single.html`](https://github.com/mmistakes/minimal
```
It looks like [`page__taxonomy.html`](https://github.com/mmistakes/minimal-mistakes/blob/master/_includes/page__taxonomy.html) is being used to display the tags and categories, so I then went to that file in the `_include` directory:
```liquid
```liquid {linenos=true}
{% raw %}{% if site.tag_archive.type and page.tags[0] %}
{% include tag-list.html %}
{% endif %}
@ -220,7 +220,7 @@ It looks like [`page__taxonomy.html`](https://github.com/mmistakes/minimal-mista
```
Okay, it looks like [`_include/category-list.html`](https://github.com/mmistakes/minimal-mistakes/blob/master/_includes/category-list.html) is what I actually want. Here's that file:
```liquid
```liquid {linenos=true}
{% raw %}{% case site.category_archive.type %}
{% when "liquid" %}
{% assign path_type = "#" %}
@ -243,7 +243,7 @@ Okay, it looks like [`_include/category-list.html`](https://github.com/mmistakes
```
I'm using the `liquid` archive approach since I can't use the `jekyll-archives` plugin, so I can see that it's setting the `path_type` to `"#"`. And near the bottom of the file, I can see that it's assembling the category link by slugifying the `category_word`, sticking the `path_type` in front of it, and then putting the `site.category_archive.path` (which I edited earlier in `_config.yml`) in front of that. So that's why my category links look like `/series/#category`. I can just edit the top of this file to statically set `path_type = nil` and that should clear this up in a jiffy:
```liquid
```liquid {linenos=true}
{% raw %}{% assign path_type = nil %}
{% if site.category_archive.path %}
{% assign categories_sorted = page.categories | sort_natural %}
@ -251,7 +251,7 @@ I'm using the `liquid` archive approach since I can't use the `jekyll-archives`
```
To sell the series illusion even further, I can pop into [`_data/ui-text.yml`](https://github.com/mmistakes/minimal-mistakes/blob/master/_data/ui-text.yml) to update the string used for `categories_label`:
```yaml
```yaml {linenos=true}
meta_label :
tags_label : "Tags:"
categories_label : "Series:"
@ -264,7 +264,7 @@ Much better!
### Updating the navigation header
And, finally, I'll want to update the navigation links at the top of each page to help visitors find my new featured series pages. For that, I can just edit `_data/navigation.yml` with links to my new pages:
```yaml
```yaml {linenos=true}
main:
- title: "vRealize Automation 8"
url: /series/vra8

View file

@ -29,7 +29,7 @@ I will also add some properties to tell PowerCLI (and the `Invoke-VmScript` cmdl
##### Inputs section
I'll kick this off by going into Cloud Assembly and editing the `WindowsDemo` template I've been working on for the past few eons. I'll add a `diskSize` input:
```yaml
```yaml {linenos=true}
formatVersion: 1
inputs:
site: [...]
@ -49,7 +49,7 @@ inputs:
The default value is set to 60GB to match the VMDK attached to the source template; that's also the minimum value since shrinking disks gets messy.
I'll also drop in an `adminsList` input at the bottom of the section:
```yaml
```yaml {linenos=true}
[...]
poc_email: [...]
ticket: [...]
@ -71,7 +71,7 @@ In the Resources section of the cloud template, I'm going to add a few propertie
I'll also include the `adminsList` input from earlier so that it can get passed to ABX as well. And I'm going to add in an `adJoin` property (mapped to the [existing `input.adJoin`](/joining-vms-to-active-directory-in-site-specific-ous-with-vra8#cloud-template)) so that I'll have that to work with later.
```yaml
```yaml {linenos=true}
[...]
resources:
Cloud_vSphere_Machine_1:
@ -93,7 +93,7 @@ resources:
```
And I will add in a `storage` property as well which will automatically adjust the deployed VMDK size to match the specified input:
```yaml
```yaml {linenos=true}
[...]
description: '${input.description}'
networks: [...]
@ -108,7 +108,7 @@ And I will add in a `storage` property as well which will automatically adjust t
##### Complete template
Okay, all together now:
```yaml
```yaml {linenos=true}
formatVersion: 1
inputs:
site:
@ -296,7 +296,7 @@ And I'll pop over to the right side to map the Action Constants I created earlie
![Mapping constants in action](20210901_map_constants_to_action.png)
Now for The Script:
```powershell
```powershell {linenos=true}
<# vRA 8.x ABX action to perform certain in-guest actions post-deploy:
Windows:
- auto-update VM tools

View file

@ -90,7 +90,7 @@ Next it updates the links for any thumbnail images mentioned in the front matter
Lastly, it changes the `usePageBundles` flag from `false` to `true` so that Hugo knows what we've done.
```bash
```bash {linenos=true}
#!/bin/bash
# Hasty script to convert a given standard Hugo post (where the post content and
# images are stored separately) to a Page Bundle (where the content and images are

View file

@ -24,7 +24,7 @@ Hashnode helpfully automatically backs up my posts in Markdown format to a priva
I wanted to download those images to `./assets/images/posts-2020/` within my local Jekyll working directory, and then update the `*.md` files to reflect the correct local path... without doing it all manually. It took a bit of trial and error to get the regex working just right (and the result is neither pretty nor elegant), but here's what I came up with:
```bash
```bash {linenos=true}
#!/bin/bash
# Hasty script to process a blog post markdown file, capture the URL for embedded images,
# download the image locally, and modify the markdown file with the relative image path.
@ -49,7 +49,7 @@ done
I could then run that against all of the Markdown posts under `./_posts/` with:
```bash
```command
for post in $(ls _posts/); do ~/scripts/imageMigration.sh $post; done
```

View file

@ -54,7 +54,7 @@ The first step in getting up and running with Tailscale is to sign up at [https:
Once you have a Tailscale account, you're ready to install the Tailscale client. The [download page](https://tailscale.com/download) outlines how to install it on various platforms, and also provides a handy-dandy one-liner to install it on Linux:
```bash
```command
curl -fsSL https://tailscale.com/install.sh | sh
```
@ -71,8 +71,8 @@ There are also Tailscale apps available for [iOS](https://tailscale.com/download
#### Basic `tailscale up`
Running `sudo tailscale up` then reveals the next step:
```bash
sudo tailscale up
```command-session
sudo tailscale up
To authenticate, visit:
@ -83,7 +83,7 @@ I can copy that address into a browser and I'll get prompted to log in to my Tai
That was pretty easy, right? But what about if I can't easily get to a web browser from the terminal session on a certain device? No worries, `tailscale up` has a flag for that:
```bash
```command
sudo tailscale up --qr
```
@ -93,28 +93,28 @@ That will convert the URL to a QR code that I can scan from my phone.
There are a few additional flags that can be useful under certain situations:
- `--advertise-exit-node` to tell the tailnet that this could be used as an exit node for internet traffic
```bash
```command
sudo tailscale up --advertise-exit-node
```
- `--advertise-routes` to let the node perform subnet routing functions to provide connectivity to specified local subnets
```bash
```command
sudo tailscale up --advertise-routes "192.168.1.0/24,172.16.0.0/16"
```
- `--advertise-tags`[^tags] to associate the node with certain tags for ACL purposes (like `tag:home` to identify stuff in my home network and `tag:cloud` to label external cloud-hosted resources)
```bash
```command
sudo tailscale up --advertise-tags "tag:cloud"
```
- `--hostname` to manually specify a hostname to use within the tailnet
```bash
```command
sudo tailscale up --hostname "tailnode"
```
- `--shields-up` to block incoming traffic
```bash
```command
sudo tailscale up --shields-up
```
These flags can also be combined with each other:
```bash
```command
sudo tailscale up --hostname "tailnode" --advertise-exit-node --qr
```
@ -122,14 +122,14 @@ sudo tailscale up --hostname "tailnode" --advertise-exit-node --qr
#### Sidebar: Tailscale on VyOS
Getting Tailscale on [my VyOS virtual router](/vmware-home-lab-on-intel-nuc-9/#vyos) was unfortunately a little more involved than [leveraging the built-in WireGuard capability](/cloud-based-wireguard-vpn-remote-homelab-access/#configure-vyos-router-as-wireguard-peer). I found the [vyos-tailscale](https://github.com/DMarby/vyos-tailscale) project to help with building a customized VyOS installation ISO with the `tailscaled` daemon added in. I was then able to copy the ISO over to my VyOS instance and install it as if it were a [standard upgrade](https://docs.vyos.io/en/latest/installation/update.html). I could then bring up the interface, advertise my home networks, and make it available as an exit node with:
```bash
```command
sudo tailscale up --advertise-exit-node --advertise-routes "192.168.1.0/24,172.16.0.0/16"
```
#### Other `tailscale` commands
Once there are a few members, I can use the `tailscale status` command to see a quick overview of the tailnet:
```bash
tailscale status
```command-session
tailscale status
100.115.115.39 deb01 john@ linux -
100.118.115.69 ipam john@ linux -
100.116.90.109 johns-iphone john@ iOS -
@ -145,8 +145,8 @@ Without doing any other configuration beyond just installing Tailscale and conne
`tailscale ping` lets me check the latency between two Tailscale nodes at the Tailscale layer; the first couple of pings will likely be delivered through a nearby DERP server until the NAT traversal magic is able to kick in:
```bash
tailscale ping snikket
```command-session
tailscale ping snikket
pong from snikket (100.75.110.50) via DERP(nyc) in 34ms
pong from snikket (100.75.110.50) via DERP(nyc) in 35ms
pong from snikket (100.75.110.50) via DERP(nyc) in 35ms
@ -155,8 +155,8 @@ pong from snikket (100.75.110.50) via [PUBLIC_IP]:41641 in 23ms
The `tailscale netcheck` command will give me some details about my local Tailscale node, like whether it's able to pass UDP traffic, which DERP server is the closest, and the latency to all Tailscale DERP servers:
```bash
tailscale netcheck
```command-session
tailscale netcheck
Report:
* UDP: true
@ -244,7 +244,7 @@ This ACL file uses a format called [HuJSON](https://github.com/tailscale/hujson)
I'm going to start by creating a group called `admins` and add myself to that group. This isn't strictly necessary since I am the only user in the organization, but I feel like it's a nice practice anyway. Then I'll add the `tagOwners` section to map each tag to its owner, the new group I just created:
```json
```json {linenos=true}
{
"groups": {
"group:admins": ["john@example.com"],
@ -276,7 +276,7 @@ Each ACL rule consists of four named parts:
4. `ports` - a list of destinations (and optional ports).
So I'll add this to the top of my policy file:
```json
```json {linenos=true}
{
"acls": [
{
@ -305,7 +305,7 @@ Earlier I configured Tailscale to force all nodes to use my home DNS server for
2. Add a new ACL rule to allow DNS traffic to reach the DNS server from the cloud.
Option 2 sounds better to me so that's what I'm going to do. Instead of putting an IP address directly into the ACL rule I'd rather use a hostname, and unfortunately the Tailscale host names aren't available within ACL rule declarations. But I can define a host alias in the policy to map a friendly name to the IP:
```json
```json {linenos=true}
{
"hosts": {
"win01": "100.124.116.125"
@ -314,7 +314,7 @@ Option 2 sounds better to me so that's what I'm going to do. Instead of putting
```
And I can then create a new rule for `"users": ["tag:cloud"]` to add an exception for `win01:53`:
```json
```json {linenos=true}
{
"acls": [
{
@ -331,7 +331,7 @@ And I can then create a new rule for `"users": ["tag:cloud"]` to add an exceptio
And that gets DNS working again for my cloud servers while still serving the results from my NextDNS configuration. Here's the complete policy configuration:
```json
```json {linenos=true}
{
"acls": [
{

View file

@ -37,43 +37,43 @@ You're ready to roll once the Terminal opens and gives you a prompt:
![Hello, Penguin!](0-h1flLZs.png)
Your first action should be to go ahead and install any patches:
```shell
```command
sudo apt update
sudo apt upgrade
```
### Zsh, Oh My Zsh, and powerlevel10k theme
I've been really getting into this shell setup recently so let's go ahead and make things comfortable before we move on too much further. Getting `zsh` is straightforward:
```shell
```command
sudo apt install zsh
```
Go ahead and launch `zsh` (by typing '`zsh`') and go through the initial setup wizard to configure preferences for things like history, completion, and other settings. I leave history on the defaults, enable the default completion options, switch the command-line editor to `vi`-style, and enable both `autocd` and `appendhistory`. Once you're back at the (new) `penguin%` prompt we can move on to installing the [Oh My Zsh plugin framework](https://github.com/ohmyzsh/ohmyzsh).
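For reference, those wizard choices boil down to a handful of lines in `~/.zshrc` - roughly like the following, though your generated file may differ a bit:
```shell {linenos=true}
HISTFILE=~/.histfile
HISTSIZE=1000
SAVEHIST=1000
setopt autocd appendhistory
bindkey -v
autoload -Uz compinit
compinit
```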
Just grab the installer script like so:
```shell
```command
wget https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh
```
Review it if you'd like (and you should! *Always* review code before running it!!), and then execute it:
```shell
```command
sh install.sh
```
When asked if you'd like to change your default shell to `zsh` now, **say no**. This is because it will prompt for your password, but you probably don't have a password set on your brand-new Linux (Beta) account and that just makes things complicated. We'll clear this up later, but for now just check out that slick new prompt:
![Oh my!](8q-WT0AyC.png)
Oh My Zsh is pretty handy because you can easily enable [additional plugins](https://github.com/ohmyzsh/ohmyzsh/tree/master/plugins) to make your prompt behave exactly the way you want it to. Let's spruce it up even more with the [powerlevel10k theme](https://github.com/romkatv/powerlevel10k)!
```shell
```command
git clone --depth=1 https://github.com/romkatv/powerlevel10k.git ${ZSH_CUSTOM:-$HOME/.oh-my-zsh/custom}/themes/powerlevel10k
```
Now we just need to edit `~/.zshrc` to point to the new theme:
```shell
```command
sed -i s/^ZSH_THEME=.\*$/ZSH_THEME='"powerlevel10k\/powerlevel10k"'/ ~/.zshrc
```
We'll need to launch another instance of `zsh` for the theme change to take effect so first let's go ahead and manually set `zsh` as our default shell. We can use `sudo` to get around the whole "don't have a password set" inconvenience:
```shell
```command
sudo chsh -s /bin/zsh [username]
```
Now close out the terminal and open it again, and you should be met by the powerlevel10k configurator which will walk you through getting things set up:
![pwerlevel10k configurator](K1ScSuWcg.png)
![powerlevel10k configurator](K1ScSuWcg.png)
This theme is crazy-configurable, but fortunately the configurator wizard does a great job of helping you choose the options that work best for you.
I pick the Classic prompt style, Unicode character set, Dark prompt color, 24-hour time, Angled separators, Sharp prompt heads, Flat prompt tails, 2-line prompt height, Dotted prompt connection, Right prompt frame, Sparse prompt spacing, Fluent prompt flow, Enabled transient prompt, Verbose instant prompt, and (finally) Yes to apply the changes.
@ -82,7 +82,7 @@ Looking good!
### Visual Studio Code
I'll need to do some light development work so VS Code is next on the hit list. You can grab the installer [here](https://code.visualstudio.com/Download#) or just copy/paste the following to stay in the Terminal. Definitely be sure to get the arm64 version!
```shell
```command
curl -L https://aka.ms/linux-arm64-deb > code_arm64.deb
sudo apt install ./code_arm64.deb
```
@ -104,7 +104,7 @@ Once you connect the phone to Linux, check the phone to approve the debugging co
I'm working on setting up a [VMware homelab on an Intel NUC 9](https://twitter.com/johndotbowdre/status/1317558182936563714) so being able to automate things with PowerCLI will be handy.
PowerShell for ARM is still in an early stage so while [it is supported](https://docs.microsoft.com/en-us/powershell/scripting/install/installing-powershell-core-on-linux?view=powershell-7.2#support-for-arm-processors) it must be installed manually. Microsoft has instructions for installing PowerShell from binary archives [here](https://docs.microsoft.com/en-us/powershell/scripting/install/installing-powershell-core-on-linux?view=powershell-7.2#linux), and I grabbed the latest `-linux-arm64.tar.gz` release I could find [here](https://github.com/PowerShell/PowerShell/releases).
```shell
```command
curl -L -o /tmp/powershell.tar.gz https://github.com/PowerShell/PowerShell/releases/download/v7.2.0-preview.5/powershell-7.2.0-preview.5-linux-arm64.tar.gz
sudo mkdir -p /opt/microsoft/powershell/7
sudo tar zxf /tmp/powershell.tar.gz -C /opt/microsoft/powershell/7
@ -124,7 +124,7 @@ Woot!
The Linux (Beta) environment consists of a hardened virtual machine (named `termina`) running an LXC Debian container (named `penguin`). Know what would be even more fun? Let's run some other containers inside our container!
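If you want to see that nesting for yourself, you can poke around from the crosh shell (Ctrl+Alt+T) - assuming the default `termina`/`penguin` names and that the `vmc`, `vsh`, and `lxc` tools behave as they did for me:
```command
vmc list
vsh termina
lxc list
```
(`vmc list` and `vsh termina` run from crosh; `lxc list` runs from inside the termina VM and should show the `penguin` container.)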
The docker installation has a few prerequisites:
```shell
```command-session
sudo apt install \
apt-transport-https \
ca-certificates \
@ -133,18 +133,18 @@ sudo apt install \
software-properties-common
```
Then we need to grab the Docker repo key:
```shell
```command
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
```
And then we can add the repo:
```shell
```command-session
sudo add-apt-repository \
"deb [arch=arm64] https://download.docker.com/linux/debian \
$(lsb_release -cs) \
stable"
```
And finally update the package cache and install `docker` and its friends:
```shell
```command
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io
```
@ -163,13 +163,13 @@ So while I can use the Duet for designing 3D models, I won't be able to actually
I came across [a Reddit post](https://www.reddit.com/r/Crostini/comments/jnbqv3/successfully_running_jupyter_notebook_on_samsung/) today describing how to install `conda` and get a Jupyter Notebook running on arm64 so I had to give it a try. It actually wasn't that bad!
The key is to grab the appropriate version of [conda Miniforge](https://github.com/conda-forge/miniforge), make it executable, and run the installer:
```shell
```command
wget https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Linux-aarch64.sh
chmod +x Miniforge3-Linux-aarch64.sh
./Miniforge3-Linux-aarch64.sh
```
Exit the terminal and relaunch it, and then install Jupyter:
```shell
```command
conda install -c conda-forge notebook
```

View file

@ -32,7 +32,10 @@ I shared a [few months back](/federated-matrix-server-synapse-on-oracle-clouds-f
I recently came across the [Snikket project](https://snikket.org/), which [aims](https://snikket.org/about/goals/) to make decentralized end-to-end encrypted personal messaging simple and accessible for *everyone*, with an emphasis on providing a consistent experience across the network. Snikket does this by maintaining a matched set of server and client[^2] software with feature and design parity, making it incredibly easy to deploy and manage the server, and simplifying user registration with invite links. In contrast to Matrix, Snikket does not operate an open server on which users can self-register but instead requires users to be invited to a hosted instance. The idea is that a server would be used by small groups of family and friends where every user knows (and trusts!) the server operator while also ensuring the complete decentralization of the network[^3].
How simple is the server install?
{{< tweet user="johndotbowdre" id="1461356940466933768" >}}
> I spun up a quick @snikket_im XMPP server last night to check out the project - and I do mean QUICK. It took me longer to register a new domain than to deploy the server on GCP and create my first account through the client.
>
> — John (@johndotbowdre) November 18, 2021
Seriously, their [4-step quick-start guide](https://snikket.org/service/quickstart/) is so good that I didn't feel the need to do a blog post about my experience. I've now been casually using Snikket for a bit over a month and remain very impressed both by the software and the project itself, and have even deployed a new Snikket instance for my family to use. My parents were actually able to join the chat without any issues, which is a testament to how easy it is from a user perspective too.
A few days ago I migrated my original Snikket instance from Google Cloud (GCP) to the same Oracle Cloud Infrastructure (OCI) virtual server that's hosting my Matrix homeserver, so I thought I'd first share some notes on the installation process. At the end, I'll share the tweaks which were needed to get Snikket to run happily alongside Matrix.
@ -55,7 +58,7 @@ You can refer to my notes from last time for details on how I [created the Ubunt
| `60000-60100`[^4] | UDP | Audio/Video data proxy (TURN data) |
As a gentle reminder, Oracle's `iptables` configuration inserts a `REJECT all` rule at the bottom of each chain. I needed to make sure that each of my `ALLOW` rules get inserted above that point. So I used `iptables -L INPUT --line-numbers` to identify which line held the `REJECT` rule, and then used `iptables -I INPUT [LINE_NUMBER] -m state --state NEW -p [PROTOCOL] --dport [PORT] -j ACCEPT` to insert the new rules above that point.
```bash
```command
sudo iptables -I INPUT 9 -m state --state NEW -p tcp --dport 80 -j ACCEPT
sudo iptables -I INPUT 9 -m state --state NEW -p tcp --dport 443 -j ACCEPT
sudo iptables -I INPUT 9 -m state --state NEW -p tcp -m multiport --dports 3478:3479 -j ACCEPT
@ -70,8 +73,8 @@ sudo iptables -I INPUT 9 -m state --state NEW -p udp -m multiport --dports 60000
```
Then to verify the rules are in the right order:
```bash
$ sudo iptables -L INPUT --line-numbers -n
```command-session
sudo iptables -L INPUT --line-numbers -n
Chain INPUT (policy ACCEPT)
num target prot opt source destination
1 ts-input all -- 0.0.0.0/0 0.0.0.0/0
@ -93,8 +96,8 @@ num target prot opt source destination
```
Before moving on, it's important to save them so the rules will persist across reboots!
```bash
$ sudo netfilter-persistent save
```command-session
sudo netfilter-persistent save
run-parts: executing /usr/share/netfilter-persistent/plugins.d/15-ip4tables save
run-parts: executing /usr/share/netfilter-persistent/plugins.d/25-ip6tables save
```
@ -143,20 +146,20 @@ Now we're ready to...
### Install Snikket
This starts with just making a place for Snikket to live:
```bash
```command
sudo mkdir /etc/snikket
cd /etc/snikket
```
And then grabbing the Snikket `docker-compose` file:
```bash
```command
sudo curl -o docker-compose.yml https://snikket.org/service/resources/docker-compose.beta.yml
```
And then creating a very minimal configuration file:
```bash
```command
sudo vi snikket.conf
```
@ -173,7 +176,7 @@ In my case, I'm going to add two additional parameters to restrict the UDP TURN
So here's my config:
```
```cfg {linenos=true}
SNIKKET_DOMAIN=chat.vpota.to
SNIKKET_ADMIN_EMAIL=ops@example.com
@ -185,7 +188,7 @@ SNIKKET_TWEAK_TURNSERVER_MAX_PORT=60100
### Start it up!
With everything in place, I can start up the Snikket server:
```bash
```command
sudo docker-compose up -d
```
@ -194,7 +197,7 @@ This will take a moment or two to pull down all the required container images, s
Of course, I don't yet have a way to log in, and like I mentioned earlier Snikket doesn't offer open user registration. Every user (even me, the admin!) has to be invited. Fortunately I can generate my first invite directly from the command line:
```bash
```command
sudo docker exec snikket create-invite --admin --group default
```
@ -248,7 +251,7 @@ One of the really cool things about Caddy is that it automatically generates SSL
Fortunately, the [Snikket reverse proxy documentation](https://github.com/snikket-im/snikket-server/blob/master/docs/advanced/reverse_proxy.md#basic) was recently updated with a sample config for making this happen. Matrix and Snikket really only overlap on ports `80` and `443` so those are the only ports I'll need to handle, which lets me go for the "Basic" configuration instead of the "Advanced" one. I can just adapt the sample config from the documentation and add that to my existing `/etc/caddy/Caddyfile` alongside the config for Matrix:
```
```caddy {linenos=true}
http://chat.vpota.to,
http://groups.chat.vpota.to,
http://share.chat.vpota.to {
@ -291,7 +294,7 @@ Since Snikket is completely containerized, moving between hosts is a simple matt
The Snikket team has actually put together a couple of scripts to assist with [backing up](https://github.com/snikket-im/snikket-selfhosted/blob/main/scripts/backup.sh) and [restoring](https://github.com/snikket-im/snikket-selfhosted/blob/main/scripts/restore.sh) an instance. I just adapted the last line of each to do what I needed:
```bash
```command-session
sudo docker run --rm --volumes-from=snikket \
-v "/home/john/snikket-backup/":/backup debian:buster-slim \
tar czf /backup/snikket-"$(date +%F-%H%m)".tar.gz /snikket
@ -299,9 +302,11 @@ sudo docker run --rm --volumes-from=snikket \
That will drop a compressed backup of the `snikket_data` volume into the specified directory, `/home/john/snikket-backup/`. While I'm at it, I'll also go ahead and copy the `docker-compose.yml` and `snikket.conf` files from `/etc/snikket/`:
```bash
$ sudo cp -a /etc/snikket/* /home/john/snikket-backup/
$ ls -l /home/john/snikket-backup/
```command
sudo cp -a /etc/snikket/* /home/john/snikket-backup/
```
```command-session
ls -l /home/john/snikket-backup/
total 1728
-rw-r--r-- 1 root root 993 Dec 19 17:47 docker-compose.yml
-rw-r--r-- 1 root root 1761046 Dec 19 17:46 snikket-2021-12-19-1745.tar.gz
@ -309,13 +314,13 @@ total 1728
```
And I can then zip that up for easy transfer:
```bash
```command
tar cvf /home/john/snikket-backup.tar.gz /home/john/snikket-backup/
```
This would be a great time to go ahead and stop this original Snikket instance. After all, nothing that happens after the backup was exported is going to carry over anyway.
```bash
```command
sudo docker-compose down
```
{{% notice tip "Update DNS" %}}
@ -325,17 +330,19 @@ This is also a great time to update the `A` record for `chat.vpota.to` so that i
Now I just need to transfer the archive from one server to the other. I've got [Tailscale](https://tailscale.com/)[^11] running on my various cloud servers so that they can talk to each other through a secure WireGuard tunnel (remember [WireGuard](/cloud-based-wireguard-vpn-remote-homelab-access/)?) without having to open any firewall ports between them, and that means I can just use `scp` to transfer the file without any fuss. I can even leverage Tailscale's [Magic DNS](https://tailscale.com/kb/1081/magicdns/) feature to avoid worrying with any IPs, just the hostname registered in Tailscale (`chat-oci`):
```bash
```command
scp /home/john/snikket-backup.tar.gz chat-oci:/home/john/
```
Next, I SSH in to the new server and unzip the archive:
```bash
$ ssh snikket-oci-server
$ tar xf snikket-backup.tar.gz
$ cd snikket-backup
$ ls -l
```command
ssh snikket-oci-server
tar xf snikket-backup.tar.gz
cd snikket-backup
```
```command-session
ls -l
total 1728
-rw-r--r-- 1 root root 993 Dec 19 17:47 docker-compose.yml
-rw-r--r-- 1 root root 1761046 Dec 19 17:46 snikket-2021-12-19-1745.tar.gz
@ -344,7 +351,7 @@ total 1728
Before I can restore the content of the `snikket-data` volume on the new server, I'll need to first go ahead and set up Snikket again. I've already got `docker` and `docker-compose` installed from when I installed Matrix so I'll skip to creating the Snikket directory and copying in the `docker-compose.yml` and `snikket.conf` files.
```bash
```command
sudo mkdir /etc/snikket
sudo cp docker-compose.yml /etc/snikket/
sudo cp snikket.conf /etc/snikket/
@ -353,7 +360,7 @@ cd /etc/snikket
Before I fire this up on the new host, I need to edit the `snikket.conf` to tell Snikket to use those different ports defined in the reverse proxy configuration using [a couple of `SNIKKET_TWEAK_*` lines](https://github.com/snikket-im/snikket-server/blob/master/docs/advanced/reverse_proxy.md#snikket):
```
```cfg {linenos=true}
SNIKKET_DOMAIN=chat.vpota.to
SNIKKET_ADMIN_EMAIL=ops@example.com
@ -364,7 +371,7 @@ SNIKKET_TWEAK_TURNSERVER_MAX_PORT=60100
```
Alright, let's start up the Snikket server:
```bash
```command
sudo docker-compose up -d
```
@ -372,7 +379,7 @@ After a moment or two, I can point a browser to `https://chat.vpota.to` and see
Now I can borrow the last line from the [`restore.sh` script](https://github.com/snikket-im/snikket-selfhosted/blob/main/scripts/restore.sh) to bring in my data:
```bash
```command-session
sudo docker run --rm --volumes-from=snikket \
--mount type=bind,source="/home/john/snikket-backup/snikket-2021-12-19-1745.tar.gz",destination=/backup.tar.gz \
debian:buster-slim \

View file

@ -15,7 +15,7 @@ tags:
Following a recent update, I found that the [Linux development environment](https://chromium.googlesource.com/chromiumos/docs/+/HEAD/containers_and_vms.md) on my Framework Chromebook would fail to load if the [Tailscale](/secure-networking-made-simple-with-tailscale) daemon was already running. It seems that the Tailscale virtual interface may have interfered with how the CrOS Terminal app was expecting to connect to the Linux container. I initially worked around the problem by just disabling the `tailscaled` service, but having to remember to start it up manually was a pretty heavy cognitive load.
Fortunately, it turns out that overriding the service to insert a short startup delay is really easy. I'll just use the `systemctl edit` command to create a quick override configuration:
```shell
```command
sudo systemctl edit tailscaled
```
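The override itself just needs to pause the daemon's startup for a few seconds; a minimal sketch (your delay may vary) looks something like:
```cfg
[Service]
# give the Crostini networking a moment to settle before tailscaled starts
ExecStartPre=/bin/sleep 5
```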

View file

@ -80,20 +80,20 @@ After clicking the **Generate key** button, the key will be displayed. This is t
### Docker setup
The [golink repo](https://github.com/tailscale/golink) offers this command for running the container:
```shell
```command
docker run -it --rm ghcr.io/tailscale/golink:main
```
The doc also indicates that I can pass the auth key to the golink service via the `TS_AUTHKEY` environment variable, and that all the configuration will be stored in `/home/nonroot` (which will be owned by uid/gid `65532`). I'll take this knowledge and use it to craft a `docker-compose.yml` to simplify container management.
```shell
```command
mkdir -p golink/data
cd golink
sudo chown 65532:65532 data
vi docker-compose.yaml
```
```yaml
```yaml {linenos=true}
# golink docker-compose.yaml
version: '3'
services:
@ -138,9 +138,7 @@ Some of my other golinks:
| `ipam` | `https://ipam.lab.bowdre.net/{{with .Path}}tools/search/{{.}}{{end}}` | searches my lab phpIPAM instance |
| `pdb` | `https://www.protondb.com/{{with .Path}}search?q={{.}}{{end}}` | searches [protondb](https://www.protondb.com/), super-handy for checking game compatibility when [Tailscale is installed on a Steam Deck](https://tailscale.com/blog/steam-deck/) |
| `tailnet` | `https://login.tailscale.com/admin/machines?q={{.Path}}` | searches my Tailscale admin panel for a machine name |
| `vpot8` | `https://www.virtuallypotato.com/{{with .Path}}search?query={{.}}{{end}}` | searches this here site |
| `sho` | `https://www.shodan.io/{{with .Path}}search?query={{.}}{{end}}` | searches Shodan for interesting internet-connected systems |
| `tools` | `https://neeva.com/spaces/m_Bhx8tPfYQbOmaW1UHz-3a_xg3h2amlogo2GzgD` | shortcut to my [Tech Toolkit space](https://neeva.com/spaces/m_Bhx8tPfYQbOmaW1UHz-3a_xg3h2amlogo2GzgD) on Neeva |
| `randpass` | `https://www.random.org/passwords/?num=1\u0026len=24\u0026format=plain\u0026rnd=new` | generates a random 24-character string suitable for use as a password (`curl`-friendly) |
| `wx` | `https://wttr.in/{{ .Path }}` | local weather report based on geolocation or weather for a designated city (`curl`-friendly) |
@ -148,7 +146,7 @@ Some of my other golinks:
You can browse to `go/.export` to see a JSON-formatted listing of all configured shortcuts - or, if you're clever, you could do something like `curl http://go/.export -o links.json` to download a copy.
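So a full backup of my links is really just:
```command
curl http://go/.export -o links.json
```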
To restore, just pass `--snapshot /path/to/links.json` when starting golink. What I usually do is copy the file into the `data` folder that I'm mounting as a Docker volume, and then just run:
```shell
```command
sudo docker exec golink /golink --sqlitedb /home/nonroot/golink.db --snapshot /home/nonroot/links.json
```

View file

@ -30,20 +30,20 @@ Here's a condensed list of the [steps that I took to manually install Tailscale]
1. Visit [https://pkgs.tailscale.com/stable/#static](https://pkgs.tailscale.com/stable/#static) to see the latest stable version for your system architecture, and copy the URL. For instance, I'll be using `https://pkgs.tailscale.com/stable/tailscale_1.34.1_arm64.tgz`.
2. Download and extract it to the system:
```shell
```command
wget https://pkgs.tailscale.com/stable/tailscale_1.34.1_arm64.tgz
tar xvf tailscale_1.34.1_arm64.tgz
cd tailscale_1.34.1_arm64/
```
3. Install the binaries and service files:
```shell
```command
sudo install -m 755 tailscale /usr/bin/
sudo install -m 755 tailscaled /usr/sbin/
sudo install -m 644 systemd/tailscaled.defaults /etc/default/tailscaled
sudo install -m 644 systemd/tailscaled.service /usr/lib/systemd/system/
```
4. Start the service:
```shell
```command
sudo systemctl enable tailscaled
sudo systemctl start tailscaled
```
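From there it's the usual `tailscale up` to authenticate the node and bring it onto the tailnet:
```command
sudo tailscale up
```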

View file

@ -68,8 +68,8 @@ I've already got Docker installed on this machine, but if I didn't I would follo
I also verify that my install is using `cgroup` version 1 as version 2 is not currently supported:
```bash
docker info | grep -i cgroup
```command-session
docker info | grep -i cgroup
Cgroup Driver: cgroupfs
Cgroup Version: 1
```
@ -79,59 +79,63 @@ Next up, I'll install `kubectl` [as described here](https://kubernetes.io/docs/t
I can look at the [releases page on GitHub](https://github.com/kubernetes/kubernetes/releases) to see that the latest release for me is `1.22.5`. With this newfound knowledge I can follow the [Install kubectl binary with curl on Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#install-kubectl-binary-with-curl-on-linux) instructions to grab that specific version:
```bash
curl -LO https://dl.k8s.io/release/v1.22.5/bin/linux/amd64/kubectl
```command-session
curl -LO https://dl.k8s.io/release/v1.22.5/bin/linux/amd64/kubectl
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 154 100 154 0 0 2298 0 --:--:-- --:--:-- --:--:-- 2298
100 44.7M 100 44.7M 0 0 56.9M 0 --:--:-- --:--:-- --:--:-- 56.9M
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
```
```command-session
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
[sudo] password for john:
kubectl version --client
```
```command-session
kubectl version --client
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.5", GitCommit:"5c99e2ac2ff9a3c549d9ca665e7bc05a3e18f07e", GitTreeState:"clean", BuildDate:"2021-12-16T08:38:33Z", GoVersion:"go1.16.12", Compiler:"gc", Platform:"linux/amd64"}
```
#### `kind` binary
It's not strictly a requirement, but having the `kind` executable available will be handy for troubleshooting during the bootstrap process in case anything goes sideways. It can be installed in basically the same way as `kubectl`:
```bash
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.11.1/kind-linux-amd64
```command-session
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.11.1/kind-linux-amd64
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 98 100 98 0 0 513 0 --:--:-- --:--:-- --:--:-- 513
100 655 100 655 0 0 2212 0 --:--:-- --:--:-- --:--:-- 10076
100 6660k 100 6660k 0 0 11.8M 0 --:--:-- --:--:-- --:--:-- 11.8M
sudo install -o root -g root -m 0755 kind /usr/local/bin/kind
kind version
```
```command
sudo install -o root -g root -m 0755 kind /usr/local/bin/kind
```
```command-session
kind version
kind v0.11.1 go1.16.5 linux/amd64
```
#### Tanzu CLI
The final bit of required software is the Tanzu CLI, which can be downloaded from the [project on GitHub](https://github.com/vmware-tanzu/community-edition/releases).
```bash
```command-session
curl -H "Accept: application/vnd.github.v3.raw" \
-L https://api.github.com/repos/vmware-tanzu/community-edition/contents/hack/get-tce-release.sh | \
bash -s v0.9.1 linux
```
And then unpack it and run the installer:
```bash
```command
tar xf tce-linux-amd64-v0.9.1.tar.gz
cd tce-linux-amd64-v0.9.1
./install.sh
```
I can then verify the installation is working correctly:
```bash
tanzu version
```command-session
tanzu version
version: v0.2.1
buildDate: 2021-09-29
sha: ceaa474
@ -142,14 +146,14 @@ Okay, now it's time for the good stuff - creating some shiny new Tanzu clusters!
#### Management cluster
I need to create a Management cluster first and I'd like to do that with the UI, so that's as simple as:
```bash
```command
tanzu management-cluster create --ui
```
I should then be able to access the UI by pointing a web browser at `http://127.0.0.1:8080`... but I'm running this on a VM without a GUI, so I'll need to back up and tell it to bind on `0.0.0.0:8080` so the web installer will be accessible across the network. I can also include `--browser none` so that the installer doesn't bother with trying to launch a browser locally.
```bash
tanzu management-cluster create --ui --bind 0.0.0.0:8080 --browser none
```command-session
tanzu management-cluster create --ui --bind 0.0.0.0:8080 --browser none
Validating the pre-requisites...
Serving kickstart UI at http://[::]:8080
@ -186,19 +190,19 @@ I skip the Tanzu Mission Control piece (since I'm still waiting on access to [TM
See the option at the bottom to copy the CLI command? I'll need to use that since clicking the friendly **Deploy** button doesn't seem to work while connected to the web server remotely.
```bash
```command
tanzu management-cluster create --file /home/john/.config/tanzu/tkg/clusterconfigs/dr94t3m2on.yaml -v 6
```
In fact, I'm going to copy that file into my working directory and give it a more descriptive name so that I can re-use it in the future.
```bash
```command
cp ~/.config/tanzu/tkg/clusterconfigs/dr94t3m2on.yaml ~/projects/tanzu-homelab/tce-mgmt.yaml
```
Now I can run the install command:
```bash
```command
tanzu management-cluster create --file ./tce-mgmt.yaml -v 6
```
@ -246,8 +250,8 @@ Some addons might be getting installed! Check their status by running the follow
I can run that last command to go ahead and verify that the addon installation has completed:
```bash
kubectl get apps -A
```command-session
kubectl get apps -A
NAMESPACE NAME DESCRIPTION SINCE-DEPLOY AGE
tkg-system antrea Reconcile succeeded 26s 6m49s
tkg-system metrics-server Reconcile succeeded 36s 6m49s
@ -257,8 +261,8 @@ tkg-system vsphere-csi Reconcile succeeded 36s 6m50s
```
And I can use the Tanzu CLI to get some other details about the new management cluster:
```bash
tanzu management-cluster get tce-mgmt
```command-session
tanzu management-cluster get tce-mgmt
NAME NAMESPACE STATUS CONTROLPLANE WORKERS KUBERNETES ROLES
tce-mgmt tkg-system running 1/1 1/1 v1.21.2+vmware.1 management
@ -292,7 +296,7 @@ Excellent! Things are looking good so I can move on to create the cluster which
#### Workload cluster
I won't use the UI for this but will instead take a copy of my `tce-mgmt.yaml` file and adapt it to suit the workload needs (as described [here](https://tanzucommunityedition.io/docs/latest/workload-clusters/)).
```bash
```command
cp tce-mgmt.yaml tce-work.yaml
vi tce-work.yaml
```
@ -310,8 +314,8 @@ I *could* change a few others if I wanted to[^i_wont]:
After saving my changes to the `tce-work.yaml` file, I'm ready to deploy the cluster:
```bash
tanzu cluster create --file tce-work.yaml
```command-session
tanzu cluster create --file tce-work.yaml
Validating configuration...
Warning: Pinniped configuration not found. Skipping pinniped configuration in workload cluster. Please refer to the documentation to check if you can configure pinniped on workload cluster manually
Creating workload cluster 'tce-work'...
@ -324,8 +328,8 @@ Workload cluster 'tce-work' created
```
Right on! I'll use `tanzu cluster get` to check out the workload cluster:
```bash
tanzu cluster get tce-work
```command-session
tanzu cluster get tce-work
NAME NAMESPACE STATUS CONTROLPLANE WORKERS KUBERNETES ROLES
tce-work default running 1/1 1/1 v1.21.2+vmware.1 <none>
@ -356,8 +360,8 @@ Excellent, I've got a Tanzu management cluster and a Tanzu workload cluster. Wha
If I run `kubectl get nodes` right now, I'll only get information about the management cluster:
```bash
kubectl get nodes
```command-session
kubectl get nodes
NAME STATUS ROLES AGE VERSION
tce-mgmt-control-plane-xtdnx Ready control-plane,master 18h v1.21.2+vmware.1
tce-mgmt-md-0-745b858d44-4c9vv Ready <none> 17h v1.21.2+vmware.1
@ -366,16 +370,16 @@ tce-mgmt-md-0-745b858d44-4c9vv Ready <none> 17h v1.21.2+v
#### Setting the right context
To be able to deploy stuff to the workload cluster, I need to tell `kubectl` how to talk to it. And to do that, I'll first need to use `tanzu` to capture the cluster's kubeconfig:
```bash
tanzu cluster kubeconfig get tce-work --admin
```command-session
tanzu cluster kubeconfig get tce-work --admin
Credentials of cluster 'tce-work' have been saved
You can now access the cluster by running 'kubectl config use-context tce-work-admin@tce-work'
```
I can now run `kubectl config get-contexts` and see that I have access to contexts on both management and workload clusters:
```bash
kubectl config get-contexts
```command-session
kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* tce-mgmt-admin@tce-mgmt tce-mgmt tce-mgmt-admin
tce-work-admin@tce-work tce-work tce-work-admin
@ -383,10 +387,12 @@ CURRENT NAME CLUSTER AUTHINFO NAMESPACE
And I can switch to the `tce-work` cluster like so:
```bash
kubectl config use-context tce-work-admin@tce-work
```command-session
kubectl config use-context tce-work-admin@tce-work
Switched to context "tce-work-admin@tce-work".
kubectl get nodes
```
```command-session
kubectl get nodes
NAME STATUS ROLES AGE VERSION
tce-work-control-plane-8km9m Ready control-plane,master 17h v1.21.2+vmware.1
tce-work-md-0-687444b744-cck4x Ready <none> 17h v1.21.2+vmware.1
@ -399,11 +405,12 @@ Before I move on to deploying actually *useful* workloads, I'll start with deplo
I can check out the sample deployment that William put together [here](https://github.com/lamw/vmware-k8s-app-demo/blob/master/yelb.yaml), and then deploy it with:
```bash
kubectl create ns yelb
```command-session
kubectl create ns yelb
namespace/yelb created
kubectl apply -f https://raw.githubusercontent.com/lamw/vmware-k8s-app-demo/master/yelb.yaml
```
```command-session
kubectl apply -f https://raw.githubusercontent.com/lamw/vmware-k8s-app-demo/master/yelb.yaml
service/redis-server created
service/yelb-db created
service/yelb-appserver created
@ -412,8 +419,9 @@ deployment.apps/yelb-ui created
deployment.apps/redis-server created
deployment.apps/yelb-db created
deployment.apps/yelb-appserver created
kubectl -n yelb get pods
```
```command-session
kubectl -n yelb get pods
NAME READY STATUS RESTARTS AGE
redis-server-74556bbcb7-r9jqc 1/1 Running 0 10s
yelb-appserver-d584bb889-2jspg 1/1 Running 0 10s
@ -423,15 +431,15 @@ yelb-ui-8f54fd88c-k2dw9 1/1 Running 0 10s
Once the app is running, I can point my web browser at it to see it in action. But what IP do I use?
```bash
kubectl -n yelb get svc/yelb-ui
```command-session
kubectl -n yelb get svc/yelb-ui
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
yelb-ui NodePort 100.71.228.116 <none> 80:30001/TCP 84s
```
This demo is using a `NodePort` type service to expose the front end, which means it will be accessible on port `30001` on the node it's running on. I can find that IP by:
```bash
kubectl -n yelb describe pod $(kubectl -n yelb get pods | grep yelb-ui | awk '{print $1}') | grep "Node:"
```command-session
kubectl -n yelb describe pod $(kubectl -n yelb get pods | grep yelb-ui | awk '{print $1}') | grep "Node:"
Node: tce-work-md-0-687444b744-cck4x/192.168.1.145
```
@ -439,18 +447,19 @@ So I can point my browser at `http://192.168.1.145:30001` and see the demo:
![yelb demo page](yelb_nodeport_demo.png)
After marveling at my own magnificence[^magnificence] for a few minutes, I'm ready to move on to something more interesting - but first, I'll just delete the `yelb` namespace to clean up the work I just did:
```bash
kubectl delete ns yelb
```command-session
kubectl delete ns yelb
namespace "yelb" deleted
```
Now let's move on and try to deploy `yelb` behind a `LoadBalancer` service so it will get its own IP. William has a [deployment spec](https://github.com/lamw/vmware-k8s-app-demo/blob/master/yelb-lb.yaml) for that too.
```bash
kubectl create ns yelb
```command-session
kubectl create ns yelb
namespace/yelb created
kubectl apply -f https://raw.githubusercontent.com/lamw/vmware-k8s-app-demo/master/yelb-lb.yaml
```
```command-session
kubectl apply -f https://raw.githubusercontent.com/lamw/vmware-k8s-app-demo/master/yelb-lb.yaml
service/redis-server created
service/yelb-db created
service/yelb-appserver created
@ -459,8 +468,9 @@ deployment.apps/yelb-ui created
deployment.apps/redis-server created
deployment.apps/yelb-db created
deployment.apps/yelb-appserver created
kubectl -n yelb get pods
```
```command-session
kubectl -n yelb get pods
NAME READY STATUS RESTARTS AGE
redis-server-74556bbcb7-q6l62 1/1 Running 0 7s
yelb-appserver-d584bb889-p5qgd 1/1 Running 0 7s
@ -469,8 +479,8 @@ yelb-ui-8f54fd88c-pm9qw 1/1 Running 0 7s
```
And I can take a look at that service...
```bash
kubectl -n yelb get svc/yelb-ui
```command-session
kubectl -n yelb get svc/yelb-ui
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
yelb-ui LoadBalancer 100.67.177.185 <pending> 80:32339/TCP 15s
```
@ -482,20 +492,24 @@ Wait a minute. That external IP is *still* `<pending>`. What gives? Oh yeah I ne
#### Deploying `kube-vip` as a load balancer
Fortunately, William Lam [wrote up some tips](https://williamlam.com/2021/10/quick-tip-install-kube-vip-as-service-load-balancer-with-tanzu-community-edition-tce.html) for handling that too. It's [based on work by Scott Rosenberg](https://github.com/vrabbi/tkgm-customizations). The quick-and-dirty steps needed to make this work are:
```bash
```command
git clone https://github.com/vrabbi/tkgm-customizations.git
cd tkgm-customizations/carvel-packages/kube-vip-package
kubectl apply -n tanzu-package-repo-global -f metadata.yml
kubectl apply -n tanzu-package-repo-global -f package.yaml
```
```command-session
cat << EOF > values.yaml
vip_range: 192.168.1.64-192.168.1.80
EOF
```
```command
tanzu package install kubevip -p kubevip.terasky.com -v 0.3.9 -f values.yaml
```
Now I can check out the `yelb-ui` service again:
```bash
kubectl -n yelb get svc/yelb-ui
```command-session
kubectl -n yelb get svc/yelb-ui
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
yelb-ui LoadBalancer 100.67.177.185 192.168.1.65 80:32339/TCP 4h35m
```
@ -504,8 +518,8 @@ And it's got an IP! I can point my browser to `http://192.168.1.65` now and see:
![Successful LoadBalancer test!](yelb_loadbalancer_demo.png)
I'll keep the `kube-vip` load balancer since it'll come in handy, but I have no further use for `yelb`:
```bash
kubectl delete ns yelb
```command-session
kubectl delete ns yelb
namespace "yelb" deleted
```
@ -519,7 +533,7 @@ Then I create a new vSphere Storage Policy called `tkg-storage-policy` which sta
![My Tanzu storage policy](storage_policy.png)
So that's the vSphere side of things sorted; now to map that back to the Kubernetes side. For that, I'll need to define a Storage Class tied to the vSphere Storage profile so I drop these details into a new file called `vsphere-sc.yaml`:
```yaml
```yaml {linenos=true}
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
@ -530,13 +544,13 @@ parameters:
```
And then apply it with :
```bash
kubectl apply -f vsphere-sc.yaml
```command-session
kubectl apply -f vsphere-sc.yaml
storageclass.storage.k8s.io/vsphere created
```
I can test that I can create a Persistent Volume Claim against the new `vsphere` Storage Class by putting this in a new file called `vsphere-pvc.yaml`:
```yaml
```yaml {linenos=true}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
@ -553,14 +567,14 @@ spec:
```
And applying it:
```bash
kubectl apply -f demo-pvc.yaml
```command-session
kubectl apply -f demo-pvc.yaml
persistentvolumeclaim/vsphere-demo-1 created
```
I can see the new claim, and confirm that its status is `Bound`:
```bash
kubectl get pvc
```command-session
kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
vsphere-demo-1 Bound pvc-36cc7c01-a1b3-4c1c-ba0d-dff3fd47f93b 5Gi RWO vsphere 4m25s
```
@ -569,8 +583,8 @@ And for bonus points, I can see that the container volume was created on the vSp
![Container Volume in vSphere](container_volume_in_vsphere.png)
So that's storage sorted. I'll clean up my test volume before moving on:
```bash
kubectl delete -f demo-pvc.yaml
```command-session
kubectl delete -f demo-pvc.yaml
persistentvolumeclaim "vsphere-demo-1" deleted
```
@ -583,8 +597,8 @@ So I set to work exploring some containerization options, and I found [phpipam-d
To start, I'll create a new namespace to keep things tidy:
```bash
kubectl create ns ipam
```command-session
kubectl create ns ipam
namespace/ipam created
```
@ -600,7 +614,7 @@ I'll use each container's original `docker-compose` configuration and adapt that
#### phpipam-db
The phpIPAM database will live inside a MariaDB container. Here's the relevant bit from `docker-compose`:
```yaml
```yaml {linenos=true}
services:
phpipam-db:
image: mariadb:latest
@ -615,7 +629,7 @@ services:
So it will need a `Service` exposing the container's port `3306` so that other pods can connect to the database. For my immediate demo, using `type: ClusterIP` will be sufficient since all the connections will be coming from within the cluster. When I do this for real, it will need to be `type: LoadBalancer` so that the agent running on a different cluster can connect. And it will need a `PersistentVolumeClaim` so it can store the database data at `/var/lib/mysql`. It will also get passed an environment variable to set the initial `root` password on the database instance (which will be used later during the phpIPAM install to create the initial `phpipam` database).
It might look like this on the Kubernetes side:
```yaml
```yaml {linenos=true}
# phpipam-db.yaml
apiVersion: v1
kind: Service
@ -686,7 +700,7 @@ Moving on:
#### phpipam-www
This is the `docker-compose` excerpt for the web component:
```yaml
```yaml {linenos=true}
services:
phpipam-web:
image: phpipam/phpipam-www:1.5x
@ -704,7 +718,7 @@ services:
Based on that, I can see that my `phpipam-www` pod will need a container running the `phpipam/phpipam-www:1.5x` image, a `Service` of type `LoadBalancer` to expose the web interface on port `80`, a `PersistentVolumeClaim` mounted to `/phpipam/css/images/logo`, and some environment variables passed in to configure the thing. Note that the `IPAM_DATABASE_PASS` variable defines the password used for the `phpipam` user on the database (not the `root` user referenced earlier), and the `IPAM_DATABASE_WEBHOST=%` variable will define which hosts that `phpipam` database user will be able to connect from; setting it to `%` will make sure that my remote agent can connect to the database even if I don't know where the agent will be running.
Here's how I'd adapt that into a structure that Kubernetes will understand:
```yaml
```yaml {linenos=true}
# phpipam-www.yaml
apiVersion: v1
kind: Service
@ -778,7 +792,7 @@ spec:
#### phpipam-cron
This container has a pretty simple configuration in `docker-compose`:
```yaml
```yaml {linenos=true}
services:
phpipam-cron:
image: phpipam/phpipam-cron:1.5x
@ -791,7 +805,7 @@ services:
No exposed ports, no need for persistence - just a base image and a few variables to tell it how to connect to the database and how often to run the scans:
```yaml
```yaml {linenos=true}
# phpipam-cron.yaml
apiVersion: apps/v1
kind: Deployment
@ -824,7 +838,7 @@ spec:
#### phpipam-agent
And finally, my remote scan agent. Here's the `docker-compose`:
```yaml
```yaml {linenos=true}
services:
phpipam-agent:
container_name: phpipam-agent
@ -846,7 +860,7 @@ services:
It's got a few additional variables to make it extra-configurable, but still no need for persistence or network exposure. That `IPAM_AGENT_KEY` variable will need to get populated with the appropriate key generated within the new phpIPAM deployment, but we can deal with that later.
For now, here's how I'd tell Kubernetes about it:
```yaml
```yaml {linenos=true}
# phpipam-agent.yaml
apiVersion: apps/v1
kind: Deployment
@ -891,31 +905,31 @@ spec:
#### Deployment and configuration of phpIPAM
I can now go ahead and start deploying these containers, starting with the database one (upon which all the others rely):
```bash
kubectl apply -f phpipam-db.yaml
```command-session
kubectl apply -f phpipam-db.yaml
service/phpipam-db created
persistentvolumeclaim/phpipam-db-pvc created
deployment.apps/phpipam-db created
```
And the web server:
```bash
kubectl apply -f phpipam-www.yaml
```command-session
kubectl apply -f phpipam-www.yaml
service/phpipam-www created
persistentvolumeclaim/phpipam-www-pvc created
deployment.apps/phpipam-www created
```
And the cron runner:
```bash
kubectl apply -f phpipam-cron.yaml
```command-session
kubectl apply -f phpipam-cron.yaml
deployment.apps/phpipam-cron created
```
I'll hold off on the agent container for now since I'll need to adjust the configuration slightly after getting phpIPAM set up, but I will go ahead and check out my work so far:
```bash
kubectl -n ipam get all
```command-session
kubectl -n ipam get all
NAME READY STATUS RESTARTS AGE
pod/phpipam-cron-6c994897c4-6rsnp 1/1 Running 0 4m30s
pod/phpipam-db-5f4c47d4b9-sb5bd 1/1 Running 0 16m
@ -963,8 +977,8 @@ I'll copy the agent code and plug it into my `phpipam-agent.yaml` file:
```
And then deploy that:
```bash
kubectl apply -f phpipam-agent.yaml
```command-session
kubectl apply -f phpipam-agent.yaml
deployment.apps/phpipam-agent created
```

View file

@ -41,19 +41,19 @@ The host will need to be in maintenance mode in order to apply the upgrade, and
### 3. Place host in maintenance mode
I can do that by SSH'ing to the host and running:
```shell
```commandroot
esxcli system maintenanceMode set -e true
```
And can confirm that it happened with:
```shell
```commandroot-session
esxcli system maintenanceMode get
Enabled
```
### 4. Identify the profile name
Because this is an *upgrade* from one major release to another rather than a simple *update*, I need to know the name of the profile which will be applied. I can identify that with:
```shell
```commandroot-session
esxcli software sources profile list -d /vmfs/volumes/nuchost-local/_Patches/VMware-ESXi-8.0-20513097-depot.zip
Name Vendor Acceptance Level Creation Time Modification Time
---------------------------- ------------ ---------------- ------------------- -----------------
@ -68,13 +68,12 @@ In this case, I'll use the `ESXi-8.0.0-20513097-standard` profile.
### 5. Install the upgrade
Now for the moment of truth:
```shell
esxcli software profile update -d /vmfs/volumes/nuchost-local/_Patches/VMware-ESXi-8.0-2051309
7-depot.zip -p ESXi-8.0.0-20513097-standard
```commandroot
esxcli software profile update -d /vmfs/volumes/nuchost-local/_Patches/VMware-ESXi-8.0-20513097-depot.zip -p ESXi-8.0.0-20513097-standard
```
When it finishes (successfully), it leaves a little message that the update won't be complete until the host is rebooted, so I'll go ahead and do that as well:
```shell
```commandroot
reboot
```
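Once the host comes back up, I just need to remember to take it back out of maintenance mode:
```commandroot
esxcli system maintenanceMode set -e false
```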

View file

@ -16,7 +16,7 @@ Unfortunately, I found that this approach can take a long time to run and often
After further experimentation, I settled on using PowerShell to create a one-time scheduled task that would run the updates and reboot, if necessary. I also wanted the task to automatically delete itself after running to avoid cluttering up the task scheduler library - and that last item had me quite stumped until I found [this blog post with the solution](https://iamsupergeek.com/self-deleting-scheduled-task-via-powershell/).
So here's what I put together:
```powershell
```powershell {linenos=true}
# This can be easily pasted into a remote PowerShell session to automatically install any available updates and reboot.
# It creates a scheduled task to start the update process after a one-minute delay so that you don't have to maintain
# the session during the process (or have the session timeout), and it also sets the task to automatically delete itself 2 hours later.

View file

@ -54,29 +54,31 @@ This needs to be run directly on the vCenter appliance so you'll need to copy th
Once that's done, just execute this on your local workstation to copy the `.zip` from your `~/Downloads/` folder to the VCSA's `/tmp/` directory:
```shell
```command
scp ~/Downloads/vdt-v1.1.4.zip root@vcsa.lab.bowdre.net:/tmp/
```
### 3. Extract
Now pop back over to an SSH session to the VCSA, extract the `.zip`, and get ready for action:
```shell
root@VCSA [ ~ ]# cd /tmp
root@VCSA [ /tmp ]# unzip vdt-v1.1.4.zip
```commandroot
cd /tmp
```
```commandroot-session
unzip vdt-v1.1.4.zip
Archive: vdt-v1.1.4.zip
3557676756cffd658fd61aab5a6673269104e83c
creating: vdt-v1.1.4/
...
inflating: vdt-v1.1.4/vdt.py
root@VCSA [ /tmp ]# cd vdt-v1.1.4/
```
```commandroot
cd vdt-v1.1.4/
```
### 4. Execute
Now for the fun part:
```shell
root@VCSA [ /tmp/vdt-v1.1.4 ]# python vdt.py
```commandroot-session
python vdt.py
_________________________
RUNNING PULSE CHECK
@ -165,7 +167,7 @@ at your discretion to reduce the size of log bundles.
```
Those core files can be useful for investigating specific issues, but holding on to them long-term doesn't really do much good. _After checking to be sure I don't need them_, I can get rid of them all pretty easily like so:
```shell
```commandroot
find /storage/core/ -name "core.*" -type f -mtime +3 -exec rm {} \;
```
@ -185,7 +187,7 @@ Oh yeah, let's turn that back on with `systemctl start ntpd`.
```
That's a good thing to know. I'll [take care of that](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vcenter.configuration.doc/GUID-48BAF973-4FD3-4FF3-B1B6-5F7286C9B59A.html) while I'm thinking about it.
```shell
```commandroot
chage -M -1 -E -1 root
```
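A quick `chage -l root` afterwards will confirm that the password expiration really is disabled:
```commandroot
chage -l root
```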

View file

@ -28,12 +28,12 @@ I found that the quite-popular [Minimal Mistakes](https://mademistakes.com/work/
A quick `git clone` operation was sufficient to create a local copy of my new site in my Lenovo Chromebook Duet's [Linux environment](/setting-up-linux-on-a-new-lenovo-chromebook-duet-bonus-arm64-complications). That lets me easily create and edit Markdown posts or configuration files with VS Code, commit them to the local copy of the repo, and then push them back to GitHub when I'm ready to publish the changes.
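That initial clone is nothing fancy - something like this, assuming the standard GitHub Pages repo naming:
```command
git clone https://github.com/jbowdre/jbowdre.github.io.git
```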
In order to view the local changes, I needed to install Jekyll locally as well. I started by installing Ruby and other prerequisites:
```shell
```command
sudo apt-get install ruby-full build-essential zlib1g-dev
```
I added the following to my `~/.zshrc` file so that the gems would be installed under my home directory rather than somewhere more privileged:
```shell
```command
export GEM_HOME="$HOME/gems"
export PATH="$HOME/gems/bin:$PATH"
```
@ -41,15 +41,15 @@ export PATH="$HOME/gems/bin:$PATH"
And then ran `source ~/.zshrc` so the change would take immediate effect.
I could then install Jekyll:
```shell
```command
gem install jekyll bundler
```
I then `cd`ed to the local repo and ran `bundle install` to also load up the components specified in the repo's `Gemfile`.
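Spelled out with my local paths, that's just:
```command
cd ~/projects/jbowdre.github.io
bundle install
```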
And, finally, I can run this to start up the local Jekyll server instance:
```shell
bundle exec jekyll serve -l --drafts
```command-session
bundle exec jekyll serve -l --drafts
Configuration file: /home/jbowdre/projects/jbowdre.github.io/_config.yml
Source: /home/jbowdre/projects/jbowdre.github.io
Destination: /home/jbowdre/projects/jbowdre.github.io/_site

View file

@ -12,7 +12,7 @@ tags:
- meta
---
```shell
```command
cp -a virtuallypotato.com runtimeterror.dev
rm -rf virtuallypotato.com
ln -s virtuallypotato.com runtimeterror.dev

View file

@ -94,15 +94,14 @@ Wouldn't it be great if the VMs that are going to be deployed on those `1610`, `
After logging in to the VM, I entered the router's configuration mode:
```shell
vyos@vyos:~$ configure
```command-session
configure
[edit]
vyos@vyos#
```
I then started with setting up the interfaces - `eth0` for the `192.168.1.0/24` network, `eth1` on the trunked portgroup, and a number of VIFs on `eth1` to handle the individual VLANs I'm interested in using.
```shell
```commandroot
set interfaces ethernet eth0 address '192.168.1.8/24'
set interfaces ethernet eth0 description 'Outside'
set interfaces ethernet eth1 mtu '9000'
@ -123,7 +122,7 @@ set interfaces ethernet eth1 vif 1699 mtu '9000'
I also set up NAT for the networks that should be routable:
```shell
```commandroot
set nat source rule 10 outbound-interface 'eth0'
set nat source rule 10 source address '172.16.10.0/24'
set nat source rule 10 translation address 'masquerade'
@ -140,7 +139,7 @@ set protocols static route 0.0.0.0/0 next-hop 192.168.1.1
And I configured DNS forwarding:
```shell
```commandroot
set service dns forwarding allow-from '0.0.0.0/0'
set service dns forwarding domain 10.16.172.in-addr.arpa. server '192.168.1.5'
set service dns forwarding domain 20.16.172.in-addr.arpa. server '192.168.1.5'
@ -154,7 +153,7 @@ set service dns forwarding name-server '192.168.1.1'
Finally, I also configured VyOS's DHCP server so that I won't have to statically configure the networking for VMs deployed from vRA:
```shell
```commandroot
set service dhcp-server shared-network-name SCOPE_10_MGMT authoritative
set service dhcp-server shared-network-name SCOPE_10_MGMT subnet 172.16.10.0/24 default-router '172.16.10.1'
set service dhcp-server shared-network-name SCOPE_10_MGMT subnet 172.16.10.0/24 dns-server '192.168.1.5'
@ -212,8 +211,8 @@ I migrated the physical NICs and `vmk0` to the new dvSwitch, and then created ne
I then ssh'd into the hosts and used `vmkping` to make sure they could talk to each other over these interfaces. I changed the vMotion interface to use the vMotion TCP/IP stack so needed to append the `-S vmotion` flag to the command:
```shell
[root@esxi01:~] vmkping -I vmk1 172.16.98.22
```commandroot-session
vmkping -I vmk1 172.16.98.22
PING 172.16.98.22 (172.16.98.22): 56 data bytes
64 bytes from 172.16.98.22: icmp_seq=0 ttl=64 time=0.243 ms
64 bytes from 172.16.98.22: icmp_seq=1 ttl=64 time=0.260 ms
@ -222,8 +221,9 @@ PING 172.16.98.22 (172.16.98.22): 56 data bytes
--- 172.16.98.22 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.243/0.255/0.262 ms
[root@esxi01:~] vmkping -I vmk2 172.16.99.22 -S vmotion
```
```commandroot-session
vmkping -I vmk2 172.16.99.22 -S vmotion
PING 172.16.99.22 (172.16.99.22): 56 data bytes
64 bytes from 172.16.99.22: icmp_seq=0 ttl=64 time=0.202 ms
64 bytes from 172.16.99.22: icmp_seq=1 ttl=64 time=0.312 ms

View file

@ -25,7 +25,7 @@ So this will generate a name that looks something like `[user]_[catalog_item]_[s
That does mean that I'll need to add another vRO call, but I can set this up so that it only gets triggered once, when the form loads, instead of refreshing each time the inputs change.
So I hop over to vRO and create a new action, which I call `getTimestamp`. It doesn't require any inputs, and returns a single string. Here's the code:
```js
```js {linenos=true}
// JavaScript: getTimestamp action
// Inputs: None
// Returns: result (String)

View file

@ -58,7 +58,7 @@ I also went ahead and specified that the action will return a String.
And now for the code. I really just want to mash all those variables together into a long string, and I'll also add a timestamp to make sure each deployment name is truly unique.
```js
```js {linenos=true}
// JavaScript: createDeploymentName
// Inputs: catalogItemName (String), requestedByName (String), siteCode (String),
// envCode (String), functionCode (String), appCode (String)
@ -126,7 +126,7 @@ This gets filed under the existing `CustomProvisioning` folder, and I name it `n
I created a new action named (appropriately) `getNetworksForSite`. This will accept `siteCode (String)` as its input from the Service Broker request form, and will return an array of strings containing the available networks.
![getNetworksForSite action](IdrT-Un8H1.png)
```js
```js {linenos=true}
// JavaScript: getNetworksForSite
// Inputs: siteCode (String)
// Returns: site.value (Array/String)
@ -163,7 +163,7 @@ inputs:
and update the resource configuration for the network entity to constrain it based on `input.network` instead of `input.site` as before:
```yaml
```yaml {linenos=true}
resources:
Cloud_vSphere_Machine_1:
type: Cloud.vSphere.Machine

View file

@ -57,7 +57,7 @@ Now it's time to leave the Infrastructure tab and visit the Design one, where I'
![My first Cloud Template!](RtMljqM9x.png)
VMware's got a [pretty great document](https://docs.vmware.com/en/vRealize-Automation/8.3/Using-and-Managing-Cloud-Assembly/GUID-6BA1DA96-5C20-44BF-9C81-F8132B9B4872.html#list-of-input-properties-2) describing the syntax for these input properties, plus a lot of it is kind of self-explanatory. Let's step through this real quick:
```yaml
```yaml {linenos=true}
formatVersion: 1
inputs:
# Image Mapping
@ -73,7 +73,7 @@ inputs:
The first input is going to ask the user to select the desired Operating System for this deployment. The `oneOf` type will be presented as a dropdown (with only one option in this case, but I'll leave it this way for future flexibility); the user will see the friendly "Windows Server 2019" `title` which is tied to the `ws2019` `const` value. For now, I'll also set the `default` value of the field so I don't have to actually click the dropdown each time I test the deployment.
```yaml
```yaml {linenos=true}
# Flavor Mapping
size:
title: Resource Size
@ -92,7 +92,7 @@ Now I'm asking the user to pick the t-shirt size of the VM. These will correspon
The `resources` section is where the data from the inputs gets applied to the deployment:
```yaml
```yaml {linenos=true}
resources:
Cloud_vSphere_Machine_1:
type: Cloud.vSphere.Machine
@ -112,7 +112,7 @@ So I'm connecting the selected `input.image` to the Image Mapping configured in
All together now:
```yaml
```yaml {linenos=true}
formatVersion: 1
inputs:
# Image Mapping
@ -188,7 +188,7 @@ I'll also use the `net:bow` and `net:dre` tags to logically divide up the networ
I can now add an input to the Cloud Template so the user can pick which site they need to deploy to:
```yaml
```yaml {linenos=true}
inputs:
# Datacenter location
site:
@ -204,7 +204,7 @@ I'm using the `enum` option now instead of `oneOf` since the site names shouldn'
And then I'll add some `constraints` to the `resources` section, making use of the `to_lower` function from the [cloud template expression syntax](https://docs.vmware.com/en/vRealize-Automation/8.3/Using-and-Managing-Cloud-Assembly/GUID-12F0BC64-6391-4E5F-AA48-C5959024F3EB.html) to automatically convert the selected site name from all-caps to lowercase so it matches the appropriate tag:
```yaml
```yaml {linenos=true}
resources:
Cloud_vSphere_Machine_1:
type: Cloud.vSphere.Machine

View file

@ -40,7 +40,7 @@ Since I try to keep things modular, I'm going to write a new vRO action within t
It's basically going to loop through the Active Directory hosts defined in vRO and search each for a matching computer name. Here's the full code:
```js
```js {linenos=true}
// JavaScript: checkForAdConflict action
// Inputs: computerName (String)
// Outputs: (Boolean)
@ -65,7 +65,7 @@ Now I can pop back over to my massive `Generate unique hostname` workflow and dr
I'm using this as a scriptable task so that I can do a little bit of processing before I call the action I created earlier - namely, if `conflict (Boolean)` was already set, the task should skip any further processing. That does mean that I'll need to call the action by both its module and name using `System.getModule("net.bowdre.utility").checkForAdConflict(candidateVmName)`. So here's the full script:
```js
```js {linenos=true}
// JavaScript: check for AD conflict task
// Inputs: candidateVmName (String), conflict (Boolean)
// Outputs: conflict (Boolean)
@ -103,22 +103,22 @@ Luckily, vRO does provide a way to import scripts bundled with their required mo
I start by creating a folder to store the script and needed module, and then I create the required `handler.ps1` file.
```shell
mkdir checkDnsConflicts
cd checkDnsConflicts
touch handler.ps1
```command
mkdir checkDnsConflicts
cd checkDnsConflicts
touch handler.ps1
```
I then create a `Modules` folder and install the DnsClient-PS module:
```shell
mkdir Modules
pwsh -c "Save-Module -Name DnsClient-PS -Path ./Modules/ -Repository PSGallery"
```command
mkdir Modules
pwsh -c "Save-Module -Name DnsClient-PS -Path ./Modules/ -Repository PSGallery"
```
And then it's time to write the PowerShell script in `handler.ps1`:
```powershell
```powershell {linenos=true}
# PowerShell: checkForDnsConflict script
# Inputs: $inputs.hostname (String), $inputs.domain (String)
# Outputs: $queryresult (String)
@ -147,8 +147,8 @@ function handler {
Now to package it up in a `.zip` which I can then import into vRO:
```shell
zip -r --exclude=\*.zip -X checkDnsConflicts.zip .
```command-session
zip -r --exclude=\*.zip -X checkDnsConflicts.zip .
adding: Modules/ (stored 0%)
adding: Modules/DnsClient-PS/ (stored 0%)
adding: Modules/DnsClient-PS/1.0.0/ (stored 0%)
@ -170,7 +170,9 @@ Now to package it up in a `.zip` which I can then import into vRO:
adding: Modules/DnsClient-PS/1.0.0/DnsClient-PS.Format.ps1xml (deflated 80%)
adding: Modules/DnsClient-PS/1.0.0/DnsClient-PS.psd1 (deflated 59%)
adding: handler.ps1 (deflated 49%)
ls
```
```command-session
ls
checkDnsConflicts.zip handler.ps1 Modules
```
@ -188,7 +190,7 @@ Just like with the `check for AD conflict` action, I'll add this onto the workfl
_[Update] The below script has been altered to drop the unneeded call to my homemade `checkForDnsConflict` action and instead use the built-in `System.resolveHostName()`. Thanks @powertim!_
```js
```js {linenos=true}
// JavaScript: check for DNS conflict
// Inputs: candidateVmName (String), conflict (Boolean), requestProperties (Properties)
// Outputs: conflict (Boolean)

View file

@ -37,7 +37,7 @@ I'll start by adding those fields as inputs on my cloud template.
I already have a `site` input at the top of the template, used for selecting the deployment location. I'll leave that there:
```yaml
```yaml {linenos=true}
inputs:
site:
type: string
@ -49,7 +49,7 @@ inputs:
I'll add the rest of the naming components below the prompts for image selection and size, starting with a dropdown of environments to pick from:
```yaml
```yaml {linenos=true}
environment:
type: string
title: Environment
@ -62,7 +62,7 @@ I'll add the rest of the naming components below the prompts for image selection
And a dropdown for those function options:
```yaml
```yaml {linenos=true}
function:
type: string
title: Function Code
@ -82,7 +82,7 @@ And a dropdown for those function options:
And finally a text entry field for the application descriptor. Note that this one includes the `minLength` and `maxLength` constraints to enforce the three-character format.
```yaml
```yaml {linenos=true}
app:
type: string
title: Application Code
@ -95,7 +95,7 @@ And finally a text entry field for the application descriptor. Note that this on
I then need to map these inputs to the resource entity at the bottom of the template so that they can be passed to vRO as custom properties. All of these are direct mappings except for `environment` since I only want the first letter. I use the `substring()` function to achieve that, but wrap it in a conditional so that it won't implode if the environment hasn't been picked yet. I'm also going to add in a `dnsDomain` property that will be useful later when I need to query for DNS conflicts.
```yaml
```yaml {linenos=true}
resources:
Cloud_vSphere_Machine_1:
type: Cloud.vSphere.Machine
@ -111,7 +111,7 @@ resources:
So here's the complete template:
```yaml
```yaml {linenos=true}
formatVersion: 1
inputs:
site:
@ -228,7 +228,7 @@ The first thing I'll want this workflow to do (particularly for testing) is to t
This action has a single input, a `Properties` object named `payload`. (By the way, vRO is pretty particular about variable typing so going forward I'll reference variables as `variableName (type)`.) Here's the JavaScript that will basically loop through each element and write the contents to the vRO debug log:
```js
```js {linenos=true}
// JavaScript: logPayloadProperties
// Inputs: payload (Properties)
// Outputs: none
@ -291,7 +291,7 @@ Anyway, I drop a Scriptable Task item onto the workflow canvas to handle parsing
The script for this is pretty straight-forward:
```js
```js {linenos=true}
// JavaScript: parse payload
// Inputs: inputProperties (Properties)
// Outputs: requestProperties (Properties), originalNames (Array/string)
@ -333,7 +333,7 @@ Select **Output** at the top of the *New Variable* dialog and the complete the f
And here's the script for that task:
```js
```js {linenos=true}
// JavaScript: Apply new names
// Inputs: inputProperties (Properties), newNames (Array/string)
// Outputs: resourceNames (Array/string)
@ -363,7 +363,7 @@ Okay, on to the schema. This workflow may take a little while to execute, and it
The script is very short:
```js
```js {linenos=true}
// JavaScript: create lock
// Inputs: lockOwner (String), lockId (String)
// Outputs: none
@ -377,7 +377,7 @@ We're getting to the meat of the operation now - another scriptable task named `
![Task: generate hostnameBase](XATryy20y.png)
```js
```js {linenos=true}
// JavaScript: generate hostnameBase
// Inputs: nameFormat (String), requestProperties (Properties), baseFormat (String)
// Outputs: hostnameBase (String), digitCount (Number), hostnameSeq (Number)
@ -415,7 +415,7 @@ I've only got the one vCenter in my lab. At work, I've got multiple vCenters so
Anyway, back to my "Generate unique hostname" workflow, where I'll add another scriptable task to prepare the vCenter SDK connection. This one doesn't require any inputs, but will output an array of `VC:SdkConnection` objects:
![Task: prepare vCenter SDK connection](ByIWO66PC.png)
```js
```js {linenos=true}
// JavaScript: prepare vCenter SDK connection
// Inputs: none
// Outputs: sdkConnections (Array/VC:SdkConnection)
@ -432,7 +432,7 @@ Next, I'm going to drop another ForEach element onto the canvas. For each vCente
That `vmsByHost (Array/array)` object contains any and all VMs which match `hostnameBase (String)`, but they're broken down by the host they're running on. So I use a scriptable task to convert that array-of-arrays into a new array-of-strings containing just the VM names.
![Task: unpack results for all hosts](gIEFRnilq.png)
```js
```js {linenos=true}
// JavaScript: unpack results for all hosts
// Inputs: vmsByHost (Array/Array)
// Outputs: vmNames (Array/string)
@ -453,7 +453,7 @@ vmNames = vms.map(function(i) {return (i.displayName).toUpperCase()})
This scriptable task will check the `computerNames` configuration element we created earlier to see if we've already named a VM starting with `hostnameBase (String)`. If such a name exists, we'll increment the number at the end by one, and return that as a new `hostnameSeq (Number)` variable; if it's the first of its kind, `hostnameSeq (Number)` will be set to `1`. And then we'll combine `hostnameBase (String)` and `hostnameSeq (Number)` to create the new `candidateVmName (String)`. If things don't work out, this script will throw `errMsg (String)` so I need to add that as an output exception binding as well.
![Task: generate hostnameSeq & candidateVmName](fWlSrD56N.png)
```js
```js {linenos=true}
// JavaScript: generate hostnameSeq & candidateVmName
// Inputs: hostnameBase (String), digitCount (Number)
// Outputs: hostnameSeq (Number), computerNames (ConfigurationElement), candidateVmName (String)
@ -500,7 +500,7 @@ System.log("Proposed VM name: " + candidateVmName)
Now that I know what I'd like to try to name this new VM, it's time to start checking for any potential conflicts. So this task will compare my `candidateVmName (String)` against the existing `vmNames (Array/string)` to see if there are any collisions. If there's a match, it will set a new variable called `conflict (Boolean)` to `true` and also report the issue through the `errMsg (String)` output exception binding. Otherwise it will move on to the next check.
![Task: check for VM name conflicts](qmHszypww.png)
```js
```js {linenos=true}
// JavaScript: check for VM name conflicts
// Inputs: candidateVmName (String), vmNames (Array/string)
// Outputs: conflict (Boolean)
@ -527,7 +527,7 @@ I can then drag the new element away from the "everything is fine" flow, and con
All this task really does is clear the `conflict (Boolean)` flag so that's the only output.
```js
```js {linenos=true}
// JavaScript: conflict resolution
// Inputs: none
// Outputs: conflict (Boolean)
@ -542,7 +542,7 @@ So if `check VM name conflict` encounters a collision with an existing VM name i
Assuming that everything has gone according to plan and the workflow has avoided any naming conflicts, it will need to return `nextVmName (String)` back to the `VM Provisioning` workflow. That's as simple as setting it to the last value of `candidateVmName (String)`:
![Task: return nextVmName](5QFTPHp5H.png)
```js
```js {linenos=true}
// JavaScript: return nextVmName
// Inputs: candidateVmName (String)
// Outputs: nextVmName (String)
@ -555,7 +555,7 @@ System.log(" ***** Selecting [" + nextVmName + "] as the next VM name ***** ")
And we should also remove that lock that we created at the start of this workflow.
![Task: remove lock](BhBnBh8VB.png)
```js
```js {linenos=true}
// JavaScript remove lock
// Inputs: lockId (String), lockOwner (String)
// Outputs: none