mirror of https://github.com/jbowdre/runtimeterror.git
update posts for torchlight
This commit is contained in:
parent 4567ca7101
commit 5e383bffcd
32 changed files with 770 additions and 693 deletions
|
@ -52,7 +52,8 @@ Now to reference these specs from a cloud template...
|
||||||
### Cloud template
|
### Cloud template
|
||||||
I want to make sure that users requesting a deployment are able to pick whether or not a system should be joined to the domain, so I'm going to add that as an input option on the template:
|
I want to make sure that users requesting a deployment are able to pick whether or not a system should be joined to the domain, so I'm going to add that as an input option on the template:
|
||||||
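For reference, a boolean input like that only needs a type, a friendly title, and (optionally) a default; the title and default shown below are placeholder choices rather than the post's exact values:

```yaml
inputs:
  adJoin:
    type: boolean
    title: Join to AD domain   # rendered as a checkbox on the request form
    default: true
```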
|
|
||||||
```yaml {linenos=true}
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
inputs:
|
inputs:
|
||||||
[...]
|
[...]
|
||||||
adJoin:
|
adJoin:
|
||||||
|
@ -66,7 +67,8 @@ This new `adJoin` input is a boolean so it will appear on the request form as a
|
||||||
|
|
||||||
In the `resources` section of the template, I'll set a new property called `ignoreActiveDirectory` to be the inverse of the `adJoin` input; that will tell the AD integration not to do anything if the box to join the VM to the domain is unchecked. I'll also use `activeDirectory: relativeDN` to insert the appropriate site code into the DN where the computer object will be created. And, finally, I'll reference the `customizationSpec` and use [cloud template conditional syntax](https://docs.vmware.com/en/vRealize-Automation/8.4/Using-and-Managing-Cloud-Assembly/GUID-12F0BC64-6391-4E5F-AA48-C5959024F3EB.html#conditions-4) to apply the correct spec based on whether it's a domain or workgroup deployment. (These conditionals take the pattern `'${conditional-expression ? true-value : false-value}'`).
|
In the `resources` section of the template, I'll set a new property called `ignoreActiveDirectory` to be the inverse of the `adJoin` input; that will tell the AD integration not to do anything if the box to join the VM to the domain is unchecked. I'll also use `activeDirectory: relativeDN` to insert the appropriate site code into the DN where the computer object will be created. And, finally, I'll reference the `customizationSpec` and use [cloud template conditional syntax](https://docs.vmware.com/en/vRealize-Automation/8.4/Using-and-Managing-Cloud-Assembly/GUID-12F0BC64-6391-4E5F-AA48-C5959024F3EB.html#conditions-4) to apply the correct spec based on whether it's a domain or workgroup deployment. (These conditionals take the pattern `'${conditional-expression ? true-value : false-value}'`).
|
||||||
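As a rough illustration of how those pieces fit together on the machine resource, the relevant properties could look something like the sketch below; the `adJoin` and `site` inputs come from the template, but the spec names and OU path here are stand-ins rather than the post's actual values:

```yaml
resources:
  Cloud_vSphere_Machine_1:
    type: Cloud.vSphere.Machine
    properties:
      # skip the AD integration entirely when the domain-join box is unchecked
      ignoreActiveDirectory: '${!input.adJoin}'
      # drop the computer object into a site-specific OU (path is illustrative)
      activeDirectory:
        relativeDN: '${"OU=Servers,OU=" + input.site}'
      # pick the matching customization spec for domain vs. workgroup builds
      customizationSpec: '${input.adJoin ? "vra-domain-spec" : "vra-workgroup-spec"}'
```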
|
|
||||||
```yaml {linenos=true}
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
resources:
|
resources:
|
||||||
Cloud_vSphere_Machine_1:
|
Cloud_vSphere_Machine_1:
|
||||||
type: Cloud.vSphere.Machine
|
type: Cloud.vSphere.Machine
|
||||||
|
@ -81,7 +83,8 @@ resources:
|
||||||
|
|
||||||
Here's the current cloud template in its entirety:
|
Here's the current cloud template in its entirety:
|
||||||
|
|
||||||
```yaml {linenos=true}
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
formatVersion: 1
|
formatVersion: 1
|
||||||
inputs:
|
inputs:
|
||||||
site:
|
site:
|
||||||
|
|
|
@ -54,8 +54,8 @@ Sounds pretty cool, right? I'm not going to go too deep into "how to Packer" in
|
||||||
## Prerequisites
|
## Prerequisites
|
||||||
### Install Packer
|
### Install Packer
|
||||||
Before being able to *use* Packer, you have to install it. On Debian/Ubuntu Linux, this process consists of adding the HashiCorp GPG key and software repository, and then simply installing the package:
|
Before being able to *use* Packer, you have to install it. On Debian/Ubuntu Linux, this process consists of adding the HashiCorp GPG key and software repository, and then simply installing the package:
|
||||||
```command
|
```shell
|
||||||
curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
|
curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add - # [tl! .cmd:2]
|
||||||
sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
|
sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
|
||||||
sudo apt-get update && sudo apt-get install packer
|
sudo apt-get update && sudo apt-get install packer
|
||||||
```
|
```
|
||||||
|
@ -113,7 +113,8 @@ Let's quickly run through that build process, and then I'll back up and examine
|
||||||
### `ubuntu-k8s.pkr.hcl`
|
### `ubuntu-k8s.pkr.hcl`
|
||||||
#### `packer` block
|
#### `packer` block
|
||||||
The first block in the file tells Packer about the minimum version requirements for Packer as well as the external plugins used for the build:
|
The first block in the file tells Packer about the minimum version requirements for Packer as well as the external plugins used for the build:
|
||||||
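For context, a `packer` block of this shape pins a minimum Packer version and declares the required plugins; the version constraints and plugin sources below are illustrative guesses, not necessarily what this build specifies:

```hcl
packer {
  required_version = ">= 1.8.0"

  required_plugins {
    // builds the VM against vCenter via the vsphere-iso builder
    vsphere = {
      version = ">= 1.0.1"
      source  = "github.com/hashicorp/vsphere"
    }
    // generates the temporary SSH keypair used during the build
    sshkey = {
      version = ">= 1.0.1"
      source  = "github.com/ivoronin/sshkey"
    }
  }
}
```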
``` {linenos=true}
|
```hcl
|
||||||
|
// torchlight! {"lineNumbers": true}
|
||||||
// BLOCK: packer
|
// BLOCK: packer
|
||||||
// The Packer configuration.
|
// The Packer configuration.
|
||||||
packer {
|
packer {
|
||||||
|
@ -134,7 +135,8 @@ As I mentioned above, I'll be using the official [`vsphere` plugin](https://gith
|
||||||
|
|
||||||
#### `data` block
|
#### `data` block
|
||||||
This section would be used for loading information from various data sources, but I'm only using it for the `sshkey` plugin (as mentioned above).
|
This section would be used for loading information from various data sources, but I'm only using it for the `sshkey` plugin (as mentioned above).
|
||||||
``` {linenos=true}
|
```hcl
|
||||||
|
// torchlight! {"lineNumbers": true}
|
||||||
// BLOCK: data
|
// BLOCK: data
|
||||||
// Defines data sources.
|
// Defines data sources.
|
||||||
data "sshkey" "install" {
|
data "sshkey" "install" {
|
||||||
|
@ -147,7 +149,8 @@ This will generate an ECDSA keypair, and the public key will include the identif
|
||||||
|
|
||||||
#### `locals` block
|
#### `locals` block
|
||||||
Locals are Packer variables which aren't explicitly declared in the `variables.pkr.hcl` file. They only exist within the context of a single build (hence the "local" name). Typical Packer variables are static and don't support string manipulation; locals, however, do support expressions that can be used to change their value on the fly. This makes them very useful when you need to combine variables into a single string or concatenate lists of SSH public keys (such as in the highlighted lines):
|
Locals are Packer variables which aren't explicitly declared in the `variables.pkr.hcl` file. They only exist within the context of a single build (hence the "local" name). Typical Packer variables are static and don't support string manipulation; locals, however, do support expressions that can be used to change their value on the fly. This makes them very useful when you need to combine variables into a single string or concatenate lists of SSH public keys (such as in the highlighted lines):
|
||||||
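A minimal sketch of that kind of expression-driven locals block is below; the `data.sshkey.install` reference matches the data source declared earlier, but the specific key names and variables are assumptions:

```hcl
locals {
  // timestamp-based build metadata, composed with string functions
  build_date    = formatdate("YYYY-MM-DD hh:mm ZZZ", timestamp())
  build_version = formatdate("YY.MM", timestamp())

  // join the generated Packer key with any user-supplied keys into one list
  ssh_authorized_keys = concat(
    [data.sshkey.install.public_key],
    var.ssh_public_keys
  )

  // combine separate variables into a single datastore path string
  iso_path = "[${var.common_iso_datastore}] ${var.iso_path}/${var.iso_file}"
}
```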
```text {linenos=true,hl_lines=[10,17]}
|
```hcl
|
||||||
|
// torchlight! {"lineNumbers": true}
|
||||||
// BLOCK: locals
|
// BLOCK: locals
|
||||||
// Defines local variables.
|
// Defines local variables.
|
||||||
locals {
|
locals {
|
||||||
|
@ -182,7 +185,8 @@ The `source` block tells the `vsphere-iso` builder how to connect to vSphere, wh
|
||||||
|
|
||||||
You'll notice that most of this is just mapping user-defined variables (with the `var.` prefix) to properties used by `vsphere-iso`:
|
You'll notice that most of this is just mapping user-defined variables (with the `var.` prefix) to properties used by `vsphere-iso`:
|
||||||
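In heavily trimmed form, that mapping looks something like the sketch below; the option names are standard `vsphere-iso` arguments, but the variable names on the right-hand side are assumed rather than copied from this build:

```hcl
source "vsphere-iso" "ubuntu-k8s" {
  // how to reach vCenter
  vcenter_server      = var.vsphere_endpoint
  username            = var.vsphere_username
  password            = var.vsphere_password
  insecure_connection = var.vsphere_insecure_connection

  // where the VM gets built
  datacenter = var.vsphere_datacenter
  cluster    = var.vsphere_cluster
  datastore  = var.vsphere_datastore
  folder     = var.vsphere_folder

  // what the VM looks like and what it boots from
  vm_name       = var.vm_name
  guest_os_type = var.vm_guest_os_type
  CPUs          = var.vm_cpu_count
  RAM           = var.vm_mem_size
  iso_paths     = [local.iso_path]

  // how Packer logs in once the OS is up
  ssh_username = var.build_username
}
```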
|
|
||||||
```text {linenos=true}
|
```hcl
|
||||||
|
// torchlight! {"lineNumbers": true}
|
||||||
// BLOCK: source
|
// BLOCK: source
|
||||||
// Defines the builder configuration blocks.
|
// Defines the builder configuration blocks.
|
||||||
source "vsphere-iso" "ubuntu-k8s" {
|
source "vsphere-iso" "ubuntu-k8s" {
|
||||||
|
@ -284,7 +288,8 @@ source "vsphere-iso" "ubuntu-k8s" {
|
||||||
#### `build` block
|
#### `build` block
|
||||||
This block brings everything together and executes the build. It calls the `source.vsphere-iso.ubuntu-k8s` block defined above, and also ties in a `file` and a few `shell` provisioners. `file` provisioners are used to copy files (like SSL CA certificates) into the VM, while the `shell` provisioners run commands and execute scripts. Those will be handy for the post-deployment configuration tasks, like updating and installing packages.
|
This block brings everything together and executes the build. It calls the `source.vsphere-iso.ubuntu-k8s` block defined above, and also ties in a `file` and a few `shell` provisioners. `file` provisioners are used to copy files (like SSL CA certificates) into the VM, while the `shell` provisioners run commands and execute scripts. Those will be handy for the post-deployment configuration tasks, like updating and installing packages.
|
||||||
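In skeleton form, the relationship between the source, the `file` provisioner, and the `shell` provisioners looks roughly like this; the paths and variable names are assumptions, not the post's exact configuration:

```hcl
build {
  sources = ["source.vsphere-iso.ubuntu-k8s"]

  // copy the custom CA certificates into the guest
  provisioner "file" {
    source      = "certs"
    destination = "/tmp"
  }

  // post-install configuration; update-packages.sh reboots the VM at the end
  provisioner "shell" {
    scripts           = var.post_install_scripts
    expect_disconnect = true
  }

  // final cleanup before the VM gets converted to a template
  provisioner "shell" {
    scripts = var.pre_final_scripts
  }
}
```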
|
|
||||||
```text {linenos=true}
|
```hcl
|
||||||
|
// torchlight! {"lineNumbers": true}
|
||||||
// BLOCK: build
|
// BLOCK: build
|
||||||
// Defines the builders to run, provisioners, and post-processors.
|
// Defines the builders to run, provisioners, and post-processors.
|
||||||
build {
|
build {
|
||||||
|
@ -323,7 +328,8 @@ Before looking at the build-specific variable definitions, let's take a quick lo
|
||||||
|
|
||||||
Most of these carry descriptions with them so I won't restate them outside of the code block here:
|
Most of these carry descriptions with them so I won't restate them outside of the code block here:
|
||||||
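Each declaration in that file follows the standard Packer variable syntax; as a generic example (these particular names, descriptions, and defaults are placeholders):

```hcl
variable "vsphere_password" {
  type        = string
  description = "The password used to log in to the vCenter Server instance."
  sensitive   = true
}

variable "vm_cpu_count" {
  type        = number
  description = "The number of virtual CPUs to assign to the VM."
  default     = 2
}
```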
|
|
||||||
```text {linenos=true}
|
```hcl
|
||||||
|
// torchlight! {"lineNumbers": true}
|
||||||
/*
|
/*
|
||||||
DESCRIPTION:
|
DESCRIPTION:
|
||||||
Ubuntu Server 20.04 LTS variables using the Packer Builder for VMware vSphere (vsphere-iso).
|
Ubuntu Server 20.04 LTS variables using the Packer Builder for VMware vSphere (vsphere-iso).
|
||||||
|
@ -724,7 +730,8 @@ The full `variables.pkr.hcl` can be viewed [here](https://github.com/jbowdre/vsp
|
||||||
Packer automatically knows to load variables defined in files ending in `*.auto.pkrvars.hcl`. Storing the variable values separately from the declarations in `variables.pkr.hcl` makes it easier to protect sensitive values.
|
Packer automatically knows to load variables defined in files ending in `*.auto.pkrvars.hcl`. Storing the variable values separately from the declarations in `variables.pkr.hcl` makes it easier to protect sensitive values.
|
||||||
|
|
||||||
So I'll start by telling Packer what credentials to use for connecting to vSphere, and what vSphere resources to deploy to:
|
So I'll start by telling Packer what credentials to use for connecting to vSphere, and what vSphere resources to deploy to:
|
||||||
```text {linenos=true}
|
```hcl
|
||||||
|
// torchlight! {"lineNumbers": true}
|
||||||
/*
|
/*
|
||||||
DESCRIPTION:
|
DESCRIPTION:
|
||||||
Ubuntu Server 20.04 LTS Kubernetes node variables used by the Packer Plugin for VMware vSphere (vsphere-iso).
|
Ubuntu Server 20.04 LTS Kubernetes node variables used by the Packer Plugin for VMware vSphere (vsphere-iso).
|
||||||
|
@ -745,7 +752,8 @@ vsphere_folder = "_Templates"
|
||||||
```
|
```
|
||||||
|
|
||||||
I'll then describe the properties of the VM itself:
|
I'll then describe the properties of the VM itself:
|
||||||
```text {linenos=true}
|
```hcl
|
||||||
|
// torchlight! {"lineNumbers": true}
|
||||||
// Guest Operating System Settings
|
// Guest Operating System Settings
|
||||||
vm_guest_os_language = "en_US"
|
vm_guest_os_language = "en_US"
|
||||||
vm_guest_os_keyboard = "us"
|
vm_guest_os_keyboard = "us"
|
||||||
|
@ -771,7 +779,8 @@ common_remove_cdrom = true
|
||||||
```
|
```
|
||||||
|
|
||||||
Then I'll configure Packer to convert the VM to a template once the build is finished:
|
Then I'll configure Packer to convert the VM to a template once the build is finished:
|
||||||
```text {linenos=true}
|
```hcl
|
||||||
|
// torchlight! {"lineNumbers": true}
|
||||||
// Template and Content Library Settings
|
// Template and Content Library Settings
|
||||||
common_template_conversion = true
|
common_template_conversion = true
|
||||||
common_content_library_name = null
|
common_content_library_name = null
|
||||||
|
@ -786,7 +795,8 @@ common_ovf_export_path = ""
|
||||||
```
|
```
|
||||||
|
|
||||||
Next, I'll tell it where to find the Ubuntu 20.04 ISO I downloaded and placed on a datastore, along with the SHA256 checksum to confirm its integrity:
|
Next, I'll tell it where to find the Ubuntu 20.04 ISO I downloaded and placed on a datastore, along with the SHA256 checksum to confirm its integrity:
|
||||||
```text {linenos=true}
|
```hcl
|
||||||
|
// torchlight! {"lineNumbers": true}
|
||||||
// Removable Media Settings
|
// Removable Media Settings
|
||||||
common_iso_datastore = "nuchost-local"
|
common_iso_datastore = "nuchost-local"
|
||||||
iso_url = null
|
iso_url = null
|
||||||
|
@ -797,7 +807,8 @@ iso_checksum_value = "5035be37a7e9abbdc09f0d257f3e33416c1a0fb322ba860d42d74
|
||||||
```
|
```
|
||||||
|
|
||||||
And then I'll specify the VM's boot device order, as well as the boot command that will be used for loading the `cloud-init` configuration into the Ubuntu installer:
|
And then I'll specify the VM's boot device order, as well as the boot command that will be used for loading the `cloud-init` configuration into the Ubuntu installer:
|
||||||
```text {linenos=true}
|
```hcl
|
||||||
|
// torchlight! {"lineNumbers": true}
|
||||||
// Boot Settings
|
// Boot Settings
|
||||||
vm_boot_order = "disk,cdrom"
|
vm_boot_order = "disk,cdrom"
|
||||||
vm_boot_wait = "4s"
|
vm_boot_wait = "4s"
|
||||||
|
@ -814,7 +825,8 @@ vm_boot_command = [
|
||||||
|
|
||||||
Once the installer is booted and running, Packer will wait until the VM is available via SSH and then use these credentials to log in. (How will it be able to log in with those creds? We'll take a look at the `cloud-init` configuration in just a minute...)
|
Once the installer is booted and running, Packer will wait until the VM is available via SSH and then use these credentials to log in. (How will it be able to log in with those creds? We'll take a look at the `cloud-init` configuration in just a minute...)
|
||||||
|
|
||||||
```text {linenos=true}
|
```hcl
|
||||||
|
// torchlight! {"lineNumbers": true}
|
||||||
// Communicator Settings
|
// Communicator Settings
|
||||||
communicator_port = 22
|
communicator_port = 22
|
||||||
communicator_timeout = "20m"
|
communicator_timeout = "20m"
|
||||||
|
@ -832,7 +844,8 @@ ssh_keys = [
|
||||||
Finally, I'll create two lists of scripts that will be run on the VM once the OS install is complete. The `post_install_scripts` will be run immediately after the operating system installation. The `update-packages.sh` script will cause a reboot, and then the set of `pre_final_scripts` will do some cleanup and prepare the VM to be converted to a template.
|
Finally, I'll create two lists of scripts that will be run on the VM once the OS install is complete. The `post_install_scripts` will be run immediately after the operating system installation. The `update-packages.sh` script will cause a reboot, and then the set of `pre_final_scripts` will do some cleanup and prepare the VM to be converted to a template.
|
||||||
|
|
||||||
The last bit of this file also designates the desired version of Kubernetes to be installed.
|
The last bit of this file also designates the desired version of Kubernetes to be installed.
|
||||||
```text {linenos=true}
|
```hcl
|
||||||
|
// torchlight! {"lineNumbers": true}
|
||||||
// Provisioner Settings
|
// Provisioner Settings
|
||||||
post_install_scripts = [
|
post_install_scripts = [
|
||||||
"scripts/wait-for-cloud-init.sh",
|
"scripts/wait-for-cloud-init.sh",
|
||||||
|
@ -864,7 +877,8 @@ Okay, so we've covered the Packer framework that creates the VM; now let's take
|
||||||
|
|
||||||
See the bits that look `${ like_this }`? Those place-holders will take input from the [`locals` block of `ubuntu-k8s.pkr.hcl`](#locals-block) mentioned above. So that's how all the OS properties will get set, including the hostname, locale, LVM partition layout, username, password, and SSH keys.
|
See the bits that look `${ like_this }`? Those place-holders will take input from the [`locals` block of `ubuntu-k8s.pkr.hcl`](#locals-block) mentioned above. So that's how all the OS properties will get set, including the hostname, locale, LVM partition layout, username, password, and SSH keys.
|
||||||
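Mechanically, this usually works by rendering the file with Packer's `templatefile()` function and handing the result to the builder as cloud-init seed data; the sketch below shows the general idea, with the file name, map keys, and `cd_content` wiring all being assumptions rather than this build's exact setup:

```hcl
source "vsphere-iso" "ubuntu-k8s" {
  // ...other builder settings trimmed...

  // render user-data, substituting the ${ like_this } placeholders with
  // values from variables/locals, and attach it as a cloud-init seed CD
  cd_label = "cidata"
  cd_content = {
    "meta-data" = ""
    "user-data" = templatefile("data/user-data.pkrtpl.hcl", {
      build_username       = var.build_username
      vm_guest_os_language = var.vm_guest_os_language
      ssh_keys             = local.ssh_authorized_keys
    })
  }
}
```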
|
|
||||||
```yaml {linenos=true}
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
#cloud-config
|
#cloud-config
|
||||||
autoinstall:
|
autoinstall:
|
||||||
version: 1
|
version: 1
|
||||||
|
@ -899,7 +913,7 @@ autoinstall:
|
||||||
%{ endfor ~}
|
%{ endfor ~}
|
||||||
%{ endif ~}
|
%{ endif ~}
|
||||||
storage:
|
storage:
|
||||||
config:
|
config: # [tl! collapse:start]
|
||||||
- ptable: gpt
|
- ptable: gpt
|
||||||
path: /dev/sda
|
path: /dev/sda
|
||||||
wipe: superblock
|
wipe: superblock
|
||||||
|
@ -1037,7 +1051,7 @@ autoinstall:
|
||||||
- path: /var/log/audit
|
- path: /var/log/audit
|
||||||
device: format-audit
|
device: format-audit
|
||||||
type: mount
|
type: mount
|
||||||
id: mount-audit
|
id: mount-audit # [tl! collapse:end]
|
||||||
user-data:
|
user-data:
|
||||||
package_upgrade: true
|
package_upgrade: true
|
||||||
disable_root: true
|
disable_root: true
|
||||||
|
@ -1068,7 +1082,8 @@ You can find all of the scripts [here](https://github.com/jbowdre/vsphere-k8s/tr
|
||||||
|
|
||||||
#### `wait-for-cloud-init.sh`
|
#### `wait-for-cloud-init.sh`
|
||||||
This simply holds up the process until the `/var/lib/cloud/instance/boot-finished` file has been created, signifying the completion of the `cloud-init` process:
|
This simply holds up the process until the `/var/lib/cloud/instance/boot-finished` file has been created, signifying the completion of the `cloud-init` process:
|
||||||
```shell {linenos=true}
|
```shell
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
#!/bin/bash -eu
|
#!/bin/bash -eu
|
||||||
echo '>> Waiting for cloud-init...'
|
echo '>> Waiting for cloud-init...'
|
||||||
while [ ! -f /var/lib/cloud/instance/boot-finished ]; do
|
while [ ! -f /var/lib/cloud/instance/boot-finished ]; do
|
||||||
|
@ -1078,7 +1093,8 @@ done
|
||||||
|
|
||||||
#### `cleanup-subiquity.sh`
|
#### `cleanup-subiquity.sh`
|
||||||
Next I clean up any network configs that may have been created during the install process:
|
Next I clean up any network configs that may have been created during the install process:
|
||||||
```shell {linenos=true}
|
```shell
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
#!/bin/bash -eu
|
#!/bin/bash -eu
|
||||||
if [ -f /etc/cloud/cloud.cfg.d/99-installer.cfg ]; then
|
if [ -f /etc/cloud/cloud.cfg.d/99-installer.cfg ]; then
|
||||||
sudo rm /etc/cloud/cloud.cfg.d/99-installer.cfg
|
sudo rm /etc/cloud/cloud.cfg.d/99-installer.cfg
|
||||||
|
@ -1093,7 +1109,8 @@ fi
|
||||||
|
|
||||||
#### `install-ca-certs.sh`
|
#### `install-ca-certs.sh`
|
||||||
The [`file` provisioner](#build-block) mentioned above helpfully copied my custom CA certs to the `/tmp/certs/` folder on the VM; this script will install them into the certificate store:
|
The [`file` provisioner](#build-block) mentioned above helpfully copied my custom CA certs to the `/tmp/certs/` folder on the VM; this script will install them into the certificate store:
|
||||||
```shell {linenos=true}
|
```shell
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
#!/bin/bash -eu
|
#!/bin/bash -eu
|
||||||
echo '>> Installing custom certificates...'
|
echo '>> Installing custom certificates...'
|
||||||
sudo cp /tmp/certs/* /usr/local/share/ca-certificates/
|
sudo cp /tmp/certs/* /usr/local/share/ca-certificates/
|
||||||
|
@ -1106,7 +1123,8 @@ sudo /usr/sbin/update-ca-certificates
|
||||||
|
|
||||||
#### `disable-multipathd.sh`
|
#### `disable-multipathd.sh`
|
||||||
This disables `multipathd`:
|
This disables `multipathd`:
|
||||||
```shell {linenos=true}
|
```shell
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
#!/bin/bash -eu
|
#!/bin/bash -eu
|
||||||
sudo systemctl disable multipathd
|
sudo systemctl disable multipathd
|
||||||
echo 'Disabling multipathd'
|
echo 'Disabling multipathd'
|
||||||
|
@ -1114,7 +1132,8 @@ echo 'Disabling multipathd'
|
||||||
|
|
||||||
#### `disable-release-upgrade-motd.sh`
|
#### `disable-release-upgrade-motd.sh`
|
||||||
And this one disables the release upgrade notices that would otherwise be displayed upon each login:
|
And this one disables the release upgrade notices that would otherwise be displayed upon each login:
|
||||||
```shell {linenos=true}
|
```shell
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
#!/bin/bash -eu
|
#!/bin/bash -eu
|
||||||
echo '>> Disabling release update MOTD...'
|
echo '>> Disabling release update MOTD...'
|
||||||
sudo chmod -x /etc/update-motd.d/91-release-upgrade
|
sudo chmod -x /etc/update-motd.d/91-release-upgrade
|
||||||
|
@ -1122,7 +1141,8 @@ sudo chmod -x /etc/update-motd.d/91-release-upgrade
|
||||||
|
|
||||||
#### `persist-cloud-init-net.sh`
|
#### `persist-cloud-init-net.sh`
|
||||||
I want to make sure that this VM keeps the same IP address following the reboot that will come in a few minutes, so I'll set a quick `cloud-init` option to help make sure that happens:
|
I want to make sure that this VM keeps the same IP address following the reboot that will come in a few minutes, so I'll set a quick `cloud-init` option to help make sure that happens:
|
||||||
```shell {linenos=true}
|
```shell
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
#!/bin/sh -eu
|
#!/bin/sh -eu
|
||||||
echo '>> Preserving network settings...'
|
echo '>> Preserving network settings...'
|
||||||
echo 'manual_cache_clean: True' | sudo tee -a /etc/cloud/cloud.cfg
|
echo 'manual_cache_clean: True' | sudo tee -a /etc/cloud/cloud.cfg
|
||||||
|
@ -1131,7 +1151,8 @@ echo 'manual_cache_clean: True' | sudo tee -a /etc/cloud/cloud.cfg
|
||||||
#### `configure-sshd.sh`
|
#### `configure-sshd.sh`
|
||||||
Then I just set a few options for the `sshd` configuration, like disabling root login:
|
Then I just set a few options for the `sshd` configuration, like disabling root login:
|
||||||
|
|
||||||
```shell {linenos=true}
|
```shell
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
#!/bin/bash -eu
|
#!/bin/bash -eu
|
||||||
echo '>> Configuring SSH'
|
echo '>> Configuring SSH'
|
||||||
sudo sed -i 's/.*PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
|
sudo sed -i 's/.*PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
|
||||||
|
@ -1143,7 +1164,8 @@ sudo sed -i 's/.*PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/
|
||||||
This script is a little longer and takes care of all the Kubernetes-specific settings and packages that will need to be installed on the VM.
|
This script is a little longer and takes care of all the Kubernetes-specific settings and packages that will need to be installed on the VM.
|
||||||
|
|
||||||
First I enable the required `overlay` and `br_netfilter` modules:
|
First I enable the required `overlay` and `br_netfilter` modules:
|
||||||
```shell {linenos=true}
|
```shell
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
#!/bin/bash -eu
|
#!/bin/bash -eu
|
||||||
echo ">> Installing Kubernetes components..."
|
echo ">> Installing Kubernetes components..."
|
||||||
|
|
||||||
|
@ -1159,7 +1181,8 @@ sudo modprobe br_netfilter
|
||||||
```
|
```
|
||||||
|
|
||||||
Then I'll make some networking tweaks to enable forwarding and bridging:
|
Then I'll make some networking tweaks to enable forwarding and bridging:
|
||||||
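The settings behind "forwarding and bridging" are the standard Kubernetes networking prerequisites; the snippet below shows the usual trio of keys, which may differ slightly from the exact file this script writes:

```shell
# make bridged traffic visible to iptables and enable IPv4 forwarding
cat << EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# apply the new settings without waiting for a reboot
sudo sysctl --system
```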
```shell {linenos=true}
|
```shell
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
# Configure networking
|
# Configure networking
|
||||||
echo ".. configure networking"
|
echo ".. configure networking"
|
||||||
cat << EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
|
cat << EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
|
||||||
|
@ -1172,7 +1195,8 @@ sudo sysctl --system
|
||||||
```
|
```
|
||||||
|
|
||||||
Next, set up `containerd` as the container runtime:
|
Next, set up `containerd` as the container runtime:
|
||||||
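A typical containerd bootstrap on Ubuntu amounts to installing the package, generating a default config, and restarting the service; the commands below are the generic version of that sequence, not necessarily this script verbatim:

```shell
# install containerd and generate its default configuration
sudo apt-get update && sudo apt-get install -y containerd
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml

# restart so the new configuration takes effect
sudo systemctl restart containerd
```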
```shell {linenos=true}
|
```shell
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
# Setup containerd
|
# Setup containerd
|
||||||
echo ".. setup containerd"
|
echo ".. setup containerd"
|
||||||
sudo apt-get update && sudo apt-get install -y containerd apt-transport-https jq
|
sudo apt-get update && sudo apt-get install -y containerd apt-transport-https jq
|
||||||
|
@ -1182,7 +1206,8 @@ sudo systemctl restart containerd
|
||||||
```
|
```
|
||||||
|
|
||||||
Then disable swap:
|
Then disable swap:
|
||||||
```shell {linenos=true}
|
```shell
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
# Disable swap
|
# Disable swap
|
||||||
echo ".. disable swap"
|
echo ".. disable swap"
|
||||||
sudo sed -i '/[[:space:]]swap[[:space:]]/ s/^\(.*\)$/#\1/g' /etc/fstab
|
sudo sed -i '/[[:space:]]swap[[:space:]]/ s/^\(.*\)$/#\1/g' /etc/fstab
|
||||||
|
@ -1190,7 +1215,8 @@ sudo swapoff -a
|
||||||
```
|
```
|
||||||
|
|
||||||
Next I'll install the Kubernetes components and (crucially) `apt-mark hold` them so they won't be automatically upgraded without it being a coordinated change:
|
Next I'll install the Kubernetes components and (crucially) `apt-mark hold` them so they won't be automatically upgraded without it being a coordinated change:
|
||||||
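The hold itself is a one-liner; in generic form (the exact version pinning used in this script isn't shown here), it looks like:

```shell
# install the Kubernetes packages, then freeze them so a routine
# apt upgrade can't bump the cluster version unexpectedly
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
```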
```shell {linenos=true}
|
```shell
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
# Install Kubernetes
|
# Install Kubernetes
|
||||||
echo ".. install kubernetes version ${KUBEVERSION}"
|
echo ".. install kubernetes version ${KUBEVERSION}"
|
||||||
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
|
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
|
||||||
|
@ -1201,7 +1227,8 @@ sudo apt-mark hold kubelet kubeadm kubectl
|
||||||
|
|
||||||
#### `update-packages.sh`
|
#### `update-packages.sh`
|
||||||
Lastly, I'll be sure to update all installed packages (excepting the Kubernetes ones, of course), and then perform a reboot to make sure that any new kernel modules get loaded:
|
Lastly, I'll be sure to update all installed packages (excepting the Kubernetes ones, of course), and then perform a reboot to make sure that any new kernel modules get loaded:
|
||||||
```shell {linenos=true}
|
```shell
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
#!/bin/bash -eu
|
#!/bin/bash -eu
|
||||||
echo '>> Checking for and installing updates...'
|
echo '>> Checking for and installing updates...'
|
||||||
sudo apt-get update && sudo apt-get -y upgrade
|
sudo apt-get update && sudo apt-get -y upgrade
|
||||||
|
@ -1214,7 +1241,8 @@ After the reboot, all that's left are some cleanup tasks to get the VM ready to
|
||||||
|
|
||||||
#### `cleanup-cloud-init.sh`
|
#### `cleanup-cloud-init.sh`
|
||||||
I'll start with cleaning up the `cloud-init` state:
|
I'll start with cleaning up the `cloud-init` state:
|
||||||
```shell {linenos=true}
|
```shell
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
#!/bin/bash -eu
|
#!/bin/bash -eu
|
||||||
echo '>> Cleaning up cloud-init state...'
|
echo '>> Cleaning up cloud-init state...'
|
||||||
sudo cloud-init clean -l
|
sudo cloud-init clean -l
|
||||||
|
@ -1222,7 +1250,8 @@ sudo cloud-init clean -l
|
||||||
|
|
||||||
#### `enable-vmware-customization.sh`
|
#### `enable-vmware-customization.sh`
|
||||||
And then I'll (re)enable the ability for VMware to customize the guest successfully:
|
And then I'll (re)enable the ability for VMware to customize the guest successfully:
|
||||||
```shell {linenos=true}
|
```shell
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
#!/bin/bash -eu
|
#!/bin/bash -eu
|
||||||
echo '>> Enabling legacy VMware Guest Customization...'
|
echo '>> Enabling legacy VMware Guest Customization...'
|
||||||
echo 'disable_vmware_customization: true' | sudo tee -a /etc/cloud/cloud.cfg
|
echo 'disable_vmware_customization: true' | sudo tee -a /etc/cloud/cloud.cfg
|
||||||
|
@ -1231,7 +1260,8 @@ sudo vmware-toolbox-cmd config set deployPkg enable-custom-scripts true
|
||||||
|
|
||||||
#### `zero-disk.sh`
|
#### `zero-disk.sh`
|
||||||
I'll also execute this handy script to free up unused space on the virtual disk. It works by creating a file which completely fills up the disk, and then deleting that file:
|
I'll also execute this handy script to free up unused space on the virtual disk. It works by creating a file which completely fills up the disk, and then deleting that file:
|
||||||
```shell {linenos=true}
|
```shell
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
#!/bin/bash -eu
|
#!/bin/bash -eu
|
||||||
echo '>> Zeroing free space to reduce disk size'
|
echo '>> Zeroing free space to reduce disk size'
|
||||||
sudo sh -c 'dd if=/dev/zero of=/EMPTY bs=1M || true; sync; sleep 1; sync'
|
sudo sh -c 'dd if=/dev/zero of=/EMPTY bs=1M || true; sync; sleep 1; sync'
|
||||||
|
@ -1240,7 +1270,8 @@ sudo sh -c 'rm -f /EMPTY; sync; sleep 1; sync'
|
||||||
|
|
||||||
#### `generalize.sh`
|
#### `generalize.sh`
|
||||||
Lastly, let's do a final run of cleaning up logs, temporary files, and unique identifiers that don't need to exist in a template. This script will also remove the SSH key with the `packer_key` identifier since that won't be needed anymore.
|
Lastly, let's do a final run of cleaning up logs, temporary files, and unique identifiers that don't need to exist in a template. This script will also remove the SSH key with the `packer_key` identifier since that won't be needed anymore.
|
||||||
```shell {linenos=true}
|
```shell
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
#!/bin/bash -eu
|
#!/bin/bash -eu
|
||||||
# Prepare a VM to become a template.
|
# Prepare a VM to become a template.
|
||||||
|
|
||||||
|
@ -1293,8 +1324,8 @@ sudo rm -f /root/.bash_history
|
||||||
### Kick out the jams (or at least the build)
|
### Kick out the jams (or at least the build)
|
||||||
Now that all the ducks are nicely lined up, let's give them some marching orders and see what happens. All I have to do is open a terminal session to the folder containing the `.pkr.hcl` files, and then run the Packer build command:
|
Now that all the ducks are nicely lined up, let's give them some marching orders and see what happens. All I have to do is open a terminal session to the folder containing the `.pkr.hcl` files, and then run the Packer build command:
|
||||||
|
|
||||||
```command
|
```shell
|
||||||
packer build -on-error=abort -force .
|
packer build -on-error=abort -force . # [tl! .cmd]
|
||||||
```
|
```
|
||||||
|
|
||||||
{{% notice note "Flags" %}}
|
{{% notice note "Flags" %}}
|
||||||
|
|
|
@ -77,6 +77,7 @@ I can then click through the rest of the wizard but (as before) I'll stop on the
|
||||||
#### Editing the cluster spec
|
#### Editing the cluster spec
|
||||||
Remember that awkward `member:1.2.840.113556.1.4.1941:` attribute from earlier? Here's how it looks within the TCE cluster-defining YAML:
|
Remember that awkward `member:1.2.840.113556.1.4.1941:` attribute from earlier? Here's how it looks within the TCE cluster-defining YAML:
|
||||||
```yaml
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
LDAP_GROUP_SEARCH_BASE_DN: OU=LAB,DC=lab,DC=bowdre,DC=net
|
LDAP_GROUP_SEARCH_BASE_DN: OU=LAB,DC=lab,DC=bowdre,DC=net
|
||||||
LDAP_GROUP_SEARCH_FILTER: (objectClass=group)
|
LDAP_GROUP_SEARCH_FILTER: (objectClass=group)
|
||||||
LDAP_GROUP_SEARCH_GROUP_ATTRIBUTE: 'member:1.2.840.113556.1.4.1941:'
|
LDAP_GROUP_SEARCH_GROUP_ATTRIBUTE: 'member:1.2.840.113556.1.4.1941:'
|
||||||
|
@ -86,24 +87,27 @@ LDAP_GROUP_SEARCH_USER_ATTRIBUTE: DN
|
||||||
|
|
||||||
That `:` at the end of the line will cause problems down the road - specifically when the deployment process creates the `dex` app which handles the actual LDAPS authentication piece. Cumulative hours of [troubleshooting](#troubleshooting-notes) (and learning!) eventually revealed to me that something along the way had choked on that trailing colon and inserted this into the `dex` configuration:
|
That `:` at the end of the line will cause problems down the road - specifically when the deployment process creates the `dex` app which handles the actual LDAPS authentication piece. Cumulative hours of [troubleshooting](#troubleshooting-notes) (and learning!) eventually revealed to me that something along the way had choked on that trailing colon and inserted this into the `dex` configuration:
|
||||||
```yaml
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
userMatchers:
|
userMatchers:
|
||||||
- userAttr: DN
|
- userAttr: DN
|
||||||
groupAttr:
|
groupAttr:
|
||||||
member:1.2.840.113556.1.4.1941: null
|
member:1.2.840.113556.1.4.1941: null # [tl! focus]
|
||||||
```
|
```
|
||||||
|
|
||||||
It *should* look like this instead:
|
It *should* look like this instead:
|
||||||
```yaml
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
userMatchers:
|
userMatchers:
|
||||||
- userAttr: DN
|
- userAttr: DN
|
||||||
groupAttr: 'member:1.2.840.113556.1.4.1941:'
|
groupAttr: 'member:1.2.840.113556.1.4.1941:' # [tl! focus]
|
||||||
```
|
```
|
||||||
|
|
||||||
That error prevents `dex` from starting correctly so the authentication would never work. I eventually figured out that using the `|` character to define the attribute as a [literal scalar](https://yaml.org/spec/1.2.2/#812-literal-style) would help to get around this issue so I changed the cluster YAML to look like this:
|
That error prevents `dex` from starting correctly so the authentication would never work. I eventually figured out that using the `|` character to define the attribute as a [literal scalar](https://yaml.org/spec/1.2.2/#812-literal-style) would help to get around this issue so I changed the cluster YAML to look like this:
|
||||||
```yaml
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
LDAP_GROUP_SEARCH_BASE_DN: OU=LAB,DC=lab,DC=bowdre,DC=net
|
LDAP_GROUP_SEARCH_BASE_DN: OU=LAB,DC=lab,DC=bowdre,DC=net
|
||||||
LDAP_GROUP_SEARCH_FILTER: (objectClass=group)
|
LDAP_GROUP_SEARCH_FILTER: (objectClass=group)
|
||||||
LDAP_GROUP_SEARCH_GROUP_ATTRIBUTE: |
|
LDAP_GROUP_SEARCH_GROUP_ATTRIBUTE: | # [tl! focus:1]
|
||||||
'member:1.2.840.113556.1.4.1941:'
|
'member:1.2.840.113556.1.4.1941:'
|
||||||
LDAP_GROUP_SEARCH_NAME_ATTRIBUTE: cn
|
LDAP_GROUP_SEARCH_NAME_ATTRIBUTE: cn
|
||||||
LDAP_GROUP_SEARCH_USER_ATTRIBUTE: DN
|
LDAP_GROUP_SEARCH_USER_ATTRIBUTE: DN
|
||||||
|
@ -113,8 +117,8 @@ LDAP_GROUP_SEARCH_USER_ATTRIBUTE: DN
|
||||||
|
|
||||||
#### Deploying the cluster
|
#### Deploying the cluster
|
||||||
That's the only thing I need to manually edit so now I can go ahead and create the cluster with:
|
That's the only thing I need to manually edit so now I can go ahead and create the cluster with:
|
||||||
```command
|
```shell
|
||||||
tanzu management-cluster create tce-mgmt -f tce-mgmt-deploy.yaml
|
tanzu management-cluster create tce-mgmt -f tce-mgmt-deploy.yaml # [tl! .cmd]
|
||||||
```
|
```
|
||||||
|
|
||||||
This will probably take 10-15 minutes to deploy so it's a great time to go top off my coffee.
|
This will probably take 10-15 minutes to deploy so it's a great time to go top off my coffee.
|
||||||
|
@ -136,20 +140,19 @@ Some addons might be getting installed! Check their status by running the follow
|
||||||
```
|
```
|
||||||
|
|
||||||
I obediently follow the instructions to switch to the correct context and verify that the addons are all running:
|
I obediently follow the instructions to switch to the correct context and verify that the addons are all running:
|
||||||
```command-session
|
```shell
|
||||||
kubectl config use-context tce-mgmt-admin@tce-mgmt
|
kubectl config use-context tce-mgmt-admin@tce-mgmt # [tl! .cmd]
|
||||||
Switched to context "tce-mgmt-admin@tce-mgmt".
|
Switched to context "tce-mgmt-admin@tce-mgmt". # [tl! .nocopy:1]
|
||||||
```
|
|
||||||
```command-session
|
kubectl get apps -A # [tl! .cmd]
|
||||||
kubectl get apps -A
|
NAMESPACE NAME DESCRIPTION SINCE-DEPLOY AGE # [tl! .nocopy:start]
|
||||||
NAMESPACE NAME DESCRIPTION SINCE-DEPLOY AGE
|
|
||||||
tkg-system antrea Reconcile succeeded 5m2s 11m
|
tkg-system antrea Reconcile succeeded 5m2s 11m
|
||||||
tkg-system metrics-server Reconcile succeeded 39s 11m
|
tkg-system metrics-server Reconcile succeeded 39s 11m
|
||||||
tkg-system pinniped Reconcile succeeded 4m55s 11m
|
tkg-system pinniped Reconcile succeeded 4m55s 11m
|
||||||
tkg-system secretgen-controller Reconcile succeeded 65s 11m
|
tkg-system secretgen-controller Reconcile succeeded 65s 11m
|
||||||
tkg-system tanzu-addons-manager Reconcile succeeded 70s 11m
|
tkg-system tanzu-addons-manager Reconcile succeeded 70s 11m
|
||||||
tkg-system vsphere-cpi Reconcile succeeded 32s 11m
|
tkg-system vsphere-cpi Reconcile succeeded 32s 11m
|
||||||
tkg-system vsphere-csi Reconcile succeeded 66s 11m
|
tkg-system vsphere-csi Reconcile succeeded 66s 11m # [tl! .nocopy:end]
|
||||||
```
|
```
|
||||||
|
|
||||||
### Post-deployment tasks
|
### Post-deployment tasks
|
||||||
|
@ -159,25 +162,24 @@ I've got a TCE cluster now but it's not quite ready for me to authenticate with
|
||||||
#### Load Balancer deployment
|
#### Load Balancer deployment
|
||||||
The [guide I'm following from the TCE site](https://tanzucommunityedition.io/docs/latest/vsphere-ldap-config/) assumes that I'm using NSX-ALB in my environment, but I'm not. So, [as before](/tanzu-community-edition-k8s-homelab/#deploying-kube-vip-as-a-load-balancer), I'll need to deploy [Scott Rosenberg's `kube-vip` Carvel package](https://github.com/vrabbi/tkgm-customizations):
|
The [guide I'm following from the TCE site](https://tanzucommunityedition.io/docs/latest/vsphere-ldap-config/) assumes that I'm using NSX-ALB in my environment, but I'm not. So, [as before](/tanzu-community-edition-k8s-homelab/#deploying-kube-vip-as-a-load-balancer), I'll need to deploy [Scott Rosenberg's `kube-vip` Carvel package](https://github.com/vrabbi/tkgm-customizations):
|
||||||
|
|
||||||
```command
|
```shell
|
||||||
git clone https://github.com/vrabbi/tkgm-customizations.git
|
git clone https://github.com/vrabbi/tkgm-customizations.git # [tl! .cmd:3]
|
||||||
cd tkgm-customizations/carvel-packages/kube-vip-package
|
cd tkgm-customizations/carvel-packages/kube-vip-package
|
||||||
kubectl apply -n tanzu-package-repo-global -f metadata.yml
|
kubectl apply -n tanzu-package-repo-global -f metadata.yml
|
||||||
kubectl apply -n tanzu-package-repo-global -f package.yaml
|
kubectl apply -n tanzu-package-repo-global -f package.yaml
|
||||||
```
|
|
||||||
```command-session
|
cat << EOF > values.yaml # [tl! .cmd]
|
||||||
cat << EOF > values.yaml
|
|
||||||
vip_range: 192.168.1.64-192.168.1.70
|
vip_range: 192.168.1.64-192.168.1.70
|
||||||
EOF
|
EOF
|
||||||
```
|
|
||||||
```command
|
tanzu package install kubevip -p kubevip.terasky.com -v 0.3.9 -f values.yaml # [tl! .cmd]
|
||||||
tanzu package install kubevip -p kubevip.terasky.com -v 0.3.9 -f values.yaml
|
|
||||||
```
|
```
|
||||||
|
|
||||||
#### Modifying services to use the Load Balancer
|
#### Modifying services to use the Load Balancer
|
||||||
With the load balancer in place, I can follow the TCE instructions to modify the Pinniped and Dex services to switch from the `NodePort` type to the `LoadBalancer` type so they can be easily accessed from outside of the cluster. This process starts by creating a file called `pinniped-supervisor-svc-overlay.yaml` and pasting in the following overlay manifest:
|
With the load balancer in place, I can follow the TCE instructions to modify the Pinniped and Dex services to switch from the `NodePort` type to the `LoadBalancer` type so they can be easily accessed from outside of the cluster. This process starts by creating a file called `pinniped-supervisor-svc-overlay.yaml` and pasting in the following overlay manifest:
|
||||||
|
|
||||||
```yaml {linenos=true}
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
#@ load("@ytt:overlay", "overlay")
|
#@ load("@ytt:overlay", "overlay")
|
||||||
#@overlay/match by=overlay.subset({"kind": "Service", "metadata": {"name": "pinniped-supervisor", "namespace": "pinniped-supervisor"}})
|
#@overlay/match by=overlay.subset({"kind": "Service", "metadata": {"name": "pinniped-supervisor", "namespace": "pinniped-supervisor"}})
|
||||||
---
|
---
|
||||||
|
@ -208,42 +210,42 @@ spec:
|
||||||
```
|
```
|
||||||
|
|
||||||
This overlay will need to be inserted into the `pinniped-addon` secret which means that the contents need to be converted to a base64-encoded string:
|
This overlay will need to be inserted into the `pinniped-addon` secret which means that the contents need to be converted to a base64-encoded string:
|
||||||
```command-session
|
```shell
|
||||||
base64 -w 0 pinniped-supervisor-svc-overlay.yaml
|
base64 -w 0 pinniped-supervisor-svc-overlay.yaml # [tl! .cmd]
|
||||||
I0AgbG9hZCgi[...]==
|
I0AgbG9hZCgi[...]== # [tl! .nocopy]
|
||||||
```
|
```
|
||||||
{{% notice note "Avoid newlines" %}}
|
{{% notice note "Avoid newlines" %}}
|
||||||
The `-w 0` / `--wrap=0` argument tells `base64` to *not* wrap the encoded lines after a certain number of characters. If you leave this off, the string will get a newline inserted every 76 characters, and those linebreaks would make the string a bit more tricky to work with. Avoid having to clean up the output afterwards by being more specific with the request up front!
|
The `-w 0` / `--wrap=0` argument tells `base64` to *not* wrap the encoded lines after a certain number of characters. If you leave this off, the string will get a newline inserted every 76 characters, and those linebreaks would make the string a bit more tricky to work with. Avoid having to clean up the output afterwards by being more specific with the request up front!
|
||||||
{{% /notice %}}
|
{{% /notice %}}
|
||||||
|
|
||||||
I'll copy the resulting base64 string (which is much longer than the truncated form I'm using here), and paste it into the following command to patch the secret (which will be named after the management cluster name so replace the `tce-mgmt` part as appropriate):
|
I'll copy the resulting base64 string (which is much longer than the truncated form I'm using here), and paste it into the following command to patch the secret (which will be named after the management cluster name so replace the `tce-mgmt` part as appropriate):
|
||||||
```command-session
|
```shell
|
||||||
kubectl -n tkg-system patch secret tce-mgmt-pinniped-addon -p '{"data": {"overlays.yaml": "I0AgbG9hZCgi[...]=="}}'
|
kubectl -n tkg-system patch secret tce-mgmt-pinniped-addon -p '{"data": {"overlays.yaml": "I0AgbG9hZCgi[...]=="}}' # [tl! .cmd]
|
||||||
secret/tce-mgmt-pinniped-addon patched
|
secret/tce-mgmt-pinniped-addon patched # [tl! .nocopy]
|
||||||
```
|
```
|
||||||
|
|
||||||
I can watch as the `pinniped-supervisor` and `dexsvc` services get updated with the new service type:
|
I can watch as the `pinniped-supervisor` and `dexsvc` services get updated with the new service type:
|
||||||
```command-session
|
```shell
|
||||||
kubectl get svc -A -w
|
kubectl get svc -A -w # [tl! .cmd]
|
||||||
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
|
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) # [tl! .nocopy:start]
|
||||||
pinniped-supervisor pinniped-supervisor NodePort 100.65.185.82 <none> 443:31234/TCP
|
pinniped-supervisor pinniped-supervisor NodePort 100.65.185.82 <none> 443:31234/TCP
|
||||||
tanzu-system-auth dexsvc NodePort 100.70.238.106 <none> 5556:30167/TCP
|
tanzu-system-auth dexsvc NodePort 100.70.238.106 <none> 5556:30167/TCP
|
||||||
tkg-system packaging-api ClusterIP 100.65.185.94 <none> 443/TCP
|
tkg-system packaging-api ClusterIP 100.65.185.94 <none> 443/TCP
|
||||||
tanzu-system-auth dexsvc LoadBalancer 100.70.238.106 <pending> 443:30167/TCP
|
tanzu-system-auth dexsvc LoadBalancer 100.70.238.106 <pending> 443:30167/TCP
|
||||||
pinniped-supervisor pinniped-supervisor LoadBalancer 100.65.185.82 <pending> 443:31234/TCP
|
pinniped-supervisor pinniped-supervisor LoadBalancer 100.65.185.82 <pending> 443:31234/TCP
|
||||||
pinniped-supervisor pinniped-supervisor LoadBalancer 100.65.185.82 192.168.1.70 443:31234/TCP
|
pinniped-supervisor pinniped-supervisor LoadBalancer 100.65.185.82 192.168.1.70 443:31234/TCP
|
||||||
tanzu-system-auth dexsvc LoadBalancer 100.70.238.106 192.168.1.64 443:30167/TCP
|
tanzu-system-auth dexsvc LoadBalancer 100.70.238.106 192.168.1.64 443:30167/TCP # [tl! .nocopy:end]
|
||||||
```
|
```
|
||||||
|
|
||||||
I'll also need to restart the `pinniped-post-deploy-job` job to account for the changes I just made; that's accomplished by simply deleting the existing job. After a few minutes a new job will be spawned automagically. I'll just watch for the new job to be created:
|
I'll also need to restart the `pinniped-post-deploy-job` job to account for the changes I just made; that's accomplished by simply deleting the existing job. After a few minutes a new job will be spawned automagically. I'll just watch for the new job to be created:
|
||||||
```command-session
|
```shell
|
||||||
kubectl -n pinniped-supervisor delete jobs pinniped-post-deploy-job
|
kubectl -n pinniped-supervisor delete jobs pinniped-post-deploy-job # [tl! .cmd]
|
||||||
job.batch "pinniped-post-deploy-job" deleted
|
job.batch "pinniped-post-deploy-job" deleted # [tl! .nocopy]
|
||||||
```
|
```
|
||||||
|
|
||||||
```command-session
|
```shell
|
||||||
kubectl get jobs -A -w
|
kubectl get jobs -A -w # [tl! cmd]
|
||||||
NAMESPACE NAME COMPLETIONS DURATION AGE
|
NAMESPACE NAME COMPLETIONS DURATION AGE # [tl! .nocopy:4]
|
||||||
pinniped-supervisor pinniped-post-deploy-job 0/1 0s
|
pinniped-supervisor pinniped-post-deploy-job 0/1 0s
|
||||||
pinniped-supervisor pinniped-post-deploy-job 0/1 0s
|
pinniped-supervisor pinniped-post-deploy-job 0/1 0s
|
||||||
pinniped-supervisor pinniped-post-deploy-job 0/1 0s 0s
|
pinniped-supervisor pinniped-post-deploy-job 0/1 0s 0s
|
||||||
|
@ -254,7 +256,8 @@ pinniped-supervisor pinniped-post-deploy-job 1/1 9s 9s
|
||||||
Right now, I've got all the necessary components to support LDAPS authentication with my TCE management cluster but I haven't done anything yet to actually define who should have what level of access. To do that, I'll create a `ClusterRoleBinding`.
|
Right now, I've got all the necessary components to support LDAPS authentication with my TCE management cluster but I haven't done anything yet to actually define who should have what level of access. To do that, I'll create a `ClusterRoleBinding`.
|
||||||
|
|
||||||
I'll toss this into a file I'll call `tanzu-admins-crb.yaml`:
|
I'll toss this into a file I'll call `tanzu-admins-crb.yaml`:
|
||||||
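The finished manifest is a standard RBAC binding from an LDAP group to the built-in `cluster-admin` role; a minimal version looks like the sketch below, where the exact group name string is an assumption based on the `Tanzu-Admins` group mentioned shortly:

```yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tanzu-admins
subjects:
  # the AD group, as surfaced to Kubernetes through pinniped/dex
  - kind: Group
    name: tanzu-admins
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
```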
```yaml {linenos=true}
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
kind: ClusterRoleBinding
|
kind: ClusterRoleBinding
|
||||||
apiVersion: rbac.authorization.k8s.io/v1
|
apiVersion: rbac.authorization.k8s.io/v1
|
||||||
metadata:
|
metadata:
|
||||||
|
@ -274,25 +277,24 @@ I have a group in Active Directory called `Tanzu-Admins` which contains a group
|
||||||
Once applied, users within that group will be granted the `cluster-admin` role[^roles].
|
Once applied, users within that group will be granted the `cluster-admin` role[^roles].
|
||||||
|
|
||||||
Let's do it:
|
Let's do it:
|
||||||
```command-session
|
```shell
|
||||||
kubectl apply -f tanzu-admins-crb.yaml
|
kubectl apply -f tanzu-admins-crb.yaml # [tl! .cmd]
|
||||||
clusterrolebinding.rbac.authorization.k8s.io/tanzu-admins created
|
clusterrolebinding.rbac.authorization.k8s.io/tanzu-admins created # [tl! .nocopy]
|
||||||
```
|
```
|
||||||
|
|
||||||
Thus far, I've been using the default administrator context to interact with the cluster. Now it's time to switch to the non-admin context:
|
Thus far, I've been using the default administrator context to interact with the cluster. Now it's time to switch to the non-admin context:
|
||||||
```command-session
|
```shell
|
||||||
tanzu management-cluster kubeconfig get
|
tanzu management-cluster kubeconfig get # [tl! .cmd]
|
||||||
You can now access the cluster by running 'kubectl config use-context tanzu-cli-tce-mgmt@tce-mgmt'
|
You can now access the cluster by running 'kubectl config use-context tanzu-cli-tce-mgmt@tce-mgmt' # [tl! .nocopy:1]
|
||||||
```
|
|
||||||
```command-session
|
kubectl config use-context tanzu-cli-tce-mgmt@tce-mgmt # [tl! .cmd]
|
||||||
kubectl config use-context tanzu-cli-tce-mgmt@tce-mgmt
|
Switched to context "tanzu-cli-tce-mgmt@tce-mgmt". # [tl! .nocopy]
|
||||||
Switched to context "tanzu-cli-tce-mgmt@tce-mgmt".
|
|
||||||
```
|
```
|
||||||
|
|
||||||
After assuming the non-admin context, the next time I try to interact with the cluster it should kick off the LDAPS authentication process. It won't look like anything is happening in the terminal:
|
After assuming the non-admin context, the next time I try to interact with the cluster it should kick off the LDAPS authentication process. It won't look like anything is happening in the terminal:
|
||||||
```command-session
|
```shell
|
||||||
kubectl get nodes
|
kubectl get nodes # [tl! .cmd]
|
||||||
|
# [tl! .nocopy]
|
||||||
```
|
```
|
||||||
|
|
||||||
But it will shortly spawn a browser page prompting me to log in:
|
But it will shortly spawn a browser page prompting me to log in:
|
||||||
|
@ -302,9 +304,9 @@ Doing so successfully will yield:
|
||||||
![Dex login success!](dex_login_success.png)
|
![Dex login success!](dex_login_success.png)
|
||||||
|
|
||||||
And the `kubectl` command will return the expected details:
|
And the `kubectl` command will return the expected details:
|
||||||
```command-session
|
```shell
|
||||||
kubectl get nodes
|
kubectl get nodes # [tl! .cmd]
|
||||||
NAME STATUS ROLES AGE VERSION
|
NAME STATUS ROLES AGE VERSION # [tl! .nocopy:2]
|
||||||
tce-mgmt-control-plane-v8l8r Ready control-plane,master 29h v1.21.5+vmware.1
|
tce-mgmt-control-plane-v8l8r Ready control-plane,master 29h v1.21.5+vmware.1
|
||||||
tce-mgmt-md-0-847db9ddc-5bwjs Ready <none> 28h v1.21.5+vmware.1
|
tce-mgmt-md-0-847db9ddc-5bwjs Ready <none> 28h v1.21.5+vmware.1
|
||||||
```
|
```
|
||||||
|
@ -326,9 +328,9 @@ Other users hoping to work with a Tanzu Community Edition cluster will also need
|
||||||
At this point, I've only configured authentication for the management cluster - not the workload cluster. The TCE community docs cover what's needed to make this configuration available in the workload cluster as well [here](https://tanzucommunityedition.io/docs/latest/vsphere-ldap-config/#configuration-steps-on-the-workload-cluster). [As before](/tanzu-community-edition-k8s-homelab/#workload-cluster), I created the deployment YAML for the workload cluster by copying the management cluster's deployment YAML and changing the `CLUSTER_NAME` and `VSPHERE_CONTROL_PLANE_ENDPOINT` values accordingly. This time I also deleted all of the `LDAP_*` and `OIDC_*` lines, but made sure to preserve the `IDENTITY_MANAGEMENT_TYPE: ldap` one.
|
At this point, I've only configured authentication for the management cluster - not the workload cluster. The TCE community docs cover what's needed to make this configuration available in the workload cluster as well [here](https://tanzucommunityedition.io/docs/latest/vsphere-ldap-config/#configuration-steps-on-the-workload-cluster). [As before](/tanzu-community-edition-k8s-homelab/#workload-cluster), I created the deployment YAML for the workload cluster by copying the management cluster's deployment YAML and changing the `CLUSTER_NAME` and `VSPHERE_CONTROL_PLANE_ENDPOINT` values accordingly. This time I also deleted all of the `LDAP_*` and `OIDC_*` lines, but made sure to preserve the `IDENTITY_MANAGEMENT_TYPE: ldap` one.
|
||||||
|
|
||||||
I was then able to deploy the workload cluster with:
|
I was then able to deploy the workload cluster with:
|
||||||
```command-session
|
```shell
|
||||||
tanzu cluster create --file tce-work-deploy.yaml
|
tanzu cluster create --file tce-work-deploy.yaml # [tl! .cmd]
|
||||||
Validating configuration...
|
Validating configuration... # [tl! .nocopy:start]
|
||||||
Creating workload cluster 'tce-work'...
|
Creating workload cluster 'tce-work'...
|
||||||
Waiting for cluster to be initialized...
|
Waiting for cluster to be initialized...
|
||||||
cluster control plane is still being initialized: WaitingForControlPlane
|
cluster control plane is still being initialized: WaitingForControlPlane
|
||||||
|
@ -337,38 +339,35 @@ Waiting for cluster nodes to be available...
|
||||||
Waiting for addons installation...
|
Waiting for addons installation...
|
||||||
Waiting for packages to be up and running...
|
Waiting for packages to be up and running...
|
||||||
|
|
||||||
Workload cluster 'tce-work' created
|
Workload cluster 'tce-work' created # [tl! .nocopy:end]
|
||||||
```
|
```
|
||||||
|
|
||||||
Access the admin context:
|
Access the admin context:
|
||||||
```command-session
|
```shell
|
||||||
tanzu cluster kubeconfig get --admin tce-work
|
tanzu cluster kubeconfig get --admin tce-work # [tl! .cmd]
|
||||||
Credentials of cluster 'tce-work' have been saved
|
Credentials of cluster 'tce-work' have been saved # [tl! .nocopy:2]
|
||||||
You can now access the cluster by running 'kubectl config use-context tce-work-admin@tce-work'
|
You can now access the cluster by running 'kubectl config use-context tce-work-admin@tce-work'
|
||||||
```
|
|
||||||
```command-session
|
kubectl config use-context tce-work-admin@tce-work # [tl! .cmd]
|
||||||
kubectl config use-context tce-work-admin@tce-work
|
Switched to context "tce-work-admin@tce-work". # [tl! .nocopy]
|
||||||
Switched to context "tce-work-admin@tce-work".
|
|
||||||
```
|
```
|
||||||
|
|
||||||
Apply the same ClusterRoleBinding from before[^crb]:
|
Apply the same ClusterRoleBinding from before[^crb]:
|
||||||
```command-session
|
```shell
|
||||||
kubectl apply -f tanzu-admins-crb.yaml
|
kubectl apply -f tanzu-admins-crb.yaml # [tl! .cmd]
|
||||||
clusterrolebinding.rbac.authorization.k8s.io/tanzu-admins created
|
clusterrolebinding.rbac.authorization.k8s.io/tanzu-admins created # [tl! .nocopy]
|
||||||
```
|
```
|
||||||
|
|
||||||
And finally switch to the non-admin context and log in with my AD account:
|
And finally switch to the non-admin context and log in with my AD account:
|
||||||
```command-session
|
```shell
|
||||||
tanzu cluster kubeconfig get tce-work
|
tanzu cluster kubeconfig get tce-work # [tl! .cmd]
|
||||||
ℹ You can now access the cluster by running 'kubectl config use-context tanzu-cli-tce-work@tce-work'
|
ℹ You can now access the cluster by running 'kubectl config use-context tanzu-cli-tce-work@tce-work' # [tl! .nocopy:1]
|
||||||
```
|
|
||||||
```command-session
|
kubectl config use-context tanzu-cli-tce-work@tce-work # [tl! .cmd]
|
||||||
kubectl config use-context tanzu-cli-tce-work@tce-work
|
Switched to context "tanzu-cli-tce-work@tce-work". # [tl! .nocopy:1]
|
||||||
Switched to context "tanzu-cli-tce-work@tce-work".
|
|
||||||
```
|
kubectl get nodes # [tl! .cmd]
|
||||||
```command-session
|
NAME STATUS ROLES AGE VERSION # [tl! .nocopy:2]
|
||||||
kubectl get nodes
|
|
||||||
NAME STATUS ROLES AGE VERSION
|
|
||||||
tce-work-control-plane-zts6r Ready control-plane,master 12m v1.21.5+vmware.1
|
tce-work-control-plane-zts6r Ready control-plane,master 12m v1.21.5+vmware.1
|
||||||
tce-work-md-0-bcfdc4d79-vn9xb Ready <none> 11m v1.21.5+vmware.1
|
tce-work-md-0-bcfdc4d79-vn9xb Ready <none> 11m v1.21.5+vmware.1
|
||||||
```
|
```
|
||||||
|
@ -387,9 +386,9 @@ It took me quite a bit of trial and error to get this far and (being a k8s novic
|
||||||
#### Checking and modifying `dex` configuration
|
#### Checking and modifying `dex` configuration
|
||||||
I had a lot of trouble figuring out how to correctly format the `member:1.2.840.113556.1.4.1941:` attribute in the LDAPS config so that it wouldn't get split into multiple attributes due to the trailing colon - and it took me forever to discover that was even the issue. What eventually did the trick for me was learning that I could look at (and modify!) the configuration for the `dex` app with:
|
I had a lot of trouble figuring out how to correctly format the `member:1.2.840.113556.1.4.1941:` attribute in the LDAPS config so that it wouldn't get split into multiple attributes due to the trailing colon - and it took me forever to discover that was even the issue. What eventually did the trick for me was learning that I could look at (and modify!) the configuration for the `dex` app with:
|
||||||
|
|
||||||
```command-session
|
```shell
|
||||||
kubectl -n tanzu-system-auth edit configmaps dex
|
kubectl -n tanzu-system-auth edit configmaps dex # [tl! .cmd]
|
||||||
[...]
|
[...] # [tl! .nocopy:start]
|
||||||
groupSearch:
|
groupSearch:
|
||||||
baseDN: OU=LAB,DC=lab,DC=bowdre,DC=net
|
baseDN: OU=LAB,DC=lab,DC=bowdre,DC=net
|
||||||
filter: (objectClass=group)
|
filter: (objectClass=group)
|
||||||
|
@ -399,7 +398,7 @@ kubectl -n tanzu-system-auth edit configmaps dex
|
||||||
- userAttr: DN
|
- userAttr: DN
|
||||||
groupAttr: 'member:1.2.840.113556.1.4.1941:'
|
groupAttr: 'member:1.2.840.113556.1.4.1941:'
|
||||||
host: win01.lab.bowdre.net:636
|
host: win01.lab.bowdre.net:636
|
||||||
[...]
|
[...] # [tl! .nocopy:end]
|
||||||
```
|
```
|
||||||
|
|
||||||
This let me make changes on the fly until I got a working configuration and then work backwards from there to format the initial input correctly.
|
This let me make changes on the fly until I got a working configuration and then work backwards from there to format the initial input correctly.
|
||||||
|
@ -407,14 +406,13 @@ This let me make changes on the fly until I got a working configuration and then
|
||||||
#### Reviewing `dex` logs
|
#### Reviewing `dex` logs
|
||||||
Authentication attempts (at least on the LDAPS side of things) will show up in the logs for the `dex` pod running in the `tanzu-system-auth` namespace. This is a great place to look to see if the user isn't being found, credentials are invalid, or the groups aren't being enumerated correctly:
|
Authentication attempts (at least on the LDAPS side of things) will show up in the logs for the `dex` pod running in the `tanzu-system-auth` namespace. This is a great place to look to see if the user isn't being found, credentials are invalid, or the groups aren't being enumerated correctly:
|
||||||
|
|
||||||
```command-session
|
```shell
|
||||||
kubectl -n tanzu-system-auth get pods
|
kubectl -n tanzu-system-auth get pods # [tl! .cmd]
|
||||||
NAME READY STATUS RESTARTS AGE
|
NAME READY STATUS RESTARTS AGE # [tl! .nocopy:2]
|
||||||
dex-7bf4f5d4d9-k4jfl 1/1 Running 0 40h
|
dex-7bf4f5d4d9-k4jfl 1/1 Running 0 40h
|
||||||
```
|
|
||||||
```command-session
|
kubectl -n tanzu-system-auth logs dex-7bf4f5d4d9-k4jfl # [tl! .cmd]
|
||||||
kubectl -n tanzu-system-auth logs dex-7bf4f5d4d9-k4jfl
|
# no such user # [tl! .nocopy:start]
|
||||||
# no such user
|
|
||||||
{"level":"info","msg":"performing ldap search OU=LAB,DC=lab,DC=bowdre,DC=net sub (\u0026(objectClass=person)(sAMAccountName=johnny))","time":"2022-03-06T22:29:57Z"}
|
{"level":"info","msg":"performing ldap search OU=LAB,DC=lab,DC=bowdre,DC=net sub (\u0026(objectClass=person)(sAMAccountName=johnny))","time":"2022-03-06T22:29:57Z"}
|
||||||
{"level":"error","msg":"ldap: no results returned for filter: \"(\u0026(objectClass=person)(sAMAccountName=johnny))\"","time":"2022-03-06T22:29:57Z"}
|
{"level":"error","msg":"ldap: no results returned for filter: \"(\u0026(objectClass=person)(sAMAccountName=johnny))\"","time":"2022-03-06T22:29:57Z"}
|
||||||
#invalid password
|
#invalid password
|
||||||
|
@ -425,15 +423,15 @@ kubectl -n tanzu-system-auth logs dex-7bf4f5d4d9-k4jfl
|
||||||
{"level":"info","msg":"performing ldap search OU=LAB,DC=lab,DC=bowdre,DC=net sub (\u0026(objectClass=person)(sAMAccountName=john))","time":"2022-03-06T22:31:21Z"}
|
{"level":"info","msg":"performing ldap search OU=LAB,DC=lab,DC=bowdre,DC=net sub (\u0026(objectClass=person)(sAMAccountName=john))","time":"2022-03-06T22:31:21Z"}
|
||||||
{"level":"info","msg":"username \"john\" mapped to entry CN=John Bowdre,OU=Users,OU=BOW,OU=LAB,DC=lab,DC=bowdre,DC=net","time":"2022-03-06T22:31:21Z"}
|
{"level":"info","msg":"username \"john\" mapped to entry CN=John Bowdre,OU=Users,OU=BOW,OU=LAB,DC=lab,DC=bowdre,DC=net","time":"2022-03-06T22:31:21Z"}
|
||||||
{"level":"info","msg":"performing ldap search OU=LAB,DC=lab,DC=bowdre,DC=net sub (\u0026(objectClass=group)(member:1.2.840.113556.1.4.1941:=CN=John Bowdre,OU=Users,OU=BOW,OU=LAB,DC=lab,DC=bowdre,DC=net))","time":"2022-03-06T22:31:21Z"}
|
{"level":"info","msg":"performing ldap search OU=LAB,DC=lab,DC=bowdre,DC=net sub (\u0026(objectClass=group)(member:1.2.840.113556.1.4.1941:=CN=John Bowdre,OU=Users,OU=BOW,OU=LAB,DC=lab,DC=bowdre,DC=net))","time":"2022-03-06T22:31:21Z"}
|
||||||
{"level":"info","msg":"login successful: connector \"ldap\", username=\"john\", preferred_username=\"\", email=\"CN=John Bowdre,OU=Users,OU=BOW,OU=LAB,DC=lab,DC=bowdre,DC=net\", groups=[\"vRA-Admins\" \"Tanzu-Admins\"]","time":"2022-03-06T22:31:21Z"}
|
{"level":"info","msg":"login successful: connector \"ldap\", username=\"john\", preferred_username=\"\", email=\"CN=John Bowdre,OU=Users,OU=BOW,OU=LAB,DC=lab,DC=bowdre,DC=net\", groups=[\"vRA-Admins\" \"Tanzu-Admins\"]","time":"2022-03-06T22:31:21Z"} # [tl! .nocopy:end]
|
||||||
```
|
```
|
||||||
|
|
||||||
#### Clearing pinniped sessions
|
#### Clearing pinniped sessions
|
||||||
I couldn't figure out an elegant way to log out so that I could try authenticating as a different user, but I did discover that information about authenticated sessions gets stored in `~/.config/tanzu/pinniped/sessions.yaml`. The sessions expire after a while, but until that happens I'm able to keep on interacting with `kubectl` - and I'm not given an option to re-authenticate even if I wanted to.
|
I couldn't figure out an elegant way to log out so that I could try authenticating as a different user, but I did discover that information about authenticated sessions gets stored in `~/.config/tanzu/pinniped/sessions.yaml`. The sessions expire after a while, but until that happens I'm able to keep on interacting with `kubectl` - and I'm not given an option to re-authenticate even if I wanted to.
|
||||||
|
|
||||||
So in lieu of a handy logout option, I was able to remove the cached sessions by deleting the file:
|
So in lieu of a handy logout option, I was able to remove the cached sessions by deleting the file:
|
||||||
```command
|
```shell
|
||||||
rm ~/.config/tanzu/pinniped/sessions.yaml
|
rm ~/.config/tanzu/pinniped/sessions.yaml # [tl! .cmd]
|
||||||
```
|
```
|
||||||
|
|
||||||
That let me use `kubectl get nodes` to trigger the authentication prompt again.
|
That let me use `kubectl get nodes` to trigger the authentication prompt again.
|
||||||
|
|
|
@ -24,17 +24,18 @@ comment: true # Disable comment if false.
|
||||||
When I [set up my Tanzu Community Edition environment](/tanzu-community-edition-k8s-homelab/), I did so from a Linux VM since the containerized Linux environment on my Chromebook doesn't support the `kind` bootstrap cluster used for the deployment. But now that the Kubernetes cluster is up and running, I'd like to be able to connect to it directly without the aid of a jumpbox. How do I get the appropriate cluster configuration over to my Chromebook?
|
When I [set up my Tanzu Community Edition environment](/tanzu-community-edition-k8s-homelab/), I did so from a Linux VM since the containerized Linux environment on my Chromebook doesn't support the `kind` bootstrap cluster used for the deployment. But now that the Kubernetes cluster is up and running, I'd like to be able to connect to it directly without the aid of a jumpbox. How do I get the appropriate cluster configuration over to my Chromebook?
|
||||||
|
|
||||||
The Tanzu CLI actually makes that pretty easy - once I figured out the appropriate incantation. I just needed to use the `tanzu management-cluster kubeconfig get` command on my Linux VM to export the `kubeconfig` of my management (`tce-mgmt`) cluster to a file:
|
The Tanzu CLI actually makes that pretty easy - once I figured out the appropriate incantation. I just needed to use the `tanzu management-cluster kubeconfig get` command on my Linux VM to export the `kubeconfig` of my management (`tce-mgmt`) cluster to a file:
|
||||||
```command
|
```shell
|
||||||
tanzu management-cluster kubeconfig get --admin --export-file tce-mgmt-kubeconfig.yaml
|
tanzu management-cluster kubeconfig get --admin --export-file tce-mgmt-kubeconfig.yaml # [tl! .cmd]
|
||||||
```
|
```
|
||||||
|
|
||||||
I then used `scp` to pull the file from the VM into my local Linux environment, and proceeded to [install `kubectl`](/tanzu-community-edition-k8s-homelab/#kubectl-binary) and the [`tanzu` CLI](/tanzu-community-edition-k8s-homelab/#tanzu-cli) (making sure to also [enable shell auto-completion](/enable-tanzu-cli-auto-completion-bash-zsh/) along the way!).
|
I then used `scp` to pull the file from the VM into my local Linux environment, and proceeded to [install `kubectl`](/tanzu-community-edition-k8s-homelab/#kubectl-binary) and the [`tanzu` CLI](/tanzu-community-edition-k8s-homelab/#tanzu-cli) (making sure to also [enable shell auto-completion](/enable-tanzu-cli-auto-completion-bash-zsh/) along the way!).
|
||||||
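
The exact copy command isn't shown here, but it was something along these lines (the jumpbox hostname below is just a placeholder for my Linux VM; the destination matches the path used with `tanzu login` further down):

```shell
scp john@jumpbox.lab.bowdre.net:~/tce-mgmt-kubeconfig.yaml ~/projects/tanzu-homelab/tanzu-setup/ # [tl! .cmd]
```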
|
|
||||||
Now I'm ready to import the configuration locally with `tanzu login` on my Chromebook:
|
Now I'm ready to import the configuration locally with `tanzu login` on my Chromebook:
|
||||||
|
|
||||||
```command-session
|
```shell
|
||||||
tanzu login --kubeconfig ~/projects/tanzu-homelab/tanzu-setup/tce-mgmt-kubeconfig.yaml --context tce-mgmt-admin@tce-mgmt --name tce-mgmt
|
tanzu login --kubeconfig ~/projects/tanzu-homelab/tanzu-setup/tce-mgmt-kubeconfig.yaml \ # [tl! .cmd]
|
||||||
✔ successfully logged in to management cluster using the kubeconfig tce-mgmt
|
--context tce-mgmt-admin@tce-mgmt --name tce-mgmt
|
||||||
|
✔ successfully logged in to management cluster using the kubeconfig tce-mgmt # [tl! .nocopy]
|
||||||
```
|
```
|
||||||
|
|
||||||
{{% notice tip "Use the absolute path" %}}
|
{{% notice tip "Use the absolute path" %}}
|
||||||
|
@ -42,14 +43,13 @@ Pass in the full path to the exported kubeconfig file. This will help the Tanzu
|
||||||
{{% /notice %}}
|
{{% /notice %}}
|
||||||
|
|
||||||
Even though that's just importing the management cluster, it actually grants access to both the management and workload clusters:
|
Even though that's just importing the management cluster, it actually grants access to both the management and workload clusters:
|
||||||
```command-session
|
```shell
|
||||||
tanzu cluster list
|
tanzu cluster list # [tl! .cmd]
|
||||||
NAME NAMESPACE STATUS CONTROLPLANE WORKERS KUBERNETES ROLES PLAN
|
NAME NAMESPACE STATUS CONTROLPLANE WORKERS KUBERNETES ROLES PLAN # [tl! .nocopy:2]
|
||||||
tce-work default running 1/1 1/1 v1.21.2+vmware.1 <none> dev
|
tce-work default running 1/1 1/1 v1.21.2+vmware.1 <none> dev
|
||||||
```
|
|
||||||
```command-session
|
tanzu cluster get tce-work # [tl! .cmd]
|
||||||
tanzu cluster get tce-work
|
NAME NAMESPACE STATUS CONTROLPLANE WORKERS KUBERNETES ROLES # [tl! .nocopy:start]
|
||||||
NAME NAMESPACE STATUS CONTROLPLANE WORKERS KUBERNETES ROLES
|
|
||||||
tce-work default running 1/1 1/1 v1.21.2+vmware.1 <none>
|
tce-work default running 1/1 1/1 v1.21.2+vmware.1 <none>
|
||||||
ℹ
|
ℹ
|
||||||
|
|
||||||
|
@ -63,10 +63,9 @@ NAME READY SEVERITY RE
|
||||||
└─Workers
|
└─Workers
|
||||||
└─MachineDeployment/tce-work-md-0
|
└─MachineDeployment/tce-work-md-0
|
||||||
└─Machine/tce-work-md-0-687444b744-crc9q True 24h
|
└─Machine/tce-work-md-0-687444b744-crc9q True 24h
|
||||||
```
|
# [tl! .nocopy:end]
|
||||||
```command-session
|
tanzu management-cluster get # [tl! .cmd]
|
||||||
tanzu management-cluster get
|
NAME NAMESPACE STATUS CONTROLPLANE WORKERS KUBERNETES ROLES # [tl! .nocopy:start]
|
||||||
NAME NAMESPACE STATUS CONTROLPLANE WORKERS KUBERNETES ROLES
|
|
||||||
tce-mgmt tkg-system running 1/1 1/1 v1.21.2+vmware.1 management
|
tce-mgmt tkg-system running 1/1 1/1 v1.21.2+vmware.1 management
|
||||||
|
|
||||||
|
|
||||||
|
@ -88,31 +87,29 @@ Providers:
|
||||||
capi-kubeadm-bootstrap-system bootstrap-kubeadm BootstrapProvider kubeadm v0.3.23
|
capi-kubeadm-bootstrap-system bootstrap-kubeadm BootstrapProvider kubeadm v0.3.23
|
||||||
capi-kubeadm-control-plane-system control-plane-kubeadm ControlPlaneProvider kubeadm v0.3.23
|
capi-kubeadm-control-plane-system control-plane-kubeadm ControlPlaneProvider kubeadm v0.3.23
|
||||||
capi-system cluster-api CoreProvider cluster-api v0.3.23
|
capi-system cluster-api CoreProvider cluster-api v0.3.23
|
||||||
capv-system infrastructure-vsphere InfrastructureProvider vsphere v0.7.10
|
capv-system infrastructure-vsphere InfrastructureProvider vsphere v0.7.10 # [tl! .nocopy:end]
|
||||||
```
|
```
|
||||||
|
|
||||||
And I can then tell `kubectl` about the two clusters:
|
And I can then tell `kubectl` about the two clusters:
|
||||||
```command-session
|
```shell
|
||||||
tanzu management-cluster kubeconfig get tce-mgmt --admin
|
tanzu management-cluster kubeconfig get tce-mgmt --admin # [tl! .cmd]
|
||||||
Credentials of cluster 'tce-mgmt' have been saved
|
Credentials of cluster 'tce-mgmt' have been saved # [tl! .nocopy:2]
|
||||||
You can now access the cluster by running 'kubectl config use-context tce-mgmt-admin@tce-mgmt'
|
You can now access the cluster by running 'kubectl config use-context tce-mgmt-admin@tce-mgmt'
|
||||||
```
|
|
||||||
```command-session
|
tanzu cluster kubeconfig get tce-work --admin # [tl! .cmd]
|
||||||
tanzu cluster kubeconfig get tce-work --admin
|
Credentials of cluster 'tce-work' have been saved # [tl! .nocopy:1]
|
||||||
Credentials of cluster 'tce-work' have been saved
|
|
||||||
You can now access the cluster by running 'kubectl config use-context tce-work-admin@tce-work'
|
You can now access the cluster by running 'kubectl config use-context tce-work-admin@tce-work'
|
||||||
```
|
```
|
||||||
|
|
||||||
And sure enough, there are my contexts:
|
And sure enough, there are my contexts:
|
||||||
```command-session
|
```shell
|
||||||
kubectl config get-contexts
|
kubectl config get-contexts # [tl! .cmd]
|
||||||
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
|
CURRENT NAME CLUSTER AUTHINFO NAMESPACE # [tl! .nocopy:3]
|
||||||
tce-mgmt-admin@tce-mgmt tce-mgmt tce-mgmt-admin
|
tce-mgmt-admin@tce-mgmt tce-mgmt tce-mgmt-admin
|
||||||
* tce-work-admin@tce-work tce-work tce-work-admin
|
* tce-work-admin@tce-work tce-work tce-work-admin
|
||||||
```
|
|
||||||
```command-session
|
kubectl get nodes -o wide # [tl! .cmd]
|
||||||
kubectl get nodes -o wide
|
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME # [tl! .nocopy:2]
|
||||||
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
|
|
||||||
tce-work-control-plane-vc2pb Ready control-plane,master 23h v1.21.2+vmware.1 192.168.1.132 192.168.1.132 VMware Photon OS/Linux 4.19.198-1.ph3 containerd://1.4.6
|
tce-work-control-plane-vc2pb Ready control-plane,master 23h v1.21.2+vmware.1 192.168.1.132 192.168.1.132 VMware Photon OS/Linux 4.19.198-1.ph3 containerd://1.4.6
|
||||||
tce-work-md-0-687444b744-crc9q Ready <none> 23h v1.21.2+vmware.1 192.168.1.133 192.168.1.133 VMware Photon OS/Linux 4.19.198-1.ph3 containerd://1.4.6
|
tce-work-md-0-687444b744-crc9q Ready <none> 23h v1.21.2+vmware.1 192.168.1.133 192.168.1.133 VMware Photon OS/Linux 4.19.198-1.ph3 containerd://1.4.6
|
||||||
```
|
```
|
||||||
|
|
|
@ -17,7 +17,8 @@ I can, and here's how I do it.
|
||||||
|
|
||||||
### The Script
|
### The Script
|
||||||
The following PowerShell script will let you define a list of vCenters to be accessed, securely store your credentials for each vCenter, log in to every vCenter with a single command, and also close the connections when they're no longer needed. It's also a great starting point for any other custom functions you'd like to incorporate into your PowerCLI sessions.
|
The following PowerShell script will let you define a list of vCenters to be accessed, securely store your credentials for each vCenter, log in to every vCenter with a single command, and also close the connections when they're no longer needed. It's also a great starting point for any other custom functions you'd like to incorporate into your PowerCLI sessions.
|
||||||
```powershell {linenos=true}
|
```powershell
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
# PowerCLI_Custom_Functions.ps1
|
# PowerCLI_Custom_Functions.ps1
|
||||||
# Usage:
|
# Usage:
|
||||||
# 0) Edit $vCenterList to reference the vCenters in your environment.
|
# 0) Edit $vCenterList to reference the vCenters in your environment.
|
||||||
|
|
|
@ -28,11 +28,11 @@ Now that VMware [has released](https://blogs.vmware.com/vsphere/2022/01/announci
|
||||||
I start off by heading to [tenable.com/products/nessus/nessus-essentials](https://www.tenable.com/products/nessus/nessus-essentials) to register for a (free!) license key which will let me scan up to 16 hosts. I'll receive the key and download link in an email, but I'm not actually going to use that link to download the Nessus binary. I've got this shiny-and-new [Tanzu Community Edition Kubernetes cluster](/tanzu-community-edition-k8s-homelab/) that could use some more real workloads so I'll instead opt for the [Docker version](https://hub.docker.com/r/tenableofficial/nessus).
|
I start off by heading to [tenable.com/products/nessus/nessus-essentials](https://www.tenable.com/products/nessus/nessus-essentials) to register for a (free!) license key which will let me scan up to 16 hosts. I'll receive the key and download link in an email, but I'm not actually going to use that link to download the Nessus binary. I've got this shiny-and-new [Tanzu Community Edition Kubernetes cluster](/tanzu-community-edition-k8s-homelab/) that could use some more real workloads so I'll instead opt for the [Docker version](https://hub.docker.com/r/tenableofficial/nessus).
|
||||||
|
|
||||||
Tenable provides an [example `docker-compose.yml`](https://community.tenable.com/s/article/Deploy-Nessus-docker-image-with-docker-compose) to make it easy to get started:
|
Tenable provides an [example `docker-compose.yml`](https://community.tenable.com/s/article/Deploy-Nessus-docker-image-with-docker-compose) to make it easy to get started:
|
||||||
```yaml {linenos=true}
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
version: '3.1'
|
version: '3.1'
|
||||||
|
|
||||||
services:
|
services:
|
||||||
|
|
||||||
nessus:
|
nessus:
|
||||||
image: tenableofficial/nessus
|
image: tenableofficial/nessus
|
||||||
restart: always
|
restart: always
|
||||||
|
@ -46,7 +46,8 @@ services:
|
||||||
```
|
```
|
||||||
|
|
||||||
I can use that knowledge to craft something I can deploy on Kubernetes:
|
I can use that knowledge to craft something I can deploy on Kubernetes:
|
||||||
```yaml {linenos=true}
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
apiVersion: v1
|
apiVersion: v1
|
||||||
kind: Service
|
kind: Service
|
||||||
metadata:
|
metadata:
|
||||||
|
@ -95,16 +96,16 @@ spec:
|
||||||
Note that I'm configuring the `LoadBalancer` to listen on port `443` and route traffic to the pod on port `8834` so that I don't have to remember to enter an oddball port number when I want to connect to the web interface.
|
Note that I'm configuring the `LoadBalancer` to listen on port `443` and route traffic to the pod on port `8834` so that I don't have to remember to enter an oddball port number when I want to connect to the web interface.
|
||||||
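
That mapping lives in the `ports` section of the Service spec; trimmed down for illustration (field names follow the standard Kubernetes Service schema), it looks roughly like this:

```yaml
spec:
  type: LoadBalancer
  ports:
    - port: 443        # what clients connect to
      protocol: TCP
      targetPort: 8834 # where the Nessus container listens
```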
|
|
||||||
And now I can just apply the file:
|
And now I can just apply the file:
|
||||||
```command-session
|
```shell
|
||||||
kubectl apply -f nessus.yaml
|
kubectl apply -f nessus.yaml # [tl! .cmd]
|
||||||
service/nessus created
|
service/nessus created # [tl! .nocopy:1]
|
||||||
deployment.apps/nessus created
|
deployment.apps/nessus created
|
||||||
```
|
```
|
||||||
|
|
||||||
I'll give it a moment or two to deploy and then check on the service to figure out what IP I need to use to connect:
|
I'll give it a moment or two to deploy and then check on the service to figure out what IP I need to use to connect:
|
||||||
```command-session
|
```shell
|
||||||
kubectl get svc/nessus
|
kubectl get svc/nessus # [tl! .cmd]
|
||||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE # [tl! .nocopy:1]
|
||||||
nessus LoadBalancer 100.67.16.51 192.168.1.79 443:31260/TCP 57s
|
nessus LoadBalancer 100.67.16.51 192.168.1.79 443:31260/TCP 57s
|
||||||
```
|
```
|
||||||
|
|
||||||
|
|
|
@ -28,7 +28,8 @@ I've got a [`Connect-vCenters` function](/logging-in-to-multiple-vcenter-servers
|
||||||
|
|
||||||
What I came up with is using `Get-Datacenter` to enumerate each virtual datacenter, and then listing the VMs matching my query within each:
|
What I came up with is using `Get-Datacenter` to enumerate each virtual datacenter, and then listing the VMs matching my query within each:
|
||||||
|
|
||||||
```powershell {linenos=true}
|
```powershell
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
$linuxVms = foreach( $datacenter in ( Get-Datacenter )) {
|
$linuxVms = foreach( $datacenter in ( Get-Datacenter )) {
|
||||||
Get-Datacenter $datacenter | Get-VM | Where { $_.ExtensionData.Config.GuestFullName -notmatch "win" -and $_.Name -notmatch "vcls" } | `
|
Get-Datacenter $datacenter | Get-VM | Where { $_.ExtensionData.Config.GuestFullName -notmatch "win" -and $_.Name -notmatch "vcls" } | `
|
||||||
Select @{ N="Datacenter";E={ $datacenter.Name }},
|
Select @{ N="Datacenter";E={ $datacenter.Name }},
|
||||||
|
|
|
@ -23,7 +23,8 @@ comment: true # Disable comment if false.
|
||||||
We've been working lately to use [HashiCorp Packer](https://www.packer.io/) to standardize and automate our VM template builds, and we found a need to pull in all of the contents of a specific directory on an internal web server. This would be pretty simple for Linux systems using `wget -r`, but we needed to find another solution for our Windows builds.
|
We've been working lately to use [HashiCorp Packer](https://www.packer.io/) to standardize and automate our VM template builds, and we found a need to pull in all of the contents of a specific directory on an internal web server. This would be pretty simple for Linux systems using `wget -r`, but we needed to find another solution for our Windows builds.
|
||||||
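
For the Linux case, that recursive pull might look something like this sketch (using the same internal URL that appears in the PowerShell script below; the extra flags just keep `wget` from recreating the server's folder structure locally):

```shell
wget -r -np -nH --cut-dirs=2 -R "index.html*" -P ./files/ https://win01.lab.bowdre.net/stuff/files/ # [tl! .cmd]
```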
|
|
||||||
A coworker and I cobbled together a quick PowerShell solution which will download the files within a specified web URL to a designated directory (without recreating the nested folder structure):
|
A coworker and I cobbled together a quick PowerShell solution which will download the files within a specified web URL to a designated directory (without recreating the nested folder structure):
|
||||||
```powershell {linenos=true}
|
```powershell
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
$outputdir = 'C:\Scripts\Download\'
|
$outputdir = 'C:\Scripts\Download\'
|
||||||
$url = 'https://win01.lab.bowdre.net/stuff/files/'
|
$url = 'https://win01.lab.bowdre.net/stuff/files/'
|
||||||
|
|
||||||
|
|
|
@ -20,38 +20,37 @@ Take these steps when you need to snapshot linked vCenters to avoid breaking rep
|
||||||
1. Open an SSH session to *all* the vCenters within the SSO domain.
|
1. Open an SSH session to *all* the vCenters within the SSO domain.
|
||||||
2. Log in and enter `shell` to access the shell on each vCenter.
|
2. Log in and enter `shell` to access the shell on each vCenter.
|
||||||
3. Verify that replication is healthy by running `/usr/lib/vmware-vmdir/bin/vdcrepadmin -f showpartnerstatus -h localhost -u administrator -w [SSO_ADMIN_PASSWORD]` on each vCenter. You want to ensure that each host shows as available to all other hosts, and that each reports the message `Partner is 0 changes behind.`:
|
3. Verify that replication is healthy by running `/usr/lib/vmware-vmdir/bin/vdcrepadmin -f showpartnerstatus -h localhost -u administrator -w [SSO_ADMIN_PASSWORD]` on each vCenter. You want to ensure that each host shows as available to all other hosts, and that each reports the message `Partner is 0 changes behind.`:
|
||||||
```commandroot-session
|
```shell
|
||||||
/usr/lib/vmware-vmdir/bin/vdcrepadmin -f showpartnerstatus -h localhost -u administrator -w $ssoPass
|
/usr/lib/vmware-vmdir/bin/vdcrepadmin -f showpartnerstatus -h localhost -u administrator -w $ssoPass # [tl! .cmd]
|
||||||
Partner: vcsa2.lab.bowdre.net
|
Partner: vcsa2.lab.bowdre.net # [tl! .nocopy:6]
|
||||||
Host available: Yes
|
Host available: Yes
|
||||||
Status available: Yes
|
Status available: Yes
|
||||||
My last change number: 9346
|
My last change number: 9346
|
||||||
Partner has seen my change number: 9346
|
Partner has seen my change number: 9346
|
||||||
Partner is 0 changes behind.
|
Partner is 0 changes behind. # [tl! highlight]
|
||||||
```
|
|
||||||
```commandroot-session
|
/usr/lib/vmware-vmdir/bin/vdcrepadmin -f showpartnerstatus -h localhost -u administrator -w $ssoPass # [tl! .cmd]
|
||||||
/usr/lib/vmware-vmdir/bin/vdcrepadmin -f showpartnerstatus -h localhost -u administrator -w $ssoPass
|
Partner: vcsa.lab.bowdre.net # [tl! .nocopy:6]
|
||||||
Partner: vcsa.lab.bowdre.net
|
|
||||||
Host available: Yes
|
Host available: Yes
|
||||||
Status available: Yes
|
Status available: Yes
|
||||||
My last change number: 9518
|
My last change number: 9518
|
||||||
Partner has seen my change number: 9518
|
Partner has seen my change number: 9518
|
||||||
Partner is 0 changes behind.
|
Partner is 0 changes behind. # [tl! highlight]
|
||||||
```
|
```
|
||||||
4. Stop `vmdird` on each vCenter by running `/bin/service-control --stop vmdird`:
|
4. Stop `vmdird` on each vCenter by running `/bin/service-control --stop vmdird`:
|
||||||
|
|
||||||
```commandroot-session
|
```shell
|
||||||
/bin/service-control --stop vmdird
|
/bin/service-control --stop vmdird # [tl! .cmd]
|
||||||
Operation not cancellable. Please wait for it to finish...
|
Operation not cancellable. Please wait for it to finish... # [tl! .nocopy:2]
|
||||||
Performing stop operation on service vmdird...
|
Performing stop operation on service vmdird...
|
||||||
Successfully stopped service vmdird
|
Successfully stopped service vmdird
|
||||||
```
|
```
|
||||||
5. Snapshot the vCenter appliance VMs.
|
5. Snapshot the vCenter appliance VMs.
|
||||||
6. Start replication on each server again with `/bin/service-control --start vmdird`:
|
6. Start replication on each server again with `/bin/service-control --start vmdird`:
|
||||||
|
|
||||||
```commandroot-session
|
```shell
|
||||||
/bin/service-control --start vmdird
|
/bin/service-control --start vmdird # [tl! .cmd]
|
||||||
Operation not cancellable. Please wait for it to finish...
|
Operation not cancellable. Please wait for it to finish... # [tl! .nocopy]
|
||||||
Performing start operation on service vmdird...
|
Performing start operation on service vmdird...
|
||||||
Successfully started service vmdird
|
Successfully started service vmdird
|
||||||
```
|
```
|
||||||
|
|
|
@ -37,7 +37,8 @@ So yeah. That's, uh, *not great.*
|
||||||
If you've got any **Windows Server 2022** VMs with **[Secure Boot](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.security.doc/GUID-898217D4-689D-4EB5-866C-888353FE241C.html)** enabled on **ESXi 6.7/7.x**, you'll want to make sure they *do not* get **KB5022842** until this problem is resolved.
|
If you've got any **Windows Server 2022** VMs with **[Secure Boot](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.security.doc/GUID-898217D4-689D-4EB5-866C-888353FE241C.html)** enabled on **ESXi 6.7/7.x**, you'll want to make sure they *do not* get **KB5022842** until this problem is resolved.
|
||||||
|
|
||||||
I put together a quick PowerCLI query to help identify impacted VMs in my environment:
|
I put together a quick PowerCLI query to help identify impacted VMs in my environment:
|
||||||
```powershell {linenos=true}
|
```powershell
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
$secureBoot2022VMs = foreach($datacenter in (Get-Datacenter)) {
|
$secureBoot2022VMs = foreach($datacenter in (Get-Datacenter)) {
|
||||||
$datacenter | Get-VM |
|
$datacenter | Get-VM |
|
||||||
Where-Object {$_.Guest.OsFullName -Match 'Microsoft Windows Server 2022' -And $_.ExtensionData.Config.BootOptions.EfiSecureBootEnabled} |
|
Where-Object {$_.Guest.OsFullName -Match 'Microsoft Windows Server 2022' -And $_.ExtensionData.Config.BootOptions.EfiSecureBootEnabled} |
|
||||||
|
|
|
@ -18,7 +18,8 @@ The Jekyll theme I'm using ([Minimal Mistakes](https://github.com/mmistakes/mini
|
||||||
![Posts by category](20210724-posts-by-category.png)
|
![Posts by category](20210724-posts-by-category.png)
|
||||||
|
|
||||||
It's a start, though, so I took a few minutes to check out how it's being generated. The category archive page lives at [`_pages/category-archive.md`](https://raw.githubusercontent.com/mmistakes/mm-github-pages-starter/master/_pages/category-archive.md):
|
It's a start, though, so I took a few minutes to check out how it's being generated. The category archive page lives at [`_pages/category-archive.md`](https://raw.githubusercontent.com/mmistakes/mm-github-pages-starter/master/_pages/category-archive.md):
|
||||||
```markdown {linenos=true}
|
```markdown
|
||||||
|
// torchlight! {"lineNumbers": true}
|
||||||
---
|
---
|
||||||
title: "Posts by Category"
|
title: "Posts by Category"
|
||||||
layout: categories
|
layout: categories
|
||||||
|
@ -30,8 +31,9 @@ author_profile: true
|
||||||
The `title` indicates what's going to be written in bold text at the top of the page, the `permalink` says that it will be accessible at `http://localhost/categories/`, and the nice little `author_profile` sidebar will appear on the left.
|
The `title` indicates what's going to be written in bold text at the top of the page, the `permalink` says that it will be accessible at `http://localhost/categories/`, and the nice little `author_profile` sidebar will appear on the left.
|
||||||
|
|
||||||
This page then calls the `categories` layout, which is defined in [`_layouts/categories.html`](https://github.com/mmistakes/minimal-mistakes/blob/master/_layouts/categories.html):
|
This page then calls the `categories` layout, which is defined in [`_layouts/categories.html`](https://github.com/mmistakes/minimal-mistakes/blob/master/_layouts/categories.html):
|
||||||
```liquid {linenos=true}
|
```jinja-html
|
||||||
{% raw %}---
|
# torchlight! {"lineNumbers": true}
|
||||||
|
---
|
||||||
layout: archive
|
layout: archive
|
||||||
---
|
---
|
||||||
|
|
||||||
|
@ -81,39 +83,43 @@ I wanted my solution to preserve the formatting that's used by the theme elsewhe
|
||||||
### Defining a new layout
|
### Defining a new layout
|
||||||
I create a new file called `_layouts/series.html` which will define how these new series pages get rendered. It starts out just like the default `categories.html` one:
|
I create a new file called `_layouts/series.html` which will define how these new series pages get rendered. It starts out just like the default `categories.html` one:
|
||||||
|
|
||||||
```liquid {linenos=true}
|
```jinja-html
|
||||||
{% raw %}---
|
# torchlight! {"lineNumbers": true}
|
||||||
|
---
|
||||||
layout: archive
|
layout: archive
|
||||||
---
|
---
|
||||||
|
|
||||||
{{ content }}{% endraw %}
|
{{ content }}
|
||||||
```
|
```
|
||||||
|
|
||||||
That `{{ content }}` block will let me define text to appear above the list of articles - very handy. Much of the original `categories.html` code has to do with iterating through the list of categories. I won't need that, though, so I'll jump straight to setting what layout the entries on this page will use:
|
That `{{ content }}` block will let me define text to appear above the list of articles - very handy. Much of the original `categories.html` code has to do with iterating through the list of categories. I won't need that, though, so I'll jump straight to setting what layout the entries on this page will use:
|
||||||
```liquid
|
```jinja-html
|
||||||
{% assign entries_layout = page.entries_layout | default: 'list' %}
|
{% assign entries_layout = page.entries_layout | default: 'list' %}
|
||||||
```
|
```
|
||||||
|
|
||||||
I'll be including two custom variables in the [Front Matter](https://jekyllrb.com/docs/front-matter/) for my category pages: `tag` to specify what category to filter on, and `sort_order` which will be set to `reverse` if I want the older posts up top. I'll be able to access these in the layout as `page.tag` and `page.sort_order`, respectively. So I'll go ahead and grab all the posts which are categorized with `page.tag`, and then decide whether the posts will get sorted normally or in reverse:
|
I'll be including two custom variables in the [Front Matter](https://jekyllrb.com/docs/front-matter/) for my category pages: `tag` to specify what category to filter on, and `sort_order` which will be set to `reverse` if I want the older posts up top. I'll be able to access these in the layout as `page.tag` and `page.sort_order`, respectively. So I'll go ahead and grab all the posts which are categorized with `page.tag`, and then decide whether the posts will get sorted normally or in reverse:
|
||||||
```liquid {linenos=true}
|
```jinja-html
|
||||||
{% raw %}{% assign posts = site.categories[page.tag] %}
|
# torchlight! {"lineNumbers": true}
|
||||||
|
{% assign posts = site.categories[page.tag] %}
|
||||||
{% if page.sort_order == 'reverse' %}
|
{% if page.sort_order == 'reverse' %}
|
||||||
{% assign posts = posts | reverse %}
|
{% assign posts = posts | reverse %}
|
||||||
{% endif %}{% endraw %}
|
{% endif %}
|
||||||
```
|
```
|
||||||
|
|
||||||
And then I'll loop through each post (in either normal or reverse order) and insert them into the rendered page:
|
And then I'll loop through each post (in either normal or reverse order) and insert them into the rendered page:
|
||||||
```liquid {linenos=true}
|
```jinja-html
|
||||||
{% raw %}<div class="entries-{{ entries_layout }}">
|
# torchlight! {"lineNumbers": true}
|
||||||
|
<div class="entries-{{ entries_layout }}">
|
||||||
{% for post in posts %}
|
{% for post in posts %}
|
||||||
{% include archive-single.html type=entries_layout %}
|
{% include archive-single.html type=entries_layout %}
|
||||||
{% endfor %}
|
{% endfor %}
|
||||||
</div>{% endraw %}
|
</div>
|
||||||
```
|
```
|
||||||
|
|
||||||
Putting it all together now, here's my new `_layouts/series.html` file:
|
Putting it all together now, here's my new `_layouts/series.html` file:
|
||||||
```liquid {linenos=true}
|
```jinja-html
|
||||||
{% raw %}---
|
# torchlight! {"lineNumbers": true}
|
||||||
|
---
|
||||||
layout: archive
|
layout: archive
|
||||||
---
|
---
|
||||||
|
|
||||||
|
@ -133,8 +139,9 @@ layout: archive
|
||||||
|
|
||||||
### Series pages
|
### Series pages
|
||||||
Since I can't use a plugin to automatically generate pages for each series, I'll have to do it manually. Fortunately this is pretty easy, and I've got a limited number of categories/series to worry about. I started by making a new `_pages/series-vra8.md` and setting it up thusly:
|
Since I can't use a plugin to automatically generate pages for each series, I'll have to do it manually. Fortunately this is pretty easy, and I've got a limited number of categories/series to worry about. I started by making a new `_pages/series-vra8.md` and setting it up thusly:
|
||||||
```markdown {linenos=true}
|
```markdown
|
||||||
{% raw %}---
|
// torchlight! {"lineNumbers": true}
|
||||||
|
---
|
||||||
title: "Adventures in vRealize Automation 8"
|
title: "Adventures in vRealize Automation 8"
|
||||||
layout: series
|
layout: series
|
||||||
permalink: "/series/vra8"
|
permalink: "/series/vra8"
|
||||||
|
@ -145,7 +152,7 @@ header:
|
||||||
teaser: assets/images/posts-2020/RtMljqM9x.png
|
teaser: assets/images/posts-2020/RtMljqM9x.png
|
||||||
---
|
---
|
||||||
|
|
||||||
*Follow along as I create a flexible VMware vRealize Automation 8 environment for provisioning virtual machines - all from the comfort of my Intel NUC homelab.*{% endraw %}
|
*Follow along as I create a flexible VMware vRealize Automation 8 environment for provisioning virtual machines - all from the comfort of my Intel NUC homelab.*
|
||||||
```
|
```
|
||||||
|
|
||||||
You can see that this page is referencing the series layout I just created, and it's going to live at `http://localhost/series/vra8` - precisely where this series was on Hashnode. I've tagged it with the category I want to feature on this page, and specified that the posts will be sorted in reverse order so that anyone reading through the series will start at the beginning (I hear it's a very good place to start). I also added a teaser image which will be displayed when I link to the series from elsewhere. And I included a quick little italicized blurb to tell readers what the series is about.
|
You can see that this page is referencing the series layout I just created, and it's going to live at `http://localhost/series/vra8` - precisely where this series was on Hashnode. I've tagged it with the category I want to feature on this page, and specified that the posts will be sorted in reverse order so that anyone reading through the series will start at the beginning (I hear it's a very good place to start). I also added a teaser image which will be displayed when I link to the series from elsewhere. And I included a quick little italicized blurb to tell readers what the series is about.
|
||||||
|
@ -154,8 +161,9 @@ Check it out [here](/series/vra8):
|
||||||
![vRA8 series](20210724-vra8-series.png)
|
![vRA8 series](20210724-vra8-series.png)
|
||||||
|
|
||||||
The other series pages will be basically the same, just without the reverse sort directive. Here's `_pages/series-tips.md`:
|
The other series pages will be basically the same, just without the reverse sort directive. Here's `_pages/series-tips.md`:
|
||||||
```markdown {linenos=true}
|
```markdown
|
||||||
{% raw %}---
|
// torchlight! {"lineNumbers": true}
|
||||||
|
---
|
||||||
title: "Tips & Tricks"
|
title: "Tips & Tricks"
|
||||||
layout: series
|
layout: series
|
||||||
permalink: "/series/tips"
|
permalink: "/series/tips"
|
||||||
|
@ -165,13 +173,14 @@ header:
|
||||||
teaser: assets/images/posts-2020/kJ_l7gPD2.png
|
teaser: assets/images/posts-2020/kJ_l7gPD2.png
|
||||||
---
|
---
|
||||||
|
|
||||||
*Useful tips and tricks I've stumbled upon.*{% endraw %}
|
*Useful tips and tricks I've stumbled upon.*
|
||||||
```
|
```
|
||||||
|
|
||||||
### Changing the category permalink
|
### Changing the category permalink
|
||||||
Just in case someone wants to look at all the post series in one place, I'll be keeping the existing category archive page around, but I'll want it to be found at `/series/` instead of `/categories/`. I'll start with going into the `_config.yml` file and changing the `category_archive` path:
|
Just in case someone wants to look at all the post series in one place, I'll be keeping the existing category archive page around, but I'll want it to be found at `/series/` instead of `/categories/`. I'll start with going into the `_config.yml` file and changing the `category_archive` path:
|
||||||
|
|
||||||
```yaml {linenos=true}
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
category_archive:
|
category_archive:
|
||||||
type: liquid
|
type: liquid
|
||||||
# path: /categories/
|
# path: /categories/
|
||||||
|
@ -182,13 +191,14 @@ tag_archive:
|
||||||
```
|
```
|
||||||
|
|
||||||
I'll also rename `_pages/category-archive.md` to `_pages/series-archive.md` and update its title and permalink:
|
I'll also rename `_pages/category-archive.md` to `_pages/series-archive.md` and update its title and permalink:
|
||||||
```markdown {linenos=true}
|
```markdown
|
||||||
{% raw %}---
|
// torchlight! {"lineNumbers": true}
|
||||||
|
---
|
||||||
title: "Posts by Series"
|
title: "Posts by Series"
|
||||||
layout: categories
|
layout: categories
|
||||||
permalink: /series/
|
permalink: /series/
|
||||||
author_profile: true
|
author_profile: true
|
||||||
---{% endraw %}
|
---
|
||||||
```
|
```
|
||||||
|
|
||||||
### Fixing category links in posts
|
### Fixing category links in posts
|
||||||
|
@ -198,30 +208,33 @@ The bottom of each post has a section which lists the tags and categories to whi
|
||||||
That *works* but I'd rather it reference the fancy new pages I created. Tracking down where to make that change was a bit of a journey.
|
That *works* but I'd rather it reference the fancy new pages I created. Tracking down where to make that change was a bit of a journey.
|
||||||
|
|
||||||
I started with the [`_layouts/single.html`](https://github.com/mmistakes/minimal-mistakes/blob/master/_layouts/single.html) file which is the layout I'm using for individual posts. This bit near the end gave me the clue I needed:
|
I started with the [`_layouts/single.html`](https://github.com/mmistakes/minimal-mistakes/blob/master/_layouts/single.html) file which is the layout I'm using for individual posts. This bit near the end gave me the clue I needed:
|
||||||
```liquid {linenos=true}
|
```jinja-html
|
||||||
{% raw %} <footer class="page__meta">
|
# torchlight! {"lineNumbers": true}
|
||||||
|
<footer class="page__meta">
|
||||||
{% if site.data.ui-text[site.locale].meta_label %}
|
{% if site.data.ui-text[site.locale].meta_label %}
|
||||||
<h4 class="page__meta-title">{{ site.data.ui-text[site.locale].meta_label }}</h4>
|
<h4 class="page__meta-title">{{ site.data.ui-text[site.locale].meta_label }}</h4>
|
||||||
{% endif %}
|
{% endif %}
|
||||||
{% include page__taxonomy.html %}
|
{% include page__taxonomy.html %}
|
||||||
{% include page__date.html %}
|
{% include page__date.html %}
|
||||||
</footer>{% endraw %}
|
</footer>
|
||||||
```
|
```
|
||||||
|
|
||||||
It looks like [`page__taxonomy.html`](https://github.com/mmistakes/minimal-mistakes/blob/master/_includes/page__taxonomy.html) is being used to display the tags and categories, so I then went to that file in the `_includes` directory:
|
It looks like [`page__taxonomy.html`](https://github.com/mmistakes/minimal-mistakes/blob/master/_includes/page__taxonomy.html) is being used to display the tags and categories, so I then went to that file in the `_includes` directory:
|
||||||
```liquid {linenos=true}
|
```jinja-html
|
||||||
{% raw %}{% if site.tag_archive.type and page.tags[0] %}
|
# torchlight! {"lineNumbers": true}
|
||||||
|
{% if site.tag_archive.type and page.tags[0] %}
|
||||||
{% include tag-list.html %}
|
{% include tag-list.html %}
|
||||||
{% endif %}
|
{% endif %}
|
||||||
|
|
||||||
{% if site.category_archive.type and page.categories[0] %}
|
{% if site.category_archive.type and page.categories[0] %}
|
||||||
{% include category-list.html %}
|
{% include category-list.html %}
|
||||||
{% endif %}{% endraw %}
|
{% endif %}
|
||||||
```
|
```
|
||||||
|
|
||||||
Okay, it looks like [`_includes/category-list.html`](https://github.com/mmistakes/minimal-mistakes/blob/master/_includes/category-list.html) is what I actually want. Here's that file:
|
Okay, it looks like [`_includes/category-list.html`](https://github.com/mmistakes/minimal-mistakes/blob/master/_includes/category-list.html) is what I actually want. Here's that file:
|
||||||
```liquid {linenos=true}
|
```jinja-html
|
||||||
{% raw %}{% case site.category_archive.type %}
|
# torchlight! {"lineNumbers": true}
|
||||||
|
{% case site.category_archive.type %}
|
||||||
{% when "liquid" %}
|
{% when "liquid" %}
|
||||||
{% assign path_type = "#" %}
|
{% assign path_type = "#" %}
|
||||||
{% when "jekyll-archives" %}
|
{% when "jekyll-archives" %}
|
||||||
|
@ -239,19 +252,21 @@ Okay, it looks like [`_include/category-list.html`](https://github.com/mmistakes
|
||||||
{% endfor %}
|
{% endfor %}
|
||||||
</span>
|
</span>
|
||||||
</p>
|
</p>
|
||||||
{% endif %}{% endraw %}
|
{% endif %}
|
||||||
```
|
```
|
||||||
|
|
||||||
I'm using the `liquid` archive approach since I can't use the `jekyll-archives` plugin, so I can see that it's setting the `path_type` to `"#"`. And near the bottom of the file, I can see that it's assembling the category link by slugifying the `category_word`, sticking the `path_type` in front of it, and then putting the `site.category_archive.path` (which I edited earlier in `_config.yml`) in front of that. So that's why my category links look like `/series/#category`. I can just edit the top of this file to statically set `path_type = nil` and that should clear this up in a jiffy:
|
I'm using the `liquid` archive approach since I can't use the `jekyll-archives` plugin, so I can see that it's setting the `path_type` to `"#"`. And near the bottom of the file, I can see that it's assembling the category link by slugifying the `category_word`, sticking the `path_type` in front of it, and then putting the `site.category_archive.path` (which I edited earlier in `_config.yml`) in front of that. So that's why my category links look like `/series/#category`. I can just edit the top of this file to statically set `path_type = nil` and that should clear this up in a jiffy:
|
||||||
```liquid {linenos=true}
|
```jinja-html
|
||||||
{% raw %}{% assign path_type = nil %}
|
# torchlight! {"lineNumbers": true}
|
||||||
|
{% assign path_type = nil %}
|
||||||
{% if site.category_archive.path %}
|
{% if site.category_archive.path %}
|
||||||
{% assign categories_sorted = page.categories | sort_natural %}
|
{% assign categories_sorted = page.categories | sort_natural %}
|
||||||
[...]{% endraw %}
|
[...]
|
||||||
```
|
```
|
||||||
|
|
||||||
To sell the series illusion even further, I can pop into [`_data/ui-text.yml`](https://github.com/mmistakes/minimal-mistakes/blob/master/_data/ui-text.yml) to update the string used for `categories_label`:
|
To sell the series illusion even further, I can pop into [`_data/ui-text.yml`](https://github.com/mmistakes/minimal-mistakes/blob/master/_data/ui-text.yml) to update the string used for `categories_label`:
|
||||||
```yaml {linenos=true}
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
meta_label :
|
meta_label :
|
||||||
tags_label : "Tags:"
|
tags_label : "Tags:"
|
||||||
categories_label : "Series:"
|
categories_label : "Series:"
|
||||||
|
@ -264,7 +279,8 @@ Much better!
|
||||||
|
|
||||||
### Updating the navigation header
|
### Updating the navigation header
|
||||||
And, finally, I'll want to update the navigation links at the top of each page to help visitors find my new featured series pages. For that, I can just edit `_data/navigation.yml` with links to my new pages:
|
And, finally, I'll want to update the navigation links at the top of each page to help visitors find my new featured series pages. For that, I can just edit `_data/navigation.yml` with links to my new pages:
|
||||||
```yaml {linenos=true}
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
main:
|
main:
|
||||||
- title: "vRealize Automation 8"
|
- title: "vRealize Automation 8"
|
||||||
url: /series/vra8
|
url: /series/vra8
|
||||||
|
|
|
@ -29,13 +29,14 @@ I will also add some properties to tell PowerCLI (and the `Invoke-VmScript` cmdl
|
||||||
|
|
||||||
##### Inputs section
|
##### Inputs section
|
||||||
I'll kick this off by going into Cloud Assembly and editing the `WindowsDemo` template I've been working on for the past few eons. I'll add a `diskSize` input:
|
I'll kick this off by going into Cloud Assembly and editing the `WindowsDemo` template I've been working on for the past few eons. I'll add a `diskSize` input:
|
||||||
```yaml {linenos=true}
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
formatVersion: 1
|
formatVersion: 1
|
||||||
inputs:
|
inputs:
|
||||||
site: [...]
|
site: [...]
|
||||||
image: [...]
|
image: [...]
|
||||||
size: [...]
|
size: [...]
|
||||||
diskSize:
|
diskSize: # [tl! focus:5]
|
||||||
title: 'System drive size'
|
title: 'System drive size'
|
||||||
default: 60
|
default: 60
|
||||||
type: integer
|
type: integer
|
||||||
|
@ -49,11 +50,12 @@ inputs:
|
||||||
The default value is set to 60GB to match the VMDK attached to the source template; that's also the minimum value since shrinking disks gets messy.
|
The default value is set to 60GB to match the VMDK attached to the source template; that's also the minimum value since shrinking disks gets messy.
|
||||||
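
A sketch of how that lower bound might be expressed on the input itself (assuming the vRA input schema's JSON-schema-style `minimum`/`maximum` keywords; the upper bound here is purely illustrative):

```yaml
  diskSize:
    title: 'System drive size'
    default: 60
    type: integer
    minimum: 60   # can't go below the template's 60GB VMDK
    maximum: 200  # illustrative upper bound
```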
|
|
||||||
I'll also drop in an `adminsList` input at the bottom of the section:
|
I'll also drop in an `adminsList` input at the bottom of the section:
|
||||||
```yaml {linenos=true}
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
[...]
|
[...]
|
||||||
poc_email: [...]
|
poc_email: [...]
|
||||||
ticket: [...]
|
ticket: [...]
|
||||||
adminsList:
|
adminsList: # [tl! focus:4]
|
||||||
type: string
|
type: string
|
||||||
title: Administrators
|
title: Administrators
|
||||||
description: Comma-separated list of domain accounts/groups which need admin access to this server.
|
description: Comma-separated list of domain accounts/groups which need admin access to this server.
|
||||||
|
@ -71,7 +73,8 @@ In the Resources section of the cloud template, I'm going to add a few propertie
|
||||||
|
|
||||||
I'll also include the `adminsList` input from earlier so that it can get passed to ABX as well. And I'm going to add in an `adJoin` property (mapped to the [existing `input.adJoin`](/joining-vms-to-active-directory-in-site-specific-ous-with-vra8#cloud-template)) so that I'll have that to work with later.
|
I'll also include the `adminsList` input from earlier so that it can get passed to ABX as well. And I'm going to add in an `adJoin` property (mapped to the [existing `input.adJoin`](/joining-vms-to-active-directory-in-site-specific-ous-with-vra8#cloud-template)) so that I'll have that to work with later.
|
||||||
|
|
||||||
```yaml {linenos=true}
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
[...]
|
[...]
|
||||||
resources:
|
resources:
|
||||||
Cloud_vSphere_Machine_1:
|
Cloud_vSphere_Machine_1:
|
||||||
|
@ -80,7 +83,7 @@ resources:
|
||||||
image: '${input.image}'
|
image: '${input.image}'
|
||||||
flavor: '${input.size}'
|
flavor: '${input.size}'
|
||||||
site: '${input.site}'
|
site: '${input.site}'
|
||||||
vCenter: vcsa.lab.bowdre.net
|
vCenter: vcsa.lab.bowdre.net # [tl! focus:3]
|
||||||
vCenterUser: vra@lab.bowdre.net
|
vCenterUser: vra@lab.bowdre.net
|
||||||
templateUser: '${input.adJoin ? "vra@lab" : "Administrator"}'
|
templateUser: '${input.adJoin ? "vra@lab" : "Administrator"}'
|
||||||
adminsList: '${input.adminsList}'
|
adminsList: '${input.adminsList}'
|
||||||
|
@ -93,12 +96,13 @@ resources:
|
||||||
```
|
```
|
||||||
|
|
||||||
And I will add in a `storage` property as well which will automatically adjust the deployed VMDK size to match the specified input:
|
And I will add in a `storage` property as well which will automatically adjust the deployed VMDK size to match the specified input:
|
||||||
```yaml {linenos=true}
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
[...]
|
[...]
|
||||||
description: '${input.description}'
|
description: '${input.description}'
|
||||||
networks: [...]
|
networks: [...]
|
||||||
constraints: [...]
|
constraints: [...]
|
||||||
storage:
|
storage: # [tl! focus:1]
|
||||||
bootDiskCapacityInGB: '${input.diskSize}'
|
bootDiskCapacityInGB: '${input.diskSize}'
|
||||||
Cloud_vSphere_Network_1:
|
Cloud_vSphere_Network_1:
|
||||||
type: Cloud.vSphere.Network
|
type: Cloud.vSphere.Network
|
||||||
|
@ -108,7 +112,8 @@ And I will add in a `storage` property as well which will automatically adjust t
|
||||||
|
|
||||||
##### Complete template
|
##### Complete template
|
||||||
Okay, all together now:
|
Okay, all together now:
|
||||||
```yaml {linenos=true}
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
formatVersion: 1
|
formatVersion: 1
|
||||||
inputs:
|
inputs:
|
||||||
site:
|
site:
|
||||||
|
@ -196,7 +201,7 @@ inputs:
|
||||||
poc_email:
|
poc_email:
|
||||||
type: string
|
type: string
|
||||||
title: Point of Contact Email
|
title: Point of Contact Email
|
||||||
default: jack.shephard@virtuallypotato.com
|
default: jack.shephard@example.com
|
||||||
pattern: '^[^\s@]+@[^\s@]+\.[^\s@]+$'
|
pattern: '^[^\s@]+@[^\s@]+\.[^\s@]+$'
|
||||||
ticket:
|
ticket:
|
||||||
type: string
|
type: string
|
||||||
|
@ -296,7 +301,8 @@ And I'll pop over to the right side to map the Action Constants I created earlie
|
||||||
![Mapping constants in action](20210901_map_constants_to_action.png)
|
![Mapping constants in action](20210901_map_constants_to_action.png)
|
||||||
|
|
||||||
Now for The Script:
|
Now for The Script:
|
||||||
```powershell {linenos=true}
|
```powershell
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
<# vRA 8.x ABX action to perform certain in-guest actions post-deploy:
|
<# vRA 8.x ABX action to perform certain in-guest actions post-deploy:
|
||||||
Windows:
|
Windows:
|
||||||
- auto-update VM tools
|
- auto-update VM tools
|
||||||
|
|
|
@ -90,7 +90,8 @@ Next it updates the links for any thumbnail images mentioned in the front matter
|
||||||
|
|
||||||
Lastly, it changes the `usePageBundles` flag from `false` to `true` so that Hugo knows what we've done.
|
Lastly, it changes the `usePageBundles` flag from `false` to `true` so that Hugo knows what we've done.
|
||||||
|
|
||||||
```bash {linenos=true}
|
```shell
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
#!/bin/bash
|
#!/bin/bash
|
||||||
# Hasty script to convert a given standard Hugo post (where the post content and
|
# Hasty script to convert a given standard Hugo post (where the post content and
|
||||||
# images are stored separately) to a Page Bundle (where the content and images are
|
# images are stored separately) to a Page Bundle (where the content and images are
|
||||||
|
|
|
@ -17,14 +17,13 @@ I'm preparing to migrate this blog thingy from Hashnode (which has been great!)
|
||||||
Hashnode helpfully (and automatically) backs up my posts in Markdown format to a private GitHub repo, so it was easy to clone those into a local working directory, but all the embedded images were still hosted on Hashnode:
|
Hashnode helpfully (and automatically) backs up my posts in Markdown format to a private GitHub repo, so it was easy to clone those into a local working directory, but all the embedded images were still hosted on Hashnode:
|
||||||
|
|
||||||
```markdown
|
```markdown
|
||||||
|
|
||||||
![Clever image title](https://cdn.hashnode.com/res/hashnode/image/upload/v1600098180227/lhTnVwCO3.png)
|
![Clever image title](https://cdn.hashnode.com/res/hashnode/image/upload/v1600098180227/lhTnVwCO3.png)
|
||||||
|
|
||||||
```
|
```
|
||||||
|
|
||||||
I wanted to download those images to `./assets/images/posts-2020/` within my local Jekyll working directory, and then update the `*.md` files to reflect the correct local path... without doing it all manually. It took a bit of trial and error to get the regex working just right (and the result is neither pretty nor elegant), but here's what I came up with:
|
I wanted to download those images to `./assets/images/posts-2020/` within my local Jekyll working directory, and then update the `*.md` files to reflect the correct local path... without doing it all manually. It took a bit of trial and error to get the regex working just right (and the result is neither pretty nor elegant), but here's what I came up with:
|
||||||
|
|
||||||
```bash {linenos=true}
|
```shell
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
#!/bin/bash
|
#!/bin/bash
|
||||||
# Hasty script to process a blog post markdown file, capture the URL for embedded images,
|
# Hasty script to process a blog post markdown file, capture the URL for embedded images,
|
||||||
# download the image locally, and modify the markdown file with the relative image path.
|
# download the image locally, and modify the markdown file with the relative image path.
|
||||||
|
@ -49,16 +48,14 @@ done
|
||||||
|
|
||||||
I could then run that against all of the Markdown posts under `./_posts/` with:
|
I could then run that against all of the Markdown posts under `./_posts/` with:
|
||||||
|
|
||||||
```command
|
```shell
|
||||||
for post in $(ls _posts/); do ~/scripts/imageMigration.sh $post; done
|
for post in $(ls _posts/); do ~/scripts/imageMigration.sh $post; done # [tl! .cmd]
|
||||||
```
|
```
|
||||||
|
|
||||||
And the image embeds in the local copy of my posts now all look like this:
|
And the image embeds in the local copy of my posts now all look like this:
|
||||||
|
|
||||||
```markdown
|
```markdown
|
||||||
|
|
||||||
![Clever image title](lhTnVwCO3.png)
|
![Clever image title](lhTnVwCO3.png)
|
||||||
|
|
||||||
```
|
```
|
||||||
|
|
||||||
Brilliant!
|
Brilliant!
|
|
@ -54,8 +54,8 @@ The first step in getting up and running with Tailscale is to sign up at [https:
|
||||||
|
|
||||||
Once you have a Tailscale account, you're ready to install the Tailscale client. The [download page](https://tailscale.com/download) outlines how to install it on various platforms, and also provides a handy-dandy one-liner to install it on Linux:
|
Once you have a Tailscale account, you're ready to install the Tailscale client. The [download page](https://tailscale.com/download) outlines how to install it on various platforms, and also provides a handy-dandy one-liner to install it on Linux:
|
||||||
|
|
||||||
```command
|
```shell
|
||||||
curl -fsSL https://tailscale.com/install.sh | sh
|
curl -fsSL https://tailscale.com/install.sh | sh # [tl! .cmd]
|
||||||
```
|
```
|
||||||
|
|
||||||
After the install completes, it will tell you exactly what you need to do next:
|
After the install completes, it will tell you exactly what you need to do next:
|
||||||
|
@ -71,9 +71,9 @@ There are also Tailscale apps available for [iOS](https://tailscale.com/download
|
||||||
#### Basic `tailscale up`
|
#### Basic `tailscale up`
|
||||||
Running `sudo tailscale up` then reveals the next step:
|
Running `sudo tailscale up` then reveals the next step:
|
||||||
|
|
||||||
```command-session
|
```shell
|
||||||
sudo tailscale up
|
sudo tailscale up # [tl! .cmd]
|
||||||
|
# [tl! .nocopy:3]
|
||||||
To authenticate, visit:
|
To authenticate, visit:
|
||||||
|
|
||||||
https://login.tailscale.com/a/1872939939df
|
https://login.tailscale.com/a/1872939939df
|
||||||
|
@ -83,8 +83,8 @@ I can copy that address into a browser and I'll get prompted to log in to my Tai
|
||||||
|
|
||||||
That was pretty easy, right? But what if I can't easily get to a web browser from the terminal session on a certain device? No worries, `tailscale up` has a flag for that:
|
That was pretty easy, right? But what if I can't easily get to a web browser from the terminal session on a certain device? No worries, `tailscale up` has a flag for that:
|
||||||
|
|
||||||
```command
|
```shell
|
||||||
sudo tailscale up --qr
|
sudo tailscale up --qr # [tl! .cmd]
|
||||||
```
|
```
|
||||||
|
|
||||||
That will convert the URL to a QR code that I can scan from my phone.
|
That will convert the URL to a QR code that I can scan from my phone.
|
||||||
|
@ -93,44 +93,44 @@ That will convert the URL to a QR code that I can scan from my phone.
|
||||||
There are a few additional flags that can be useful under certain situations:
|
There are a few additional flags that can be useful under certain situations:
|
||||||
|
|
||||||
- `--advertise-exit-node` to tell the tailnet that this could be used as an exit node for internet traffic
|
- `--advertise-exit-node` to tell the tailnet that this could be used as an exit node for internet traffic
|
||||||
```command
|
```shell
|
||||||
sudo tailscale up --advertise-exit-node
|
sudo tailscale up --advertise-exit-node # [tl! .cmd]
|
||||||
```
|
```
|
||||||
- `--advertise-routes` to let the node perform subnet routing functions to provide connectivity to specified local subnets
|
- `--advertise-routes` to let the node perform subnet routing functions to provide connectivity to specified local subnets
|
||||||
```command
|
```shell
|
||||||
sudo tailscale up --advertise-routes "192.168.1.0/24,172.16.0.0/16"
|
sudo tailscale up --advertise-routes "192.168.1.0/24,172.16.0.0/16" # [tl! .cmd]
|
||||||
```
|
```
|
||||||
- `--advertise-tags`[^tags] to associate the node with certain tags for ACL purposes (like `tag:home` to identify stuff in my home network and `tag:cloud` to label external cloud-hosted resources)
|
- `--advertise-tags`[^tags] to associate the node with certain tags for ACL purposes (like `tag:home` to identify stuff in my home network and `tag:cloud` to label external cloud-hosted resources)
|
||||||
```command
|
```shell
|
||||||
sudo tailscale up --advertise-tags "tag:cloud"
|
sudo tailscale up --advertise-tags "tag:cloud" # [tl! .cmd]
|
||||||
```
|
```
|
||||||
- `--hostname` to manually specify a hostname to use within the tailnet
|
- `--hostname` to manually specify a hostname to use within the tailnet
|
||||||
```command
|
```shell
|
||||||
sudo tailscale up --hostname "tailnode"
|
sudo tailscale up --hostname "tailnode" # [tl! .cmd]
|
||||||
```
|
```
|
||||||
- `--shields-up` to block incoming traffic
|
- `--shields-up` to block incoming traffic
|
||||||
```command
|
```shell
|
||||||
sudo tailscale up --shields-up
|
sudo tailscale up --shields-up # [tl! .cmd]
|
||||||
```
|
```
|
||||||
|
|
||||||
These flags can also be combined with each other:
|
These flags can also be combined with each other:
|
||||||
```command
|
```shell
|
||||||
sudo tailscale up --hostname "tailnode" --advertise-exit-node --qr
|
sudo tailscale up --hostname "tailnode" --advertise-exit-node --qr # [tl! .cmd]
|
||||||
```
|
```
|
||||||
|
|
||||||
[^tags]: Before being able to assign tags at the command line, you must first define tag owners who can manage the tag. On a personal account, you've only got one user to worry about, but you still have to set this up first. I'll go over this in a bit but here's [the documentation](https://tailscale.com/kb/1068/acl-tags/#defining-a-tag) if you want to skip ahead.
|
[^tags]: Before being able to assign tags at the command line, you must first define tag owners who can manage the tag. On a personal account, you've only got one user to worry about, but you still have to set this up first. I'll go over this in a bit but here's [the documentation](https://tailscale.com/kb/1068/acl-tags/#defining-a-tag) if you want to skip ahead.
|
||||||
|
|
||||||
#### Sidebar: Tailscale on VyOS
|
#### Sidebar: Tailscale on VyOS
|
||||||
Getting Tailscale on [my VyOS virtual router](/vmware-home-lab-on-intel-nuc-9/#vyos) was unfortunately a little more involved than [leveraging the built-in WireGuard capability](/cloud-based-wireguard-vpn-remote-homelab-access/#configure-vyos-router-as-wireguard-peer). I found the [vyos-tailscale](https://github.com/DMarby/vyos-tailscale) project to help with building a customized VyOS installation ISO with the `tailscaled` daemon added in. I was then able to copy the ISO over to my VyOS instance and install it as if it were a [standard upgrade](https://docs.vyos.io/en/latest/installation/update.html). I could then bring up the interface, advertise my home networks, and make it available as an exit node with:
|
Getting Tailscale on [my VyOS virtual router](/vmware-home-lab-on-intel-nuc-9/#vyos) was unfortunately a little more involved than [leveraging the built-in WireGuard capability](/cloud-based-wireguard-vpn-remote-homelab-access/#configure-vyos-router-as-wireguard-peer). I found the [vyos-tailscale](https://github.com/DMarby/vyos-tailscale) project to help with building a customized VyOS installation ISO with the `tailscaled` daemon added in. I was then able to copy the ISO over to my VyOS instance and install it as if it were a [standard upgrade](https://docs.vyos.io/en/latest/installation/update.html). I could then bring up the interface, advertise my home networks, and make it available as an exit node with:
|
||||||
```command
|
```shell
|
||||||
sudo tailscale up --advertise-exit-node --advertise-routes "192.168.1.0/24,172.16.0.0/16"
|
sudo tailscale up --advertise-exit-node --advertise-routes "192.168.1.0/24,172.16.0.0/16" # [tl! .cmd]
|
||||||
```
|
```
|
||||||
|
|
||||||
#### Other `tailscale` commands
|
#### Other `tailscale` commands
|
||||||
Once there are a few members, I can use the `tailscale status` command to see a quick overview of the tailnet:
|
Once there are a few members, I can use the `tailscale status` command to see a quick overview of the tailnet:
|
||||||
```command-session
|
```shell
|
||||||
tailscale status
|
tailscale status # [tl! .cmd]
|
||||||
100.115.115.39 deb01 john@ linux -
|
100.115.115.39 deb01 john@ linux - # [tl! .nocopy:start]
|
||||||
100.118.115.69 ipam john@ linux -
|
100.118.115.69 ipam john@ linux -
|
||||||
100.116.90.109 johns-iphone john@ iOS -
|
100.116.90.109 johns-iphone john@ iOS -
|
||||||
100.116.31.85 matrix john@ linux -
|
100.116.31.85 matrix john@ linux -
|
||||||
|
@ -138,16 +138,16 @@ tailscale status
|
||||||
100.94.127.1 pixelbook john@ android -
|
100.94.127.1 pixelbook john@ android -
|
||||||
100.75.110.50 snikket john@ linux -
|
100.75.110.50 snikket john@ linux -
|
||||||
100.96.24.81 vyos john@ linux -
|
100.96.24.81 vyos john@ linux -
|
||||||
100.124.116.125 win01 john@ windows -
|
100.124.116.125 win01 john@ windows - # [tl! .nocopy:end]
|
||||||
```
|
```
|
||||||
|
|
||||||
Without doing any other configuration beyond just installing Tailscale and connecting it to my account, I can now easily connect from any of these devices to any of the other devices using the listed Tailscale IP[^magicdns]. Entering `ssh 100.116.31.85` will connect me to my Matrix server.
|
Without doing any other configuration beyond just installing Tailscale and connecting it to my account, I can now easily connect from any of these devices to any of the other devices using the listed Tailscale IP[^magicdns]. Entering `ssh 100.116.31.85` will connect me to my Matrix server.
|
||||||
|
|
||||||
`tailscale ping` lets me check the latency between two Tailscale nodes at the Tailscale layer; the first couple of pings will likely be delivered through a nearby DERP server until the NAT traversal magic is able to kick in:
|
`tailscale ping` lets me check the latency between two Tailscale nodes at the Tailscale layer; the first couple of pings will likely be delivered through a nearby DERP server until the NAT traversal magic is able to kick in:
|
||||||
|
|
||||||
```command-session
|
```shell
|
||||||
tailscale ping snikket
|
tailscale ping snikket # [tl! .cmd]
|
||||||
pong from snikket (100.75.110.50) via DERP(nyc) in 34ms
|
pong from snikket (100.75.110.50) via DERP(nyc) in 34ms # [tl! .nocopy:3]
|
||||||
pong from snikket (100.75.110.50) via DERP(nyc) in 35ms
|
pong from snikket (100.75.110.50) via DERP(nyc) in 35ms
|
||||||
pong from snikket (100.75.110.50) via DERP(nyc) in 35ms
|
pong from snikket (100.75.110.50) via DERP(nyc) in 35ms
|
||||||
pong from snikket (100.75.110.50) via [PUBLIC_IP]:41641 in 23ms
|
pong from snikket (100.75.110.50) via [PUBLIC_IP]:41641 in 23ms
|
||||||
|
@ -155,9 +155,9 @@ pong from snikket (100.75.110.50) via [PUBLIC_IP]:41641 in 23ms
|
||||||
|
|
||||||
The `tailscale netcheck` command will give me some details about my local Tailscale node, like whether it's able to pass UDP traffic, which DERP server is the closest, and the latency to all Tailscale DERP servers:
|
The `tailscale netcheck` command will give me some details about my local Tailscale node, like whether it's able to pass UDP traffic, which DERP server is the closest, and the latency to all Tailscale DERP servers:
|
||||||
|
|
||||||
```command-session
|
```shell
|
||||||
tailscale netcheck
|
tailscale netcheck # [tl! .cmd]
|
||||||
|
# [tl! .nocopy:start]
|
||||||
Report:
|
Report:
|
||||||
* UDP: true
|
* UDP: true
|
||||||
* IPv4: yes, [LOCAL_PUBLIC_IP]:52661
|
* IPv4: yes, [LOCAL_PUBLIC_IP]:52661
|
||||||
|
@ -178,7 +178,7 @@ Report:
|
||||||
- tok: 154.9ms (Tokyo)
|
- tok: 154.9ms (Tokyo)
|
||||||
- syd: 215.3ms (Sydney)
|
- syd: 215.3ms (Sydney)
|
||||||
- sin: 243.7ms (Singapore)
|
- sin: 243.7ms (Singapore)
|
||||||
- blr: 244.6ms (Bangalore)
|
- blr: 244.6ms (Bangalore) # [tl! .nocopy:end]
|
||||||
```
|
```
|
||||||
|
|
||||||
[^magicdns]: I could also connect using the Tailscale hostname, if [MagicDNS](https://tailscale.com/kb/1081/magicdns/) is enabled - but I'm getting ahead of myself.
|
[^magicdns]: I could also connect using the Tailscale hostname, if [MagicDNS](https://tailscale.com/kb/1081/magicdns/) is enabled - but I'm getting ahead of myself.
|
||||||
|
@ -244,7 +244,8 @@ This ACL file uses a format called [HuJSON](https://github.com/tailscale/hujson)
|
||||||
|
|
||||||
I'm going to start by creating a group called `admins` and add myself to that group. This isn't strictly necessary since I am the only user in the organization, but I feel like it's a nice practice anyway. Then I'll add the `tagOwners` section to map each tag to its owner, the new group I just created:
|
I'm going to start by creating a group called `admins` and add myself to that group. This isn't strictly necessary since I am the only user in the organization, but I feel like it's a nice practice anyway. Then I'll add the `tagOwners` section to map each tag to its owner, the new group I just created:
|
||||||
|
|
||||||
```json {linenos=true}
|
```json
|
||||||
|
// torchlight! {"lineNumbers": true}
|
||||||
{
|
{
|
||||||
"groups": {
|
"groups": {
|
||||||
"group:admins": ["john@example.com"],
|
"group:admins": ["john@example.com"],
|
||||||
|
@ -276,7 +277,8 @@ Each ACL rule consists of four named parts:
|
||||||
4. `ports` - a list of destinations (and optional ports).
|
4. `ports` - a list of destinations (and optional ports).
|
||||||
|
|
||||||
So I'll add this to the top of my policy file:
|
So I'll add this to the top of my policy file:
|
||||||
```json {linenos=true}
|
```json
|
||||||
|
// torchlight! {"lineNumbers": true}
|
||||||
{
|
{
|
||||||
"acls": [
|
"acls": [
|
||||||
{
|
{
|
||||||
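// (rule body collapsed in this diff; a filled-in rule might read something
//  like the following - the field values here are illustrative assumptions,
//  not the actual policy)
"action": "accept",
"users":  ["group:admins"],
"proto":  "tcp",
"ports":  ["tag:home:22"]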
|
@ -305,7 +307,8 @@ Earlier I configured Tailscale to force all nodes to use my home DNS server for
|
||||||
2. Add a new ACL rule to allow DNS traffic to reach the DNS server from the cloud.
|
2. Add a new ACL rule to allow DNS traffic to reach the DNS server from the cloud.
|
||||||
|
|
||||||
Option 2 sounds better to me so that's what I'm going to do. Instead of putting an IP address directly into the ACL rule I'd rather use a hostname, but unfortunately Tailscale hostnames aren't available within ACL rule declarations. I can, however, define a host alias in the policy to map a friendly name to the IP:
|
Option 2 sounds better to me so that's what I'm going to do. Instead of putting an IP address directly into the ACL rule I'd rather use a hostname, but unfortunately Tailscale hostnames aren't available within ACL rule declarations. I can, however, define a host alias in the policy to map a friendly name to the IP:
|
||||||
```json {linenos=true}
|
```json
|
||||||
|
// torchlight! {"lineNumbers": true}
|
||||||
{
|
{
|
||||||
"hosts": {
|
"hosts": {
|
||||||
"win01": "100.124.116.125"
|
"win01": "100.124.116.125"
|
||||||
|
@ -314,7 +317,8 @@ Option 2 sounds better to me so that's what I'm going to do. Instead of putting
|
||||||
```
|
```
|
||||||
|
|
||||||
And I can then create a new rule for `"users": ["tag:cloud"]` to add an exception for `win01:53`:
|
And I can then create a new rule for `"users": ["tag:cloud"]` to add an exception for `win01:53`:
|
||||||
```json {linenos=true}
|
```json
|
||||||
|
// torchlight! {"lineNumbers": true}
|
||||||
{
|
{
|
||||||
"acls": [
|
"acls": [
|
||||||
{
|
{
|
||||||
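// (rule body collapsed in this diff; the exception described above might be
//  expressed along these lines - shown as an illustration, not the actual file)
"action": "accept",
"users":  ["tag:cloud"],
"ports":  ["win01:53"]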
|
@ -331,7 +335,8 @@ And I can then create a new rule for `"users": ["tag:cloud"]` to add an exceptio
|
||||||
|
|
||||||
And that gets DNS working again for my cloud servers while still serving the results from my NextDNS configuration. Here's the complete policy configuration:
|
And that gets DNS working again for my cloud servers while still serving the results from my NextDNS configuration. Here's the complete policy configuration:
|
||||||
|
|
||||||
```json {linenos=true}
|
```json
|
||||||
|
// torchlight! {"lineNumbers": true}
|
||||||
{
|
{
|
||||||
"acls": [
|
"acls": [
|
||||||
{
|
{
|
||||||
|
|
|
@ -37,40 +37,41 @@ You're ready to roll once the Terminal opens and gives you a prompt:
|
||||||
![Hello, Penguin!](0-h1flLZs.png)
|
![Hello, Penguin!](0-h1flLZs.png)
|
||||||
|
|
||||||
Your first action should be to go ahead and install any patches:
|
Your first action should be to go ahead and install any patches:
|
||||||
```command
|
```shell
|
||||||
sudo apt update
|
sudo apt update # [tl! .cmd:1]
|
||||||
sudo apt upgrade
|
sudo apt upgrade
|
||||||
```
|
```
|
||||||
|
|
||||||
### Zsh, Oh My Zsh, and powerlevel10k theme
|
### Zsh, Oh My Zsh, and powerlevel10k theme
|
||||||
I've been really getting into this shell setup recently so let's go on and make things comfortable before we move on too much further. Getting `zsh` is straightforward:
|
I've been really getting into this shell setup recently so let's go on and make things comfortable before we move on too much further. Getting `zsh` is straightforward:
|
||||||
```command
|
```shell
|
||||||
sudo apt install zsh
|
sudo apt install zsh # [tl! .cmd]
|
||||||
```
|
```
|
||||||
Go ahead and launch `zsh` (by typing '`zsh`') and go through the initial setup wizard to configure preferences for things like history, completion, and other settings. I leave history on the defaults, enable the default completion options, switch the command-line editor to `vi`-style, and enable both `autocd` and `appendhistory`. Once you're back at the (new) `penguin%` prompt we can move on to installing the [Oh My Zsh plugin framework](https://github.com/ohmyzsh/ohmyzsh).
|
Go ahead and launch `zsh` (by typing '`zsh`') and go through the initial setup wizard to configure preferences for things like history, completion, and other settings. I leave history on the defaults, enable the default completion options, switch the command-line editor to `vi`-style, and enable both `autocd` and `appendhistory`. Once you're back at the (new) `penguin%` prompt we can move on to installing the [Oh My Zsh plugin framework](https://github.com/ohmyzsh/ohmyzsh).
|
||||||
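(As an aside, a couple of those wizard choices boil down to one-line `setopt` entries if you'd rather drop them into `~/.zshrc` by hand - a quick sketch:)

```shell
# rough equivalents of two of the wizard's options, in ~/.zshrc form
setopt autocd        # typing a directory name by itself cd's into it
setopt appendhistory # append to the history file instead of overwriting it
```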
|
|
||||||
Just grab the installer script like so:
|
Just grab the installer script like so:
|
||||||
```command
|
```shell
|
||||||
wget https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh
|
wget https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh # [tl! .cmd]
|
||||||
```
|
```
|
||||||
Review it if you'd like (and you should! *Always* review code before running it!!), and then execute it:
|
Review it if you'd like (and you should! *Always* review code before running it!!), and then execute it:
|
||||||
```command
|
```shell
|
||||||
sh install.sh
|
sh install.sh # [tl! .cmd]
|
||||||
```
|
```
|
||||||
When asked if you'd like to change your default shell to `zsh` now, **say no**. This is because it will prompt for your password, but you probably don't have a password set on your brand-new Linux (Beta) account and that just makes things complicated. We'll clear this up later, but for now just check out that slick new prompt:
|
When asked if you'd like to change your default shell to `zsh` now, **say no**. This is because it will prompt for your password, but you probably don't have a password set on your brand-new Linux (Beta) account and that just makes things complicated. We'll clear this up later, but for now just check out that slick new prompt:
|
||||||
![Oh my!](8q-WT0AyC.png)
|
![Oh my!](8q-WT0AyC.png)
|
||||||
|
|
||||||
Oh My Zsh is pretty handy because you can easily enable [additional plugins](https://github.com/ohmyzsh/ohmyzsh/tree/master/plugins) to make your prompt behave exactly the way you want it to. Let's spruce it up even more with the [powerlevel10k theme](https://github.com/romkatv/powerlevel10k)!
|
Oh My Zsh is pretty handy because you can easily enable [additional plugins](https://github.com/ohmyzsh/ohmyzsh/tree/master/plugins) to make your prompt behave exactly the way you want it to. Let's spruce it up even more with the [powerlevel10k theme](https://github.com/romkatv/powerlevel10k)!
|
||||||
```command
|
```shell
|
||||||
git clone --depth=1 https://github.com/romkatv/powerlevel10k.git ${ZSH_CUSTOM:-$HOME/.oh-my-zsh/custom}/themes/powerlevel10k
|
git clone --depth=1 https://github.com/romkatv/powerlevel10k.git \ # [tl! .cmd]
|
||||||
|
${ZSH_CUSTOM:-$HOME/.oh-my-zsh/custom}/themes/powerlevel10k
|
||||||
```
|
```
|
||||||
Now we just need to edit `~/.zshrc` to point to the new theme:
|
Now we just need to edit `~/.zshrc` to point to the new theme:
|
||||||
```command
|
```shell
|
||||||
sed -i s/^ZSH_THEME=.\*$/ZSH_THEME='"powerlevel10k\/powerlevel10k"'/ ~/.zshrc
|
sed -i s/^ZSH_THEME=.\*$/ZSH_THEME='"powerlevel10k\/powerlevel10k"'/ ~/.zshrc # [tl! .cmd]
|
||||||
```
|
```
|
||||||
We'll need to launch another instance of `zsh` for the theme change to take effect so first let's go ahead and manually set `zsh` as our default shell. We can use `sudo` to get around the whole "don't have a password set" inconvenience:
|
We'll need to launch another instance of `zsh` for the theme change to take effect so first let's go ahead and manually set `zsh` as our default shell. We can use `sudo` to get around the whole "don't have a password set" inconvenience:
|
||||||
```command
|
```shell
|
||||||
sudo chsh -s /bin/zsh [username]
|
sudo chsh -s /bin/zsh [username] # [tl! .cmd]
|
||||||
```
|
```
|
||||||
Now close out the terminal and open it again, and you should be met by the powerlevel10k configurator which will walk you through getting things set up:
|
Now close out the terminal and open it again, and you should be met by the powerlevel10k configurator which will walk you through getting things set up:
|
||||||
![powerlevel10k configurator](K1ScSuWcg.png)
|
![powerlevel10k configurator](K1ScSuWcg.png)
|
||||||
|
@ -82,8 +83,8 @@ Looking good!
|
||||||
|
|
||||||
### Visual Studio Code
|
### Visual Studio Code
|
||||||
I'll need to do some light development work so VS Code is next on the hit list. You can grab the installer [here](https://code.visualstudio.com/Download#) or just copy/paste the following to stay in the Terminal. Definitely be sure to get the arm64 version!
|
I'll need to do some light development work so VS Code is next on the hit list. You can grab the installer [here](https://code.visualstudio.com/Download#) or just copy/paste the following to stay in the Terminal. Definitely be sure to get the arm64 version!
|
||||||
```command
|
```shell
|
||||||
curl -L https://aka.ms/linux-arm64-deb > code_arm64.deb
|
curl -L https://aka.ms/linux-arm64-deb > code_arm64.deb # [tl! .cmd:1]
|
||||||
sudo apt install ./code_arm64.deb
|
sudo apt install ./code_arm64.deb
|
||||||
```
|
```
|
||||||
VS Code should automatically appear in the Chromebook's Launcher, or you can use it to open a file directly with `code [filename]`:
|
VS Code should automatically appear in the Chromebook's Launcher, or you can use it to open a file directly with `code [filename]`:
|
||||||
|
@ -104,8 +105,8 @@ Once you connect the phone to Linux, check the phone to approve the debugging co
|
||||||
I'm working on setting up a [VMware homelab on an Intel NUC 9](https://twitter.com/johndotbowdre/status/1317558182936563714) so being able to automate things with PowerCLI will be handy.
|
I'm working on setting up a [VMware homelab on an Intel NUC 9](https://twitter.com/johndotbowdre/status/1317558182936563714) so being able to automate things with PowerCLI will be handy.
|
||||||
|
|
||||||
PowerShell for ARM is still in an early stage so while [it is supported](https://docs.microsoft.com/en-us/powershell/scripting/install/installing-powershell-core-on-linux?view=powershell-7.2#support-for-arm-processors) it must be installed manually. Microsoft has instructions for installing PowerShell from binary archives [here](https://docs.microsoft.com/en-us/powershell/scripting/install/installing-powershell-core-on-linux?view=powershell-7.2#linux), and I grabbed the latest `-linux-arm64.tar.gz` release I could find [here](https://github.com/PowerShell/PowerShell/releases).
|
PowerShell for ARM is still in an early stage so while [it is supported](https://docs.microsoft.com/en-us/powershell/scripting/install/installing-powershell-core-on-linux?view=powershell-7.2#support-for-arm-processors) it must be installed manually. Microsoft has instructions for installing PowerShell from binary archives [here](https://docs.microsoft.com/en-us/powershell/scripting/install/installing-powershell-core-on-linux?view=powershell-7.2#linux), and I grabbed the latest `-linux-arm64.tar.gz` release I could find [here](https://github.com/PowerShell/PowerShell/releases).
|
||||||
```command
|
```shell
|
||||||
curl -L -o /tmp/powershell.tar.gz https://github.com/PowerShell/PowerShell/releases/download/v7.2.0-preview.5/powershell-7.2.0-preview.5-linux-arm64.tar.gz
|
curl -L -o /tmp/powershell.tar.gz https://github.com/PowerShell/PowerShell/releases/download/v7.2.0-preview.5/powershell-7.2.0-preview.5-linux-arm64.tar.gz # [tl! .cmd:4]
|
||||||
sudo mkdir -p /opt/microsoft/powershell/7
|
sudo mkdir -p /opt/microsoft/powershell/7
|
||||||
sudo tar zxf /tmp/powershell.tar.gz -C /opt/microsoft/powershell/7
|
sudo tar zxf /tmp/powershell.tar.gz -C /opt/microsoft/powershell/7
|
||||||
sudo chmod +x /opt/microsoft/powershell/7/pwsh
|
sudo chmod +x /opt/microsoft/powershell/7/pwsh
|
||||||
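# (the last command in this block is collapsed in the diff; per Microsoft's
#  binary-archive install docs it would be a symlink to put pwsh on the PATH,
#  along the lines of:)
sudo ln -s /opt/microsoft/powershell/7/pwsh /usr/bin/pwsh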
|
@ -124,8 +125,8 @@ Woot!
|
||||||
The Linux (Beta) environment consists of a hardened virtual machine (named `termina`) running an LXC Debian container (named `penguin`). Know what would be even more fun? Let's run some other containers inside our container!
|
The Linux (Beta) environment consists of a hardened virtual machine (named `termina`) running an LXC Debian container (named `penguin`). Know what would be even more fun? Let's run some other containers inside our container!
|
||||||
|
|
||||||
The docker installation has a few prerequisites:
|
The docker installation has a few prerequisites:
|
||||||
```command-session
|
```shell
|
||||||
sudo apt install \
|
sudo apt install \ # [tl! .cmd]
|
||||||
apt-transport-https \
|
apt-transport-https \
|
||||||
ca-certificates \
|
ca-certificates \
|
||||||
curl \
|
curl \
|
||||||
|
@ -133,19 +134,19 @@ sudo apt install \
|
||||||
software-properties-common
|
software-properties-common
|
||||||
```
|
```
|
||||||
Then we need to grab the Docker repo key:
|
Then we need to grab the Docker repo key:
|
||||||
```command
|
```shell
|
||||||
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
|
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add - # [tl! .cmd]
|
||||||
```
|
```
|
||||||
And then we can add the repo:
|
And then we can add the repo:
|
||||||
```command-session
|
```shell
|
||||||
sudo add-apt-repository \
|
sudo add-apt-repository \ # [tl! .cmd]
|
||||||
"deb [arch=arm64] https://download.docker.com/linux/debian \
|
"deb [arch=arm64] https://download.docker.com/linux/debian \
|
||||||
$(lsb_release -cs) \
|
$(lsb_release -cs) \
|
||||||
stable"
|
stable"
|
||||||
```
|
```
|
||||||
And finally update the package cache and install `docker` and its friends:
|
And finally update the package cache and install `docker` and its friends:
|
||||||
```command
|
```shell
|
||||||
sudo apt update
|
sudo apt update # [tl! .cmd:1]
|
||||||
sudo apt install docker-ce docker-ce-cli containerd.io
|
sudo apt install docker-ce docker-ce-cli containerd.io
|
||||||
```
|
```
|
||||||
![I put a container in your container](k2uiYi5e8.png)
|
![I put a container in your container](k2uiYi5e8.png)
|
||||||
|
@ -163,14 +164,14 @@ So while I can use the Duet for designing 3D models, I won't be able to actually
|
||||||
I came across [a Reddit post](https://www.reddit.com/r/Crostini/comments/jnbqv3/successfully_running_jupyter_notebook_on_samsung/) today describing how to install `conda` and get a Jupyter Notebook running on arm64 so I had to give it a try. It actually wasn't that bad!
|
I came across [a Reddit post](https://www.reddit.com/r/Crostini/comments/jnbqv3/successfully_running_jupyter_notebook_on_samsung/) today describing how to install `conda` and get a Jupyter Notebook running on arm64 so I had to give it a try. It actually wasn't that bad!
|
||||||
|
|
||||||
The key is to grab the appropriate version of [conda Miniforge](https://github.com/conda-forge/miniforge), make it executable, and run the installer:
|
The key is to grab the appropriate version of [conda Miniforge](https://github.com/conda-forge/miniforge), make it executable, and run the installer:
|
||||||
```command
|
```shell
|
||||||
wget https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Linux-aarch64.sh
|
wget https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Linux-aarch64.sh # [tl! .cmd:2]
|
||||||
chmod +x Miniforge3-Linux-aarch64.sh
|
chmod +x Miniforge3-Linux-aarch64.sh
|
||||||
./Miniforge3-Linux-aarch64.sh
|
./Miniforge3-Linux-aarch64.sh
|
||||||
```
|
```
|
||||||
Exit the terminal and relaunch it, and then install Jupyter:
|
Exit the terminal and relaunch it, and then install Jupyter:
|
||||||
```command
|
```shell
|
||||||
conda install -c conda-forge notebook
|
conda install -c conda-forge notebook # [tl! .cmd]
|
||||||
```
|
```
|
||||||
|
|
||||||
You can then launch the notebook with `jupyter notebook` and it will automatically open up in a Chrome OS browser tab:
|
You can then launch the notebook with `jupyter notebook` and it will automatically open up in a Chrome OS browser tab:
|
||||||
|
|
|
@ -58,8 +58,8 @@ You can refer to my notes from last time for details on how I [created the Ubunt
|
||||||
| `60000-60100`[^4] | UDP | Audio/Video data proxy (TURN data) |
|
| `60000-60100`[^4] | UDP | Audio/Video data proxy (TURN data) |
|
||||||
|
|
||||||
As a gentle reminder, Oracle's `iptables` configuration inserts a `REJECT all` rule at the bottom of each chain. I needed to make sure that each of my `ALLOW` rules gets inserted above that point. So I used `iptables -L INPUT --line-numbers` to identify which line held the `REJECT` rule, and then used `iptables -I INPUT [LINE_NUMBER] -m state --state NEW -p [PROTOCOL] --dport [PORT] -j ACCEPT` to insert the new rules above it.
|
As a gentle reminder, Oracle's `iptables` configuration inserts a `REJECT all` rule at the bottom of each chain. I needed to make sure that each of my `ALLOW` rules gets inserted above that point. So I used `iptables -L INPUT --line-numbers` to identify which line held the `REJECT` rule, and then used `iptables -I INPUT [LINE_NUMBER] -m state --state NEW -p [PROTOCOL] --dport [PORT] -j ACCEPT` to insert the new rules above it.
|
||||||
```command
|
```shell
|
||||||
sudo iptables -I INPUT 9 -m state --state NEW -p tcp --dport 80 -j ACCEPT
|
sudo iptables -I INPUT 9 -m state --state NEW -p tcp --dport 80 -j ACCEPT # [tl! .cmd:start]
|
||||||
sudo iptables -I INPUT 9 -m state --state NEW -p tcp --dport 443 -j ACCEPT
|
sudo iptables -I INPUT 9 -m state --state NEW -p tcp --dport 443 -j ACCEPT
|
||||||
sudo iptables -I INPUT 9 -m state --state NEW -p tcp --dport 3478:3479 -j ACCEPT
|
sudo iptables -I INPUT 9 -m state --state NEW -p tcp --dport 3478:3479 -j ACCEPT
|
||||||
sudo iptables -I INPUT 9 -m state --state NEW -p tcp -m multiport --dports 3478,3479 -j ACCEPT
|
sudo iptables -I INPUT 9 -m state --state NEW -p tcp -m multiport --dports 3478,3479 -j ACCEPT
|
||||||
|
@ -69,13 +69,13 @@ sudo iptables -I INPUT 9 -m state --state NEW -p tcp --dport 5222 -j ACCEPT
|
||||||
sudo iptables -I INPUT 9 -m state --state NEW -p tcp --dport 5269 -j ACCEPT
|
sudo iptables -I INPUT 9 -m state --state NEW -p tcp --dport 5269 -j ACCEPT
|
||||||
sudo iptables -I INPUT 9 -m state --state NEW -p udp -m multiport --dports 3478,3479 -j ACCEPT
|
sudo iptables -I INPUT 9 -m state --state NEW -p udp -m multiport --dports 3478,3479 -j ACCEPT
|
||||||
sudo iptables -I INPUT 9 -m state --state NEW -p udp -m multiport --dports 5349,5350 -j ACCEPT
|
sudo iptables -I INPUT 9 -m state --state NEW -p udp -m multiport --dports 5349,5350 -j ACCEPT
|
||||||
sudo iptables -I INPUT 9 -m state --state NEW -p udp -m multiport --dports 60000:60100 -j ACCEPT
|
sudo iptables -I INPUT 9 -m state --state NEW -p udp -m multiport --dports 60000:60100 -j ACCEPT # [tl! .cmd:end]
|
||||||
```
|
```
|
||||||
|
|
||||||
Then to verify the rules are in the right order:
|
Then to verify the rules are in the right order:
|
||||||
```command-session
|
```shell
|
||||||
sudo iptables -L INPUT --line-numbers -n
|
sudo iptables -L INPUT --line-numbers -n # [tl! .cmd]
|
||||||
Chain INPUT (policy ACCEPT)
|
Chain INPUT (policy ACCEPT) # [tl! .nocopy:start]
|
||||||
num target prot opt source destination
|
num target prot opt source destination
|
||||||
1 ts-input all -- 0.0.0.0/0 0.0.0.0/0
|
1 ts-input all -- 0.0.0.0/0 0.0.0.0/0
|
||||||
2 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
|
2 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
|
||||||
|
@ -92,13 +92,13 @@ num target prot opt source destination
|
||||||
13 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5222
|
13 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5222
|
||||||
14 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5000
|
14 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5000
|
||||||
15 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW multiport dports 3478,3479
|
15 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW multiport dports 3478,3479
|
||||||
16 REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
|
16 REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited # [tl! .nocopy:end]
|
||||||
```
|
```
|
||||||
|
|
||||||
Before moving on, it's important to save them so the rules will persist across reboots!
|
Before moving on, it's important to save them so the rules will persist across reboots!
|
||||||
```command-session
|
```shell
|
||||||
sudo netfilter-persistent save
|
sudo netfilter-persistent save # [tl! .cmd]
|
||||||
run-parts: executing /usr/share/netfilter-persistent/plugins.d/15-ip4tables save
|
run-parts: executing /usr/share/netfilter-persistent/plugins.d/15-ip4tables save # [tl! .nocopy:1]
|
||||||
run-parts: executing /usr/share/netfilter-persistent/plugins.d/25-ip6tables save
|
run-parts: executing /usr/share/netfilter-persistent/plugins.d/25-ip6tables save
|
||||||
```
|
```
|
||||||
|
|
||||||
|
@ -115,30 +115,30 @@ share.vpota.to 300 IN CNAME chat.vpota.to
|
||||||
### Install `docker` and `docker-compose`
|
### Install `docker` and `docker-compose`
|
||||||
Snikket is distributed as a set of docker containers which makes it super easy to get up and running on basically any Linux system. But, of course, you'll first need to [install `docker`](https://docs.docker.com/engine/install/ubuntu/)
|
Snikket is distributed as a set of docker containers which makes it super easy to get up and running on basically any Linux system. But, of course, you'll first need to [install `docker`](https://docs.docker.com/engine/install/ubuntu/)
|
||||||
|
|
||||||
```bash
|
```shell
|
||||||
# Update package index
|
# Update package index
|
||||||
sudo apt update
|
sudo apt update # [tl! .cmd]
|
||||||
# Install prereqs
|
# Install prereqs
|
||||||
sudo apt install ca-certificates curl gnupg lsb-release
|
sudo apt install ca-certificates curl gnupg lsb-release # [tl! .cmd]
|
||||||
# Add docker's GPG key
|
# Add docker's GPG key
|
||||||
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
|
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg # [tl! .cmd]
|
||||||
# Add the docker repo
|
# Add the docker repo
|
||||||
echo \
|
echo \ # [tl! .cmd]
|
||||||
"deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
|
"deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
|
||||||
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
|
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
|
||||||
# Refresh the package index with the new repo added
|
# Refresh the package index with the new repo added
|
||||||
sudo apt update
|
sudo apt update # [tl! .cmd]
|
||||||
# Install docker
|
# Install docker
|
||||||
sudo apt install docker-ce docker-ce-cli containerd.io
|
sudo apt install docker-ce docker-ce-cli containerd.io # [tl! .cmd]
|
||||||
```
|
```
|
||||||
|
|
||||||
And install `docker-compose` also to simplify the container management:
|
And install `docker-compose` also to simplify the container management:
|
||||||
|
|
||||||
```bash
|
```shell
|
||||||
# Download the docker-compose binary
|
# Download the docker-compose binary
|
||||||
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
|
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose # [tl! .cmd]
|
||||||
# Make it executable
|
# Make it executable
|
||||||
sudo chmod +x /usr/local/bin/docker-compose
|
sudo chmod +x /usr/local/bin/docker-compose # [tl! .cmd]
|
||||||
```
|
```
|
||||||
|
|
||||||
Now we're ready to...
|
Now we're ready to...
|
||||||
|
@ -146,21 +146,21 @@ Now we're ready to...
|
||||||
### Install Snikket
|
### Install Snikket
|
||||||
This starts with just making a place for Snikket to live:
|
This starts with just making a place for Snikket to live:
|
||||||
|
|
||||||
```command
|
```shell
|
||||||
sudo mkdir /etc/snikket
|
sudo mkdir /etc/snikket # [tl! .cmd:1]
|
||||||
cd /etc/snikket
|
cd /etc/snikket
|
||||||
```
|
```
|
||||||
|
|
||||||
And then grabbing the Snikket `docker-compose` file:
|
And then grabbing the Snikket `docker-compose` file:
|
||||||
|
|
||||||
```command
|
```shell
|
||||||
sudo curl -o docker-compose.yml https://snikket.org/service/resources/docker-compose.beta.yml
|
sudo curl -o docker-compose.yml https://snikket.org/service/resources/docker-compose.beta.yml # [tl! .cmd]
|
||||||
```
|
```
|
||||||
|
|
||||||
And then creating a very minimal configuration file:
|
And then creating a very minimal configuration file:
|
||||||
|
|
||||||
```command
|
```shell
|
||||||
sudo vi snikket.conf
|
sudo vi snikket.conf # [tl! .cmd]
|
||||||
```
|
```
|
||||||
|
|
||||||
A basic config only needs two parameters:
|
A basic config only needs two parameters:
|
||||||
|
@ -176,7 +176,8 @@ In my case, I'm going to add two additional parameters to restrict the UDP TURN
|
||||||
|
|
||||||
So here's my config:
|
So here's my config:
|
||||||
|
|
||||||
```cfg {linenos=true}
|
```ini
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
SNIKKET_DOMAIN=chat.vpota.to
|
SNIKKET_DOMAIN=chat.vpota.to
|
||||||
SNIKKET_ADMIN_EMAIL=ops@example.com
|
SNIKKET_ADMIN_EMAIL=ops@example.com
|
||||||
|
|
||||||
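# (collapsed in this diff: given the 60000-60100 range in the port table above,
#  the matching minimum-port parameter would presumably be)
SNIKKET_TWEAK_TURNSERVER_MIN_PORT=60000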
|
@ -188,8 +189,8 @@ SNIKKET_TWEAK_TURNSERVER_MAX_PORT=60100
|
||||||
### Start it up!
|
### Start it up!
|
||||||
With everything in place, I can start up the Snikket server:
|
With everything in place, I can start up the Snikket server:
|
||||||
|
|
||||||
```command
|
```shell
|
||||||
sudo docker-compose up -d
|
sudo docker-compose up -d # [tl! .cmd]
|
||||||
```
|
```
|
||||||
|
|
||||||
This will take a moment or two to pull down all the required container images, start them, and automatically generate the SSL certificates. Very soon, though, I can point my browser to `https://chat.vpota.to` and see a lovely login page - complete with an automagically-valid-and-trusted certificate:
|
This will take a moment or two to pull down all the required container images, start them, and automatically generate the SSL certificates. Very soon, though, I can point my browser to `https://chat.vpota.to` and see a lovely login page - complete with an automagically-valid-and-trusted certificate:
|
||||||
|
@ -197,8 +198,8 @@ This will take a moment or two to pull down all the required container images, s
|
||||||
|
|
||||||
Of course, I don't yet have a way to log in, and like I mentioned earlier Snikket doesn't offer open user registration. Every user (even me, the admin!) has to be invited. Fortunately I can generate my first invite directly from the command line:
|
Of course, I don't yet have a way to log in, and like I mentioned earlier Snikket doesn't offer open user registration. Every user (even me, the admin!) has to be invited. Fortunately I can generate my first invite directly from the command line:
|
||||||
|
|
||||||
```command
|
```shell
|
||||||
sudo docker exec snikket create-invite --admin --group default
|
sudo docker exec snikket create-invite --admin --group default # [tl! .cmd]
|
||||||
```
|
```
|
||||||
|
|
||||||
That command will return a customized invite link which I can copy and paste into my browser.
|
That command will return a customized invite link which I can copy and paste into my browser.
|
||||||
|
@ -251,8 +252,9 @@ One of the really cool things about Caddy is that it automatically generates SSL
|
||||||
|
|
||||||
Fortunately, the [Snikket reverse proxy documentation](https://github.com/snikket-im/snikket-server/blob/master/docs/advanced/reverse_proxy.md#basic) was recently updated with a sample config for making this happen. Matrix and Snikket really only overlap on ports `80` and `443` so those are the only ports I'll need to handle, which lets me go for the "Basic" configuration instead of the "Advanced" one. I can just adapt the sample config from the documentation and add that to my existing `/etc/caddy/Caddyfile` alongside the config for Matrix:
|
Fortunately, the [Snikket reverse proxy documentation](https://github.com/snikket-im/snikket-server/blob/master/docs/advanced/reverse_proxy.md#basic) was recently updated with a sample config for making this happen. Matrix and Snikket really only overlap on ports `80` and `443` so those are the only ports I'll need to handle, which lets me go for the "Basic" configuration instead of the "Advanced" one. I can just adapt the sample config from the documentation and add that to my existing `/etc/caddy/Caddyfile` alongside the config for Matrix:
|
||||||
|
|
||||||
```caddy {linenos=true}
|
```text
|
||||||
http://chat.vpota.to,
|
# torchlight! {"lineNumbers": true}
|
||||||
|
http://chat.vpota.to, # [tl! focus:start]
|
||||||
http://groups.chat.vpota.to,
|
http://groups.chat.vpota.to,
|
||||||
http://share.chat.vpota.to {
|
http://share.chat.vpota.to {
|
||||||
reverse_proxy localhost:5080
|
reverse_proxy localhost:5080
|
||||||
|
@ -266,7 +268,7 @@ share.chat.vpota.to {
|
||||||
tls_insecure_skip_verify
|
tls_insecure_skip_verify
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
} # [tl! focus:end]
|
||||||
|
|
||||||
matrix.bowdre.net {
|
matrix.bowdre.net {
|
||||||
reverse_proxy /_matrix/* http://localhost:8008
|
reverse_proxy /_matrix/* http://localhost:8008
|
||||||
|
@ -294,34 +296,32 @@ Since Snikket is completely containerized, moving between hosts is a simple matt
|
||||||
|
|
||||||
The Snikket team has actually put together a couple of scripts to assist with [backing up](https://github.com/snikket-im/snikket-selfhosted/blob/main/scripts/backup.sh) and [restoring](https://github.com/snikket-im/snikket-selfhosted/blob/main/scripts/restore.sh) an instance. I just adapted the last line of each to do what I needed:
|
The Snikket team has actually put together a couple of scripts to assist with [backing up](https://github.com/snikket-im/snikket-selfhosted/blob/main/scripts/backup.sh) and [restoring](https://github.com/snikket-im/snikket-selfhosted/blob/main/scripts/restore.sh) an instance. I just adapted the last line of each to do what I needed:
|
||||||
|
|
||||||
```command-session
|
```shell
|
||||||
sudo docker run --rm --volumes-from=snikket \
|
sudo docker run --rm --volumes-from=snikket \ # [tl! .cmd]
|
||||||
-v "/home/john/snikket-backup/":/backup debian:buster-slim \
|
-v "/home/john/snikket-backup/":/backup debian:buster-slim \
|
||||||
tar czf /backup/snikket-"$(date +%F-%H%m)".tar.gz /snikket
|
tar czf /backup/snikket-"$(date +%F-%H%m)".tar.gz /snikket
|
||||||
```
|
```
|
||||||
|
|
||||||
That will drop a compressed backup of the `snikket_data` volume into the specified directory, `/home/john/snikket-backup/`. While I'm at it, I'll also go ahead and copy the `docker-compose.yml` and `snikket.conf` files from `/etc/snikket/`:
|
That will drop a compressed backup of the `snikket_data` volume into the specified directory, `/home/john/snikket-backup/`. While I'm at it, I'll also go ahead and copy the `docker-compose.yml` and `snikket.conf` files from `/etc/snikket/`:
|
||||||
|
|
||||||
```command
|
```shell
|
||||||
sudo cp -a /etc/snikket/* /home/john/snikket-backup/
|
sudo cp -a /etc/snikket/* /home/john/snikket-backup/ # [tl! .cmd]
|
||||||
```
|
ls -l /home/john/snikket-backup/ # [tl! .cmd]
|
||||||
```command-session
|
total 1728 # [tl! .nocopy:3]
|
||||||
ls -l /home/john/snikket-backup/
|
|
||||||
total 1728
|
|
||||||
-rw-r--r-- 1 root root 993 Dec 19 17:47 docker-compose.yml
|
-rw-r--r-- 1 root root 993 Dec 19 17:47 docker-compose.yml
|
||||||
-rw-r--r-- 1 root root 1761046 Dec 19 17:46 snikket-2021-12-19-1745.tar.gz
|
-rw-r--r-- 1 root root 1761046 Dec 19 17:46 snikket-2021-12-19-1745.tar.gz
|
||||||
-rw-r--r-- 1 root root 299 Dec 19 17:47 snikket.conf
|
-rw-r--r-- 1 root root 299 Dec 19 17:47 snikket.conf
|
||||||
```
|
```
|
||||||
|
|
||||||
And I can then zip that up for easy transfer:
|
And I can then zip that up for easy transfer:
|
||||||
```command
|
```shell
|
||||||
tar cvf /home/john/snikket-backup.tar.gz /home/john/snikket-backup/
|
tar cvf /home/john/snikket-backup.tar.gz /home/john/snikket-backup/ # [tl! .cmd]
|
||||||
```
|
```
|
||||||
|
|
||||||
This would be a great time to go ahead and stop this original Snikket instance. After all, nothing that happens after the backup was exported is going to carry over anyway.
|
This would be a great time to go ahead and stop this original Snikket instance. After all, nothing that happens after the backup was exported is going to carry over anyway.
|
||||||
|
|
||||||
```command
|
```shell
|
||||||
sudo docker-compose down
|
sudo docker-compose down # [tl! .cmd]
|
||||||
```
|
```
|
||||||
{{% notice tip "Update DNS" %}}
|
{{% notice tip "Update DNS" %}}
|
||||||
This is also a great time to update the `A` record for `chat.vpota.to` so that it points to the new server. It will need a little bit of time for the change to trickle out, and the updated record really needs to be in place before starting Snikket on the new server so that there aren't any certificate problems.
|
This is also a great time to update the `A` record for `chat.vpota.to` so that it points to the new server. It will need a little bit of time for the change to trickle out, and the updated record really needs to be in place before starting Snikket on the new server so that there aren't any certificate problems.
|
||||||
|
@ -330,20 +330,18 @@ This is also a great time to update the `A` record for `chat.vpota.to` so that i
|
||||||
|
|
||||||
Now I just need to transfer the archive from one server to the other. I've got [Tailscale](https://tailscale.com/)[^11] running on my various cloud servers so that they can talk to each other through a secure WireGuard tunnel (remember [WireGuard](/cloud-based-wireguard-vpn-remote-homelab-access/)?) without having to open any firewall ports between them, and that means I can just use `scp` to transfer the file without any fuss. I can even leverage Tailscale's [Magic DNS](https://tailscale.com/kb/1081/magicdns/) feature to avoid worrying about IPs and just use the hostname registered in Tailscale (`chat-oci`):
|
Now I just need to transfer the archive from one server to the other. I've got [Tailscale](https://tailscale.com/)[^11] running on my various cloud servers so that they can talk to each other through a secure WireGuard tunnel (remember [WireGuard](/cloud-based-wireguard-vpn-remote-homelab-access/)?) without having to open any firewall ports between them, and that means I can just use `scp` to transfer the file without any fuss. I can even leverage Tailscale's [Magic DNS](https://tailscale.com/kb/1081/magicdns/) feature to avoid worrying about IPs and just use the hostname registered in Tailscale (`chat-oci`):
|
||||||
|
|
||||||
```command
|
```shell
|
||||||
scp /home/john/snikket-backup.tar.gz chat-oci:/home/john/
|
scp /home/john/snikket-backup.tar.gz chat-oci:/home/john/ # [tl! .cmd]
|
||||||
```
|
```
|
||||||
|
|
||||||
Next, I SSH in to the new server and unzip the archive:
|
Next, I SSH in to the new server and unzip the archive:
|
||||||
|
|
||||||
```command
|
```shell
|
||||||
ssh snikket-oci-server
|
ssh snikket-oci-server # [tl! .cmd:3]
|
||||||
tar xf snikket-backup.tar.gz
|
tar xf snikket-backup.tar.gz
|
||||||
cd snikket-backup
|
cd snikket-backup
|
||||||
```
|
|
||||||
```command-session
|
|
||||||
ls -l
|
ls -l
|
||||||
total 1728
|
total 1728 # [tl! .nocopy:3]
|
||||||
-rw-r--r-- 1 root root 993 Dec 19 17:47 docker-compose.yml
|
-rw-r--r-- 1 root root 993 Dec 19 17:47 docker-compose.yml
|
||||||
-rw-r--r-- 1 root root 1761046 Dec 19 17:46 snikket-2021-12-19-1745.tar.gz
|
-rw-r--r-- 1 root root 1761046 Dec 19 17:46 snikket-2021-12-19-1745.tar.gz
|
||||||
-rw-r--r-- 1 root root 299 Dec 19 17:47 snikket.conf
|
-rw-r--r-- 1 root root 299 Dec 19 17:47 snikket.conf
|
||||||
|
@ -351,8 +349,8 @@ total 1728
|
||||||
|
|
||||||
Before I can restore the content of the `snikket-data` volume on the new server, I'll need to first go ahead and set up Snikket again. I've already got `docker` and `docker-compose` installed from when I installed Matrix so I'll skip to creating the Snikket directory and copying in the `docker-compose.yml` and `snikket.conf` files.
|
Before I can restore the content of the `snikket-data` volume on the new server, I'll need to first go ahead and set up Snikket again. I've already got `docker` and `docker-compose` installed from when I installed Matrix so I'll skip to creating the Snikket directory and copying in the `docker-compose.yml` and `snikket.conf` files.
|
||||||
|
|
||||||
```command
|
```shell
|
||||||
sudo mkdir /etc/snikket
|
sudo mkdir /etc/snikket # [tl! .cmd:3]
|
||||||
sudo cp docker-compose.yml /etc/snikket/
|
sudo cp docker-compose.yml /etc/snikket/
|
||||||
sudo cp snikket.conf /etc/snikket/
|
sudo cp snikket.conf /etc/snikket/
|
||||||
cd /etc/snikket
|
cd /etc/snikket
|
||||||
|
@ -360,7 +358,8 @@ cd /etc/snikket
|
||||||
|
|
||||||
Before I fire this up on the new host, I need to edit the `snikket.conf` to tell Snikket to use those different ports defined in the reverse proxy configuration using [a couple of `SNIKKET_TWEAK_*` lines](https://github.com/snikket-im/snikket-server/blob/master/docs/advanced/reverse_proxy.md#snikket):
|
Before I fire this up on the new host, I need to edit the `snikket.conf` to tell Snikket to use those different ports defined in the reverse proxy configuration using [a couple of `SNIKKET_TWEAK_*` lines](https://github.com/snikket-im/snikket-server/blob/master/docs/advanced/reverse_proxy.md#snikket):
|
||||||
|
|
||||||
```cfg {linenos=true}
|
```ini
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
SNIKKET_DOMAIN=chat.vpota.to
|
SNIKKET_DOMAIN=chat.vpota.to
|
||||||
SNIKKET_ADMIN_EMAIL=ops@example.com
|
SNIKKET_ADMIN_EMAIL=ops@example.com
|
||||||
|
|
||||||
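# (collapsed in this diff: the port overrides referenced above; based on the
#  Snikket reverse-proxy docs and the localhost:5080 target in the Caddyfile,
#  they would presumably look like)
SNIKKET_TWEAK_HTTP_PORT=5080
SNIKKET_TWEAK_HTTPS_PORT=5443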
|
@ -371,16 +370,16 @@ SNIKKET_TWEAK_TURNSERVER_MAX_PORT=60100
|
||||||
```
|
```
|
||||||
|
|
||||||
Alright, let's start up the Snikket server:
|
Alright, let's start up the Snikket server:
|
||||||
```command
|
```shell
|
||||||
sudo docker-compose up -d
|
sudo docker-compose up -d # [tl! .cmd]
|
||||||
```
|
```
|
||||||
|
|
||||||
After a moment or two, I can point a browser to `https://chat.vpota.to` and see the login screen (with a valid SSL certificate!) but I won't actually be able to log in. As far as Snikket is concerned, this is a brand new setup.
|
After a moment or two, I can point a browser to `https://chat.vpota.to` and see the login screen (with a valid SSL certificate!) but I won't actually be able to log in. As far as Snikket is concerned, this is a brand new setup.
|
||||||
|
|
||||||
Now I can borrow the last line from the [`restore.sh` script](https://github.com/snikket-im/snikket-selfhosted/blob/main/scripts/restore.sh) to bring in my data:
|
Now I can borrow the last line from the [`restore.sh` script](https://github.com/snikket-im/snikket-selfhosted/blob/main/scripts/restore.sh) to bring in my data:
|
||||||
|
|
||||||
```command-session
|
```shell
|
||||||
sudo docker run --rm --volumes-from=snikket \
|
sudo docker run --rm --volumes-from=snikket \ # [tl! .cmd]
|
||||||
--mount type=bind,source="/home/john/snikket-backup/snikket-2021-12-19-1745.tar.gz",destination=/backup.tar.gz \
|
--mount type=bind,source="/home/john/snikket-backup/snikket-2021-12-19-1745.tar.gz",destination=/backup.tar.gz \
|
||||||
debian:buster-slim \
|
debian:buster-slim \
|
||||||
bash -c "rm -rf /snikket/*; tar xvf /backup.tar.gz -C /"
|
bash -c "rm -rf /snikket/*; tar xvf /backup.tar.gz -C /"
|
||||||
|
|
|
@ -15,8 +15,8 @@ tags:
|
||||||
Following a recent update, I found that the [Linux development environment](https://chromium.googlesource.com/chromiumos/docs/+/HEAD/containers_and_vms.md) on my Framework Chromebook would fail to load if the [Tailscale](/secure-networking-made-simple-with-tailscale) daemon was already running. It seems that the Tailscale virtual interface may have interfered with how the CrOS Terminal app was expecting to connect to the Linux container. I initially worked around the problem by just disabling the `tailscaled` service, but having to remember to start it up manually was a pretty heavy cognitive load.
|
Following a recent update, I found that the [Linux development environment](https://chromium.googlesource.com/chromiumos/docs/+/HEAD/containers_and_vms.md) on my Framework Chromebook would fail to load if the [Tailscale](/secure-networking-made-simple-with-tailscale) daemon was already running. It seems that the Tailscale virtual interface may have interfered with how the CrOS Terminal app was expecting to connect to the Linux container. I initially worked around the problem by just disabling the `tailscaled` service, but having to remember to start it up manually was a pretty heavy cognitive load.
|
||||||
|
|
||||||
Fortunately, it turns out that overriding the service to insert a short startup delay is really easy. I'll just use the `systemctl edit` command to create a quick override configuration:
|
Fortunately, it turns out that overriding the service to insert a short startup delay is really easy. I'll just use the `systemctl edit` command to create a quick override configuration:
|
||||||
```command
|
```shell
|
||||||
sudo systemctl edit tailscaled
|
sudo systemctl edit tailscaled # [tl! .cmd]
|
||||||
```
|
```
|
||||||
|
|
||||||
This shows me the existing contents of the `tailscaled.service` definition so I can easily insert some overrides above. In this case, I just want to use `sleep 5` as the `ExecStartPre` command so that the service start will be delayed by 5 seconds:
|
This shows me the existing contents of the `tailscaled.service` definition so I can easily insert some overrides above. In this case, I just want to use `sleep 5` as the `ExecStartPre` command so that the service start will be delayed by 5 seconds:
|
||||||
|
|
|
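(The override snippet itself is collapsed in this diff; a minimal version of what gets entered in that editor, assuming the stock `tailscaled.service` unit, would be something like:)

```ini
[Service]
ExecStartPre=/bin/sleep 5
```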
@ -36,7 +36,7 @@ Sounds great - but how do you actually make golink available on your tailnet? We
|
||||||
There are three things I'll need to do in the Tailscale admin portal before moving on:
|
There are three things I'll need to do in the Tailscale admin portal before moving on:
|
||||||
#### Create an ACL tag
|
#### Create an ACL tag
|
||||||
I assign ACL tags to devices in my tailnet based on their location and/or purpose, and I'm then able to use those in a policy to restrict access between certain devices. To that end, I'm going to create a new `tag:golink` tag for this purpose. Creating a new tag in Tailscale is really just going to the [Access Controls page of the admin console](https://login.tailscale.com/admin/acls) and editing the policy to specify a `tagOwner` who is permitted to assign the tag:
|
I assign ACL tags to devices in my tailnet based on their location and/or purpose, and I'm then able to use those in a policy to restrict access between certain devices. To that end, I'm going to create a new `tag:golink` tag for this purpose. Creating a new tag in Tailscale is really just going to the [Access Controls page of the admin console](https://login.tailscale.com/admin/acls) and editing the policy to specify a `tagOwner` who is permitted to assign the tag:
|
||||||
```text {hl_lines=[11]}
|
```json
|
||||||
"groups":
|
"groups":
|
||||||
"group:admins": ["john@example.com"],
|
"group:admins": ["john@example.com"],
|
||||||
},
|
},
|
||||||
|
@ -47,14 +47,14 @@ I assign ACL tags to devices in my tailnet based on their location and/or purpos
|
||||||
"tag:dns": ["group:admins"],
|
"tag:dns": ["group:admins"],
|
||||||
"tag:rsync": ["group:admins"],
|
"tag:rsync": ["group:admins"],
|
||||||
"tag:funnel": ["group:admins"],
|
"tag:funnel": ["group:admins"],
|
||||||
"tag:golink": ["group:admins"],
|
"tag:golink": ["group:admins"], // [tl! highlight]
|
||||||
},
|
},
|
||||||
```
|
```
|
||||||
|
|
||||||
#### Configure ACL access
This step is really only necessary since I've altered the default Tailscale ACL to prevent my nodes from communicating with each other unless specifically permitted. I want to make sure that everything on my tailnet can access golink:

```json
"acls": [
  {
    // make golink accessible to everything
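    // (the remainder of this rule is cut off here; a hedged sketch of what it
    //  likely looks like, using standard Tailscale ACL syntax - only the tag
    //  name comes from this doc, the rest is an assumption)
    "action": "accept",
    "src": ["*"],
    "dst": ["tag:golink:*"],
  },
  // ...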
@ -80,20 +80,21 @@ After clicking the **Generate key** button, the key will be displayed. This is t
|
||||||
|
|
||||||
### Docker setup
|
### Docker setup
|
||||||
The [golink repo](https://github.com/tailscale/golink) offers this command for running the container:
|
The [golink repo](https://github.com/tailscale/golink) offers this command for running the container:
|
||||||
```command
|
```shell
|
||||||
docker run -it --rm ghcr.io/tailscale/golink:main
|
docker run -it --rm ghcr.io/tailscale/golink:main # [tl! .cmd]
|
||||||
```
|
```
|
||||||
|
|
||||||
The doc also indicates that I can pass the auth key to the golink service via the `TS_AUTHKEY` environment variable, and that all the configuration will be stored in `/home/nonroot` (which will be owned by uid/gid `65532`). I'll take this knowledge and use it to craft a `docker-compose.yaml` to simplify container management.

```shell
mkdir -p golink/data # [tl! .cmd:3]
cd golink
sudo chown 65532:65532 data
vi docker-compose.yaml
```

```yaml
# torchlight! {"lineNumbers": true}
# golink docker-compose.yaml
version: '3'
services:
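  # (the rest of the file is cut off in this excerpt; a hedged sketch of the
  # service definition based on the details above - the image, TS_AUTHKEY
  # variable, and ./data volume come from the doc, the remaining fields are
  # assumptions)
  golink:
    container_name: golink
    restart: unless-stopped
    image: ghcr.io/tailscale/golink:main
    environment:
      - TS_AUTHKEY=${TS_AUTHKEY}
    volumes:
      - './data:/home/nonroot'
```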
@ -146,8 +147,8 @@ Some of my other golinks:
|
||||||
You can browse to `go/.export` to see a JSON-formatted listing of all configured shortcuts - or, if you're clever, you could do something like `curl http://go/.export -o links.json` to download a copy.
|
You can browse to `go/.export` to see a JSON-formatted listing of all configured shortcuts - or, if you're clever, you could do something like `curl http://go/.export -o links.json` to download a copy.
|
||||||
|
|
||||||
To restore, just pass `--snapshot /path/to/links.json` when starting golink. What I usually do is copy the file into the `data` folder that I'm mounting as a Docker volume, and then just run:
|
To restore, just pass `--snapshot /path/to/links.json` when starting golink. What I usually do is copy the file into the `data` folder that I'm mounting as a Docker volume, and then just run:
|
||||||
```command
|
```shell
|
||||||
sudo docker exec golink /golink --sqlitedb /home/nonroot/golink.db --snapshot /home/nonroot/links.json
|
sudo docker exec golink /golink --sqlitedb /home/nonroot/golink.db --snapshot /home/nonroot/links.json # [tl! .cmd]
|
||||||
```
|
```
|
||||||
|
|
||||||
### Conclusion
|
### Conclusion
|
||||||
|
|
|
@ -30,21 +30,21 @@ Here's a condensed list of the [steps that I took to manually install Tailscale]
|
||||||
|
|
||||||
1. Visit [https://pkgs.tailscale.com/stable/#static](https://pkgs.tailscale.com/stable/#static) to see the latest stable version for your system architecture, and copy the URL. For instance, I'll be using `https://pkgs.tailscale.com/stable/tailscale_1.34.1_arm64.tgz`.
|
1. Visit [https://pkgs.tailscale.com/stable/#static](https://pkgs.tailscale.com/stable/#static) to see the latest stable version for your system architecture, and copy the URL. For instance, I'll be using `https://pkgs.tailscale.com/stable/tailscale_1.34.1_arm64.tgz`.
|
||||||
2. Download and extract it to the system:
|
2. Download and extract it to the system:
|
||||||
```command
|
```shell
|
||||||
wget https://pkgs.tailscale.com/stable/tailscale_1.34.1_arm64.tgz
|
wget https://pkgs.tailscale.com/stable/tailscale_1.34.1_arm64.tgz # [tl! .cmd:2]
|
||||||
tar xvf tailscale_1.34.1_arm64.tgz
|
tar xvf tailscale_1.34.1_arm64.tgz
|
||||||
cd tailscale_1.34.1_arm64/
|
cd tailscale_1.34.1_arm64/
|
||||||
```
|
```
|
||||||
3. Install the binaries and service files:
|
3. Install the binaries and service files:
|
||||||
```command
|
```shell
|
||||||
sudo install -m 755 tailscale /usr/bin/
|
sudo install -m 755 tailscale /usr/bin/ # [tl! .cmd:4]
|
||||||
sudo install -m 755 tailscaled /usr/sbin/
|
sudo install -m 755 tailscaled /usr/sbin/
|
||||||
sudo install -m 644 systemd/tailscaled.defaults /etc/default/tailscaled
|
sudo install -m 644 systemd/tailscaled.defaults /etc/default/tailscaled
|
||||||
sudo install -m 644 systemd/tailscaled.service /usr/lib/systemd/system/
|
sudo install -m 644 systemd/tailscaled.service /usr/lib/systemd/system/
|
||||||
```
|
```
|
||||||
4. Start the service:
|
4. Start the service:
|
||||||
```command
|
```shell
|
||||||
sudo systemctl enable tailscaled
|
sudo systemctl enable tailscaled # [tl! .cmd:1]
|
||||||
sudo systemctl start tailscaled
|
sudo systemctl start tailscaled
|
||||||
```
|
```
|
||||||
|
|
||||||
|
|
|
@ -68,9 +68,9 @@ I've already got Docker installed on this machine, but if I didn't I would follo
|
||||||
|
|
||||||
I also verify that my install is using `cgroup` version 1 as version 2 is not currently supported:
|
I also verify that my install is using `cgroup` version 1 as version 2 is not currently supported:
|
||||||
|
|
||||||
```command-session
|
```shell
|
||||||
docker info | grep -i cgroup
|
docker info | grep -i cgroup # [tl! .cmd]
|
||||||
Cgroup Driver: cgroupfs
|
Cgroup Driver: cgroupfs # [tl! .nocopy:1]
|
||||||
Cgroup Version: 1
|
Cgroup Version: 1
|
||||||
```
|
```
|
||||||
|
|
||||||
|
@ -79,64 +79,49 @@ Next up, I'll install `kubectl` [as described here](https://kubernetes.io/docs/t
|
||||||
|
|
||||||
I can look at the [releases page on GitHub](https://github.com/kubernetes/kubernetes/releases) to see that the latest release for me is `1.22.5`. With this newfound knowledge I can follow the [Install kubectl binary with curl on Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#install-kubectl-binary-with-curl-on-linux) instructions to grab that specific version:

```shell
curl -sLO https://dl.k8s.io/release/v1.22.5/bin/linux/amd64/kubectl # [tl! .cmd:1]
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
# [tl! .nocopy:2]
[sudo] password for john:
kubectl version --client # [tl! .cmd]
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.5", # [tl! .nocopy:3]
GitCommit:"5c99e2ac2ff9a3c549d9ca665e7bc05a3e18f07e", GitTreeState:"clean",
BuildDate:"2021-12-16T08:38:33Z", GoVersion:"go1.16.12", Compiler:"gc",
Platform:"linux/amd64"}
```

#### `kind` binary
It's not strictly a requirement, but having the `kind` executable available will be handy for troubleshooting during the bootstrap process in case anything goes sideways. It can be installed in basically the same way as `kubectl`:

```shell
curl -sLo ./kind https://kind.sigs.k8s.io/dl/v0.11.1/kind-linux-amd64 # [tl! .cmd:2]
sudo install -o root -g root -m 0755 kind /usr/local/bin/kind
kind version
kind v0.11.1 go1.16.5 linux/amd64 # [tl! .nocopy]
```

#### Tanzu CLI
|
#### Tanzu CLI
|
||||||
The final bit of required software is the Tanzu CLI, which can be downloaded from the [project on GitHub](https://github.com/vmware-tanzu/community-edition/releases).
|
The final bit of required software is the Tanzu CLI, which can be downloaded from the [project on GitHub](https://github.com/vmware-tanzu/community-edition/releases).
|
||||||
|
|
||||||
```command-session
|
```shell
|
||||||
curl -H "Accept: application/vnd.github.v3.raw" \
|
curl -H "Accept: application/vnd.github.v3.raw" \ # [tl! .cmd]
|
||||||
-L https://api.github.com/repos/vmware-tanzu/community-edition/contents/hack/get-tce-release.sh | \
|
-L https://api.github.com/repos/vmware-tanzu/community-edition/contents/hack/get-tce-release.sh | \
|
||||||
bash -s v0.9.1 linux
|
bash -s v0.9.1 linux
|
||||||
```
|
```
|
||||||
|
|
||||||
And then unpack it and run the installer:
|
And then unpack it and run the installer:
|
||||||
```command
|
```shell
|
||||||
tar xf tce-linux-amd64-v0.9.1.tar.gz
|
tar xf tce-linux-amd64-v0.9.1.tar.gz # [tl! .cmd:2]
|
||||||
cd tce-linux-amd64-v0.9.1
|
cd tce-linux-amd64-v0.9.1
|
||||||
./install.sh
|
./install.sh
|
||||||
```
|
```
|
||||||
|
|
||||||
I can then verify the installation is working correctly:
|
I can then verify the installation is working correctly:
|
||||||
```command-session
|
```shell
|
||||||
tanzu version
|
tanzu version # [tl! .cmd]
|
||||||
version: v0.2.1
|
version: v0.2.1 # [tl! .nocopy:2]
|
||||||
buildDate: 2021-09-29
|
buildDate: 2021-09-29
|
||||||
sha: ceaa474
|
sha: ceaa474
|
||||||
```
|
```
|
||||||
|
@ -146,15 +131,15 @@ Okay, now it's time for the good stuff - creating some shiny new Tanzu clusters!
|
||||||
|
|
||||||
#### Management cluster
|
#### Management cluster
|
||||||
I need to create a Management cluster first and I'd like to do that with the UI, so that's as simple as:
|
I need to create a Management cluster first and I'd like to do that with the UI, so that's as simple as:
|
||||||
```command
|
```shell
|
||||||
tanzu management-cluster create --ui
|
tanzu management-cluster create --ui # [tl! .cmd]
|
||||||
```
|
```
|
||||||
|
|
||||||
I should then be able to access the UI by pointing a web browser at `http://127.0.0.1:8080`... but I'm running this on a VM without a GUI, so I'll need to back up and tell it to bind on `0.0.0.0:8080` so the web installer will be accessible across the network. I can also include `--browser none` so that the installer doesn't bother with trying to launch a browser locally.
|
I should then be able to access the UI by pointing a web browser at `http://127.0.0.1:8080`... but I'm running this on a VM without a GUI, so I'll need to back up and tell it to bind on `0.0.0.0:8080` so the web installer will be accessible across the network. I can also include `--browser none` so that the installer doesn't bother with trying to launch a browser locally.
|
||||||
|
|
||||||
```command-session
|
```shell
|
||||||
tanzu management-cluster create --ui --bind 0.0.0.0:8080 --browser none
|
tanzu management-cluster create --ui --bind 0.0.0.0:8080 --browser none # [tl! .cmd]
|
||||||
|
# [tl! .nocopy:2]
|
||||||
Validating the pre-requisites...
|
Validating the pre-requisites...
|
||||||
Serving kickstart UI at http://[::]:8080
|
Serving kickstart UI at http://[::]:8080
|
||||||
```
|
```
|
||||||
|
@ -190,20 +175,22 @@ I skip the Tanzu Mission Control piece (since I'm still waiting on access to [TM
|
||||||
|
|
||||||
See the option at the bottom to copy the CLI command? I'll need to use that since clicking the friendly **Deploy** button doesn't seem to work while connected to the web server remotely.
|
See the option at the bottom to copy the CLI command? I'll need to use that since clicking the friendly **Deploy** button doesn't seem to work while connected to the web server remotely.
|
||||||
|
|
||||||
```command
|
```shell
|
||||||
tanzu management-cluster create --file /home/john/.config/tanzu/tkg/clusterconfigs/dr94t3m2on.yaml -v 6
|
tanzu management-cluster create \ # [tl! .cmd]
|
||||||
|
--file /home/john/.config/tanzu/tkg/clusterconfigs/dr94t3m2on.yaml -v 6
|
||||||
```
|
```
|
||||||
|
|
||||||
In fact, I'm going to copy that file into my working directory and give it a more descriptive name so that I can re-use it in the future.
|
In fact, I'm going to copy that file into my working directory and give it a more descriptive name so that I can re-use it in the future.
|
||||||
|
|
||||||
```command
|
```shell
|
||||||
cp ~/.config/tanzu/tkg/clusterconfigs/dr94t3m2on.yaml ~/projects/tanzu-homelab/tce-mgmt.yaml
|
cp ~/.config/tanzu/tkg/clusterconfigs/dr94t3m2on.yaml \ # [tl! .cmd]
|
||||||
|
~/projects/tanzu-homelab/tce-mgmt.yaml
|
||||||
```
|
```
|
||||||
|
|
||||||
Now I can run the install command:
|
Now I can run the install command:
|
||||||
|
|
||||||
```command
|
```shell
|
||||||
tanzu management-cluster create --file ./tce-mgmt.yaml -v 6
|
tanzu management-cluster create --file ./tce-mgmt.yaml -v 6 # [tl! .cmd]
|
||||||
```
|
```
|
||||||
|
|
||||||
After a moment or two of verifying prerequisites, I'm met with a polite offer to enable Tanzu Kubernetes Grid Service in vSphere:
|
After a moment or two of verifying prerequisites, I'm met with a polite offer to enable Tanzu Kubernetes Grid Service in vSphere:
|
||||||
|
@ -250,9 +237,9 @@ Some addons might be getting installed! Check their status by running the follow
|
||||||
|
|
||||||
I can run that last command to go ahead and verify that the addon installation has completed:
|
I can run that last command to go ahead and verify that the addon installation has completed:
|
||||||
|
|
||||||
```command-session
|
```shell
|
||||||
kubectl get apps -A
|
kubectl get apps -A # [tl! .cmd]
|
||||||
NAMESPACE NAME DESCRIPTION SINCE-DEPLOY AGE
|
NAMESPACE NAME DESCRIPTION SINCE-DEPLOY AGE # [tl! .nocopy:5]
|
||||||
tkg-system antrea Reconcile succeeded 26s 6m49s
|
tkg-system antrea Reconcile succeeded 26s 6m49s
|
||||||
tkg-system metrics-server Reconcile succeeded 36s 6m49s
|
tkg-system metrics-server Reconcile succeeded 36s 6m49s
|
||||||
tkg-system tanzu-addons-manager Reconcile succeeded 22s 8m54s
|
tkg-system tanzu-addons-manager Reconcile succeeded 22s 8m54s
|
||||||
|
@ -261,9 +248,9 @@ tkg-system vsphere-csi Reconcile succeeded 36s 6m50s
|
||||||
```
|
```
|
||||||
|
|
||||||
And I can use the Tanzu CLI to get some other details about the new management cluster:
|
And I can use the Tanzu CLI to get some other details about the new management cluster:
|
||||||
```command-session
|
```shell
|
||||||
tanzu management-cluster get tce-mgmt
|
tanzu management-cluster get tce-mgmt # [tl! .cmd]
|
||||||
NAME NAMESPACE STATUS CONTROLPLANE WORKERS KUBERNETES ROLES
|
NAME NAMESPACE STATUS CONTROLPLANE WORKERS KUBERNETES ROLES # [tl! .nocopy:start]
|
||||||
tce-mgmt tkg-system running 1/1 1/1 v1.21.2+vmware.1 management
|
tce-mgmt tkg-system running 1/1 1/1 v1.21.2+vmware.1 management
|
||||||
|
|
||||||
|
|
||||||
|
@ -285,7 +272,7 @@ Providers:
|
||||||
capi-kubeadm-bootstrap-system bootstrap-kubeadm BootstrapProvider kubeadm v0.3.23
|
capi-kubeadm-bootstrap-system bootstrap-kubeadm BootstrapProvider kubeadm v0.3.23
|
||||||
capi-kubeadm-control-plane-system control-plane-kubeadm ControlPlaneProvider kubeadm v0.3.23
|
capi-kubeadm-control-plane-system control-plane-kubeadm ControlPlaneProvider kubeadm v0.3.23
|
||||||
capi-system cluster-api CoreProvider cluster-api v0.3.23
|
capi-system cluster-api CoreProvider cluster-api v0.3.23
|
||||||
capv-system infrastructure-vsphere InfrastructureProvider vsphere v0.7.10
|
capv-system infrastructure-vsphere InfrastructureProvider vsphere v0.7.10 # [tl! .nocopy:end]
|
||||||
```
|
```
|
||||||
|
|
||||||
|
|
||||||
|
@ -296,8 +283,8 @@ Excellent! Things are looking good so I can move on to create the cluster which
|
||||||
#### Workload cluster
|
#### Workload cluster
|
||||||
I won't use the UI for this but will instead take a copy of my `tce-mgmt.yaml` file and adapt it to suit the workload needs (as described [here](https://tanzucommunityedition.io/docs/latest/workload-clusters/)).

```shell
cp tce-mgmt.yaml tce-work.yaml # [tl! .cmd:1]
vi tce-work.yaml
```
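
The specific edits aren't shown in this excerpt, but at minimum the copy needs its own cluster name (and its own control plane endpoint) so that it doesn't collide with the management cluster. A rough sketch of the sort of lines I'd change, using standard TKG configuration keys and made-up values:

```yaml
# torchlight! {"lineNumbers": true}
# tce-work.yaml (hypothetical excerpt)
CLUSTER_NAME: tce-work
CLUSTER_PLAN: dev
VSPHERE_CONTROL_PLANE_ENDPOINT: 192.168.1.61
```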
|
@ -314,9 +301,9 @@ I *could* change a few others if I wanted to[^i_wont]:
|
||||||
|
|
||||||
After saving my changes to the `tce-work.yaml` file, I'm ready to deploy the cluster:
|
After saving my changes to the `tce-work.yaml` file, I'm ready to deploy the cluster:
|
||||||
|
|
||||||
```command-session
|
```shell
|
||||||
tanzu cluster create --file tce-work.yaml
|
tanzu cluster create --file tce-work.yaml # [tl! .cmd]
|
||||||
Validating configuration...
|
Validating configuration... # [tl! .nocopy:start]
|
||||||
Warning: Pinniped configuration not found. Skipping pinniped configuration in workload cluster. Please refer to the documentation to check if you can configure pinniped on workload cluster manually
|
Warning: Pinniped configuration not found. Skipping pinniped configuration in workload cluster. Please refer to the documentation to check if you can configure pinniped on workload cluster manually
|
||||||
Creating workload cluster 'tce-work'...
|
Creating workload cluster 'tce-work'...
|
||||||
Waiting for cluster to be initialized...
|
Waiting for cluster to be initialized...
|
||||||
|
@ -324,13 +311,13 @@ Waiting for cluster nodes to be available...
|
||||||
Waiting for addons installation...
|
Waiting for addons installation...
|
||||||
Waiting for packages to be up and running...
|
Waiting for packages to be up and running...
|
||||||
|
|
||||||
Workload cluster 'tce-work' created
|
Workload cluster 'tce-work' created # [tl! .nocopy:end]
|
||||||
```
|
```
|
||||||
|
|
||||||
Right on! I'll use `tanzu cluster get` to check out the workload cluster:
|
Right on! I'll use `tanzu cluster get` to check out the workload cluster:
|
||||||
```command-session
|
```shell
|
||||||
tanzu cluster get tce-work
|
tanzu cluster get tce-work # [tl! .cmd]
|
||||||
NAME NAMESPACE STATUS CONTROLPLANE WORKERS KUBERNETES ROLES
|
NAME NAMESPACE STATUS CONTROLPLANE WORKERS KUBERNETES ROLES # [tl! .nocopy:start]
|
||||||
tce-work default running 1/1 1/1 v1.21.2+vmware.1 <none>
|
tce-work default running 1/1 1/1 v1.21.2+vmware.1 <none>
|
||||||
ℹ
|
ℹ
|
||||||
|
|
||||||
|
@ -343,7 +330,7 @@ NAME READY SEVERITY RE
|
||||||
│ └─Machine/tce-work-control-plane-8km9m True 9m31s
|
│ └─Machine/tce-work-control-plane-8km9m True 9m31s
|
||||||
└─Workers
|
└─Workers
|
||||||
└─MachineDeployment/tce-work-md-0
|
└─MachineDeployment/tce-work-md-0
|
||||||
└─Machine/tce-work-md-0-687444b744-cck4x True 8m31s
|
└─Machine/tce-work-md-0-687444b744-cck4x True 8m31s # [tl! .nocopy:end]
|
||||||
```
|
```
|
||||||
|
|
||||||
I can also go into vCenter and take a look at the VMs which constitute the two clusters:
|
I can also go into vCenter and take a look at the VMs which constitute the two clusters:
|
||||||
|
@ -360,9 +347,9 @@ Excellent, I've got a Tanzu management cluster and a Tanzu workload cluster. Wha
|
||||||
|
|
||||||
If I run `kubectl get nodes` right now, I'll only get information about the management cluster:
|
If I run `kubectl get nodes` right now, I'll only get information about the management cluster:
|
||||||
|
|
||||||
```command-session
|
```shell
|
||||||
kubectl get nodes
|
kubectl get nodes # [tl! .cmd]
|
||||||
NAME STATUS ROLES AGE VERSION
|
NAME STATUS ROLES AGE VERSION # [tl! .nocopy:2]
|
||||||
tce-mgmt-control-plane-xtdnx Ready control-plane,master 18h v1.21.2+vmware.1
|
tce-mgmt-control-plane-xtdnx Ready control-plane,master 18h v1.21.2+vmware.1
|
||||||
tce-mgmt-md-0-745b858d44-4c9vv Ready <none> 17h v1.21.2+vmware.1
|
tce-mgmt-md-0-745b858d44-4c9vv Ready <none> 17h v1.21.2+vmware.1
|
||||||
```
|
```
|
||||||
|
@ -370,30 +357,29 @@ tce-mgmt-md-0-745b858d44-4c9vv Ready <none> 17h v1.21.2+v
|
||||||
#### Setting the right context
|
#### Setting the right context
|
||||||
To be able to deploy stuff to the workload cluster, I need to tell `kubectl` how to talk to it. And to do that, I'll first need to use `tanzu` to capture the cluster's kubeconfig:
|
To be able to deploy stuff to the workload cluster, I need to tell `kubectl` how to talk to it. And to do that, I'll first need to use `tanzu` to capture the cluster's kubeconfig:
|
||||||
|
|
||||||
```command-session
|
```shell
|
||||||
tanzu cluster kubeconfig get tce-work --admin
|
tanzu cluster kubeconfig get tce-work --admin # [tl! .cmd]
|
||||||
Credentials of cluster 'tce-work' have been saved
|
Credentials of cluster 'tce-work' have been saved # [tl! .nocopy:1]
|
||||||
You can now access the cluster by running 'kubectl config use-context tce-work-admin@tce-work'
|
You can now access the cluster by running 'kubectl config use-context tce-work-admin@tce-work'
|
||||||
```
|
```
|
||||||
|
|
||||||
I can now run `kubectl config get-contexts` and see that I have access to contexts on both management and workload clusters:
|
I can now run `kubectl config get-contexts` and see that I have access to contexts on both management and workload clusters:
|
||||||
|
|
||||||
```command-session
|
```shell
|
||||||
kubectl config get-contexts
|
kubectl config get-contexts # [tl! .cmd]
|
||||||
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
|
CURRENT NAME CLUSTER AUTHINFO NAMESPACE # [tl! .nocopy:2]
|
||||||
* tce-mgmt-admin@tce-mgmt tce-mgmt tce-mgmt-admin
|
* tce-mgmt-admin@tce-mgmt tce-mgmt tce-mgmt-admin
|
||||||
tce-work-admin@tce-work tce-work tce-work-admin
|
tce-work-admin@tce-work tce-work tce-work-admin
|
||||||
```
|
```
|
||||||
|
|
||||||
And I can switch to the `tce-work` cluster like so:
|
And I can switch to the `tce-work` cluster like so:
|
||||||
|
|
||||||
```command-session
|
```shell
|
||||||
kubectl config use-context tce-work-admin@tce-work
|
kubectl config use-context tce-work-admin@tce-work # [tl! .cmd]
|
||||||
Switched to context "tce-work-admin@tce-work".
|
Switched to context "tce-work-admin@tce-work". # [tl! .nocopy]
|
||||||
```
|
|
||||||
```command-session
|
kubectl get nodes # [tl! .cmd]
|
||||||
kubectl get nodes
|
NAME STATUS ROLES AGE VERSION # [tl! .nocopy:2]
|
||||||
NAME STATUS ROLES AGE VERSION
|
|
||||||
tce-work-control-plane-8km9m Ready control-plane,master 17h v1.21.2+vmware.1
|
tce-work-control-plane-8km9m Ready control-plane,master 17h v1.21.2+vmware.1
|
||||||
tce-work-md-0-687444b744-cck4x Ready <none> 17h v1.21.2+vmware.1
|
tce-work-md-0-687444b744-cck4x Ready <none> 17h v1.21.2+vmware.1
|
||||||
```
|
```
|
||||||
|
@ -405,13 +391,12 @@ Before I move on to deploying actually *useful* workloads, I'll start with deplo
|
||||||
|
|
||||||
I can check out the sample deployment that William put together [here](https://github.com/lamw/vmware-k8s-app-demo/blob/master/yelb.yaml), and then deploy it with:
|
I can check out the sample deployment that William put together [here](https://github.com/lamw/vmware-k8s-app-demo/blob/master/yelb.yaml), and then deploy it with:
|
||||||
|
|
||||||
```command-session
|
```shell
|
||||||
kubectl create ns yelb
|
kubectl create ns yelb # [tl! .cmd]
|
||||||
namespace/yelb created
|
namespace/yelb created # [tl! .nocopy:1]
|
||||||
```
|
|
||||||
```command-session
|
kubectl apply -f https://raw.githubusercontent.com/lamw/vmware-k8s-app-demo/master/yelb.yaml # [tl! .cmd]
|
||||||
kubectl apply -f https://raw.githubusercontent.com/lamw/vmware-k8s-app-demo/master/yelb.yaml
|
service/redis-server created # [tl! .nocopy:start]
|
||||||
service/redis-server created
|
|
||||||
service/yelb-db created
|
service/yelb-db created
|
||||||
service/yelb-appserver created
|
service/yelb-appserver created
|
||||||
service/yelb-ui created
|
service/yelb-ui created
|
||||||
|
@ -419,10 +404,9 @@ deployment.apps/yelb-ui created
|
||||||
deployment.apps/redis-server created
|
deployment.apps/redis-server created
|
||||||
deployment.apps/yelb-db created
|
deployment.apps/yelb-db created
|
||||||
deployment.apps/yelb-appserver created
|
deployment.apps/yelb-appserver created
|
||||||
```
|
# [tl! .nocopy:end]
|
||||||
```command-session
|
kubectl -n yelb get pods # [tl! .cmd]
|
||||||
kubectl -n yelb get pods
|
NAME READY STATUS RESTARTS AGE # [tl! .nocopy:4]
|
||||||
NAME READY STATUS RESTARTS AGE
|
|
||||||
redis-server-74556bbcb7-r9jqc 1/1 Running 0 10s
|
redis-server-74556bbcb7-r9jqc 1/1 Running 0 10s
|
||||||
yelb-appserver-d584bb889-2jspg 1/1 Running 0 10s
|
yelb-appserver-d584bb889-2jspg 1/1 Running 0 10s
|
||||||
yelb-db-694586cd78-wb8tt 1/1 Running 0 10s
|
yelb-db-694586cd78-wb8tt 1/1 Running 0 10s
|
||||||
|
@ -431,36 +415,35 @@ yelb-ui-8f54fd88c-k2dw9 1/1 Running 0 10s
|
||||||
|
|
||||||
Once the app is running, I can point my web browser at it to see it in action. But what IP do I use?
|
Once the app is running, I can point my web browser at it to see it in action. But what IP do I use?
|
||||||
|
|
||||||
```command-session
|
```shell
|
||||||
kubectl -n yelb get svc/yelb-ui
|
kubectl -n yelb get svc/yelb-ui # [tl! .cmd]
|
||||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE # [tl! .nocopy:1]
|
||||||
yelb-ui NodePort 100.71.228.116 <none> 80:30001/TCP 84s
|
yelb-ui NodePort 100.71.228.116 <none> 80:30001/TCP 84s
|
||||||
```
|
```
|
||||||
|
|
||||||
This demo is using a `NodePort` type service to expose the front end, which means it will be accessible on port `30001` on the node it's running on. I can find that IP by:
|
This demo is using a `NodePort` type service to expose the front end, which means it will be accessible on port `30001` on the node it's running on. I can find that IP by:
|
||||||
```command-session
|
```shell
|
||||||
kubectl -n yelb describe pod $(kubectl -n yelb get pods | grep yelb-ui | awk '{print $1}') | grep "Node:"
|
kubectl -n yelb describe pod $(kubectl -n yelb get pods | grep yelb-ui | awk '{print $1}') | grep "Node:" # [tl! .cmd]
|
||||||
Node: tce-work-md-0-687444b744-cck4x/192.168.1.145
|
Node: tce-work-md-0-687444b744-cck4x/192.168.1.145 # [tl! .nocopy]
|
||||||
```
|
```
|
||||||
|
|
||||||
So I can point my browser at `http://192.168.1.145:30001` and see the demo:
|
So I can point my browser at `http://192.168.1.145:30001` and see the demo:
|
||||||
![yelb demo page](yelb_nodeport_demo.png)
|
![yelb demo page](yelb_nodeport_demo.png)
|
||||||
|
|
||||||
After marveling at my own magnificence[^magnificence] for a few minutes, I'm ready to move on to something more interesting - but first, I'll just delete the `yelb` namespace to clean up the work I just did:
|
After marveling at my own magnificence[^magnificence] for a few minutes, I'm ready to move on to something more interesting - but first, I'll just delete the `yelb` namespace to clean up the work I just did:
|
||||||
```command-session
|
```shell
|
||||||
kubectl delete ns yelb
|
kubectl delete ns yelb # [tl! .cmd]
|
||||||
namespace "yelb" deleted
|
namespace "yelb" deleted # [tl! .nocopy]
|
||||||
```
|
```
|
||||||
|
|
||||||
Now let's move on and try to deploy `yelb` behind a `LoadBalancer` service so it will get its own IP. William has a [deployment spec](https://github.com/lamw/vmware-k8s-app-demo/blob/master/yelb-lb.yaml) for that too.
|
Now let's move on and try to deploy `yelb` behind a `LoadBalancer` service so it will get its own IP. William has a [deployment spec](https://github.com/lamw/vmware-k8s-app-demo/blob/master/yelb-lb.yaml) for that too.
|
||||||
|
|
||||||
```command-session
|
```shell
|
||||||
kubectl create ns yelb
|
kubectl create ns yelb # [tl! .cmd]
|
||||||
namespace/yelb created
|
namespace/yelb created # [tl! .nocopy:1]
|
||||||
```
|
|
||||||
```command-session
|
kubectl apply -f https://raw.githubusercontent.com/lamw/vmware-k8s-app-demo/master/yelb-lb.yaml # [tl! .cmd]
|
||||||
kubectl apply -f https://raw.githubusercontent.com/lamw/vmware-k8s-app-demo/master/yelb-lb.yaml
|
service/redis-server created # [tl! .nocopy:8]
|
||||||
service/redis-server created
|
|
||||||
service/yelb-db created
|
service/yelb-db created
|
||||||
service/yelb-appserver created
|
service/yelb-appserver created
|
||||||
service/yelb-ui created
|
service/yelb-ui created
|
||||||
|
@ -468,10 +451,9 @@ deployment.apps/yelb-ui created
|
||||||
deployment.apps/redis-server created
|
deployment.apps/redis-server created
|
||||||
deployment.apps/yelb-db created
|
deployment.apps/yelb-db created
|
||||||
deployment.apps/yelb-appserver created
|
deployment.apps/yelb-appserver created
|
||||||
```
|
|
||||||
```command-session
|
kubectl -n yelb get pods # [tl! .cmd]
|
||||||
kubectl -n yelb get pods
|
NAME READY STATUS RESTARTS AGE # [tl! .nocopy:4]
|
||||||
NAME READY STATUS RESTARTS AGE
|
|
||||||
redis-server-74556bbcb7-q6l62 1/1 Running 0 7s
|
redis-server-74556bbcb7-q6l62 1/1 Running 0 7s
|
||||||
yelb-appserver-d584bb889-p5qgd 1/1 Running 0 7s
|
yelb-appserver-d584bb889-p5qgd 1/1 Running 0 7s
|
||||||
yelb-db-694586cd78-hjtn4 1/1 Running 0 7s
|
yelb-db-694586cd78-hjtn4 1/1 Running 0 7s
|
||||||
|
@ -479,9 +461,9 @@ yelb-ui-8f54fd88c-pm9qw 1/1 Running 0 7s
|
||||||
```
|
```
|
||||||
|
|
||||||
And I can take a look at that service...
|
And I can take a look at that service...
|
||||||
```command-session
|
```shell
|
||||||
kubectl -n yelb get svc/yelb-ui
|
kubectl -n yelb get svc/yelb-ui # [tl! .cmd]
|
||||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE # [tl! .nocopy:1]
|
||||||
yelb-ui LoadBalancer 100.67.177.185 <pending> 80:32339/TCP 15s
|
yelb-ui LoadBalancer 100.67.177.185 <pending> 80:32339/TCP 15s
|
||||||
```
|
```
|
||||||
|
|
||||||
|
@ -492,25 +474,23 @@ Wait a minute. That external IP is *still* `<pending>`. What gives? Oh yeah I ne
|
||||||
#### Deploying `kube-vip` as a load balancer
|
#### Deploying `kube-vip` as a load balancer
|
||||||
Fortunately, William Lam [wrote up some tips](https://williamlam.com/2021/10/quick-tip-install-kube-vip-as-service-load-balancer-with-tanzu-community-edition-tce.html) for handling that too. It's [based on work by Scott Rosenberg](https://github.com/vrabbi/tkgm-customizations). The quick-and-dirty steps needed to make this work are:
|
Fortunately, William Lam [wrote up some tips](https://williamlam.com/2021/10/quick-tip-install-kube-vip-as-service-load-balancer-with-tanzu-community-edition-tce.html) for handling that too. It's [based on work by Scott Rosenberg](https://github.com/vrabbi/tkgm-customizations). The quick-and-dirty steps needed to make this work are:
|
||||||
|
|
||||||
```command
|
```shell
|
||||||
git clone https://github.com/vrabbi/tkgm-customizations.git
|
git clone https://github.com/vrabbi/tkgm-customizations.git # [tl! .cmd:3]
|
||||||
cd tkgm-customizations/carvel-packages/kube-vip-package
|
cd tkgm-customizations/carvel-packages/kube-vip-package
|
||||||
kubectl apply -n tanzu-package-repo-global -f metadata.yml
|
kubectl apply -n tanzu-package-repo-global -f metadata.yml
|
||||||
kubectl apply -n tanzu-package-repo-global -f package.yaml
|
kubectl apply -n tanzu-package-repo-global -f package.yaml
|
||||||
```
|
|
||||||
```command-session
|
cat << EOF > values.yaml # [tl! .cmd]
|
||||||
cat << EOF > values.yaml
|
|
||||||
vip_range: 192.168.1.64-192.168.1.80
|
vip_range: 192.168.1.64-192.168.1.80
|
||||||
EOF
|
EOF
|
||||||
```
|
|
||||||
```command
|
tanzu package install kubevip -p kubevip.terasky.com -v 0.3.9 -f values.yaml # [tl! .cmd]
|
||||||
tanzu package install kubevip -p kubevip.terasky.com -v 0.3.9 -f values.yaml
|
|
||||||
```
|
```
|
||||||
|
|
||||||
Now I can check out the `yelb-ui` service again:
|
Now I can check out the `yelb-ui` service again:
|
||||||
```command-session
|
```shell
|
||||||
kubectl -n yelb get svc/yelb-ui # [tl! .cmd]
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE # [tl! .nocopy:1]
|
||||||
yelb-ui LoadBalancer 100.67.177.185 192.168.1.65 80:32339/TCP 4h35m
|
yelb-ui LoadBalancer 100.67.177.185 192.168.1.65 80:32339/TCP 4h35m
|
||||||
```
|
```
|
||||||
|
|
||||||
|
@ -518,9 +498,9 @@ And it's got an IP! I can point my browser to `http://192.168.1.65` now and see:
|
||||||
![Successful LoadBalancer test!](yelb_loadbalancer_demo.png)
|
![Successful LoadBalancer test!](yelb_loadbalancer_demo.png)
|
||||||
|
|
||||||
I'll keep the `kube-vip` load balancer since it'll come in handy, but I have no further use for `yelb`:
|
I'll keep the `kube-vip` load balancer since it'll come in handy, but I have no further use for `yelb`:
|
||||||
```command-session
|
```shell
|
||||||
kubectl delete ns yelb
|
kubectl delete ns yelb # [tl! .cmd]
|
||||||
namespace "yelb" deleted
|
namespace "yelb" deleted # [tl! .nocopy]
|
||||||
```
|
```
|
||||||
|
|
||||||
#### Persistent Volume Claims, Storage Classes, and Storage Policies
|
#### Persistent Volume Claims, Storage Classes, and Storage Policies
|
||||||
|
@ -533,7 +513,8 @@ Then I create a new vSphere Storage Policy called `tkg-storage-policy` which sta
|
||||||
![My Tanzu storage policy](storage_policy.png)
|
![My Tanzu storage policy](storage_policy.png)
|
||||||
|
|
||||||
So that's the vSphere side of things sorted; now to map that back to the Kubernetes side. For that, I'll need to define a Storage Class tied to the vSphere Storage profile so I drop these details into a new file called `vsphere-sc.yaml`:
|
So that's the vSphere side of things sorted; now to map that back to the Kubernetes side. For that, I'll need to define a Storage Class tied to the vSphere Storage profile so I drop these details into a new file called `vsphere-sc.yaml`:
|
||||||
```yaml {linenos=true}
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
kind: StorageClass
|
kind: StorageClass
|
||||||
apiVersion: storage.k8s.io/v1
|
apiVersion: storage.k8s.io/v1
|
||||||
metadata:
|
metadata:
|
||||||
|
@ -544,13 +525,14 @@ parameters:
|
||||||
```
|
```
|
||||||
|
|
||||||
And then apply it with:
```command-session
|
```shell
|
||||||
kubectl apply -f vsphere-sc.yaml
|
kubectl apply -f vsphere-sc.yaml # [tl! .cmd]
|
||||||
storageclass.storage.k8s.io/vsphere created
|
storageclass.storage.k8s.io/vsphere created # [tl! .nocopy]
|
||||||
```
|
```
|
||||||
|
|
||||||
I can test that I can create a Persistent Volume Claim against the new `vsphere` Storage Class by putting this in a new file called `vsphere-pvc.yaml`:
|
I can test that I can create a Persistent Volume Claim against the new `vsphere` Storage Class by putting this in a new file called `vsphere-pvc.yaml`:
|
||||||
```yaml {linenos=true}
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
apiVersion: v1
|
apiVersion: v1
|
||||||
kind: PersistentVolumeClaim
|
kind: PersistentVolumeClaim
|
||||||
metadata:
|
metadata:
|
||||||
|
@ -567,15 +549,15 @@ spec:
|
||||||
```
|
```
|
||||||
|
|
||||||
And applying it:
|
And applying it:
|
||||||
```command-session
|
```shell
|
||||||
kubectl apply -f demo-pvc.yaml
|
kubectl apply -f demo-pvc.yaml # [tl! .cmd]
|
||||||
persistentvolumeclaim/vsphere-demo-1 created
|
persistentvolumeclaim/vsphere-demo-1 created # [tl! .nocopy]
|
||||||
```
|
```
|
||||||
|
|
||||||
I can see the new claim, and confirm that its status is `Bound`:
|
I can see the new claim, and confirm that its status is `Bound`:
|
||||||
```command-session
|
```shell
|
||||||
kubectl get pvc
|
kubectl get pvc # [tl! .cmd]
|
||||||
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
|
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE # [tl! .nocopy:1]
|
||||||
vsphere-demo-1 Bound pvc-36cc7c01-a1b3-4c1c-ba0d-dff3fd47f93b 5Gi RWO vsphere 4m25s
|
vsphere-demo-1 Bound pvc-36cc7c01-a1b3-4c1c-ba0d-dff3fd47f93b 5Gi RWO vsphere 4m25s
|
||||||
```
|
```
|
||||||
|
|
||||||
|
@ -583,9 +565,9 @@ And for bonus points, I can see that the container volume was created on the vSp
|
||||||
![Container Volume in vSphere](container_volume_in_vsphere.png)
|
![Container Volume in vSphere](container_volume_in_vsphere.png)
|
||||||
|
|
||||||
So that's storage sorted. I'll clean up my test volume before moving on:
|
So that's storage sorted. I'll clean up my test volume before moving on:
|
||||||
```command-session
|
```shell
|
||||||
kubectl delete -f demo-pvc.yaml
|
kubectl delete -f demo-pvc.yaml # [tl! .cmd]
|
||||||
persistentvolumeclaim "vsphere-demo-1" deleted
|
persistentvolumeclaim "vsphere-demo-1" deleted # [tl! .nocopy]
|
||||||
```
|
```
|
||||||
|
|
||||||
### A real workload - phpIPAM
|
### A real workload - phpIPAM
|
||||||
|
@ -597,9 +579,9 @@ So I set to work exploring some containerization options, and I found [phpipam-d
|
||||||
|
|
||||||
To start, I'll create a new namespace to keep things tidy:
|
To start, I'll create a new namespace to keep things tidy:
|
||||||
|
|
||||||
```command-session
|
```shell
|
||||||
kubectl create ns ipam
|
kubectl create ns ipam # [tl! .cmd]
|
||||||
namespace/ipam created
|
namespace/ipam created # [tl! .nocopy]
|
||||||
```
|
```
|
||||||
|
|
||||||
I'm going to wind up with four pods:
|
I'm going to wind up with four pods:
|
||||||
|
@ -614,7 +596,8 @@ I'll use each container's original `docker-compose` configuration and adapt that
|
||||||
|
|
||||||
#### phpipam-db
|
#### phpipam-db
|
||||||
The phpIPAM database will live inside a MariaDB container. Here's the relevant bit from `docker-compose`:
|
The phpIPAM database will live inside a MariaDB container. Here's the relevant bit from `docker-compose`:
|
||||||
```yaml {linenos=true}
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
services:
|
services:
|
||||||
phpipam-db:
|
phpipam-db:
|
||||||
image: mariadb:latest
|
image: mariadb:latest
|
||||||
|
@ -629,7 +612,8 @@ services:
|
||||||
So it will need a `Service` exposing the container's port `3306` so that other pods can connect to the database. For my immediate demo, using `type: ClusterIP` will be sufficient since all the connections will be coming from within the cluster. When I do this for real, it will need to be `type: LoadBalancer` so that the agent running on a different cluster can connect. And it will need a `PersistentVolumeClaim` so it can store the database data at `/var/lib/mysql`. It will also get passed an environment variable to set the initial `root` password on the database instance (which will be used later during the phpIPAM install to create the initial `phpipam` database).
|
So it will need a `Service` exposing the container's port `3306` so that other pods can connect to the database. For my immediate demo, using `type: ClusterIP` will be sufficient since all the connections will be coming from within the cluster. When I do this for real, it will need to be `type: LoadBalancer` so that the agent running on a different cluster can connect. And it will need a `PersistentVolumeClaim` so it can store the database data at `/var/lib/mysql`. It will also get passed an environment variable to set the initial `root` password on the database instance (which will be used later during the phpIPAM install to create the initial `phpipam` database).
|
||||||
|
|
||||||
It might look like this on the Kubernetes side:
|
It might look like this on the Kubernetes side:
|
||||||
```yaml {linenos=true}
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
# phpipam-db.yaml
|
# phpipam-db.yaml
|
||||||
apiVersion: v1
|
apiVersion: v1
|
||||||
kind: Service
|
kind: Service
|
||||||
|
@ -700,7 +684,8 @@ Moving on:
|
||||||
|
|
||||||
#### phpipam-www
|
#### phpipam-www
|
||||||
This is the `docker-compose` excerpt for the web component:
|
This is the `docker-compose` excerpt for the web component:
|
||||||
```yaml {linenos=true}
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
services:
|
services:
|
||||||
phpipam-web:
|
phpipam-web:
|
||||||
image: phpipam/phpipam-www:1.5x
|
image: phpipam/phpipam-www:1.5x
|
||||||
|
@ -718,7 +703,8 @@ services:
|
||||||
Based on that, I can see that my `phpipam-www` pod will need a container running the `phpipam/phpipam-www:1.5x` image, a `Service` of type `LoadBalancer` to expose the web interface on port `80`, a `PersistentVolumeClaim` mounted to `/phpipam/css/images/logo`, and some environment variables passed in to configure the thing. Note that the `IPAM_DATABASE_PASS` variable defines the password used for the `phpipam` user on the database (not the `root` user referenced earlier), and the `IPAM_DATABASE_WEBHOST=%` variable will define which hosts that `phpipam` database user will be able to connect from; setting it to `%` will make sure that my remote agent can connect to the database even if I don't know where the agent will be running.
|
Based on that, I can see that my `phpipam-www` pod will need a container running the `phpipam/phpipam-www:1.5x` image, a `Service` of type `LoadBalancer` to expose the web interface on port `80`, a `PersistentVolumeClaim` mounted to `/phpipam/css/images/logo`, and some environment variables passed in to configure the thing. Note that the `IPAM_DATABASE_PASS` variable defines the password used for the `phpipam` user on the database (not the `root` user referenced earlier), and the `IPAM_DATABASE_WEBHOST=%` variable will define which hosts that `phpipam` database user will be able to connect from; setting it to `%` will make sure that my remote agent can connect to the database even if I don't know where the agent will be running.
|
||||||
|
|
||||||
Here's how I'd adapt that into a structure that Kubernetes will understand:
|
Here's how I'd adapt that into a structure that Kubernetes will understand:
|
||||||
```yaml {linenos=true}
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
# phpipam-www.yaml
|
# phpipam-www.yaml
|
||||||
apiVersion: v1
|
apiVersion: v1
|
||||||
kind: Service
|
kind: Service
|
||||||
|
@ -767,7 +753,7 @@ spec:
|
||||||
labels:
|
labels:
|
||||||
app: phpipam-www
|
app: phpipam-www
|
||||||
spec:
|
spec:
|
||||||
containers:
|
containers: # [tl! focus:2]
|
||||||
- name: phpipam-www
|
- name: phpipam-www
|
||||||
image: phpipam/phpipam-www:1.5x
|
image: phpipam/phpipam-www:1.5x
|
||||||
env:
|
env:
|
||||||
|
@ -792,7 +778,8 @@ spec:
|
||||||
|
|
||||||
#### phpipam-cron
|
#### phpipam-cron
|
||||||
This container has a pretty simple configuration in `docker-compose`:
|
This container has a pretty simple configuration in `docker-compose`:
|
||||||
```yaml {linenos=true}
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
services:
|
services:
|
||||||
phpipam-cron:
|
phpipam-cron:
|
||||||
image: phpipam/phpipam-cron:1.5x
|
image: phpipam/phpipam-cron:1.5x
|
||||||
|
@ -805,7 +792,8 @@ services:
|
||||||
|
|
||||||
No exposed ports, no need for persistence - just a base image and a few variables to tell it how to connect to the database and how often to run the scans:
|
No exposed ports, no need for persistence - just a base image and a few variables to tell it how to connect to the database and how often to run the scans:
|
||||||
|
|
||||||
```yaml {linenos=true}
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
# phpipam-cron.yaml
|
# phpipam-cron.yaml
|
||||||
apiVersion: apps/v1
|
apiVersion: apps/v1
|
||||||
kind: Deployment
|
kind: Deployment
|
||||||
|
@ -838,7 +826,8 @@ spec:
|
||||||
|
|
||||||
#### phpipam-agent
|
#### phpipam-agent
|
||||||
And finally, my remote scan agent. Here's the `docker-compose`:
|
And finally, my remote scan agent. Here's the `docker-compose`:
|
||||||
```yaml {linenos=true}
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
services:
|
services:
|
||||||
phpipam-agent:
|
phpipam-agent:
|
||||||
container_name: phpipam-agent
|
container_name: phpipam-agent
|
||||||
|
@ -860,7 +849,8 @@ services:
|
||||||
It's got a few additional variables to make it extra-configurable, but still no need for persistence or network exposure. That `IPAM_AGENT_KEY` variable will need to get populated with the appropriate key generated within the new phpIPAM deployment, but we can deal with that later.
|
|
||||||
For now, here's how I'd tell Kubernetes about it:
|
For now, here's how I'd tell Kubernetes about it:
|
||||||
```yaml {linenos=true}
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
# phpipam-agent.yaml
|
# phpipam-agent.yaml
|
||||||
apiVersion: apps/v1
|
apiVersion: apps/v1
|
||||||
kind: Deployment
|
kind: Deployment
|
||||||
|
@ -905,32 +895,32 @@ spec:
|
||||||
|
|
||||||
#### Deployment and configuration of phpIPAM
|
#### Deployment and configuration of phpIPAM
|
||||||
I can now go ahead and start deploying these containers, starting with the database one (upon which all the others rely):
|
I can now go ahead and start deploying these containers, starting with the database one (upon which all the others rely):
|
||||||
```command-session
|
```shell
|
||||||
kubectl apply -f phpipam-db.yaml
|
kubectl apply -f phpipam-db.yaml # [tl! .cmd]
|
||||||
service/phpipam-db created
|
service/phpipam-db created # [tl! .nocopy:2]
|
||||||
persistentvolumeclaim/phpipam-db-pvc created
|
persistentvolumeclaim/phpipam-db-pvc created
|
||||||
deployment.apps/phpipam-db created
|
deployment.apps/phpipam-db created
|
||||||
```
|
```
|
||||||
|
|
||||||
And the web server:
|
And the web server:
|
||||||
```command-session
|
```shell
|
||||||
kubectl apply -f phpipam-www.yaml
|
kubectl apply -f phpipam-www.yaml # [tl! .cmd]
|
||||||
service/phpipam-www created
|
service/phpipam-www created # [tl! .nocopy:2]
|
||||||
persistentvolumeclaim/phpipam-www-pvc created
|
persistentvolumeclaim/phpipam-www-pvc created
|
||||||
deployment.apps/phpipam-www created
|
deployment.apps/phpipam-www created
|
||||||
```
|
```
|
||||||
|
|
||||||
And the cron runner:
|
And the cron runner:
|
||||||
```command-session
|
```shell
|
||||||
kubectl apply -f phpipam-cron.yaml
|
kubectl apply -f phpipam-cron.yaml # [tl! .cmd]
|
||||||
deployment.apps/phpipam-cron created
|
deployment.apps/phpipam-cron created # [tl! .nocopy]
|
||||||
```
|
```
|
||||||
|
|
||||||
I'll hold off on the agent container for now since I'll need to adjust the configuration slightly after getting phpIPAM set up, but I will go ahead and check out my work so far:
|
I'll hold off on the agent container for now since I'll need to adjust the configuration slightly after getting phpIPAM set up, but I will go ahead and check out my work so far:
|
||||||
|
|
||||||
```command-session
|
```shell
|
||||||
kubectl -n ipam get all
|
kubectl -n ipam get all # [tl! .cmd]
|
||||||
NAME READY STATUS RESTARTS AGE
|
NAME READY STATUS RESTARTS AGE # [tl! .nocopy:start]
|
||||||
pod/phpipam-cron-6c994897c4-6rsnp 1/1 Running 0 4m30s
|
pod/phpipam-cron-6c994897c4-6rsnp 1/1 Running 0 4m30s
|
||||||
pod/phpipam-db-5f4c47d4b9-sb5bd 1/1 Running 0 16m
|
pod/phpipam-db-5f4c47d4b9-sb5bd 1/1 Running 0 16m
|
||||||
pod/phpipam-www-769c95c68d-94klg 1/1 Running 0 5m59s
|
pod/phpipam-www-769c95c68d-94klg 1/1 Running 0 5m59s
|
||||||
|
@ -947,7 +937,7 @@ deployment.apps/phpipam-www 1/1 1 1 5m59s
|
||||||
NAME DESIRED CURRENT READY AGE
|
NAME DESIRED CURRENT READY AGE
|
||||||
replicaset.apps/phpipam-cron-6c994897c4 1 1 1 4m30s
|
replicaset.apps/phpipam-cron-6c994897c4 1 1 1 4m30s
|
||||||
replicaset.apps/phpipam-db-5f4c47d4b9 1 1 1 16m
|
replicaset.apps/phpipam-db-5f4c47d4b9 1 1 1 16m
|
||||||
replicaset.apps/phpipam-www-769c95c68d 1 1 1 5m59s
|
replicaset.apps/phpipam-www-769c95c68d 1 1 1 5m59s # [tl! .nocopy:end]
|
||||||
```
|
```
|
||||||
|
|
||||||
And I can point my browser to the `EXTERNAL-IP` associated with the `phpipam-www` service to see the initial setup page:
|
And I can point my browser to the `EXTERNAL-IP` associated with the `phpipam-www` service to see the initial setup page:
|
||||||
|
@ -977,9 +967,9 @@ I'll copy the agent code and plug it into my `phpipam-agent.yaml` file:
|
||||||
```
|
```
|
||||||
|
|
||||||
And then deploy that:
|
And then deploy that:
|
||||||
```command-session
|
```shell
|
||||||
kubectl apply -f phpipam-agent.yaml
|
kubectl apply -f phpipam-agent.yaml # [tl! .cmd]
|
||||||
deployment.apps/phpipam-agent created
|
deployment.apps/phpipam-agent created # [tl! .nocopy]
|
||||||
```
|
```
|
||||||
|
|
||||||
The scan agent isn't going to do anything until it's assigned to a subnet though, so now I head to **Administration > IP related management > Sections**. phpIPAM comes with a few default sections and ranges and such defined so I'll delete those and create a new one that I'll call `Lab`.
|
The scan agent isn't going to do anything until it's assigned to a subnet though, so now I head to **Administration > IP related management > Sections**. phpIPAM comes with a few default sections and ranges and such defined so I'll delete those and create a new one that I'll call `Lab`.
|
||||||
|
|
|
@ -41,21 +41,21 @@ The host will need to be in maintenance mode in order to apply the upgrade, and
|
||||||
|
|
||||||
### 3. Place host in maintenance mode
|
### 3. Place host in maintenance mode
|
||||||
I can do that by SSH'ing to the host and running:
|
I can do that by SSH'ing to the host and running:
|
||||||
```commandroot
|
```shell
|
||||||
esxcli system maintenanceMode set -e true
|
esxcli system maintenanceMode set -e true # [tl! .cmd]
|
||||||
```
|
```
|
||||||
|
|
||||||
And can confirm that it happened with:
|
And can confirm that it happened with:
|
||||||
```commandroot-session
|
```shell
|
||||||
esxcli system maintenanceMode get
|
esxcli system maintenanceMode get # [tl! .cmd]
|
||||||
Enabled
|
Enabled # [tl! .nocopy]
|
||||||
```
|
```
|
||||||
|
|
||||||
### 4. Identify the profile name
|
### 4. Identify the profile name
|
||||||
Because this is an *upgrade* from one major release to another rather than a simple *update*, I need to know the name of the profile which will be applied. I can identify that with:
|
Because this is an *upgrade* from one major release to another rather than a simple *update*, I need to know the name of the profile which will be applied. I can identify that with:
|
||||||
```commandroot-session
|
```shell
|
||||||
esxcli software sources profile list -d /vmfs/volumes/nuchost-local/_Patches/VMware-ESXi-8.0-20513097-depot.zip
|
esxcli software sources profile list -d /vmfs/volumes/nuchost-local/_Patches/VMware-ESXi-8.0-20513097-depot.zip # [tl! .cmd]
|
||||||
Name Vendor Acceptance Level Creation Time Modification Time
|
Name Vendor Acceptance Level Creation Time Modification Time # [tl! .nocopy:3]
|
||||||
---------------------------- ------------ ---------------- ------------------- -----------------
|
---------------------------- ------------ ---------------- ------------------- -----------------
|
||||||
ESXi-8.0.0-20513097-standard VMware, Inc. PartnerSupported 2022-09-23T18:59:28 2022-09-23T18:59:28
|
ESXi-8.0.0-20513097-standard VMware, Inc. PartnerSupported 2022-09-23T18:59:28 2022-09-23T18:59:28
|
||||||
ESXi-8.0.0-20513097-no-tools VMware, Inc. PartnerSupported 2022-09-23T18:59:28 2022-09-23T18:59:28
|
ESXi-8.0.0-20513097-no-tools VMware, Inc. PartnerSupported 2022-09-23T18:59:28 2022-09-23T18:59:28
|
||||||
|
@ -68,13 +68,13 @@ In this case, I'll use the `ESXi-8.0.0-20513097-standard` profile.
|
||||||
|
|
||||||
### 5. Install the upgrade
|
### 5. Install the upgrade
|
||||||
Now for the moment of truth:
|
Now for the moment of truth:
|
||||||
```commandroot
|
```shell
|
||||||
esxcli software profile update -d /vmfs/volumes/nuchost-local/_Patches/VMware-ESXi-8.0-20513097-depot.zip -p ESXi-8.0.0-20513097-standard
|
esxcli software profile update -d /vmfs/volumes/nuchost-local/_Patches/VMware-ESXi-8.0-20513097-depot.zip -p ESXi-8.0.0-20513097-standard # [tl! .cmd]
|
||||||
```
|
```
|
||||||
|
|
||||||
When it finishes (successfully), it leaves a little message that the update won't be complete until the host is rebooted, so I'll go ahead and do that as well:

```shell
reboot # [tl! .cmd]
```

And then wait (oh-so-patiently) for the host to come back up.
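
Once it's back online, the host will also need to come back out of maintenance mode - that step isn't shown in this excerpt, but it's just the earlier `esxcli` command with the flag flipped:

```shell
esxcli system maintenanceMode set -e false # [tl! .cmd]
```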
|
|
|
@ -16,7 +16,8 @@ Unfortunately, I found that this approach can take a long time to run and often
|
||||||
After further experimentation, I settled on using PowerShell to create a one-time scheduled task that would run the updates and reboot, if necessary. I also wanted the task to automatically delete itself after running to avoid cluttering up the task scheduler library - and that last item had me quite stumped until I found [this blog post with the solution](https://iamsupergeek.com/self-deleting-scheduled-task-via-powershell/).
|
After further experimentation, I settled on using PowerShell to create a one-time scheduled task that would run the updates and reboot, if necessary. I also wanted the task to automatically delete itself after running to avoid cluttering up the task scheduler library - and that last item had me quite stumped until I found [this blog post with the solution](https://iamsupergeek.com/self-deleting-scheduled-task-via-powershell/).
|
||||||
|
|
||||||
So here's what I put together:
|
So here's what I put together:
|
||||||
```powershell {linenos=true}
|
```powershell
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
# This can be easily pasted into a remote PowerShell session to automatically install any available updates and reboot.
|
# This can be easily pasted into a remote PowerShell session to automatically install any available updates and reboot.
|
||||||
# It creates a scheduled task to start the update process after a one-minute delay so that you don't have to maintain
|
# It creates a scheduled task to start the update process after a one-minute delay so that you don't have to maintain
|
||||||
# the session during the process (or have the session timeout), and it also sets the task to automatically delete itself 2 hours later.
|
# the session during the process (or have the session timeout), and it also sets the task to automatically delete itself 2 hours later.
|
||||||
|
|
|
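The full script falls outside this hunk, but the self-deleting behavior it describes comes down to giving the trigger an end boundary and telling Task Scheduler to clean up the expired task. A minimal sketch of that pattern, with the update command, task name, and exact timings as placeholder assumptions rather than the post's actual values:

```powershell
# Sketch only: run updates via a one-shot task that starts in one minute, then let
# Task Scheduler delete the task automatically once it expires two hours later.
# Install-WindowsUpdate (from PSWindowsUpdate) and the task name are assumptions.
$action   = New-ScheduledTaskAction -Execute 'powershell.exe' `
  -Argument '-NoProfile -Command "Install-WindowsUpdate -AcceptAll -AutoReboot"'
$trigger  = New-ScheduledTaskTrigger -Once -At (Get-Date).AddMinutes(1)
# An EndBoundary on the trigger is what lets the task "expire"...
$trigger.EndBoundary = (Get-Date).AddHours(2).ToString('s')
# ...and this setting tells Task Scheduler to remove it after it has expired.
$settings = New-ScheduledTaskSettingsSet -DeleteExpiredTaskAfter (New-TimeSpan -Minutes 1)
Register-ScheduledTask -TaskName 'Run-Updates' -Action $action -Trigger $trigger `
  -Settings $settings -User 'SYSTEM' -RunLevel Highest
```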
@ -54,32 +54,29 @@ This needs to be run directly on the vCenter appliance so you'll need to copy th
|
||||||
|
|
||||||
|
|
||||||
Once that's done, just execute this on your local workstation to copy the `.zip` from your `~/Downloads/` folder to the VCSA's `/tmp/` directory:
|
Once that's done, just execute this on your local workstation to copy the `.zip` from your `~/Downloads/` folder to the VCSA's `/tmp/` directory:
|
||||||
```command
|
```shell
|
||||||
scp ~/Downloads/vdt-v1.1.4.zip root@vcsa.lab.bowdre.net:/tmp/
|
scp ~/Downloads/vdt-v1.1.4.zip root@vcsa.lab.bowdre.net:/tmp/ # [tl! .cmd]
|
||||||
```
|
```
|
||||||
|
|
||||||
### 3. Extract
|
### 3. Extract
|
||||||
Now pop back over to an SSH session to the VCSA, extract the `.zip`, and get ready for action:
|
Now pop back over to an SSH session to the VCSA, extract the `.zip`, and get ready for action:
|
||||||
```commandroot
|
```shell
|
||||||
cd /tmp
|
cd /tmp # [tl! .cmd_root:1]
|
||||||
```
|
|
||||||
```commandroot-session
|
|
||||||
unzip vdt-v1.1.4.zip
|
unzip vdt-v1.1.4.zip
|
||||||
Archive: vdt-v1.1.4.zip
|
Archive: vdt-v1.1.4.zip # [tl! .nocopy:5]
|
||||||
3557676756cffd658fd61aab5a6673269104e83c
|
3557676756cffd658fd61aab5a6673269104e83c
|
||||||
creating: vdt-v1.1.4/
|
creating: vdt-v1.1.4/
|
||||||
...
|
...
|
||||||
inflating: vdt-v1.1.4/vdt.py
|
inflating: vdt-v1.1.4/vdt.py
|
||||||
```
|
|
||||||
```commandroot
|
cd vdt-v1.1.4/ # [tl! .cmd_root]
|
||||||
cd vdt-v1.1.4/
|
|
||||||
```
|
```
|
||||||
|
|
||||||
### 4. Execute
|
### 4. Execute
|
||||||
Now for the fun part:
|
Now for the fun part:
|
||||||
```commandroot-session
|
```shell
|
||||||
python vdt.py
|
python vdt.py # [tl! .cmd_root]
|
||||||
_________________________
|
_________________________ # [tl! .nocopy:7]
|
||||||
RUNNING PULSE CHECK
|
RUNNING PULSE CHECK
|
||||||
|
|
||||||
Today: Sunday, August 28 19:53:00
|
Today: Sunday, August 28 19:53:00
|
||||||
|
@ -95,7 +92,7 @@ After entering the SSO password, VDT will run for a few minutes and generate an
|
||||||
Once the script has completed, it's time to look through the results and fix whatever can be found. As an example, here are some of the findings from my _deliberately-broken-for-the-purposes-of-this-post_ vCenter:
|
Once the script has completed, it's time to look through the results and fix whatever can be found. As an example, here are some of the findings from my _deliberately-broken-for-the-purposes-of-this-post_ vCenter:
|
||||||
|
|
||||||
#### Hostname/PNID mismatch
|
#### Hostname/PNID mismatch
|
||||||
```log {hl_lines=[8,9,23,24]}
|
```text
|
||||||
VCENTER BASIC INFO
|
VCENTER BASIC INFO
|
||||||
BASIC:
|
BASIC:
|
||||||
Current Time: 2022-08-28 19:54:08.370889
|
Current Time: 2022-08-28 19:54:08.370889
|
||||||
|
@ -103,7 +100,7 @@ BASIC:
|
||||||
vCenter Load Average: 0.26, 0.19, 0.12
|
vCenter Load Average: 0.26, 0.19, 0.12
|
||||||
Number of CPUs: 2
|
Number of CPUs: 2
|
||||||
Total Memory: 11.71
|
Total Memory: 11.71
|
||||||
vCenter Hostname: VCSA
|
vCenter Hostname: VCSA # [tl! highlight:1]
|
||||||
vCenter PNID: vcsa.lab.bowdre.net
|
vCenter PNID: vcsa.lab.bowdre.net
|
||||||
vCenter IP Address: 192.168.1.12
|
vCenter IP Address: 192.168.1.12
|
||||||
Proxy Configured: "no"
|
Proxy Configured: "no"
|
||||||
|
@ -118,16 +115,16 @@ DETAILS:
|
||||||
Number of Clusters: 1
|
Number of Clusters: 1
|
||||||
Disabled Plugins: None
|
Disabled Plugins: None
|
||||||
|
|
||||||
[FAIL] The hostname and PNID do not match!
|
[FAIL] The hostname and PNID do not match! # [tl! highlight:1]
|
||||||
Please see https://kb.vmware.com/s/article/2130599 for more details.
|
Please see https://kb.vmware.com/s/article/2130599 for more details.
|
||||||
```
|
```
|
||||||
Silly me - I must have changed the hostname at some point, which is not generally a Thing Which Should Be done. I can quickly [consult the referenced KB](https://kb.vmware.com/s/article/2130599) to figure out how to fix my mistake using the `/opt/vmware/share/vami/vami_config_net` utility.
|
Silly me - I must have changed the hostname at some point, which is not generally a Thing Which Should Be done. I can quickly [consult the referenced KB](https://kb.vmware.com/s/article/2130599) to figure out how to fix my mistake using the `/opt/vmware/share/vami/vami_config_net` utility.
|
||||||
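If it helps to see that spelled out, the fix is just launching the utility from a root shell on the VCSA and stepping through its interactive menu (the menu option numbers vary by version, so I won't guess at them):

```shell
# interactive VAMI network/hostname configuration menu
/opt/vmware/share/vami/vami_config_net
```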
|
|
||||||
#### Missing DNS
|
#### Missing DNS
|
||||||
```log {hl_lines=[3,4,5,12,13]}
|
```text
|
||||||
Nameserver Queries
|
Nameserver Queries
|
||||||
192.168.1.5
|
192.168.1.5
|
||||||
[FAIL] DNS with UDP - unable to resolve vcsa to 192.168.1.12
|
[FAIL] DNS with UDP - unable to resolve vcsa to 192.168.1.12 # [tl! highlight:2]
|
||||||
[FAIL] Reverse DNS - unable to resolve 192.168.1.12 to vcsa
|
[FAIL] Reverse DNS - unable to resolve 192.168.1.12 to vcsa
|
||||||
[FAIL] DNS with TCP - unable to resolve vcsa to 192.168.1.12
|
[FAIL] DNS with TCP - unable to resolve vcsa to 192.168.1.12
|
||||||
|
|
||||||
|
@ -136,13 +133,13 @@ Nameserver Queries
|
||||||
dig +noall +answer -x <ip> <namserver>
|
dig +noall +answer -x <ip> <namserver>
|
||||||
dig +short +tcp <fqdn> <nameserver>
|
dig +short +tcp <fqdn> <nameserver>
|
||||||
|
|
||||||
RESULT: [FAIL]
|
RESULT: [FAIL] # [tl! highlight:1]
|
||||||
Please see KB: https://kb.vmware.com/s/article/54682
|
Please see KB: https://kb.vmware.com/s/article/54682
|
||||||
```
|
```
|
||||||
Whoops - I guess I should go recreate the appropriate DNS records.
|
Whoops - I guess I should go recreate the appropriate DNS records.
|
||||||
|
|
||||||
#### Old core files
|
#### Old core files
|
||||||
```log
|
```text
|
||||||
CORE FILE CHECK
|
CORE FILE CHECK
|
||||||
INFO:
|
INFO:
|
||||||
These core files are older than 72 hours. consider deleting them
|
These core files are older than 72 hours. consider deleting them
|
||||||
|
@ -167,19 +164,19 @@ at your discretion to reduce the size of log bundles.
|
||||||
```
|
```
|
||||||
Those core files can be useful for investigating specific issues, but holding on to them long-term doesn't really do much good. _After checking to be sure I don't need them_, I can get rid of them all pretty easily like so:
|
Those core files can be useful for investigating specific issues, but holding on to them long-term doesn't really do much good. _After checking to be sure I don't need them_, I can get rid of them all pretty easily like so:
|
||||||
|
|
||||||
```commandroot
|
```shell
|
||||||
find /storage/core/ -name "core.*" -type f -mtime +3 -exec rm {} \;
|
find /storage/core/ -name "core.*" -type f -mtime +3 -exec rm {} \; # [tl! .cmd_root]
|
||||||
```
|
```
|
||||||
|
|
||||||
#### NTP status
|
#### NTP status
|
||||||
```log
|
```text
|
||||||
VC NTP CHECK
|
VC NTP CHECK
|
||||||
[FAIL] NTP and Host time are both disabled!
|
[FAIL] NTP and Host time are both disabled!
|
||||||
```
|
```
|
||||||
Oh yeah, let's turn that back on with `systemctl start ntpd`.
|
Oh yeah, let's turn that back on with `systemctl start ntpd`.
|
||||||
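Something like this from a root shell on the VCSA takes care of it; enabling the service so it stays on after future reboots is my own addition, not something VDT specifically asks for:

```shell
systemctl start ntpd
systemctl enable ntpd   # optional: keep NTP running across reboots
```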
|
|
||||||
#### Account status
|
#### Account status
|
||||||
```log
|
```text
|
||||||
Root Account Check
|
Root Account Check
|
||||||
[FAIL] Root password expires in 13 days
|
[FAIL] Root password expires in 13 days
|
||||||
Please search for 'Change the Password of the Root User'
|
Please search for 'Change the Password of the Root User'
|
||||||
|
@ -187,14 +184,14 @@ Oh yeah, let's turn that back on with `systemctl start ntpd`.
|
||||||
```
|
```
|
||||||
That's a good thing to know. I'll [take care of that](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vcenter.configuration.doc/GUID-48BAF973-4FD3-4FF3-B1B6-5F7286C9B59A.html) while I'm thinking about it.
|
That's a good thing to know. I'll [take care of that](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vcenter.configuration.doc/GUID-48BAF973-4FD3-4FF3-B1B6-5F7286C9B59A.html) while I'm thinking about it.
|
||||||
|
|
||||||
```commandroot
|
```shell
|
||||||
chage -M -1 -E -1 root
|
chage -M -1 -E -1 root # [tl! .cmd_root]
|
||||||
```
|
```
|
||||||
|
|
||||||
#### Recheck
|
#### Recheck
|
||||||
Now that I've corrected these issues, I can run VDT again to confirm that everything is back in a good state:
|
Now that I've corrected these issues, I can run VDT again to confirm that everything is back in a good state:
|
||||||
|
|
||||||
```log {hl_lines=[8,9,"25-27",32,35,"55-56",59]}
|
```text {hl_lines=[8,9,"25-27",32,35,"55-56",59]}
|
||||||
VCENTER BASIC INFO
|
VCENTER BASIC INFO
|
||||||
BASIC:
|
BASIC:
|
||||||
Current Time: 2022-08-28 20:13:25.192503
|
Current Time: 2022-08-28 20:13:25.192503
|
||||||
|
@ -202,7 +199,7 @@ Now that I've corrected these issues, I can run VDT again to confirm that everyt
|
||||||
vCenter Load Average: 0.28, 0.14, 0.10
|
vCenter Load Average: 0.28, 0.14, 0.10
|
||||||
Number of CPUs: 2
|
Number of CPUs: 2
|
||||||
Total Memory: 11.71
|
Total Memory: 11.71
|
||||||
vCenter Hostname: vcsa.lab.bowdre.net
|
vCenter Hostname: vcsa.lab.bowdre.net # [tl! highlight:1]
|
||||||
vCenter PNID: vcsa.lab.bowdre.net
|
vCenter PNID: vcsa.lab.bowdre.net
|
||||||
vCenter IP Address: 192.168.1.12
|
vCenter IP Address: 192.168.1.12
|
||||||
Proxy Configured: "no"
|
Proxy Configured: "no"
|
||||||
|
@ -219,20 +216,20 @@ DETAILS:
|
||||||
[...]
|
[...]
|
||||||
Nameserver Queries
|
Nameserver Queries
|
||||||
192.168.1.5
|
192.168.1.5
|
||||||
[PASS] DNS with UDP - resolved vcsa.lab.bowdre.net to 192.168.1.12
|
[PASS] DNS with UDP - resolved vcsa.lab.bowdre.net to 192.168.1.12 # [tl! highlight:2]
|
||||||
[PASS] Reverse DNS - resolved 192.168.1.12 to vcsa.lab.bowdre.net
|
[PASS] Reverse DNS - resolved 192.168.1.12 to vcsa.lab.bowdre.net
|
||||||
[PASS] DNS with TCP - resolved vcsa.lab.bowdre.net to 192.168.1.12
|
[PASS] DNS with TCP - resolved vcsa.lab.bowdre.net to 192.168.1.12
|
||||||
Commands used:
|
Commands used:
|
||||||
dig +short <fqdn> <nameserver>
|
dig +short <fqdn> <nameserver>
|
||||||
dig +noall +answer -x <ip> <namserver>
|
dig +noall +answer -x <ip> <namserver>
|
||||||
dig +short +tcp <fqdn> <nameserver>
|
dig +short +tcp <fqdn> <nameserver>
|
||||||
RESULT: [PASS]
|
RESULT: [PASS] # [tl! highlight]
|
||||||
[...]
|
[...]
|
||||||
CORE FILE CHECK
|
CORE FILE CHECK
|
||||||
[PASS] Number of core files: 0
|
[PASS] Number of core files: 0 # [tl! highlight:1]
|
||||||
[PASS] Number of hprof files: 0
|
[PASS] Number of hprof files: 0
|
||||||
[...]
|
[...]
|
||||||
NTP Status Check
|
NTP Status Check # [tl! collapse:start]
|
||||||
+-----------------------------------LEGEND-----------------------------------+
|
+-----------------------------------LEGEND-----------------------------------+
|
||||||
| remote: NTP peer server |
|
| remote: NTP peer server |
|
||||||
| refid: server that this peer gets its time from |
|
| refid: server that this peer gets its time from |
|
||||||
|
@ -246,14 +243,14 @@ NTP Status Check
|
||||||
| + Peer selected for possible synchronization |
|
| + Peer selected for possible synchronization |
|
||||||
| – Peer is a candidate for selection |
|
| – Peer is a candidate for selection |
|
||||||
| ~ Peer is statically configured |
|
| ~ Peer is statically configured |
|
||||||
+----------------------------------------------------------------------------+
|
+----------------------------------------------------------------------------+ # [tl! collapse:end]
|
||||||
remote refid st t when poll reach delay offset jitter
|
remote refid st t when poll reach delay offset jitter
|
||||||
==============================================================================
|
==============================================================================
|
||||||
*104.171.113.34 130.207.244.240 2 u 1 64 17 16.831 -34.597 0.038
|
*104.171.113.34 130.207.244.240 2 u 1 64 17 16.831 -34.597 0.038
|
||||||
RESULT: [PASS]
|
RESULT: [PASS] # [tl! highlight]
|
||||||
[...]
|
[...]
|
||||||
Root Account Check
|
Root Account Check
|
||||||
[PASS] Root password never expires
|
[PASS] Root password never expires # [tl! highlight]
|
||||||
```
|
```
|
||||||
All better!
|
All better!
|
||||||
|
|
||||||
|
|
|
@ -28,29 +28,29 @@ I found that the quite-popular [Minimal Mistakes](https://mademistakes.com/work/
|
||||||
A quick `git clone` operation was sufficient to create a local copy of my new site in my Lenovo Chromebook Duet's [Linux environment](/setting-up-linux-on-a-new-lenovo-chromebook-duet-bonus-arm64-complications). That lets me easily create and edit Markdown posts or configuration files with VS Code, commit them to the local copy of the repo, and then push them back to GitHub when I'm ready to publish the changes.
|
A quick `git clone` operation was sufficient to create a local copy of my new site in my Lenovo Chromebook Duet's [Linux environment](/setting-up-linux-on-a-new-lenovo-chromebook-duet-bonus-arm64-complications). That lets me easily create and edit Markdown posts or configuration files with VS Code, commit them to the local copy of the repo, and then push them back to GitHub when I'm ready to publish the changes.
|
||||||
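That step looks roughly like this; the repository URL and target path are assumptions based on the project paths that appear in the Jekyll output further down:

```shell
git clone https://github.com/jbowdre/jbowdre.github.io.git ~/projects/jbowdre.github.io
cd ~/projects/jbowdre.github.io
```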
|
|
||||||
In order to view the local changes, I needed to install Jekyll locally as well. I started by installing Ruby and other prerequisites:
|
In order to view the local changes, I needed to install Jekyll locally as well. I started by installing Ruby and other prerequisites:
|
||||||
```command
|
```shell
|
||||||
sudo apt-get install ruby-full build-essential zlib1g-dev
|
sudo apt-get install ruby-full build-essential zlib1g-dev # [tl! .cmd]
|
||||||
```
|
```
|
||||||
|
|
||||||
I added the following to my `~/.zshrc` file so that the gems would be installed under my home directory rather than somewhere more privileged:
|
I added the following to my `~/.zshrc` file so that the gems would be installed under my home directory rather than somewhere more privileged:
|
||||||
```command
|
```shell
|
||||||
export GEM_HOME="$HOME/gems"
|
export GEM_HOME="$HOME/gems" # [tl! .cmd:1]
|
||||||
export PATH="$HOME/gems/bin:$PATH"
|
export PATH="$HOME/gems/bin:$PATH"
|
||||||
```
|
```
|
||||||
|
|
||||||
And then ran `source ~/.zshrc` so the change would take immediate effect.
|
And then ran `source ~/.zshrc` so the change would take immediate effect.
|
||||||
|
|
||||||
I could then install Jekyll:
|
I could then install Jekyll:
|
||||||
```command
|
```shell
|
||||||
gem install jekyll bundler
|
gem install jekyll bundler # [tl! .cmd]
|
||||||
```
|
```
|
||||||
|
|
||||||
I then `cd`ed to the local repo and ran `bundle install` to also load up the components specified in the repo's `Gemfile`.
|
I then `cd`ed to the local repo and ran `bundle install` to also load up the components specified in the repo's `Gemfile`.
|
||||||
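Spelled out (using the repo path that shows up in the server output below), that's just:

```shell
cd ~/projects/jbowdre.github.io
bundle install
```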
|
|
||||||
And, finally, I can run this to start up the local Jekyll server instance:
|
And, finally, I can run this to start up the local Jekyll server instance:
|
||||||
```command-session
|
```shell
|
||||||
bundle exec jekyll serve -l --drafts
|
bundle exec jekyll serve -l --drafts # [tl! .cmd]
|
||||||
Configuration file: /home/jbowdre/projects/jbowdre.github.io/_config.yml
|
Configuration file: /home/jbowdre/projects/jbowdre.github.io/_config.yml # [tl! .nocopy:start]
|
||||||
Source: /home/jbowdre/projects/jbowdre.github.io
|
Source: /home/jbowdre/projects/jbowdre.github.io
|
||||||
Destination: /home/jbowdre/projects/jbowdre.github.io/_site
|
Destination: /home/jbowdre/projects/jbowdre.github.io/_site
|
||||||
Incremental build: enabled
|
Incremental build: enabled
|
||||||
|
@ -62,7 +62,7 @@ Configuration file: /home/jbowdre/projects/jbowdre.github.io/_config.yml
|
||||||
Auto-regeneration: enabled for '/home/jbowdre/projects/jbowdre.github.io'
|
Auto-regeneration: enabled for '/home/jbowdre/projects/jbowdre.github.io'
|
||||||
LiveReload address: http://0.0.0.0:35729
|
LiveReload address: http://0.0.0.0:35729
|
||||||
Server address: http://0.0.0.0:4000
|
Server address: http://0.0.0.0:4000
|
||||||
Server running... press ctrl-c to stop.
|
Server running... press ctrl-c to stop. # [tl! .nocopy:end]
|
||||||
```
|
```
|
||||||
|
|
||||||
And there it is!
|
And there it is!
|
||||||
|
|
|
@ -12,8 +12,8 @@ tags:
|
||||||
- meta
|
- meta
|
||||||
---
|
---
|
||||||
|
|
||||||
```command
|
```shell
|
||||||
cp -a virtuallypotato.com runtimeterror.dev
|
cp -a virtuallypotato.com runtimeterror.dev # [tl! .cmd:2]
|
||||||
rm -rf virtuallypotato.com
|
rm -rf virtuallypotato.com
|
||||||
ln -s virtuallypotato.com runtimeterror.dev
|
ln -s virtuallypotato.com runtimeterror.dev
|
||||||
```
|
```
|
||||||
|
|
|
@ -94,15 +94,15 @@ Wouldn't it be great if the VMs that are going to be deployed on those `1610`, `
|
||||||
|
|
||||||
After logging in to the VM, I entered the router's configuration mode:
|
After logging in to the VM, I entered the router's configuration mode:
|
||||||
|
|
||||||
```command-session
|
```shell
|
||||||
configure
|
configure # [tl! .cmd]
|
||||||
[edit]
|
[edit] # [tl! .nocopy]
|
||||||
```
|
```
|
||||||
|
|
||||||
I then started with setting up the interfaces - `eth0` for the `192.168.1.0/24` network, `eth1` on the trunked portgroup, and a number of VIFs on `eth1` to handle the individual VLANs I'm interested in using.
|
I then started with setting up the interfaces - `eth0` for the `192.168.1.0/24` network, `eth1` on the trunked portgroup, and a number of VIFs on `eth1` to handle the individual VLANs I'm interested in using.
|
||||||
|
|
||||||
```commandroot
|
```shell
|
||||||
set interfaces ethernet eth0 address '192.168.1.8/24'
|
set interfaces ethernet eth0 address '192.168.1.8/24' # [tl! .cmd_root:start]
|
||||||
set interfaces ethernet eth0 description 'Outside'
|
set interfaces ethernet eth0 description 'Outside'
|
||||||
set interfaces ethernet eth1 mtu '9000'
|
set interfaces ethernet eth1 mtu '9000'
|
||||||
set interfaces ethernet eth1 vif 1610 address '172.16.10.1/24'
|
set interfaces ethernet eth1 vif 1610 address '172.16.10.1/24'
|
||||||
|
@ -117,13 +117,13 @@ set interfaces ethernet eth1 vif 1630 mtu '1500'
|
||||||
set interfaces ethernet eth1 vif 1698 description 'VLAN 1698 for vSAN'
|
set interfaces ethernet eth1 vif 1698 description 'VLAN 1698 for vSAN'
|
||||||
set interfaces ethernet eth1 vif 1698 mtu '9000'
|
set interfaces ethernet eth1 vif 1698 mtu '9000'
|
||||||
set interfaces ethernet eth1 vif 1699 description 'VLAN 1699 for vMotion'
|
set interfaces ethernet eth1 vif 1699 description 'VLAN 1699 for vMotion'
|
||||||
set interfaces ethernet eth1 vif 1699 mtu '9000'
|
set interfaces ethernet eth1 vif 1699 mtu '9000' # [tl! .cmd_root:end]
|
||||||
```
|
```
|
||||||
|
|
||||||
I also set up NAT for the networks that should be routable:
|
I also set up NAT for the networks that should be routable:
|
||||||
|
|
||||||
```commandroot
|
```shell
|
||||||
set nat source rule 10 outbound-interface 'eth0'
|
set nat source rule 10 outbound-interface 'eth0' # [tl! .cmd_root:start]
|
||||||
set nat source rule 10 source address '172.16.10.0/24'
|
set nat source rule 10 source address '172.16.10.0/24'
|
||||||
set nat source rule 10 translation address 'masquerade'
|
set nat source rule 10 translation address 'masquerade'
|
||||||
set nat source rule 20 outbound-interface 'eth0'
|
set nat source rule 20 outbound-interface 'eth0'
|
||||||
|
@ -134,13 +134,13 @@ set nat source rule 30 source address '172.16.30.0/24'
|
||||||
set nat source rule 30 translation address 'masquerade'
|
set nat source rule 30 translation address 'masquerade'
|
||||||
set nat source rule 100 outbound-interface 'eth0'
|
set nat source rule 100 outbound-interface 'eth0'
|
||||||
set nat source rule 100 translation address 'masquerade'
|
set nat source rule 100 translation address 'masquerade'
|
||||||
set protocols static route 0.0.0.0/0 next-hop 192.168.1.1
|
set protocols static route 0.0.0.0/0 next-hop 192.168.1.1 # [tl! .cmd_root:end]
|
||||||
```
|
```
|
||||||
|
|
||||||
And I configured DNS forwarding:
|
And I configured DNS forwarding:
|
||||||
|
|
||||||
```commandroot
|
```shell
|
||||||
set service dns forwarding allow-from '0.0.0.0/0'
|
set service dns forwarding allow-from '0.0.0.0/0' # [tl! .cmd_root:start]
|
||||||
set service dns forwarding domain 10.16.172.in-addr.arpa. server '192.168.1.5'
|
set service dns forwarding domain 10.16.172.in-addr.arpa. server '192.168.1.5'
|
||||||
set service dns forwarding domain 20.16.172.in-addr.arpa. server '192.168.1.5'
|
set service dns forwarding domain 20.16.172.in-addr.arpa. server '192.168.1.5'
|
||||||
set service dns forwarding domain 30.16.172.in-addr.arpa. server '192.168.1.5'
|
set service dns forwarding domain 30.16.172.in-addr.arpa. server '192.168.1.5'
|
||||||
|
@ -148,13 +148,13 @@ set service dns forwarding domain lab.bowdre.net server '192.168.1.5'
|
||||||
set service dns forwarding listen-address '172.16.10.1'
|
set service dns forwarding listen-address '172.16.10.1'
|
||||||
set service dns forwarding listen-address '172.16.20.1'
|
set service dns forwarding listen-address '172.16.20.1'
|
||||||
set service dns forwarding listen-address '172.16.30.1'
|
set service dns forwarding listen-address '172.16.30.1'
|
||||||
set service dns forwarding name-server '192.168.1.1'
|
set service dns forwarding name-server '192.168.1.1' # [tl! .cmd_root:end]
|
||||||
```
|
```
|
||||||
|
|
||||||
Finally, I also configured VyOS's DHCP server so that I won't have to statically configure the networking for VMs deployed from vRA:
|
Finally, I also configured VyOS's DHCP server so that I won't have to statically configure the networking for VMs deployed from vRA:
|
||||||
|
|
||||||
```commandroot
|
```shell
|
||||||
set service dhcp-server shared-network-name SCOPE_10_MGMT authoritative
|
set service dhcp-server shared-network-name SCOPE_10_MGMT authoritative # [tl! .cmd_root:start]
|
||||||
set service dhcp-server shared-network-name SCOPE_10_MGMT subnet 172.16.10.0/24 default-router '172.16.10.1'
|
set service dhcp-server shared-network-name SCOPE_10_MGMT subnet 172.16.10.0/24 default-router '172.16.10.1'
|
||||||
set service dhcp-server shared-network-name SCOPE_10_MGMT subnet 172.16.10.0/24 dns-server '192.168.1.5'
|
set service dhcp-server shared-network-name SCOPE_10_MGMT subnet 172.16.10.0/24 dns-server '192.168.1.5'
|
||||||
set service dhcp-server shared-network-name SCOPE_10_MGMT subnet 172.16.10.0/24 domain-name 'lab.bowdre.net'
|
set service dhcp-server shared-network-name SCOPE_10_MGMT subnet 172.16.10.0/24 domain-name 'lab.bowdre.net'
|
||||||
|
@ -174,7 +174,7 @@ set service dhcp-server shared-network-name SCOPE_30_SERVERS subnet 172.16.30.0/
|
||||||
set service dhcp-server shared-network-name SCOPE_30_SERVERS subnet 172.16.30.0/24 domain-name 'lab.bowdre.net'
|
set service dhcp-server shared-network-name SCOPE_30_SERVERS subnet 172.16.30.0/24 domain-name 'lab.bowdre.net'
|
||||||
set service dhcp-server shared-network-name SCOPE_30_SERVERS subnet 172.16.30.0/24 lease '86400'
|
set service dhcp-server shared-network-name SCOPE_30_SERVERS subnet 172.16.30.0/24 lease '86400'
|
||||||
set service dhcp-server shared-network-name SCOPE_30_SERVERS subnet 172.16.30.0/24 range 0 start '172.16.30.100'
|
set service dhcp-server shared-network-name SCOPE_30_SERVERS subnet 172.16.30.0/24 range 0 start '172.16.30.100'
|
||||||
set service dhcp-server shared-network-name SCOPE_30_SERVERS subnet 172.16.30.0/24 range 0 stop '172.16.30.200'
|
set service dhcp-server shared-network-name SCOPE_30_SERVERS subnet 172.16.30.0/24 range 0 stop '172.16.30.200' # [tl! .cmd_root:end]
|
||||||
```
|
```
|
||||||
|
|
||||||
Satisfied with my work, I ran the `commit` and `save` commands. BOOM, this server jockey just configured a router!
|
Satisfied with my work, I ran the `commit` and `save` commands. BOOM, this server jockey just configured a router!
|
||||||
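For completeness, those closing steps inside VyOS configuration mode are simply:

```shell
commit
save
```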
|
@ -211,9 +211,9 @@ I migrated the physical NICs and `vmk0` to the new dvSwitch, and then created ne
|
||||||
|
|
||||||
I then ssh'd into the hosts and used `vmkping` to make sure they could talk to each other over these interfaces. I changed the vMotion interface to use the vMotion TCP/IP stack, so I needed to append the `-S vmotion` flag to the command:
|
I then ssh'd into the hosts and used `vmkping` to make sure they could talk to each other over these interfaces. I changed the vMotion interface to use the vMotion TCP/IP stack, so I needed to append the `-S vmotion` flag to the command:
|
||||||
|
|
||||||
```commandroot-session
|
```shell
|
||||||
vmkping -I vmk1 172.16.98.22
|
vmkping -I vmk1 172.16.98.22 # [tl! .cmd_root]
|
||||||
PING 172.16.98.22 (172.16.98.22): 56 data bytes
|
PING 172.16.98.22 (172.16.98.22): 56 data bytes # [tl! .nocopy:start]
|
||||||
64 bytes from 172.16.98.22: icmp_seq=0 ttl=64 time=0.243 ms
|
64 bytes from 172.16.98.22: icmp_seq=0 ttl=64 time=0.243 ms
|
||||||
64 bytes from 172.16.98.22: icmp_seq=1 ttl=64 time=0.260 ms
|
64 bytes from 172.16.98.22: icmp_seq=1 ttl=64 time=0.260 ms
|
||||||
64 bytes from 172.16.98.22: icmp_seq=2 ttl=64 time=0.262 ms
|
64 bytes from 172.16.98.22: icmp_seq=2 ttl=64 time=0.262 ms
|
||||||
|
@ -221,17 +221,16 @@ PING 172.16.98.22 (172.16.98.22): 56 data bytes
|
||||||
--- 172.16.98.22 ping statistics ---
|
--- 172.16.98.22 ping statistics ---
|
||||||
3 packets transmitted, 3 packets received, 0% packet loss
|
3 packets transmitted, 3 packets received, 0% packet loss
|
||||||
round-trip min/avg/max = 0.243/0.255/0.262 ms
|
round-trip min/avg/max = 0.243/0.255/0.262 ms
|
||||||
```
|
# [tl! .nocopy:end]
|
||||||
```commandroot-session
|
vmkping -I vmk2 172.16.99.22 -S vmotion # [tl! .cmd_root]
|
||||||
vmkping -I vmk2 172.16.99.22 -S vmotion
|
PING 172.16.99.22 (172.16.99.22): 56 data bytes # [tl! .nocopy:start]
|
||||||
PING 172.16.99.22 (172.16.99.22): 56 data bytes
|
|
||||||
64 bytes from 172.16.99.22: icmp_seq=0 ttl=64 time=0.202 ms
|
64 bytes from 172.16.99.22: icmp_seq=0 ttl=64 time=0.202 ms
|
||||||
64 bytes from 172.16.99.22: icmp_seq=1 ttl=64 time=0.312 ms
|
64 bytes from 172.16.99.22: icmp_seq=1 ttl=64 time=0.312 ms
|
||||||
64 bytes from 172.16.99.22: icmp_seq=2 ttl=64 time=0.242 ms
|
64 bytes from 172.16.99.22: icmp_seq=2 ttl=64 time=0.242 ms
|
||||||
|
|
||||||
--- 172.16.99.22 ping statistics ---
|
--- 172.16.99.22 ping statistics ---
|
||||||
3 packets transmitted, 3 packets received, 0% packet loss
|
3 packets transmitted, 3 packets received, 0% packet loss
|
||||||
round-trip min/avg/max = 0.202/0.252/0.312 ms
|
round-trip min/avg/max = 0.202/0.252/0.312 ms # [tl! .nocopy:end]
|
||||||
```
|
```
|
||||||
|
|
||||||
Okay, time to throw some vSAN on these hosts. Select the cluster object, go to the configuration tab, scroll down to vSAN, and click "Turn on vSAN". This will be a single site cluster, and I don't need to enable any additional services. When prompted, I claim the 8GB drives for the cache tier and the 16GB drives for capacity.
|
Okay, time to throw some vSAN on these hosts. Select the cluster object, go to the configuration tab, scroll down to vSAN, and click "Turn on vSAN". This will be a single site cluster, and I don't need to enable any additional services. When prompted, I claim the 8GB drives for the cache tier and the 16GB drives for capacity.
|
||||||
|
|
|
@ -25,7 +25,8 @@ So this will generate a name that looks something like `[user]_[catalog_item]_[s
|
||||||
That does mean that I'll need to add another vRO call, but I can set this up so that it only gets triggered once, when the form loads, instead of refreshing each time the inputs change.
|
That does mean that I'll need to add another vRO call, but I can set this up so that it only gets triggered once, when the form loads, instead of refreshing each time the inputs change.
|
||||||
|
|
||||||
So I hop over to vRO and create a new action, which I call `getTimestamp`. It doesn't require any inputs, and returns a single string. Here's the code:
|
So I hop over to vRO and create a new action, which I call `getTimestamp`. It doesn't require any inputs, and returns a single string. Here's the code:
|
||||||
```js {linenos=true}
|
```javascript
|
||||||
|
// torchlight! {"lineNumbers": true}
|
||||||
// JavaScript: getTimestamp action
|
// JavaScript: getTimestamp action
|
||||||
// Inputs: None
|
// Inputs: None
|
||||||
// Returns: result (String)
|
// Returns: result (String)
|
||||||
|
|
|
@ -58,7 +58,8 @@ I also went ahead and specified that the action will return a String.
|
||||||
|
|
||||||
And now for the code. I really just want to mash all those variables together into a long string, and I'll also add a timestamp to make sure each deployment name is truly unique.
|
And now for the code. I really just want to mash all those variables together into a long string, and I'll also add a timestamp to make sure each deployment name is truly unique.
|
||||||
|
|
||||||
```js {linenos=true}
|
```javascript
|
||||||
|
// torchlight! {"lineNumbers": true}
|
||||||
// JavaScript: createDeploymentName
|
// JavaScript: createDeploymentName
|
||||||
// Inputs: catalogItemName (String), requestedByName (String), siteCode (String),
|
// Inputs: catalogItemName (String), requestedByName (String), siteCode (String),
|
||||||
// envCode (String), functionCode (String), appCode (String)
|
// envCode (String), functionCode (String), appCode (String)
|
||||||
|
@ -126,7 +127,8 @@ This gets filed under the existing `CustomProvisioning` folder, and I name it `n
|
||||||
I created a new action named (appropriately) `getNetworksForSite`. This will accept `siteCode (String)` as its input from the Service Broker request form, and will return an array of strings containing the available networks.
|
I created a new action named (appropriately) `getNetworksForSite`. This will accept `siteCode (String)` as its input from the Service Broker request form, and will return an array of strings containing the available networks.
|
||||||
![getNetworksForSite action](IdrT-Un8H1.png)
|
![getNetworksForSite action](IdrT-Un8H1.png)
|
||||||
|
|
||||||
```js {linenos=true}
|
```javascript
|
||||||
|
// torchlight! {"lineNumbers": true}
|
||||||
// JavaScript: getNetworksForSite
|
// JavaScript: getNetworksForSite
|
||||||
// Inputs: siteCode (String)
|
// Inputs: siteCode (String)
|
||||||
// Returns: site.value (Array/String)
|
// Returns: site.value (Array/String)
|
||||||
|
@ -163,7 +165,8 @@ inputs:
|
||||||
|
|
||||||
and update the resource configuration for the network entity to constrain it based on `input.network` instead of `input.site` as before:
|
and update the resource configuration for the network entity to constrain it based on `input.network` instead of `input.site` as before:
|
||||||
|
|
||||||
```yaml {linenos=true}
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
resources:
|
resources:
|
||||||
Cloud_vSphere_Machine_1:
|
Cloud_vSphere_Machine_1:
|
||||||
type: Cloud.vSphere.Machine
|
type: Cloud.vSphere.Machine
|
||||||
|
|
|
@ -57,7 +57,8 @@ Now it's time to leave the Infrastructure tab and visit the Design one, where I'
|
||||||
![My first Cloud Template!](RtMljqM9x.png)
|
![My first Cloud Template!](RtMljqM9x.png)
|
||||||
|
|
||||||
VMware's got a [pretty great document](https://docs.vmware.com/en/vRealize-Automation/8.3/Using-and-Managing-Cloud-Assembly/GUID-6BA1DA96-5C20-44BF-9C81-F8132B9B4872.html#list-of-input-properties-2) describing the syntax for these input properties, plus a lot of it is kind of self-explanatory. Let's step through this real quick:
|
VMware's got a [pretty great document](https://docs.vmware.com/en/vRealize-Automation/8.3/Using-and-Managing-Cloud-Assembly/GUID-6BA1DA96-5C20-44BF-9C81-F8132B9B4872.html#list-of-input-properties-2) describing the syntax for these input properties, plus a lot of it is kind of self-explanatory. Let's step through this real quick:
|
||||||
```yaml {linenos=true}
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
formatVersion: 1
|
formatVersion: 1
|
||||||
inputs:
|
inputs:
|
||||||
# Image Mapping
|
# Image Mapping
|
||||||
|
@ -73,7 +74,8 @@ inputs:
|
||||||
|
|
||||||
The first input is going to ask the user to select the desired Operating System for this deployment. The `oneOf` type will be presented as a dropdown (with only one option in this case, but I'll leave it this way for future flexibility); the user will see the friendly "Windows Server 2019" `title` which is tied to the `ws2019` `const` value. For now, I'll also set the `default` value of the field so I don't have to actually click the dropdown each time I test the deployment.
|
The first input is going to ask the user to select the desired Operating System for this deployment. The `oneOf` type will be presented as a dropdown (with only one option in this case, but I'll leave it this way for future flexibility); the user will see the friendly "Windows Server 2019" `title` which is tied to the `ws2019` `const` value. For now, I'll also set the `default` value of the field so I don't have to actually click the dropdown each time I test the deployment.
|
||||||
|
|
||||||
```yaml {linenos=true}
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
# Flavor Mapping
|
# Flavor Mapping
|
||||||
size:
|
size:
|
||||||
title: Resource Size
|
title: Resource Size
|
||||||
|
@ -92,7 +94,8 @@ Now I'm asking the user to pick the t-shirt size of the VM. These will correspon
|
||||||
|
|
||||||
The `resources` section is where the data from the inputs gets applied to the deployment:
|
The `resources` section is where the data from the inputs gets applied to the deployment:
|
||||||
|
|
||||||
```yaml {linenos=true}
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
resources:
|
resources:
|
||||||
Cloud_vSphere_Machine_1:
|
Cloud_vSphere_Machine_1:
|
||||||
type: Cloud.vSphere.Machine
|
type: Cloud.vSphere.Machine
|
||||||
|
@ -112,7 +115,8 @@ So I'm connecting the selected `input.image` to the Image Mapping configured in
|
||||||
|
|
||||||
All together now:
|
All together now:
|
||||||
|
|
||||||
```yaml {linenos=true}
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
formatVersion: 1
|
formatVersion: 1
|
||||||
inputs:
|
inputs:
|
||||||
# Image Mapping
|
# Image Mapping
|
||||||
|
@ -188,7 +192,8 @@ I'll also use the `net:bow` and `net:dre` tags to logically divide up the networ
|
||||||
|
|
||||||
I can now add an input to the Cloud Template so the user can pick which site they need to deploy to:
|
I can now add an input to the Cloud Template so the user can pick which site they need to deploy to:
|
||||||
|
|
||||||
```yaml {linenos=true}
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
inputs:
|
inputs:
|
||||||
# Datacenter location
|
# Datacenter location
|
||||||
site:
|
site:
|
||||||
|
@ -204,7 +209,8 @@ I'm using the `enum` option now instead of `oneOf` since the site names shouldn'
|
||||||
|
|
||||||
And then I'll add some `constraints` to the `resources` section, making use of the `to_lower` function from the [cloud template expression syntax](https://docs.vmware.com/en/vRealize-Automation/8.3/Using-and-Managing-Cloud-Assembly/GUID-12F0BC64-6391-4E5F-AA48-C5959024F3EB.html) to automatically convert the selected site name from all-caps to lowercase so it matches the appropriate tag:
|
And then I'll add some `constraints` to the `resources` section, making use of the `to_lower` function from the [cloud template expression syntax](https://docs.vmware.com/en/vRealize-Automation/8.3/Using-and-Managing-Cloud-Assembly/GUID-12F0BC64-6391-4E5F-AA48-C5959024F3EB.html) to automatically convert the selected site name from all-caps to lowercase so it matches the appropriate tag:
|
||||||
|
|
||||||
```yaml {linenos=true}
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
resources:
|
resources:
|
||||||
Cloud_vSphere_Machine_1:
|
Cloud_vSphere_Machine_1:
|
||||||
type: Cloud.vSphere.Machine
|
type: Cloud.vSphere.Machine
|
||||||
|
|
|
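The constraint itself is cut off by this hunk, so here's a hedged sketch of the shape it takes; the tag expression is illustrative, since the placement tags it has to match aren't shown here:

```yaml
resources:
  Cloud_vSphere_Machine_1:
    type: Cloud.vSphere.Machine
    properties:
      constraints:
        # illustrative: lowercase the selected site so it lines up with however
        # the placement tags were actually defined on the compute resources
        - tag: '${to_lower(input.site)}'
```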
@ -40,7 +40,8 @@ Since I try to keep things modular, I'm going to write a new vRO action within t
|
||||||
|
|
||||||
It's basically going to loop through the Active Directory hosts defined in vRO and search each for a matching computer name. Here's the full code:
|
It's basically going to loop through the Active Directory hosts defined in vRO and search each for a matching computer name. Here's the full code:
|
||||||
|
|
||||||
```js {linenos=true}
|
```javascript
|
||||||
|
// torchlight! {"lineNumbers": true}
|
||||||
// JavaScript: checkForAdConflict action
|
// JavaScript: checkForAdConflict action
|
||||||
// Inputs: computerName (String)
|
// Inputs: computerName (String)
|
||||||
// Outputs: (Boolean)
|
// Outputs: (Boolean)
|
||||||
|
@ -65,7 +66,8 @@ Now I can pop back over to my massive `Generate unique hostname` workflow and dr
|
||||||
|
|
||||||
I'm using this as a scriptable task so that I can do a little bit of processing before I call the action I created earlier - namely, if `conflict (Boolean)` was already set, the task should skip any further processing. That does mean that I'll need to call the action by both its module and name using `System.getModule("net.bowdre.utility").checkForAdConflict(candidateVmName)`. So here's the full script:
|
I'm using this as a scriptable task so that I can do a little bit of processing before I call the action I created earlier - namely, if `conflict (Boolean)` was already set, the task should skip any further processing. That does mean that I'll need to call the action by both its module and name using `System.getModule("net.bowdre.utility").checkForAdConflict(candidateVmName)`. So here's the full script:
|
||||||
|
|
||||||
```js {linenos=true}
|
```javascript
|
||||||
|
// torchlight! {"lineNumbers": true}
|
||||||
// JavaScript: check for AD conflict task
|
// JavaScript: check for AD conflict task
|
||||||
// Inputs: candidateVmName (String), conflict (Boolean)
|
// Inputs: candidateVmName (String), conflict (Boolean)
|
||||||
// Outputs: conflict (Boolean)
|
// Outputs: conflict (Boolean)
|
||||||
|
@ -103,22 +105,23 @@ Luckily, vRO does provide a way to import scripts bundled with their required mo
|
||||||
|
|
||||||
I start by creating a folder to store the script and needed module, and then I create the required `handler.ps1` file.
|
I start by creating a folder to store the script and needed module, and then I create the required `handler.ps1` file.
|
||||||
|
|
||||||
```command
|
```shell
|
||||||
mkdir checkDnsConflicts
|
mkdir checkDnsConflicts # [tl! .cmd:2]
|
||||||
cd checkDnsConflicts
|
cd checkDnsConflicts
|
||||||
touch handler.ps1
|
touch handler.ps1
|
||||||
```
|
```
|
||||||
|
|
||||||
I then create a `Modules` folder and install the DnsClient-PS module:
|
I then create a `Modules` folder and install the DnsClient-PS module:
|
||||||
|
|
||||||
```command
|
```shell
|
||||||
mkdir Modules
|
mkdir Modules # [tl! .cmd:1]
|
||||||
pwsh -c "Save-Module -Name DnsClient-PS -Path ./Modules/ -Repository PSGallery"
|
pwsh -c "Save-Module -Name DnsClient-PS -Path ./Modules/ -Repository PSGallery"
|
||||||
```
|
```
|
||||||
|
|
||||||
And then it's time to write the PowerShell script in `handler.ps1`:
|
And then it's time to write the PowerShell script in `handler.ps1`:
|
||||||
|
|
||||||
```powershell {linenos=true}
|
```powershell
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
# PowerShell: checkForDnsConflict script
|
# PowerShell: checkForDnsConflict script
|
||||||
# Inputs: $inputs.hostname (String), $inputs.domain (String)
|
# Inputs: $inputs.hostname (String), $inputs.domain (String)
|
||||||
# Outputs: $queryresult (String)
|
# Outputs: $queryresult (String)
|
||||||
|
@ -147,9 +150,9 @@ function handler {
|
||||||
|
|
||||||
Now to package it up in a `.zip` which I can then import into vRO:
|
Now to package it up in a `.zip` which I can then import into vRO:
|
||||||
|
|
||||||
```command-session
|
```shell
|
||||||
zip -r --exclude=\*.zip -X checkDnsConflicts.zip .
|
zip -r --exclude=\*.zip -X checkDnsConflicts.zip . # [tl! .cmd]
|
||||||
adding: Modules/ (stored 0%)
|
adding: Modules/ (stored 0%) # [tl! .nocopy:start]
|
||||||
adding: Modules/DnsClient-PS/ (stored 0%)
|
adding: Modules/DnsClient-PS/ (stored 0%)
|
||||||
adding: Modules/DnsClient-PS/1.0.0/ (stored 0%)
|
adding: Modules/DnsClient-PS/1.0.0/ (stored 0%)
|
||||||
adding: Modules/DnsClient-PS/1.0.0/Public/ (stored 0%)
|
adding: Modules/DnsClient-PS/1.0.0/Public/ (stored 0%)
|
||||||
|
@ -170,10 +173,9 @@ zip -r --exclude=\*.zip -X checkDnsConflicts.zip .
|
||||||
adding: Modules/DnsClient-PS/1.0.0/DnsClient-PS.Format.ps1xml (deflated 80%)
|
adding: Modules/DnsClient-PS/1.0.0/DnsClient-PS.Format.ps1xml (deflated 80%)
|
||||||
adding: Modules/DnsClient-PS/1.0.0/DnsClient-PS.psd1 (deflated 59%)
|
adding: Modules/DnsClient-PS/1.0.0/DnsClient-PS.psd1 (deflated 59%)
|
||||||
adding: handler.ps1 (deflated 49%)
|
adding: handler.ps1 (deflated 49%)
|
||||||
```
|
# [tl! .nocopy:end]
|
||||||
```command-session
|
ls # [tl! .cmd]
|
||||||
ls
|
checkDnsConflicts.zip handler.ps1 Modules # [tl! .nocopy]
|
||||||
checkDnsConflicts.zip handler.ps1 Modules
|
|
||||||
```
|
```
|
||||||
|
|
||||||
#### checkForDnsConflict action (Deprecated)
|
#### checkForDnsConflict action (Deprecated)
|
||||||
|
@ -190,7 +192,8 @@ Just like with the `check for AD conflict` action, I'll add this onto the workfl
|
||||||
|
|
||||||
_[Update] The below script has been altered to drop the unneeded call to my homemade `checkForDnsConflict` action and instead use the built-in `System.resolveHostName()`. Thanks @powertim!_
|
_[Update] The below script has been altered to drop the unneeded call to my homemade `checkForDnsConflict` action and instead use the built-in `System.resolveHostName()`. Thanks @powertim!_
|
||||||
|
|
||||||
```js {linenos=true}
|
```javascript
|
||||||
|
// torchlight! {"lineNumbers": true}
|
||||||
// JavaScript: check for DNS conflict
|
// JavaScript: check for DNS conflict
|
||||||
// Inputs: candidateVmName (String), conflict (Boolean), requestProperties (Properties)
|
// Inputs: candidateVmName (String), conflict (Boolean), requestProperties (Properties)
|
||||||
// Outputs: conflict (Boolean)
|
// Outputs: conflict (Boolean)
|
||||||
|
|
|
@ -37,7 +37,8 @@ I'll start by adding those fields as inputs on my cloud template.
|
||||||
|
|
||||||
I already have a `site` input at the top of the template, used for selecting the deployment location. I'll leave that there:
|
I already have a `site` input at the top of the template, used for selecting the deployment location. I'll leave that there:
|
||||||
|
|
||||||
```yaml {linenos=true}
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
inputs:
|
inputs:
|
||||||
site:
|
site:
|
||||||
type: string
|
type: string
|
||||||
|
@ -49,7 +50,8 @@ inputs:
|
||||||
|
|
||||||
I'll add the rest of the naming components below the prompts for image selection and size, starting with a dropdown of environments to pick from:
|
I'll add the rest of the naming components below the prompts for image selection and size, starting with a dropdown of environments to pick from:
|
||||||
|
|
||||||
```yaml {linenos=true}
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
environment:
|
environment:
|
||||||
type: string
|
type: string
|
||||||
title: Environment
|
title: Environment
|
||||||
|
@ -62,7 +64,8 @@ I'll add the rest of the naming components below the prompts for image selection
|
||||||
|
|
||||||
And a dropdown for those function options:
|
And a dropdown for those function options:
|
||||||
|
|
||||||
```yaml {linenos=true}
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
function:
|
function:
|
||||||
type: string
|
type: string
|
||||||
title: Function Code
|
title: Function Code
|
||||||
|
@ -82,7 +85,8 @@ And a dropdown for those function options:
|
||||||
|
|
||||||
And finally a text entry field for the application descriptor. Note that this one includes the `minLength` and `maxLength` constraints to enforce the three-character format.
|
And finally a text entry field for the application descriptor. Note that this one includes the `minLength` and `maxLength` constraints to enforce the three-character format.
|
||||||
|
|
||||||
```yaml {linenos=true}
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
app:
|
app:
|
||||||
type: string
|
type: string
|
||||||
title: Application Code
|
title: Application Code
|
||||||
|
@ -95,7 +99,8 @@ And finally a text entry field for the application descriptor. Note that this on
|
||||||
|
|
||||||
I then need to map these inputs to the resource entity at the bottom of the template so that they can be passed to vRO as custom properties. All of these are direct mappings except for `environment` since I only want the first letter. I use the `substring()` function to achieve that, but wrap it in a conditional so that it won't implode if the environment hasn't been picked yet. I'm also going to add in a `dnsDomain` property that will be useful later when I need to query for DNS conflicts.
|
I then need to map these inputs to the resource entity at the bottom of the template so that they can be passed to vRO as custom properties. All of these are direct mappings except for `environment` since I only want the first letter. I use the `substring()` function to achieve that, but wrap it in a conditional so that it won't implode if the environment hasn't been picked yet. I'm also going to add in a `dnsDomain` property that will be useful later when I need to query for DNS conflicts.
|
||||||
|
|
||||||
```yaml {linenos=true}
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
resources:
|
resources:
|
||||||
Cloud_vSphere_Machine_1:
|
Cloud_vSphere_Machine_1:
|
||||||
type: Cloud.vSphere.Machine
|
type: Cloud.vSphere.Machine
|
||||||
|
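The property mappings themselves are cut off by this hunk, so as a hedged illustration of the idea (the exact expression and surrounding properties are assumptions, not a copy of the template):

```yaml
properties:
  # take just the first letter of the environment, but don't blow up if it's empty
  environment: '${input.environment != "" ? substring(input.environment, 0, 1) : ""}'
  # extra property to make the later DNS-conflict check easier
  dnsDomain: lab.bowdre.net
```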
@ -111,7 +116,8 @@ resources:
|
||||||
|
|
||||||
So here's the complete template:
|
So here's the complete template:
|
||||||
|
|
||||||
```yaml {linenos=true}
|
```yaml
|
||||||
|
# torchlight! {"lineNumbers": true}
|
||||||
formatVersion: 1
|
formatVersion: 1
|
||||||
inputs:
|
inputs:
|
||||||
site:
|
site:
|
||||||
|
@ -228,7 +234,8 @@ The first thing I'll want this workflow to do (particularly for testing) is to t
|
||||||
|
|
||||||
This action has a single input, a `Properties` object named `payload`. (By the way, vRO is pretty particular about variable typing so going forward I'll reference variables as `variableName (type)`.) Here's the JavaScript that will basically loop through each element and write the contents to the vRO debug log:
|
This action has a single input, a `Properties` object named `payload`. (By the way, vRO is pretty particular about variable typing so going forward I'll reference variables as `variableName (type)`.) Here's the JavaScript that will basically loop through each element and write the contents to the vRO debug log:
|
||||||
|
|
||||||
```js {linenos=true}
|
```javascript
|
||||||
|
// torchlight! {"lineNumbers": true}
|
||||||
// JavaScript: logPayloadProperties
|
// JavaScript: logPayloadProperties
|
||||||
// Inputs: payload (Properties)
|
// Inputs: payload (Properties)
|
||||||
// Outputs: none
|
// Outputs: none
|
||||||
|
@ -291,7 +298,8 @@ Anyway, I drop a Scriptable Task item onto the workflow canvas to handle parsing
|
||||||
|
|
||||||
The script for this is pretty straight-forward:
|
The script for this is pretty straight-forward:
|
||||||
|
|
||||||
```js {linenos=true}
|
```javascript
|
||||||
|
// torchlight! {"lineNumbers": true}
|
||||||
// JavaScript: parse payload
|
// JavaScript: parse payload
|
||||||
// Inputs: inputProperties (Properties)
|
// Inputs: inputProperties (Properties)
|
||||||
// Outputs: requestProperties (Properties), originalNames (Array/string)
|
// Outputs: requestProperties (Properties), originalNames (Array/string)
|
||||||
|
@ -333,7 +341,8 @@ Select **Output** at the top of the *New Variable* dialog and the complete the f
|
||||||
|
|
||||||
And here's the script for that task:
|
And here's the script for that task:
|
||||||
|
|
||||||
```js {linenos=true}
|
```javascript
|
||||||
|
// torchlight! {"lineNumbers": true}
|
||||||
// JavaScript: Apply new names
|
// JavaScript: Apply new names
|
||||||
// Inputs: inputProperties (Properties), newNames (Array/string)
|
// Inputs: inputProperties (Properties), newNames (Array/string)
|
||||||
// Outputs: resourceNames (Array/string)
|
// Outputs: resourceNames (Array/string)
|
||||||
|
@ -363,7 +372,8 @@ Okay, on to the schema. This workflow may take a little while to execute, and it
|
||||||
|
|
||||||
The script is very short:
|
The script is very short:
|
||||||
|
|
||||||
```js {linenos=true}
|
```javascript
|
||||||
|
// torchlight! {"lineNumbers": true}
|
||||||
// JavaScript: create lock
|
// JavaScript: create lock
|
||||||
// Inputs: lockOwner (String), lockId (String)
|
// Inputs: lockOwner (String), lockId (String)
|
||||||
// Outputs: none
|
// Outputs: none
|
||||||
|
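The body of that task is cut off here, but vRO's built-in locking mechanism is the obvious fit; a minimal sketch (an assumption on my part, not the post's exact code) looks like:

```javascript
// create lock: wait until this workflow run holds the named lock
LockingSystem.lockAndWait(lockId, lockOwner);
System.log("Acquired lock '" + lockId + "' for owner " + lockOwner);

// the matching "remove lock" task at the end of the workflow releases it:
// LockingSystem.unlock(lockId, lockOwner);
```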
@ -377,7 +387,8 @@ We're getting to the meat of the operation now - another scriptable task named `
|
||||||
![Task: generate hostnameBase](XATryy20y.png)
|
![Task: generate hostnameBase](XATryy20y.png)
|
||||||
|
|
||||||
|
|
||||||
```js {linenos=true}
|
```javascript
|
||||||
|
// torchlight! {"lineNumbers": true}
|
||||||
// JavaScript: generate hostnameBase
|
// JavaScript: generate hostnameBase
|
||||||
// Inputs: nameFormat (String), requestProperties (Properties), baseFormat (String)
|
// Inputs: nameFormat (String), requestProperties (Properties), baseFormat (String)
|
||||||
// Outputs: hostnameBase (String), digitCount (Number), hostnameSeq (Number)
|
// Outputs: hostnameBase (String), digitCount (Number), hostnameSeq (Number)
|
||||||
|
@ -415,7 +426,8 @@ I've only got the one vCenter in my lab. At work, I've got multiple vCenters so
|
||||||
Anyway, back to my "Generate unique hostname" workflow, where I'll add another scriptable task to prepare the vCenter SDK connection. This one doesn't require any inputs, but will output an array of `VC:SdkConnection` objects:
|
Anyway, back to my "Generate unique hostname" workflow, where I'll add another scriptable task to prepare the vCenter SDK connection. This one doesn't require any inputs, but will output an array of `VC:SdkConnection` objects:
|
||||||
![Task: prepare vCenter SDK connection](ByIWO66PC.png)
|
![Task: prepare vCenter SDK connection](ByIWO66PC.png)
|
||||||
|
|
||||||
```js {linenos=true}
|
```javascript
|
||||||
|
// torchlight! {"lineNumbers": true}
|
||||||
// JavaScript: prepare vCenter SDK connection
|
// JavaScript: prepare vCenter SDK connection
|
||||||
// Inputs: none
|
// Inputs: none
|
||||||
// Outputs: sdkConnections (Array/VC:SdkConnection)
|
// Outputs: sdkConnections (Array/VC:SdkConnection)
|
||||||
|
@ -432,7 +444,8 @@ Next, I'm going to drop another ForEach element onto the canvas. For each vCente
|
||||||
That `vmsByHost (Array/array)` object contains any and all VMs which match `hostnameBase (String)`, but they're broken down by the host they're running on. So I use a scriptable task to convert that array-of-arrays into a new array-of-strings containing just the VM names.
|
That `vmsByHost (Array/array)` object contains any and all VMs which match `hostnameBase (String)`, but they're broken down by the host they're running on. So I use a scriptable task to convert that array-of-arrays into a new array-of-strings containing just the VM names.
|
||||||
![Task: unpack results for all hosts](gIEFRnilq.png)
|
![Task: unpack results for all hosts](gIEFRnilq.png)
|
||||||
|
|
||||||
```js {linenos=true}
|
```javascript
|
||||||
|
// torchlight! {"lineNumbers": true}
|
||||||
// JavaScript: unpack results for all hosts
|
// JavaScript: unpack results for all hosts
|
||||||
// Inputs: vmsByHost (Array/Array)
|
// Inputs: vmsByHost (Array/Array)
|
||||||
// Outputs: vmNames (Array/string)
|
// Outputs: vmNames (Array/string)
|
||||||
|
@ -453,7 +466,8 @@ vmNames = vms.map(function(i) {return (i.displayName).toUpperCase()})
|
||||||
This scriptable task will check the `computerNames` configuration element we created earlier to see if we've already named a VM starting with `hostnameBase (String)`. If such a name exists, we'll increment the number at the end by one, and return that as a new `hostnameSeq (Number)` variable; if it's the first of its kind, `hostnameSeq (Number)` will be set to `1`. And then we'll combine `hostnameBase (String)` and `hostnameSeq (Number)` to create the new `candidateVmName (String)`. If things don't work out, this script will throw `errMsg (String)` so I need to add that as an output exception binding as well.
|
This scriptable task will check the `computerNames` configuration element we created earlier to see if we've already named a VM starting with `hostnameBase (String)`. If such a name exists, we'll increment the number at the end by one, and return that as a new `hostnameSeq (Number)` variable; if it's the first of its kind, `hostnameSeq (Number)` will be set to `1`. And then we'll combine `hostnameBase (String)` and `hostnameSeq (Number)` to create the new `candidateVmName (String)`. If things don't work out, this script will throw `errMsg (String)` so I need to add that as an output exception binding as well.
|
||||||
![Task: generate hostnameSeq & candidateVmName](fWlSrD56N.png)
|
![Task: generate hostnameSeq & candidateVmName](fWlSrD56N.png)
|
||||||
|
|
||||||
```js {linenos=true}
|
```javascript
|
||||||
|
// torchlight! {"lineNumbers": true}
|
||||||
// JavaScript: generate hostnameSeq & candidateVmName
|
// JavaScript: generate hostnameSeq & candidateVmName
|
||||||
// Inputs: hostnameBase (String), digitCount (Number)
|
// Inputs: hostnameBase (String), digitCount (Number)
|
||||||
// Outputs: hostnameSeq (Number), computerNames (ConfigurationElement), candidateVmName (String)
|
// Outputs: hostnameSeq (Number), computerNames (ConfigurationElement), candidateVmName (String)
|
||||||
|
@ -500,7 +514,8 @@ System.log("Proposed VM name: " + candidateVmName)
|
||||||
Now that I know what I'd like to try to name this new VM, it's time to start checking for any potential conflicts. So this task will compare my `candidateVmName (String)` against the existing `vmNames (Array/string)` to see if there are any collisions. If there's a match, it will set a new variable called `conflict (Boolean)` to `true` and also report the issue through the `errMsg (String)` output exception binding. Otherwise it will move on to the next check.
|
Now that I know what I'd like to try to name this new VM, it's time to start checking for any potential conflicts. So this task will compare my `candidateVmName (String)` against the existing `vmNames (Array/string)` to see if there are any collisions. If there's a match, it will set a new variable called `conflict (Boolean)` to `true` and also report the issue through the `errMsg (String)` output exception binding. Otherwise it will move on to the next check.
|
||||||
![Task: check for VM name conflicts](qmHszypww.png)
|
![Task: check for VM name conflicts](qmHszypww.png)
|
||||||
|
|
||||||
```js {linenos=true}
|
```javascript
|
||||||
|
// torchlight! {"lineNumbers": true}
|
||||||
// JavaScript: check for VM name conflicts
|
// JavaScript: check for VM name conflicts
|
||||||
// Inputs: candidateVmName (String), vmNames (Array/string)
|
// Inputs: candidateVmName (String), vmNames (Array/string)
|
||||||
// Outputs: conflict (Boolean)
|
// Outputs: conflict (Boolean)
|
||||||
|
@ -527,7 +542,8 @@ I can then drag the new element away from the "everything is fine" flow, and con
|
||||||
|
|
||||||
All this task really does is clear the `conflict (Boolean)` flag so that's the only output.
|
All this task really does is clear the `conflict (Boolean)` flag so that's the only output.
|
||||||
|
|
||||||
```js {linenos=true}
|
```javascript
|
||||||
|
// torchlight! {"lineNumbers": true}
|
||||||
// JavaScript: conflict resolution
|
// JavaScript: conflict resolution
|
||||||
// Inputs: none
|
// Inputs: none
|
||||||
// Outputs: conflict (Boolean)
|
// Outputs: conflict (Boolean)
|
||||||
|
@ -542,7 +558,8 @@ So if `check VM name conflict` encounters a collision with an existing VM name i
|
||||||
Assuming that everything has gone according to plan and the workflow has avoided any naming conflicts, it will need to return `nextVmName (String)` back to the `VM Provisioning` workflow. That's as simple as setting it to the last value of `candidateVmName (String)`:
|
Assuming that everything has gone according to plan and the workflow has avoided any naming conflicts, it will need to return `nextVmName (String)` back to the `VM Provisioning` workflow. That's as simple as setting it to the last value of `candidateVmName (String)`:
|
||||||
![Task: return nextVmName](5QFTPHp5H.png)
|
![Task: return nextVmName](5QFTPHp5H.png)
|
||||||
|
|
||||||
```js {linenos=true}
|
```javascript
|
||||||
|
// torchlight! {"lineNumbers": true}
|
||||||
// JavaScript: return nextVmName
|
// JavaScript: return nextVmName
|
||||||
// Inputs: candidateVmName (String)
|
// Inputs: candidateVmName (String)
|
||||||
// Outputs: nextVmName (String)
|
// Outputs: nextVmName (String)
|
||||||
|
@ -555,7 +572,8 @@ System.log(" ***** Selecting [" + nextVmName + "] as the next VM name ***** ")
|
||||||
And we should also remove that lock that we created at the start of this workflow.
|
And we should also remove that lock that we created at the start of this workflow.
|
||||||
![Task: remove lock](BhBnBh8VB.png)
|
![Task: remove lock](BhBnBh8VB.png)
|
||||||
|
|
||||||
```js {linenos=true}
|
```javascript
|
||||||
|
// torchlight! {"lineNumbers": true}
|
||||||
// JavaScript remove lock
|
// JavaScript remove lock
|
||||||
// Inputs: lockId (String), lockOwner (String)
|
// Inputs: lockId (String), lockOwner (String)
|
||||||
// Outputs: none
|
// Outputs: none
|
||||||
|
|