{% for post in posts %}
{% include archive-single.html type=entries_layout %}
{% endfor %}
-
-{% endraw %}
+
```
Putting it all together now, here's my new `_layouts/series.html` file:
-```liquid
-{% raw %}---
+```jinja-html
+# torchlight! {"lineNumbers": true}
+---
layout: archive
---
@@ -134,7 +140,8 @@ layout: archive
### Series pages
Since I can't use a plugin to automatically generate pages for each series, I'll have to do it manually. Fortunately this is pretty easy, and I've got a limited number of categories/series to worry about. I started by making a new `_pages/series-vra8.md` and setting it up thusly:
```markdown
-{% raw %}---
+// torchlight! {"lineNumbers": true}
+---
title: "Adventures in vRealize Automation 8"
layout: series
permalink: "/series/vra8"
@@ -145,7 +152,7 @@ header:
teaser: assets/images/posts-2020/RtMljqM9x.png
---
-*Follow along as I create a flexible VMware vRealize Automation 8 environment for provisioning virtual machines - all from the comfort of my Intel NUC homelab.*{% endraw %}
+*Follow along as I create a flexible VMware vRealize Automation 8 environment for provisioning virtual machines - all from the comfort of my Intel NUC homelab.*
```
You can see that this page is referencing the series layout I just created, and it's going to live at `http://localhost/series/vra8` - precisely where this series was on Hashnode. I've tagged it with the category I want to feature on this page, and specified that the posts will be sorted in reverse order so that anyone reading through the series will start at the beginning (I hear it's a very good place to start). I also added a teaser image which will be displayed when I link to the series from elsewhere. And I included a quick little italicized blurb to tell readers what the series is about.
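The reverse sort is handled inside that layout based on the page's front matter. The elided middle of the layout presumably looks something like this - the `sort_order` and `category` key names are my guesses at the elided bits, while `where_exp` and `reverse` are standard Liquid filters:
```jinja-html
{% comment %} hypothetical sketch of the layout's post filtering {% endcomment %}
{% assign posts = site.posts | where_exp: "post", "post.categories contains page.category" %}
{% if page.sort_order == 'reverse' %}
  {% assign posts = posts | reverse %}
{% endif %}
```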
@@ -155,7 +162,8 @@ Check it out [here](/series/vra8):
The other series pages will be basically the same, just without the reverse sort directive. Here's `_pages/series-tips.md`:
```markdown
-{% raw %}---
+// torchlight! {"lineNumbers": true}
+---
title: "Tips & Tricks"
layout: series
permalink: "/series/tips"
@@ -165,13 +173,14 @@ header:
teaser: assets/images/posts-2020/kJ_l7gPD2.png
---
-*Useful tips and tricks I've stumbled upon.*{% endraw %}
+*Useful tips and tricks I've stumbled upon.*
```
### Changing the category permalink
Just in case someone wants to look at all the post series in one place, I'll be keeping the existing category archive page around, but I'll want it to be found at `/series/` instead of `/categories/`. I'll start by going into the `_config.yml` file and changing the `category_archive` path:
```yaml
+# torchlight! {"lineNumbers": true}
category_archive:
type: liquid
# path: /categories/
@@ -183,45 +192,49 @@ tag_archive:
I'll also rename `_pages/category-archive.md` to `_pages/series-archive.md` and update its title and permalink:
```markdown
-{% raw %}---
+// torchlight! {"lineNumbers": true}
+---
title: "Posts by Series"
layout: categories
permalink: /series/
author_profile: true
----{% endraw %}
+---
```
### Fixing category links in posts
-The bottom of each post has a section which lists the tags and categories to which it belongs. Right now, those are still pointing to the category archive page (`/series/#vra8`) instead of the series feature pages I created (`/series/vra8`).
+The bottom of each post has a section which lists the tags and categories to which it belongs. Right now, those are still pointing to the category archive page (`/series/#vra8`) instead of the series feature pages I created (`/series/vra8`).
![Old category link](20210724-old-category-link.png)
-That *works* but I'd rather it reference the fancy new pages I created. Tracking down where to make that change was a bit of a journey.
+That *works* but I'd rather it reference the fancy new pages I created. Tracking down where to make that change was a bit of a journey.
I started with the [`_layouts/single.html`](https://github.com/mmistakes/minimal-mistakes/blob/master/_layouts/single.html) file which is the layout I'm using for individual posts. This bit near the end gave me the clue I needed:
-```liquid
-{% raw %}
+```jinja-html
+# torchlight! {"lineNumbers": true}
{% include page__taxonomy.html %}
```
It looks like [`page__taxonomy.html`](https://github.com/mmistakes/minimal-mistakes/blob/master/_includes/page__taxonomy.html) is being used to display the tags and categories, so I then went to that file in the `_includes` directory:
-```liquid
-{% raw %}{% if site.tag_archive.type and page.tags[0] %}
+```jinja-html
+# torchlight! {"lineNumbers": true}
+{% if site.tag_archive.type and page.tags[0] %}
{% include tag-list.html %}
{% endif %}
{% if site.category_archive.type and page.categories[0] %}
{% include category-list.html %}
-{% endif %}{% endraw %}
+{% endif %}
```
Okay, it looks like [`_includes/category-list.html`](https://github.com/mmistakes/minimal-mistakes/blob/master/_includes/category-list.html) is what I actually want. Here's that file:
-```liquid
-{% raw %}{% case site.category_archive.type %}
+```jinja-html
+# torchlight! {"lineNumbers": true}
+{% case site.category_archive.type %}
{% when "liquid" %}
{% assign path_type = "#" %}
{% when "jekyll-archives" %}
@@ -239,19 +252,21 @@ Okay, it looks like [`_includes/category-list.html`](https://github.com/mmistakes
{% endfor %}
-{% endif %}{% endraw %}
+{% endif %}
```
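The hunk above skips over the middle of the file, which is where the link actually gets assembled. Per the upstream Minimal Mistakes source, that part looks roughly like this:
```jinja-html
{% for category_word in categories_sorted %}
  <a href="{{ category_word | slugify | prepend: path_type | prepend: site.category_archive.path | relative_url }}" class="page__taxonomy-item" rel="tag">{{ category_word }}</a>{% unless forloop.last %}<span class="sep">, </span>{% endunless %}
{% endfor %}
```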
I'm using the `liquid` archive approach since I can't use the `jekyll-archives` plugin, so I can see that it's setting the `path_type` to `"#"`. And near the bottom of the file, I can see that it's assembling the category link by slugifying the `category_word`, sticking the `path_type` in front of it, and then putting the `site.category_archive.path` (which I edited earlier in `_config.yml`) in front of that. So that's why my category links look like `/series/#category`. I can just edit the top of this file to statically set `path_type = nil` and that should clear this up in a jiffy:
-```liquid
-{% raw %}{% assign path_type = nil %}
+```jinja-html
+# torchlight! {"lineNumbers": true}
+{% assign path_type = nil %}
{% if site.category_archive.path %}
{% assign categories_sorted = page.categories | sort_natural %}
- [...]{% endraw %}
+ [...]
```
To sell the series illusion even further, I can pop into [`_data/ui-text.yml`](https://github.com/mmistakes/minimal-mistakes/blob/master/_data/ui-text.yml) to update the string used for `categories_label`:
```yaml
+# torchlight! {"lineNumbers": true}
meta_label :
tags_label : "Tags:"
categories_label : "Series:"
@@ -265,6 +280,7 @@ Much better!
### Updating the navigation header
And, finally, I'll want to update the navigation links at the top of each page to help visitors find my new featured series pages. For that, I can just edit `_data/navigation.yml` with links to my new pages:
```yaml
+# torchlight! {"lineNumbers": true}
main:
- title: "vRealize Automation 8"
url: /series/vra8
diff --git a/content/posts/run-scripts-in-guest-os-with-vra-abx-actions/index.md b/content/posts/run-scripts-in-guest-os-with-vra-abx-actions/index.md
index 026ee26..86bba78 100644
--- a/content/posts/run-scripts-in-guest-os-with-vra-abx-actions/index.md
+++ b/content/posts/run-scripts-in-guest-os-with-vra-abx-actions/index.md
@@ -30,12 +30,13 @@ I will also add some properties to tell PowerCLI (and the `Invoke-VmScript` cmdl
##### Inputs section
I'll kick this off by going into Cloud Assembly and editing the `WindowsDemo` template I've been working on for the past few eons. I'll add a `diskSize` input:
```yaml
+# torchlight! {"lineNumbers": true}
formatVersion: 1
inputs:
site: [...]
image: [...]
size: [...]
- diskSize:
+ diskSize: # [tl! focus:5]
title: 'System drive size'
default: 60
type: integer
@@ -46,14 +47,15 @@ inputs:
[...]
```
-The default value is set to 60GB to match the VMDK attached to the source template; that's also the minimum value since shrinking disks gets messy.
+The default value is set to 60GB to match the VMDK attached to the source template; that's also the minimum value since shrinking disks gets messy.
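The constraint itself lives in the lines elided above; presumably it looks something like this, where the `minimum` and `maximum` keys are my assumption based on that description:
```yaml
  diskSize:
    title: 'System drive size'
    default: 60     # matches the source template's 60GB VMDK
    type: integer
    minimum: 60     # hypothetical: no shrinking below the template's disk size
    maximum: 200    # hypothetical upper bound
```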
I'll also drop in an `adminsList` input at the bottom of the section:
```yaml
+# torchlight! {"lineNumbers": true}
[...]
poc_email: [...]
ticket: [...]
- adminsList:
+ adminsList: # [tl! focus:4]
type: string
title: Administrators
description: Comma-separated list of domain accounts/groups which need admin access to this server.
@@ -64,7 +66,7 @@ resources:
```
##### Resources section
-In the Resources section of the cloud template, I'm going to add a few properties that will tell the ABX script how to connect to the appropriate vCenter and then the VM.
+In the Resources section of the cloud template, I'm going to add a few properties that will tell the ABX script how to connect to the appropriate vCenter and then the VM.
- `vCenter`: The vCenter server where the VM will be deployed, and thus the server which PowerCLI will authenticate against. In this case, I've only got one vCenter, but a larger environment might have several. Defining this in the cloud template makes it easy to select the right one automagically if needed. (For instance, if I had a `bow-vcsa` and a `dre-vcsa` for my different sites, I could do something like `vCenter: '${input.site}-vcsa.lab.bowdre.net'` here.)
- `vCenterUser`: The username with rights to the VM in vCenter. Again, this doesn't have to be a static assignment.
- `templateUser`: This is the account that will be used by `Invoke-VmScript` to log in to the guest OS. My template will use the default `Administrator` account for non-domain systems, but the `lab\vra` service account on domain-joined systems (using the `adJoin` input I [set up earlier](/joining-vms-to-active-directory-in-site-specific-ous-with-vra8#cloud-template)).
@@ -72,6 +74,7 @@ In the Resources section of the cloud template, I'm going to add a few propertie
I'll also include the `adminsList` input from earlier so that it can get passed to ABX as well. And I'm going to add in an `adJoin` property (mapped to the [existing `input.adJoin`](/joining-vms-to-active-directory-in-site-specific-ous-with-vra8#cloud-template)) so that I'll have that to work with later.
```yaml
+# torchlight! {"lineNumbers": true}
[...]
resources:
Cloud_vSphere_Machine_1:
@@ -80,7 +83,7 @@ resources:
image: '${input.image}'
flavor: '${input.size}'
site: '${input.site}'
- vCenter: vcsa.lab.bowdre.net
+ vCenter: vcsa.lab.bowdre.net # [tl! focus:3]
vCenterUser: vra@lab.bowdre.net
templateUser: '${input.adJoin ? "vra@lab" : "Administrator"}'
adminsList: '${input.adminsList}'
@@ -89,16 +92,17 @@ resources:
app: '${input.app}'
adJoin: '${input.adJoin}'
ignoreActiveDirectory: '${!input.adJoin}'
-[...]
+[...]
```
And I will add in a `storage` property as well which will automatically adjust the deployed VMDK size to match the specified input:
```yaml
+# torchlight! {"lineNumbers": true}
[...]
description: '${input.description}'
networks: [...]
constraints: [...]
- storage:
+ storage: # [tl! focus:1]
bootDiskCapacityInGB: '${input.diskSize}'
Cloud_vSphere_Network_1:
type: Cloud.vSphere.Network
@@ -109,6 +113,7 @@ And I will add in a `storage` property as well which will automatically adjust t
##### Complete template
Okay, all together now:
```yaml
+# torchlight! {"lineNumbers": true}
formatVersion: 1
inputs:
site:
@@ -196,13 +201,13 @@ inputs:
poc_email:
type: string
title: Point of Contact Email
- default: jack.shephard@virtuallypotato.com
+ default: jack.shephard@example.com
pattern: '^[^\s@]+@[^\s@]+\.[^\s@]+$'
ticket:
type: string
title: Ticket/Request Number
default: 4815162342
- adminsList:
+ adminsList:
type: string
title: Administrators
description: Comma-separated list of domain accounts/groups which need admin access to this server.
@@ -297,6 +302,7 @@ And I'll pop over to the right side to map the Action Constants I created earlie
Now for The Script:
```powershell
+# torchlight! {"lineNumbers": true}
<# vRA 8.x ABX action to perform certain in-guest actions post-deploy:
Windows:
- auto-update VM tools
@@ -304,12 +310,12 @@ Now for The Script:
- extend C: volume to fill disk
- set up remote access
- create a scheduled task to (attempt to) apply Windows updates
-
+
## Action Secrets:
templatePassWinDomain # password for domain account with admin rights to the template (domain-joined deployments)
templatePassWinWorkgroup # password for local account with admin rights to the template (standalone deployments)
vCenterPassword # password for vCenter account passed from the cloud template
-
+
## Action Inputs:
## Inputs from deployment:
resourceNames[0] # VM name [BOW-DVRT-XXX003]
@@ -326,8 +332,8 @@ function handler($context, $inputs) {
$vcUser = $inputs.customProperties.vCenterUser
$vcPassword = $context.getSecret($inputs."vCenterPassword")
$vCenter = $inputs.customProperties.vCenter
-
- # Create vmtools connection to the VM
+
+ # Create vmtools connection to the VM
$vmName = $inputs.resourceNames[0]
Connect-ViServer -Server $vCenter -User $vcUser -Password $vcPassword -Force
$vm = Get-VM -Name $vmName
@@ -335,13 +341,13 @@ function handler($context, $inputs) {
if (-not (Wait-Tools -VM $vm -TimeoutSeconds 180)) {
Write-Error "Unable to establish connection with VM tools" -ErrorAction Stop
}
-
+
# Detect OS type
$count = 0
While (!$osType) {
Try {
$osType = ($vm | Get-View).Guest.GuestFamily.ToString()
- $toolsStatus = ($vm | Get-View).Guest.ToolsStatus.ToString()
+ $toolsStatus = ($vm | Get-View).Guest.ToolsStatus.ToString()
} Catch {
# 60s timeout
if ($count -ge 12) {
@@ -354,7 +360,7 @@ function handler($context, $inputs) {
}
}
Write-Host "$vmName is a $osType and its tools status is $toolsStatus."
-
+
# Update tools on Windows if out of date
if ($osType.Equals("windowsGuest") -And $toolsStatus.Equals("toolsOld")) {
Write-Host "Updating VM Tools..."
@@ -364,7 +370,7 @@ function handler($context, $inputs) {
Write-Error "Unable to establish connection with VM tools" -ErrorAction Stop
}
}
-
+
# Run OS-specific tasks
if ($osType.Equals("windowsGuest")) {
# Initialize Windows variables
@@ -373,7 +379,7 @@ function handler($context, $inputs) {
$adJoin = $inputs.customProperties.adJoin
$templateUser = $inputs.customProperties.templateUser
$templatePassword = $adJoin.Equals("true") ? $context.getSecret($inputs."templatePassWinDomain") : $context.getSecret($inputs."templatePassWinWorkgroup")
-
+
# Add domain accounts to local administrators group
if ($adminsList.Length -gt 0 -And $adJoin.Equals("true")) {
# Standardize users entered without domain as DOMAIN\username
@@ -440,7 +446,7 @@ function handler($context, $inputs) {
Start-Sleep -s 10
Write-Host "Creating a scheduled task to apply updates..."
$runUpdateScript = Invoke-VMScript -VM $vm -ScriptText $updateScript -GuestUser $templateUser -GuestPassword $templatePassword
- Write-Host "Created task:`n" $runUpdateScript.ScriptOutput "`n"
+ Write-Host "Created task:`n" $runUpdateScript.ScriptOutput "`n"
} elseif ($osType.Equals("linuxGuest")) {
#TODO
Write-Host "Linux systems not supported by this action... yet"
@@ -479,7 +485,7 @@ I do have another subscription on that event already, [`VM Post-Provisioning`](/a
After hitting the **Save** button, I go back to that other `VM Post-Provisioning` subscription, set it to enable blocking, and give it a priority of `1`:
![Blocking VM Post-Provisioning](20210903_old_subscription_blocking.png)
-This will ensure that the new subscription fires after the older one completes, and that should avoid any conflicts between the two.
+This will ensure that the new subscription fires after the older one completes, and that should avoid any conflicts between the two.
### Testing
Alright, now let's see if it worked. I head into Service Broker to submit the deployment request:
@@ -499,50 +505,50 @@ Logging in to server.
logged in to server vcsa.lab.bowdre.net:443
Read-only file system
09/03/2021 19:08:27 Get-VM Finished execution
-09/03/2021 19:08:27 Get-VM
+09/03/2021 19:08:27 Get-VM
Waiting for VM Tools to start...
-09/03/2021 19:08:29 Wait-Tools 5222b516-ae2c-5740-2926-77cd21441f27
+09/03/2021 19:08:29 Wait-Tools 5222b516-ae2c-5740-2926-77cd21441f27
09/03/2021 19:08:29 Wait-Tools Finished execution
-09/03/2021 19:08:29 Wait-Tools
+09/03/2021 19:08:29 Wait-Tools
09/03/2021 19:08:29 Get-View Finished execution
-09/03/2021 19:08:29 Get-View
+09/03/2021 19:08:29 Get-View
09/03/2021 19:08:29 Get-View Finished execution
-09/03/2021 19:08:29 Get-View
+09/03/2021 19:08:29 Get-View
BOW-PSVS-XXX001 is a windowsGuest and its tools status is toolsOld.
Updating VM Tools...
-09/03/2021 19:08:30 Update-Tools 5222b516-ae2c-5740-2926-77cd21441f27
+09/03/2021 19:08:30 Update-Tools 5222b516-ae2c-5740-2926-77cd21441f27
09/03/2021 19:08:30 Update-Tools Finished execution
-09/03/2021 19:08:30 Update-Tools
+09/03/2021 19:08:30 Update-Tools
Waiting for VM Tools to start...
-09/03/2021 19:09:00 Wait-Tools 5222b516-ae2c-5740-2926-77cd21441f27
+09/03/2021 19:09:00 Wait-Tools 5222b516-ae2c-5740-2926-77cd21441f27
09/03/2021 19:09:00 Wait-Tools Finished execution
-09/03/2021 19:09:00 Wait-Tools
+09/03/2021 19:09:00 Wait-Tools
Administrators: "lab\testy"
Attempting to add administrator accounts...
-09/03/2021 19:09:10 Invoke-VMScript 5222b516-ae2c-5740-2926-77cd21441f27
+09/03/2021 19:09:10 Invoke-VMScript 5222b516-ae2c-5740-2926-77cd21441f27
09/03/2021 19:09:10 Invoke-VMScript Finished execution
-09/03/2021 19:09:10 Invoke-VMScript
+09/03/2021 19:09:10 Invoke-VMScript
Successfully added ["lab\testy"] to Administrators group.
Attempting to extend system volume...
-09/03/2021 19:09:27 Invoke-VMScript 5222b516-ae2c-5740-2926-77cd21441f27
+09/03/2021 19:09:27 Invoke-VMScript 5222b516-ae2c-5740-2926-77cd21441f27
09/03/2021 19:09:27 Invoke-VMScript Finished execution
-09/03/2021 19:09:27 Invoke-VMScript
+09/03/2021 19:09:27 Invoke-VMScript
Successfully extended system partition.
Attempting to enable remote access (RDP, WMI, File and Printer Sharing, PSRemoting)...
-09/03/2021 19:09:49 Invoke-VMScript 5222b516-ae2c-5740-2926-77cd21441f27
+09/03/2021 19:09:49 Invoke-VMScript 5222b516-ae2c-5740-2926-77cd21441f27
09/03/2021 19:09:49 Invoke-VMScript Finished execution
-09/03/2021 19:09:49 Invoke-VMScript
+09/03/2021 19:09:49 Invoke-VMScript
Successfully enabled remote access.
Creating a scheduled task to apply updates...
-09/03/2021 19:10:12 Invoke-VMScript 5222b516-ae2c-5740-2926-77cd21441f27
+09/03/2021 19:10:12 Invoke-VMScript 5222b516-ae2c-5740-2926-77cd21441f27
09/03/2021 19:10:12 Invoke-VMScript Finished execution
-09/03/2021 19:10:12 Invoke-VMScript
+09/03/2021 19:10:12 Invoke-VMScript
Created task:
-
-TaskPath TaskName State
--------- -------- -----
-\ Initial_Updates Ready
-\ Initial_Updates Ready
+
+TaskPath TaskName State
+-------- -------- -----
+\ Initial_Updates Ready
+\ Initial_Updates Ready
```
So it *claims* to have successfully updated the VM tools, added `lab\testy` to the local `Administrators` group, extended the `C:` volume to fill the 65GB virtual disk, added firewall rules to permit remote access, and created a scheduled task to apply updates. I can open a console session to the VM to spot-check the results.
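For the spot-check, a few quick commands from an elevated PowerShell session inside the guest would confirm each claim (these are my own checks, not part of the ABX action):
```powershell
# verify the extended volume, the admin group membership, and the scheduled task
Get-Volume -DriveLetter C | Select-Object DriveLetter, Size, SizeRemaining
Get-LocalGroupMember -Name "Administrators"
Get-ScheduledTask -TaskName "Initial_Updates"
```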
diff --git a/content/posts/script-to-convert-posts-to-hugo-page-bundles/index.md b/content/posts/script-to-convert-posts-to-hugo-page-bundles/index.md
index 0b0e2e1..04ff16e 100644
--- a/content/posts/script-to-convert-posts-to-hugo-page-bundles/index.md
+++ b/content/posts/script-to-convert-posts-to-hugo-page-bundles/index.md
@@ -48,9 +48,9 @@ site
└── third-post-image-4.png
```
-So the article contents go under `site/content/post/` in a file called `name-of-article.md`. Each article may embed image (or other file types), and those get stored in `site/static/images/post/` and referenced like `![Image for first post](/images/post/first-post-image-1.png)`. When Hugo builds a site, it processes the stuff under the `site/content/` folder to render the Markdown files into browser-friendly HTML pages but it _doesn't_ process anything in the `site/static/` folder; that's treated as static content and just gets dropped as-is into the resulting site.
+So the article contents go under `site/content/post/` in a file called `name-of-article.md`. Each article may embed images (or other file types), and those get stored in `site/static/images/post/` and referenced like `![Image for first post](/images/post/first-post-image-1.png)`. When Hugo builds a site, it processes the stuff under the `site/content/` folder to render the Markdown files into browser-friendly HTML pages but it _doesn't_ process anything in the `site/static/` folder; that's treated as static content and just gets dropped as-is into the resulting site.
-It's functional, but things can get pretty messy when you've got a bunch of image files and are struggling to keep track of which images go with which post.
+It's functional, but things can get pretty messy when you've got a bunch of image files and are struggling to keep track of which images go with which post.
Like I mentioned earlier, Hugo's Page Bundles group a page's resources together in one place. Each post gets its own folder under `site/content/` and then all of the other files it needs to reference can get dropped in there too. With Page Bundles, the folder tree looks like this:
@@ -78,25 +78,26 @@ site
└── logo.png
```
-Images and other files are now referenced in the post directly like `![Image for post 1](/first-post-image-1.png)`, and this makes it a lot easier to keep track of which images go with which post. And since the files aren't considered to be static anymore, Page Bundles enables Hugo to perform certain [Image Processing tasks](https://gohugo.io/content-management/image-processing/) when the site gets built.
+Images and other files are now referenced in the post directly like `![Image for post 1](/first-post-image-1.png)`, and this makes it a lot easier to keep track of which images go with which post. And since the files aren't considered to be static anymore, Page Bundles enables Hugo to perform certain [Image Processing tasks](https://gohugo.io/content-management/image-processing/) when the site gets built.
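As a quick illustration of what that unlocks, a page template can resize a bundled image at build time. This is a minimal sketch using Hugo's standard `Resources.GetMatch` and `Resize` methods, not something from my actual theme:
```go-html-template
{{/* hypothetical partial: render a bundled image at 600px wide */}}
{{ with .Resources.GetMatch "first-post-image-1.png" }}
  {{ $resized := .Resize "600x" }}
  <img src="{{ $resized.RelPermalink }}" width="{{ $resized.Width }}" alt="Image for post 1">
{{ end }}
```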
Anyway, I wanted to start using Page Bundles but didn't want to have to manually go through all my posts to move the images and update the paths so I spent a few minutes cobbling together a quick script to help me out. It's pretty similar to the one I created to help [migrate images from Hashnode to my Jekyll site](/script-to-update-image-embed-links-in-markdown-files/) last time around - and, like that script, it's not pretty, polished, or flexible in the least, but it did the trick for me.
-This one needs to be run from one step above the site root (`../site/` in the example above), and it gets passed the relative path to a post (`site/content/posts/first-post.md`). From there, it will create a new folder with the same name (`site/content/posts/first-post/`) and move the post into there while renaming it to `index.md` (`site/content/posts/first-post/index.md`).
+This one needs to be run from one step above the site root (`../site/` in the example above), and it gets passed the relative path to a post (`site/content/posts/first-post.md`). From there, it will create a new folder with the same name (`site/content/posts/first-post/`) and move the post into there while renaming it to `index.md` (`site/content/posts/first-post/index.md`).
-It then looks through the newly-relocated post to find all the image embeds. It moves the image files into the post directory, and then updates the post to point to the new image locations.
+It then looks through the newly-relocated post to find all the image embeds. It moves the image files into the post directory, and then updates the post to point to the new image locations.
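Distilled down, those first steps amount to something like this; the variable names are mine, and the real (and messier) script follows below:
```shell
# rough sketch of the conversion, assuming GNU grep/sed and no spaces in filenames
POST="$1"                             # e.g. site/content/posts/first-post.md
BUNDLE="${POST%.md}"                  # site/content/posts/first-post
mkdir -p "$BUNDLE"
mv "$POST" "$BUNDLE/index.md"
# move each embedded image into the bundle and flatten its link
for img in $(grep -oP '(?<=\()/images/post/[^)]+(?=\))' "$BUNDLE/index.md"); do
  mv "site/static${img}" "$BUNDLE/"
  sed -i "s|${img}|$(basename "$img")|g" "$BUNDLE/index.md"
done
```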
Next it updates the links for any thumbnail images mentioned in the front matter post metadata. In most of my past posts, I reused an image already embedded in the post as the thumbnail so those files would already be moved by the time the script gets to that point. For the few exceptions, it also needs to move those image files over as well.
Lastly, it changes the `usePageBundles` flag from `false` to `true` so that Hugo knows what we've done.
-```bash
+```shell
+# torchlight! {"lineNumbers": true}
#!/bin/bash
-# Hasty script to convert a given standard Hugo post (where the post content and
+# Hasty script to convert a given standard Hugo post (where the post content and
# images are stored separately) to a Page Bundle (where the content and images are
-# stored together in the same directory).
+# stored together in the same directory).
#
-# Run this from the directory directly above the site root, and provide the relative
+# Run this from the directory directly above the site root, and provide the relative
# path to the existing post that needs to be converted.
#
# Usage: ./convert-to-pagebundle.sh vpotato/content/posts/hello-hugo.md
diff --git a/content/posts/script-to-update-image-embed-links-in-markdown-files/index.md b/content/posts/script-to-update-image-embed-links-in-markdown-files/index.md
index 5d6174e..fa01dac 100644
--- a/content/posts/script-to-update-image-embed-links-in-markdown-files/index.md
+++ b/content/posts/script-to-update-image-embed-links-in-markdown-files/index.md
@@ -12,24 +12,23 @@ title: Script to update image embed links in Markdown files
toc: false
---
-I'm preparing to migrate this blog thingy from Hashnode (which has been great!) to a [GitHub Pages site with Jekyll](https://docs.github.com/en/pages/setting-up-a-github-pages-site-with-jekyll/creating-a-github-pages-site-with-jekyll) so that I can write posts locally and then just do a `git push` to publish them - and get some more practice using `git` in the process. Of course, I've written some admittedly-great content here and I don't want to abandon that.
+I'm preparing to migrate this blog thingy from Hashnode (which has been great!) to a [GitHub Pages site with Jekyll](https://docs.github.com/en/pages/setting-up-a-github-pages-site-with-jekyll/creating-a-github-pages-site-with-jekyll) so that I can write posts locally and then just do a `git push` to publish them - and get some more practice using `git` in the process. Of course, I've written some admittedly-great content here and I don't want to abandon that.
Hashnode helpfully backs up my posts in Markdown format to a private GitHub repo automatically, so it was easy to clone those into a local working directory, but all the embedded images were still hosted on Hashnode:
```markdown
-
![Clever image title](https://cdn.hashnode.com/res/hashnode/image/upload/v1600098180227/lhTnVwCO3.png)
-
```
I wanted to download those images to `./assets/images/posts-2020/` within my local Jekyll working directory, and then update the `*.md` files to reflect the correct local path... without doing it all manually. It took a bit of trial and error to get the regex working just right (and the result is neither pretty nor elegant), but here's what I came up with:
-```bash
+```shell
+# torchlight! {"lineNumbers": true}
#!/bin/bash
# Hasty script to process a blog post markdown file, capture the URL for embedded images,
# download the image locally, and modify the markdown file with the relative image path.
#
-# Run it from the top level of a Jekyll blog directory for best results, and pass the
+# Run it from the top level of a Jekyll blog directory for best results, and pass the
# filename of the blog post you'd like to process.
#
# Ex: ./imageMigration.sh 2021-07-19-Bulk-migrating-images-in-a-blog-post.md
@@ -49,16 +48,14 @@ done
I could then run that against all of the Markdown posts under `./_posts/` with:
-```bash
-for post in $(ls _posts/); do ~/scripts/imageMigration.sh $post; done
+```shell
+for post in $(ls _posts/); do ~/scripts/imageMigration.sh $post; done # [tl! .cmd]
```
And the image embeds in the local copy of my posts now all look like this:
```markdown
-
![Clever image title](lhTnVwCO3.png)
-
```
Brilliant!
\ No newline at end of file
diff --git a/content/posts/secure-networking-made-simple-with-tailscale/index.md b/content/posts/secure-networking-made-simple-with-tailscale/index.md
index a69b118..c0dbe8c 100644
--- a/content/posts/secure-networking-made-simple-with-tailscale/index.md
+++ b/content/posts/secure-networking-made-simple-with-tailscale/index.md
@@ -54,8 +54,8 @@ The first step in getting up and running with Tailscale is to sign up at [https:
Once you have a Tailscale account, you're ready to install the Tailscale client. The [download page](https://tailscale.com/download) outlines how to install it on various platforms, and also provides a handy-dandy one-liner to install it on Linux:
-```bash
-curl -fsSL https://tailscale.com/install.sh | sh
+```shell
+curl -fsSL https://tailscale.com/install.sh | sh # [tl! .cmd]
```
After the install completes, it will tell you exactly what you need to do next:
@@ -71,9 +71,9 @@ There are also Tailscale apps available for [iOS](https://tailscale.com/download
#### Basic `tailscale up`
Running `sudo tailscale up` then reveals the next step:
-```bash
-❯ sudo tailscale up
-
+```shell
+sudo tailscale up # [tl! .cmd]
+# [tl! .nocopy:3]
To authenticate, visit:
https://login.tailscale.com/a/1872939939df
@@ -83,8 +83,8 @@ I can copy that address into a browser and I'll get prompted to log in to my Tai
That was pretty easy, right? But what about if I can't easily get to a web browser from the terminal session on a certain device? No worries, `tailscale up` has a flag for that:
-```bash
-sudo tailscale up --qr
+```shell
+sudo tailscale up --qr # [tl! .cmd]
```
That will convert the URL to a QR code that I can scan from my phone.
@@ -93,44 +93,44 @@ That will convert the URL to a QR code that I can scan from my phone.
There are a few additional flags that can be useful under certain situations:
- `--advertise-exit-node` to tell the tailnet that this could be used as an exit node for internet traffic
-```bash
-sudo tailscale up --advertise-exit-node
+```shell
+sudo tailscale up --advertise-exit-node # [tl! .cmd]
```
- `--advertise-routes` to let the node perform subnet routing functions to provide connectivity to specified local subnets
-```bash
-sudo tailscale up --advertise-routes "192.168.1.0/24,172.16.0.0/16"
+```shell
+sudo tailscale up --advertise-routes "192.168.1.0/24,172.16.0.0/16" # [tl! .cmd]
```
- `--advertise-tags`[^tags] to associate the node with certain tags for ACL purposes (like `tag:home` to identify stuff in my home network and `tag:cloud` to label external cloud-hosted resources)
-```bash
-sudo tailscale up --advertise-tags "tag:cloud"
+```shell
+sudo tailscale up --advertise-tags "tag:cloud" # [tl! .cmd]
```
- `--hostname` to manually specify a hostname to use within the tailnet
-```bash
-sudo tailscale up --hostname "tailnode"
+```shell
+sudo tailscale up --hostname "tailnode" # [tl! .cmd]
```
- `--shields-up` to block incoming traffic
-```bash
-sudo tailscale up --shields-up
+```shell
+sudo tailscale up --shields-up # [tl! .cmd]
```
These flags can also be combined with each other:
-```bash
-sudo tailscale up --hostname "tailnode" --advertise-exit-node --qr
+```shell
+sudo tailscale up --hostname "tailnode" --advertise-exit-node --qr # [tl! .cmd]
```
[^tags]: Before being able to assign tags at the command line, you must first define tag owners who can manage the tag. On a personal account, you've only got one user to worry about, but you still have to set this up first. I'll go over this in a bit but here's [the documentation](https://tailscale.com/kb/1068/acl-tags/#defining-a-tag) if you want to skip ahead.
#### Sidebar: Tailscale on VyOS
Getting Tailscale on [my VyOS virtual router](/vmware-home-lab-on-intel-nuc-9/#vyos) was unfortunately a little more involved than [leveraging the built-in WireGuard capability](/cloud-based-wireguard-vpn-remote-homelab-access/#configure-vyos-router-as-wireguard-peer). I found the [vyos-tailscale](https://github.com/DMarby/vyos-tailscale) project to help with building a customized VyOS installation ISO with the `tailscaled` daemon added in. I was then able to copy the ISO over to my VyOS instance and install it as if it were a [standard upgrade](https://docs.vyos.io/en/latest/installation/update.html). I could then bring up the interface, advertise my home networks, and make it available as an exit node with:
-```bash
-sudo tailscale up --advertise-exit-node --advertise-routes "192.168.1.0/24,172.16.0.0/16"
+```shell
+sudo tailscale up --advertise-exit-node --advertise-routes "192.168.1.0/24,172.16.0.0/16" # [tl! .cmd]
```
#### Other `tailscale` commands
Once there are a few members, I can use the `tailscale status` command to see a quick overview of the tailnet:
-```bash
-❯ tailscale status
-100.115.115.39 deb01 john@ linux -
+```shell
+tailscale status # [tl! .cmd]
+100.115.115.39 deb01 john@ linux - # [tl! .nocopy:start]
100.118.115.69 ipam john@ linux -
100.116.90.109 johns-iphone john@ iOS -
100.116.31.85 matrix john@ linux -
@@ -138,16 +138,16 @@ Once there are a few members, I can use the `tailscale status` command to see a
100.94.127.1 pixelbook john@ android -
100.75.110.50 snikket john@ linux -
100.96.24.81 vyos john@ linux -
-100.124.116.125 win01 john@ windows -
+100.124.116.125 win01 john@ windows - # [tl! .nocopy:end]
```
Without doing any other configuration beyond just installing Tailscale and connecting it to my account, I can now easily connect from any of these devices to any of the other devices using the listed Tailscale IP[^magicdns]. Entering `ssh 100.116.31.85` will connect me to my Matrix server.
`tailscale ping` lets me check the latency between two Tailscale nodes at the Tailscale layer; the first couple of pings will likely be delivered through a nearby DERP server until the NAT traversal magic is able to kick in:
-```bash
-❯ tailscale ping snikket
-pong from snikket (100.75.110.50) via DERP(nyc) in 34ms
+```shell
+tailscale ping snikket # [tl! .cmd]
+pong from snikket (100.75.110.50) via DERP(nyc) in 34ms # [tl! .nocopy:3]
pong from snikket (100.75.110.50) via DERP(nyc) in 35ms
pong from snikket (100.75.110.50) via DERP(nyc) in 35ms
pong from snikket (100.75.110.50) via [PUBLIC_IP]:41641 in 23ms
@@ -155,9 +155,9 @@ pong from snikket (100.75.110.50) via [PUBLIC_IP]:41641 in 23ms
The `tailscale netcheck` command will give me some details about my local Tailscale node, like whether it's able to pass UDP traffic, which DERP server is the closest, and the latency to all Tailscale DERP servers:
-```bash
-❯ tailscale netcheck
-
+```shell
+tailscale netcheck # [tl! .cmd]
+# [tl! .nocopy:start]
Report:
* UDP: true
* IPv4: yes, [LOCAL_PUBLIC_IP]:52661
@@ -178,7 +178,7 @@ Report:
- tok: 154.9ms (Tokyo)
- syd: 215.3ms (Sydney)
- sin: 243.7ms (Singapore)
- - blr: 244.6ms (Bangalore)
+ - blr: 244.6ms (Bangalore) # [tl! .nocopy:end]
```
[^magicdns]: I could also connect using the Tailscale hostname, if [MagicDNS](https://tailscale.com/kb/1081/magicdns/) is enabled - but I'm getting ahead of myself.
@@ -245,6 +245,7 @@ This ACL file uses a format called [HuJSON](https://github.com/tailscale/hujson)
I'm going to start by creating a group called `admins` and add myself to that group. This isn't strictly necessary since I am the only user in the organization, but I feel like it's a nice practice anyway. Then I'll add the `tagOwners` section to map each tag to its owner, the new group I just created:
```json
+// torchlight! {"lineNumbers": true}
{
"groups": {
"group:admins": ["john@example.com"],
@@ -277,6 +278,7 @@ Each ACL rule consists of four named parts:
So I'll add this to the top of my policy file:
```json
+// torchlight! {"lineNumbers": true}
{
"acls": [
{
@@ -306,6 +308,7 @@ Earlier I configured Tailscale to force all nodes to use my home DNS server for
Option 2 sounds better to me so that's what I'm going to do. Instead of putting an IP address directly into the ACL rule I'd rather use a hostname, and unfortunately the Tailscale host names aren't available within ACL rule declarations. But I can define a host alias in the policy to map a friendly name to the IP:
```json
+// torchlight! {"lineNumbers": true}
{
"hosts": {
"win01": "100.124.116.125"
@@ -315,6 +318,7 @@ Option 2 sounds better to me so that's what I'm going to do. Instead of putting
And I can then create a new rule for `"users": ["tag:cloud"]` to add an exception for `win01:53`:
```json
+// torchlight! {"lineNumbers": true}
{
"acls": [
{
@@ -332,6 +336,7 @@ And I can then create a new rule for `"users": ["tag:cloud"]` to add an exceptio
And that gets DNS working again for my cloud servers while still serving the results from my NextDNS configuration. Here's the complete policy configuration:
```json
+// torchlight! {"lineNumbers": true}
{
"acls": [
{
diff --git a/content/posts/setting-up-linux-on-a-new-lenovo-chromebook-duet-bonus-arm64-complications/index.md b/content/posts/setting-up-linux-on-a-new-lenovo-chromebook-duet-bonus-arm64-complications/index.md
index b2cf338..baea11a 100644
--- a/content/posts/setting-up-linux-on-a-new-lenovo-chromebook-duet-bonus-arm64-complications/index.md
+++ b/content/posts/setting-up-linux-on-a-new-lenovo-chromebook-duet-bonus-arm64-complications/index.md
@@ -38,42 +38,43 @@ You're ready to roll once the Terminal opens and gives you a prompt:
Your first action should be to go ahead and install any patches:
```shell
-sudo apt update
+sudo apt update # [tl! .cmd:1]
sudo apt upgrade
```
### Zsh, Oh My Zsh, and powerlevel10k theme
I've been really getting into this shell setup recently, so let's go on and make things comfortable before we move on too much further. Getting `zsh` is straightforward:
```shell
-sudo apt install zsh
+sudo apt install zsh # [tl! .cmd]
```
Go ahead and launch `zsh` (by typing '`zsh`') and go through the initial setup wizard to configure preferences for things like history, completion, and other settings. I leave history on the defaults, enable the default completion options, switch the command-line editor to `vi`-style, and enable both `autocd` and `appendhistory`. Once you're back at the (new) `penguin%` prompt we can move on to installing the [Oh My Zsh plugin framework](https://github.com/ohmyzsh/ohmyzsh).
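For reference, those wizard choices boil down to a few lines the wizard drops into `~/.zshrc`, roughly like this:
```shell
# approximately what the setup wizard writes to ~/.zshrc
setopt autocd appendhistory
bindkey -v    # vi-style command-line editing
```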
Just grab the installer script like so:
```shell
-wget https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh
+wget https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh # [tl! .cmd]
```
Review it if you'd like (and you should! *Always* review code before running it!!), and then execute it:
```shell
-sh install.sh
+sh install.sh # [tl! .cmd]
```
When asked if you'd like to change your default shell to `zsh` now, **say no**. This is because it will prompt for your password, but you probably don't have a password set on your brand-new Linux (Beta) account and that just makes things complicated. We'll clear this up later, but for now just check out that slick new prompt:
![Oh my!](8q-WT0AyC.png)
Oh My Zsh is pretty handy because you can easily enable [additional plugins](https://github.com/ohmyzsh/ohmyzsh/tree/master/plugins) to make your prompt behave exactly the way you want it to. Let's spruce it up even more with the [powerlevel10k theme](https://github.com/romkatv/powerlevel10k)!
```shell
-git clone --depth=1 https://github.com/romkatv/powerlevel10k.git ${ZSH_CUSTOM:-$HOME/.oh-my-zsh/custom}/themes/powerlevel10k
+git clone --depth=1 https://github.com/romkatv/powerlevel10k.git \ # [tl! .cmd]
+ ${ZSH_CUSTOM:-$HOME/.oh-my-zsh/custom}/themes/powerlevel10k
```
Now we just need to edit `~/.zshrc` to point to the new theme:
```shell
-sed -i s/^ZSH_THEME=.\*$/ZSH_THEME='"powerlevel10k\/powerlevel10k"'/ ~/.zshrc
+sed -i s/^ZSH_THEME=.\*$/ZSH_THEME='"powerlevel10k\/powerlevel10k"'/ ~/.zshrc # [tl! .cmd]
```
We'll need to launch another instance of `zsh` for the theme change to take effect, so first let's go ahead and manually set `zsh` as our default shell. We can use `sudo` to get around the whole "don't have a password set" inconvenience:
```shell
-sudo chsh -s /bin/zsh [username]
+sudo chsh -s /bin/zsh [username] # [tl! .cmd]
```
Now close out the terminal and open it again, and you should be met by the powerlevel10k configurator which will walk you through getting things set up:
-![pwerlevel10k configurator](K1ScSuWcg.png)
+![powerlevel10k configurator](K1ScSuWcg.png)
This theme is crazy-configurable, but fortunately the configurator wizard does a great job of helping you choose the options that work best for you.
I pick the Classic prompt style, Unicode character set, Dark prompt color, 24-hour time, Angled separators, Sharp prompt heads, Flat prompt tails, 2-line prompt height, Dotted prompt connection, Right prompt frame, Sparse prompt spacing, Fluent prompt flow, Enabled transient prompt, Verbose instant prompt, and (finally) Yes to apply the changes.
@@ -83,7 +84,7 @@ Looking good!
### Visual Studio Code
I'll need to do some light development work so VS Code is next on the hit list. You can grab the installer [here](https://code.visualstudio.com/Download#) or just copy/paste the following to stay in the Terminal. Definitely be sure to get the arm64 version!
```shell
-curl -L https://aka.ms/linux-arm64-deb > code_arm64.deb
+curl -L https://aka.ms/linux-arm64-deb > code_arm64.deb # [tl! .cmd:1]
sudo apt install ./code_arm64.deb
```
VS Code should automatically appear in the Chromebook's Launcher, or you can use it to open a file directly with `code [filename]`:
@@ -105,7 +106,7 @@ I'm working on setting up a [VMware homelab on an Intel NUC 9](https://twitter.c
PowerShell for ARM is still in an early stage, so while [it is supported](https://docs.microsoft.com/en-us/powershell/scripting/install/installing-powershell-core-on-linux?view=powershell-7.2#support-for-arm-processors) it must be installed manually. Microsoft has instructions for installing PowerShell from binary archives [here](https://docs.microsoft.com/en-us/powershell/scripting/install/installing-powershell-core-on-linux?view=powershell-7.2#linux), and I grabbed the latest `-linux-arm64.tar.gz` release I could find [here](https://github.com/PowerShell/PowerShell/releases).
```shell
-curl -L -o /tmp/powershell.tar.gz https://github.com/PowerShell/PowerShell/releases/download/v7.2.0-preview.5/powershell-7.2.0-preview.5-linux-arm64.tar.gz
+curl -L -o /tmp/powershell.tar.gz https://github.com/PowerShell/PowerShell/releases/download/v7.2.0-preview.5/powershell-7.2.0-preview.5-linux-arm64.tar.gz # [tl! .cmd:4]
sudo mkdir -p /opt/microsoft/powershell/7
sudo tar zxf /tmp/powershell.tar.gz -C /opt/microsoft/powershell/7
sudo chmod +x /opt/microsoft/powershell/7/pwsh
@@ -125,7 +126,7 @@ The Linux (Beta) environment consists of a hardened virtual machine (named `term
The docker installation has a few prerequisites:
```shell
-sudo apt install \
+sudo apt install \ # [tl! .cmd]
apt-transport-https \
ca-certificates \
curl \
@@ -134,18 +135,18 @@ sudo apt install \
```
Then we need to grab the Docker repo key:
```shell
-curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
+curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add - # [tl! .cmd]
```
And then we can add the repo:
```shell
-sudo add-apt-repository \
+sudo add-apt-repository \ # [tl! .cmd]
"deb [arch=arm64] https://download.docker.com/linux/debian \
$(lsb_release -cs) \
stable"
```
And finally update the package cache and install `docker` and its friends:
```shell
-sudo apt update
+sudo apt update # [tl! .cmd:1]
sudo apt install docker-ce docker-ce-cli containerd.io
```
![I put a container in your container](k2uiYi5e8.png)
@@ -164,13 +165,13 @@ I came across [a Reddit post](https://www.reddit.com/r/Crostini/comments/jnbqv3/
The key is to grab the appropriate version of [conda Miniforge](https://github.com/conda-forge/miniforge), make it executable, and run the installer:
```shell
-wget https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Linux-aarch64.sh
+wget https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Linux-aarch64.sh # [tl! .cmd:2]
chmod +x Miniforge3-Linux-aarch64.sh
./Miniforge3-Linux-aarch64.sh
```
Exit the terminal and relaunch it, and then install Jupyter:
```shell
-conda install -c conda-forge notebook
+conda install -c conda-forge notebook # [tl! .cmd]
```
You can then launch the notebook with `jupyter notebook` and it will automatically open up in a Chrome OS browser tab:
diff --git a/content/posts/snikket-private-xmpp-chat-on-oracle-cloud-free-tier/index.md b/content/posts/snikket-private-xmpp-chat-on-oracle-cloud-free-tier/index.md
index a22fd51..342ceac 100644
--- a/content/posts/snikket-private-xmpp-chat-on-oracle-cloud-free-tier/index.md
+++ b/content/posts/snikket-private-xmpp-chat-on-oracle-cloud-free-tier/index.md
@@ -32,7 +32,10 @@ I shared a [few months back](/federated-matrix-server-synapse-on-oracle-clouds-f
I recently came across the [Snikket project](https://snikket.org/), which [aims](https://snikket.org/about/goals/) to make decentralized end-to-end encrypted personal messaging simple and accessible for *everyone*, with an emphasis on providing a consistent experience across the network. Snikket does this by maintaining a matched set of server and client[^2] software with feature and design parity, making it incredibly easy to deploy and manage the server, and simplifying user registration with invite links. In contrast to Matrix, Snikket does not operate an open server on which users can self-register but instead requires users to be invited to a hosted instance. The idea is that a server would be used by small groups of family and friends where every user knows (and trusts!) the server operator while also ensuring the complete decentralization of the network[^3].
How simple is the server install?
-{{< tweet user="johndotbowdre" id="1461356940466933768" >}}
+> I spun up a quick @snikket_im XMPP server last night to check out the project - and I do mean QUICK. It took me longer to register a new domain than to deploy the server on GCP and create my first account through the client.
+>
+> — John (@johndotbowdre) November 18, 2021
+
Seriously, their [4-step quick-start guide](https://snikket.org/service/quickstart/) is so good that I didn't feel the need to do a blog post about my experience. I've now been casually using Snikket for a bit over a month and remain very impressed both by the software and the project itself, and have even deployed a new Snikket instance for my family to use. My parents were actually able to join the chat without any issues, which is a testament to how easy it is from a user perspective too.
A few days ago I migrated my original Snikket instance from Google Cloud (GCP) to the same Oracle Cloud Infrastructure (OCI) virtual server that's hosting my Matrix homeserver so I thought I might share some notes first on the installation process. At the end, I'll share the tweaks which were needed to get Snikket to run happily alongside Matrix.
@@ -55,8 +58,8 @@ You can refer to my notes from last time for details on how I [created the Ubunt
| `60000-60100`[^4] | UDP | Audio/Video data proxy (TURN data) |
As a gentle reminder, Oracle's `iptables` configuration inserts a `REJECT all` rule at the bottom of each chain. I needed to make sure that each of my `ALLOW` rules got inserted above that point. So I used `iptables -L INPUT --line-numbers` to identify which line held the `REJECT` rule, and then used `iptables -I INPUT [LINE_NUMBER] -m state --state NEW -p [PROTOCOL] --dport [PORT] -j ACCEPT` to insert the new rules above that point.
-```bash
-sudo iptables -I INPUT 9 -m state --state NEW -p tcp --dport 80 -j ACCEPT
+```shell
+sudo iptables -I INPUT 9 -m state --state NEW -p tcp --dport 80 -j ACCEPT # [tl! .cmd:start]
sudo iptables -I INPUT 9 -m state --state NEW -p tcp --dport 443 -j ACCEPT
sudo iptables -I INPUT 9 -m state --state NEW -p tcp -m multiport --dports 3478,3479 -j ACCEPT
@@ -66,13 +69,13 @@ sudo iptables -I INPUT 9 -m state --state NEW -p tcp --dport 5222 -j ACCEPT
sudo iptables -I INPUT 9 -m state --state NEW -p tcp --dport 5269 -j ACCEPT
sudo iptables -I INPUT 9 -m state --state NEW -p udp -m multiport --dports 3478,3479 -j ACCEPT
sudo iptables -I INPUT 9 -m state --state NEW -p udp -m multiport --dports 5349,5350 -j ACCEPT
-sudo iptables -I INPUT 9 -m state --state NEW -p udp -m multiport --dports 60000:60100 -j ACCEPT
+sudo iptables -I INPUT 9 -m state --state NEW -p udp -m multiport --dports 60000:60100 -j ACCEPT # [tl! .cmd:end]
```
Then to verify the rules are in the right order:
-```bash
-$ sudo iptables -L INPUT --line-numbers -n
-Chain INPUT (policy ACCEPT)
+```shell
+sudo iptables -L INPUT --line-numbers -n # [tl! .cmd]
+Chain INPUT (policy ACCEPT) # [tl! .nocopy:start]
num target prot opt source destination
1 ts-input all -- 0.0.0.0/0 0.0.0.0/0
2 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
@@ -89,13 +92,13 @@ num target prot opt source destination
13 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5222
14 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5000
15 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW multiport dports 3478,3479
-16 REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
+16 REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited # [tl! .nocopy:end]
```
Before moving on, it's important to save them so the rules will persist across reboots!
-```bash
-$ sudo netfilter-persistent save
-run-parts: executing /usr/share/netfilter-persistent/plugins.d/15-ip4tables save
+```shell
+sudo netfilter-persistent save # [tl! .cmd]
+run-parts: executing /usr/share/netfilter-persistent/plugins.d/15-ip4tables save # [tl! .nocopy:1]
run-parts: executing /usr/share/netfilter-persistent/plugins.d/25-ip6tables save
```
@@ -112,30 +115,30 @@ share.vpota.to 300 IN CNAME chat.vpota.to
### Install `docker` and `docker-compose`
Snikket is distributed as a set of docker containers, which makes it super easy to get up and running on basically any Linux system. But, of course, you'll first need to [install `docker`](https://docs.docker.com/engine/install/ubuntu/):
-```bash
+```shell
# Update package index
-sudo apt update
+sudo apt update # [tl! .cmd]
# Install prereqs
-sudo apt install ca-certificates curl gnupg lsb-release
+sudo apt install ca-certificates curl gnupg lsb-release # [tl! .cmd]
# Add docker's GPG key
-curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
+curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg # [tl! .cmd]
# Add the docker repo
-echo \
+echo \ # [tl! .cmd]
"deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Refresh the package index with the new repo added
-sudo apt update
+sudo apt update # [tl! .cmd]
# Install docker
-sudo apt install docker-ce docker-ce-cli containerd.io
+sudo apt install docker-ce docker-ce-cli containerd.io # [tl! .cmd]
```
And install `docker-compose` also to simplify the container management:
-```bash
+```shell
# Download the docker-compose binary
-sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
+sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose # [tl! .cmd]
# Make it executable
-sudo chmod +x /usr/local/bin/docker-compose
+sudo chmod +x /usr/local/bin/docker-compose # [tl! .cmd]
```
Now we're ready to...
@@ -143,21 +146,21 @@ Now we're ready to...
### Install Snikket
This starts with just making a place for Snikket to live:
-```bash
-sudo mkdir /etc/snikket
+```shell
+sudo mkdir /etc/snikket # [tl! .cmd:1]
cd /etc/snikket
```
And then grabbing the Snikket `docker-compose` file:
-```bash
-sudo curl -o docker-compose.yml https://snikket.org/service/resources/docker-compose.beta.yml
+```shell
+sudo curl -o docker-compose.yml https://snikket.org/service/resources/docker-compose.beta.yml # [tl! .cmd]
```
And then creating a very minimal configuration file:
-```bash
-sudo vi snikket.conf
+```shell
+sudo vi snikket.conf # [tl! .cmd]
```
A basic config only needs two parameters:
@@ -173,7 +176,8 @@ In my case, I'm going to add two additional parameters to restrict the UDP TURN
So here's my config:
-```
+```ini
+# torchlight! {"lineNumbers": true}
SNIKKET_DOMAIN=chat.vpota.to
SNIKKET_ADMIN_EMAIL=ops@example.com
@@ -185,8 +189,8 @@ SNIKKET_TWEAK_TURNSERVER_MAX_PORT=60100
### Start it up!
With everything in place, I can start up the Snikket server:
-```bash
-sudo docker-compose up -d
+```shell
+sudo docker-compose up -d # [tl! .cmd]
```
This will take a moment or two to pull down all the required container images, start them, and automatically generate the SSL certificates. Very soon, though, I can point my browser to `https://chat.vpota.to` and see a lovely login page - complete with an automagically-valid-and-trusted certificate:
@@ -194,8 +198,8 @@ This will take a moment or two to pull down all the required container images, s
Of course, I don't yet have a way to log in, and like I mentioned earlier Snikket doesn't offer open user registration. Every user (even me, the admin!) has to be invited. Fortunately I can generate my first invite directly from the command line:
-```bash
-sudo docker exec snikket create-invite --admin --group default
+```shell
+sudo docker exec snikket create-invite --admin --group default # [tl! .cmd]
```
That command will return a customized invite link which I can copy and paste into my browser.
@@ -248,33 +252,34 @@ One of the really cool things about Caddy is that it automatically generates SSL
Fortunately, the [Snikket reverse proxy documentation](https://github.com/snikket-im/snikket-server/blob/master/docs/advanced/reverse_proxy.md#basic) was recently updated with a sample config for making this happen. Matrix and Snikket really only overlap on ports `80` and `443` so those are the only ports I'll need to handle, which lets me go for the "Basic" configuration instead of the "Advanced" one. I can just adapt the sample config from the documentation and add that to my existing `/etc/caddy/Caddyfile` alongside the config for Matrix:
-```
-http://chat.vpota.to,
+```text
+# torchlight! {"lineNumbers": true}
+http://chat.vpota.to, # [tl! focus:start]
http://groups.chat.vpota.to,
http://share.chat.vpota.to {
- reverse_proxy localhost:5080
+ reverse_proxy localhost:5080
}
chat.vpota.to,
groups.chat.vpota.to,
share.chat.vpota.to {
- reverse_proxy https://localhost:5443 {
- transport http {
- tls_insecure_skip_verify
- }
- }
-}
+ reverse_proxy https://localhost:5443 {
+ transport http {
+ tls_insecure_skip_verify
+ }
+ }
+} # [tl! focus:end]
matrix.bowdre.net {
- reverse_proxy /_matrix/* http://localhost:8008
- reverse_proxy /_synapse/client/* http://localhost:8008
+ reverse_proxy /_matrix/* http://localhost:8008
+ reverse_proxy /_synapse/client/* http://localhost:8008
}
bowdre.net {
- route {
- respond /.well-known/matrix/server `{"m.server": "matrix.bowdre.net:443"}`
- redir https://virtuallypotato.com
- }
+ route {
+ respond /.well-known/matrix/server `{"m.server": "matrix.bowdre.net:443"}`
+ redir https://virtuallypotato.com
+ }
}
```
@@ -291,32 +296,32 @@ Since Snikket is completely containerized, moving between hosts is a simple matt
The Snikket team has actually put together a couple of scripts to assist with [backing up](https://github.com/snikket-im/snikket-selfhosted/blob/main/scripts/backup.sh) and [restoring](https://github.com/snikket-im/snikket-selfhosted/blob/main/scripts/restore.sh) an instance. I just adapted the last line of each to do what I needed:
-```bash
-sudo docker run --rm --volumes-from=snikket \
+```shell
+sudo docker run --rm --volumes-from=snikket \ # [tl! .cmd]
-v "/home/john/snikket-backup/":/backup debian:buster-slim \
tar czf /backup/snikket-"$(date +%F-%H%m)".tar.gz /snikket
```
That will drop a compressed backup of the `snikket_data` volume into the specified directory, `/home/john/snikket-backup/`. While I'm at it, I'll also go ahead and copy the `docker-compose.yml` and `snikket.conf` files from `/etc/snikket/`:
-```bash
-$ sudo cp -a /etc/snikket/* /home/john/snikket-backup/
-$ ls -l /home/john/snikket-backup/
-total 1728
+```shell
+sudo cp -a /etc/snikket/* /home/john/snikket-backup/ # [tl! .cmd]
+ls -l /home/john/snikket-backup/ # [tl! .cmd]
+total 1728 # [tl! .nocopy:3]
-rw-r--r-- 1 root root 993 Dec 19 17:47 docker-compose.yml
-rw-r--r-- 1 root root 1761046 Dec 19 17:46 snikket-2021-12-19-1745.tar.gz
-rw-r--r-- 1 root root 299 Dec 19 17:47 snikket.conf
```
And I can then zip that up for easy transfer:
-```bash
-tar cvf /home/john/snikket-backup.tar.gz /home/john/snikket-backup/
+```shell
+tar czvf /home/john/snikket-backup.tar.gz /home/john/snikket-backup/ # [tl! .cmd]
```
This would be a great time to go ahead and stop this original Snikket instance. After all, nothing that happens after the backup was exported is going to carry over anyway.
-```bash
-sudo docker-compose down
+```shell
+sudo docker-compose down # [tl! .cmd]
```
{{% notice tip "Update DNS" %}}
This is also a great time to update the `A` record for `chat.vpota.to` so that it points to the new server. It will need a little bit of time for the change to trickle out, and the updated record really needs to be in place before starting Snikket on the new server so that there aren't any certificate problems.
@@ -325,18 +330,18 @@ This is also a great time to update the `A` record for `chat.vpota.to` so that i
Now I just need to transfer the archive from one server to the other. I've got [Tailscale](https://tailscale.com/)[^11] running on my various cloud servers so that they can talk to each other through a secure WireGuard tunnel (remember [WireGuard](/cloud-based-wireguard-vpn-remote-homelab-access/)?) without having to open any firewall ports between them, and that means I can just use `scp` to transfer the file without any fuss. I can even leverage Tailscale's [Magic DNS](https://tailscale.com/kb/1081/magicdns/) feature to avoid worrying with any IPs, just the hostname registered in Tailscale (`chat-oci`):
-```bash
-scp /home/john/snikket-backup.tar.gz chat-oci:/home/john/
+```shell
+scp /home/john/snikket-backup.tar.gz chat-oci:/home/john/ # [tl! .cmd]
```
Next, I SSH in to the new server and unzip the archive:
-```bash
-$ ssh snikket-oci-server
-$ tar xf snikket-backup.tar.gz
-$ cd snikket-backup
-$ ls -l
-total 1728
+```shell
+ssh snikket-oci-server # [tl! .cmd:3]
+tar xf snikket-backup.tar.gz
+cd snikket-backup
+ls -l
+total 1728 # [tl! .nocopy:3]
-rw-r--r-- 1 root root 993 Dec 19 17:47 docker-compose.yml
-rw-r--r-- 1 root root 1761046 Dec 19 17:46 snikket-2021-12-19-1745.tar.gz
-rw-r--r-- 1 root root 299 Dec 19 17:47 snikket.conf
@@ -344,8 +349,8 @@ total 1728
Before I can restore the content of the `snikket-data` volume on the new server, I'll need to first go ahead and set up Snikket again. I've already got `docker` and `docker-compose` installed from when I installed Matrix so I'll skip to creating the Snikket directory and copying in the `docker-compose.yml` and `snikket.conf` files.
-```bash
-sudo mkdir /etc/snikket
+```shell
+sudo mkdir /etc/snikket # [tl! .cmd:3]
sudo cp docker-compose.yml /etc/snikket/
sudo cp snikket.conf /etc/snikket/
cd /etc/snikket
@@ -353,7 +358,8 @@ cd /etc/snikket
Before I fire this up on the new host, I need to edit the `snikket.conf` to tell Snikket to use those different ports defined in the reverse proxy configuration using [a couple of `SNIKKET_TWEAK_*` lines](https://github.com/snikket-im/snikket-server/blob/master/docs/advanced/reverse_proxy.md#snikket):
-```
+```ini
+# torchlight! {"lineNumbers": true}
SNIKKET_DOMAIN=chat.vpota.to
SNIKKET_ADMIN_EMAIL=ops@example.com
@@ -364,16 +370,16 @@ SNIKKET_TWEAK_TURNSERVER_MAX_PORT=60100
```
Alright, let's start up the Snikket server:
-```bash
-sudo docker-compose up -d
+```shell
+sudo docker-compose up -d # [tl! .cmd]
```
After a moment or two, I can point a browser to `https://chat.vpota.to` and see the login screen (with a valid SSL certificate!) but I won't actually be able to log in. As far as Snikket is concerned, this is a brand new setup.
Now I can borrow the last line from the [`restore.sh` script](https://github.com/snikket-im/snikket-selfhosted/blob/main/scripts/restore.sh) to bring in my data:
-```bash
-sudo docker run --rm --volumes-from=snikket \
+```shell
+sudo docker run --rm --volumes-from=snikket \ # [tl! .cmd]
--mount type=bind,source="/home/john/snikket-backup/snikket-2021-12-19-1745.tar.gz",destination=/backup.tar.gz \
debian:buster-slim \
bash -c "rm -rf /snikket/*; tar xvf /backup.tar.gz -C /"
diff --git a/content/posts/spotlight-on-torchlight/index.md b/content/posts/spotlight-on-torchlight/index.md
new file mode 100644
index 0000000..dfd69b2
--- /dev/null
+++ b/content/posts/spotlight-on-torchlight/index.md
@@ -0,0 +1,949 @@
+---
+title: "Spotlight on Torchlight"
+date: 2023-11-09
+lastmod: 2023-11-13
+description: "Syntax highlighting powered by the Torchlight.dev API makes it easier to dress up code blocks. Here's an overview of what I did to replace this blog's built-in Hugo highlighter (Chroma) with Torchlight."
+featured: false
+toc: true
+comment: true
+series: Projects # Projects, Scripts
+tags:
+ - javascript
+ - hugo
+ - meta
+---
+
+I've been futzing around a bit with how code blocks render on this blog. Hugo has a built-in, _really fast_, [syntax highlighter](https://gohugo.io/content-management/syntax-highlighting/) courtesy of [Chroma](https://github.com/alecthomas/chroma). Chroma is basically automatic and it renders very quickly[^fast] during the `hugo` build process, and it's a pretty solid "works everywhere out of the box" option.
+
+That said, the one-size-fits-all approach may not actually fit everyone *well*, and Chroma does leave me wanting a bit more. Chroma sometimes struggles with tokenizing and highlighting certain languages, leaving me with boring monochromatic text blocks. Hugo's implementation supports highlighting individual lines by inserting directives next to the code fence backticks (like `{hl_lines="11-13"}` to highlight lines 11-13), but that can be clumsy if you're not sure which lines need to be highlighted[^eleven], need to highlight multiple disjointed lines, or later insert additional lines which throw off the count. And sometimes I'd like to share a full file for context while also collapsing it down to just the bits I'm going to write about. That's not something that can be done with the built-in highlighter (at least not without tacking on a bunch of extra JavaScript and CSS nonsense[^nonsense]).
+
+[^fast]: Did I mention that it's fast?
+[^eleven]: (or how to count to eleven)
+[^nonsense]: Spoiler: I'm going to tack on some JS and CSS nonsense later - we'll get to that.
+
+But then I found a post from Sebastian de Deyne about [Better code highlighting in Hugo with Torchlight](https://sebastiandedeyne.com/better-code-highlighting-in-hugo-with-torchlight), and I thought that [Torchlight](https://torchlight.dev) sounded pretty promising.
+
+From Torchlight's [docs](https://torchlight.dev/docs),
+
+> *Torchlight is a VS Code-compatible syntax highlighter that requires no JavaScript, supports every language, every VS Code theme, line highlighting, git diffing, and more.*
+>
+> *Unlike traditional syntax highlighting tools, Torchlight is an HTTP API that tokenizes and highlights your code on our backend server instead of in the visitor's browser.*
+>
+> *We find this to be the easiest and most powerful way to achieve accurate and feature rich syntax highlighting.*
+>
+> *Client-side language parsers are limited in their complexity since they have to run in the browser environment. There are a lot of edge cases that those libraries can't catch.*
+>
+> *Torchlight relies on the VS Code parsing engine and TextMate language grammars to achieve the most accurate results possible. We bring the power of the entire VS Code ecosystem to your docs or blog.*
+
+In short: Code blocks in, formatted HTML out, and no JavaScript or extra code to render this slick display in the browser:
+```toml
+# torchlight! {"lineNumbers": true}
+# netlify.toml
+[build]
+ publish = "public"
+
+[build.environment]
+ HUGO_VERSION = "0.111.3" # [tl! --]
+ HUGO_VERSION = "0.116.1" # [tl! ++ reindex(-1)]
+
+[context.production] # [tl! focus:5 highlight:3,1]
+ command = """
+ hugo --minify
+ npm i @torchlight-api/torchlight-cli
+ npx torchlight
+ """
+
+[context.preview] # [tl! collapse:start]
+ command = """
+ hugo --minify --environment preview
+ npm i @torchlight-api/torchlight-cli
+ npx torchlight
+ """
+ [[headers]]
+ for = "/*"
+ [headers.values]
+ X-Robots-Tag = "noindex"
+
+[[redirects]]
+ from = "/*"
+ to = "/404/"
+ status = 404 # [tl! collapse:end]
+```
+
+Pretty nice, right? That block's got:
+- Colorful, accurate syntax highlighting
+- Traditional line highlighting
+- A shnazzy blur/focus to really make the important lines pop
+- In-line diffs to show what's changed
+- An expandable section to reveal additional context on-demand
+
+And marking-up that code block was pretty easy and intuitive. Torchlight is controlled by [annotations](https://torchlight.dev/docs/annotations) inserted as comments appropriate for whatever language you're using (like `# [tl! highlight]` to highlight a single line). In most cases you can just put the annotation right at the end of the line you're trying to flag. You can also [specify ranges](https://torchlight.dev/docs/annotations/ranges) relative to the current line (`[tl! focus:5]` to apply the focus effect to the current line and the next five) or use `:start` and `:end` so you don't have to count at all.
+```toml
+# torchlight! {"torchlightAnnotations": false}
+# netlify.toml
+[build]
+ publish = "public"
+
+[build.environment]
+ # diff: remove this line
+ HUGO_VERSION = "0.111.3" # [tl! --]
+ # diff: add this line, adjust line numbering to compensate
+ HUGO_VERSION = "0.116.1" # [tl! ++ reindex(-1)]
+
+# focus this line and the following 5, highlight the third line down
+[context.production] # [tl! focus:5 highlight:3,1]
+ command = """
+ hugo --minify
+ npm i @torchlight-api/torchlight-cli
+ npx torchlight
+ """
+
+# collapse everything from `:start` to `:end`
+[context.preview] # [tl! collapse:start]
+ command = """
+ hugo --minify --environment preview
+ npm i @torchlight-api/torchlight-cli
+ npx torchlight
+ """
+ [[headers]]
+ for = "/*"
+ [headers.values]
+ X-Robots-Tag = "noindex"
+
+[[redirects]]
+ from = "/*"
+ to = "/404/"
+ status = 404 # [tl! collapse:end]
+```
+
+See what I mean? Being able to put the annotations directly on the line(s) they modify is a lot easier to manage than trying to keep track of multiple line numbers in the header. And I think the effect is pretty cool.
+
+### Basic setup
+So what did it take to get this working on my blog?
+
+I started with registering for a free[^free] account at [torchlight.dev](https://app.torchlight.dev/register?plan=free_month) and generating an API token. I'll need to include that later with calls to the Torchlight API. The token will be stashed as an environment variable in my Netlify configuration, but I'll also stick it in a local `.env` file for use with local builds:
+```shell
+echo "TORCHLIGHT_TOKEN=torch_[...]" > ./.env # [tl! .cmd]
+```
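+
+As a quick sanity check before moving on (plain shell, nothing Torchlight-specific), I can make sure the token actually loads from that file - the exact variable expansion here is just illustrative:
+```shell
+source .env && echo "${TORCHLIGHT_TOKEN:0:6}..." # [tl! .cmd]
+torch_... # [tl! .nocopy]
+```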
+
+[^free]: Torchlight is free for sites which don't generate revenue, though it does require a link back to `torchlight.dev`. I stuck the attribution link in the footer. More pricing info [here](https://torchlight.dev/#pricing).
+
+#### Installation
+I then used `npm` to install Torchlight in the root of my Hugo repo:
+```shell
+npm i @torchlight-api/torchlight-cli # [tl! .cmd]
+# [tl! .nocopy:1]
+added 94 packages in 5s
+```
+
+That created a few new files and directories that I don't want to sync with the repo, so I added those to my `.gitignore` configuration. I'll also be sure to add that `.env` file so that I don't commit any secrets!
+```text
+# torchlight! {"lineNumbers": true}
+# .gitignore
+.hugo_build.lock
+/node_modules/ [tl! ++:2]
+/package-lock.json
+/package.json
+/public/
+/resources/
+/.env [tl! ++]
+```
+
+The [installation instructions](https://torchlight.dev/docs/clients/cli#init-command) say to then initialize Torchlight like so:
+```shell
+npx torchlight init # [tl! .cmd focus]
+# [tl! .nocopy:start]
+node:internal/fs/utils:350
+ throw err;
+ ^
+
+Error: ENOENT: no such file or directory, open '/home/john/projects/runtimeterror/node_modules/@torchlight-api/torchlight-cli/dist/stubs/config.js' # [tl! focus]
+ at Object.openSync (node:fs:603:3)
+ at Object.readFileSync (node:fs:471:35)
+ at write (/home/john/projects/runtimeterror/node_modules/@torchlight-api/torchlight-cli/dist/bin/torchlight.cjs.js:524:39)
+ at init (/home/john/projects/runtimeterror/node_modules/@torchlight-api/torchlight-cli/dist/bin/torchlight.cjs.js:538:12)
+ at Command. (/home/john/projects/runtimeterror/node_modules/@torchlight-api/torchlight-cli/dist/bin/torchlight.cjs.js:722:12)
+ at Command.listener [as _actionHandler] (/home/john/projects/runtimeterror/node_modules/commander/lib/command.js:488:17)
+ at /home/john/projects/runtimeterror/node_modules/commander/lib/command.js:1227:65
+ at Command._chainOrCall (/home/john/projects/runtimeterror/node_modules/commander/lib/command.js:1144:12)
+ at Command._parseCommand (/home/john/projects/runtimeterror/node_modules/commander/lib/command.js:1227:27)
+ at Command._dispatchSubcommand (/home/john/projects/runtimeterror/node_modules/commander/lib/command.js:1050:25) {
+ errno: -2,
+ syscall: 'open',
+ code: 'ENOENT',
+ path: '/home/john/projects/runtimeterror/node_modules/@torchlight-api/torchlight-cli/dist/stubs/config.js'
+}
+
+Node.js v18.17.1
+# [tl! .nocopy:end]
+```
+
+Oh. Hmm.
+
+There's an [open issue](https://github.com/torchlight-api/torchlight-cli/issues/4) which reveals that the stub config file is actually located under the `src/` directory instead of `dist/`. And it turns out the `init` step isn't strictly necessary anyway; it's just a helper to generate a working config to start from.
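+
+If you want to verify where the stub lives before copying it (the path could move in future releases), a quick `find` will confirm:
+```shell
+find node_modules/@torchlight-api -path '*stubs*' -name 'config.js' # [tl! .cmd]
+node_modules/@torchlight-api/torchlight-cli/src/stubs/config.js # [tl! .nocopy]
+```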
+
+#### Configuration
+Now that I know where the stub config lives, I can simply copy it to my repo root. I'll then get to work modifying it to suit my needs:
+
+```shell
+cp node_modules/@torchlight-api/torchlight-cli/src/stubs/config.js ./torchlight.config.js # [tl! .cmd]
+```
+
+```js
+// torchlight! {"lineNumbers": true}
+// torchlight.config.js
+module.exports = {
+ // Your token from https://torchlight.dev
+ token: process.env.TORCHLIGHT_TOKEN, // this will come from a netlify build var [tl! highlight focus]
+
+ // The Torchlight client caches highlighted code blocks. Here you
+ // can define which directory you'd like to use. You'll likely
+ // want to add this directory to your .gitignore. Set to
+ // `false` to use an in-memory cache. You may also
+ // provide a full cache implementation.
+ cache: 'cache', // [tl! -- focus:1]
+ cache: false, // disable cache for netlify builds [tl! ++ reindex(-1)]
+
+ // Which theme you want to use. You can find all of the themes at
+ // https://torchlight.dev/docs/themes.
+ theme: 'material-theme-palenight', // [tl! -- focus:1]
+ theme: 'one-dark-pro', // switch up the theme [tl! ++ reindex(-1)]
+
+ // The Host of the API.
+ host: 'https://api.torchlight.dev',
+
+ // Global options to control block-level settings.
+ // https://torchlight.dev/docs/options
+ options: {
+ // Turn line numbers on or off globally.
+ lineNumbers: false,
+
+ // Control the `style` attribute applied to line numbers.
+ // lineNumbersStyle: '',
+
+ // Turn on +/- diff indicators.
+ diffIndicators: true,
+
+ // If there are any diff indicators for a line, put them
+ // in place of the line number to save horizontal space.
+ diffIndicatorsInPlaceOfLineNumbers: true // [tl! --]
+ diffIndicatorsInPlaceOfLineNumbers: true, // [tl! ++ reindex(-1)]
+
+ // When lines are collapsed, this is the text that will
+ // be shown to indicate that they can be expanded.
+ // summaryCollapsedIndicator: '...', [tl! --]
+ summaryCollapsedIndicator: 'Click to expand...', // make the collapse a little more explicit [tl! ++ reindex(-1)]
+ },
+
+ // Options for the highlight command.
+ highlight: {
+ // Directory where your un-highlighted source files live. If
+ // left blank, Torchlight will use the current directory.
+ input: '', // [tl! -- focus:1]
+ input: 'public', // tells Torchlight where to find Hugo's processed HTML output [tl! ++ reindex(-1)]
+
+ // Directory where your highlighted files should be placed. If
+ // left blank, files will be modified in place.
+ output: '',
+
+ // Globs to include when looking for files to highlight.
+ includeGlobs: [
+ '**/*.htm',
+ '**/*.html'
+ ],
+
+ // String patterns to ignore (not globs). The entire file
+ // path will be searched and if any of these strings
+ // appear, the file will be ignored.
+ excludePatterns: [
+ '/node_modules/',
+ '/vendor/'
+ ]
+ }
+}
+```
+
+You can find more details about the configuration options [here](https://torchlight.dev/docs/clients/cli#configuration-file).
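+
+Before going any further, I can give the config a quick smoke test (with the token loaded from `.env`) by rendering the site, letting Torchlight loose on the output, and grepping for the classes it injects - the file list below is abbreviated:
+```shell
+hugo --minify # [tl! .cmd:1]
+npx torchlight
+grep -rl 'class="torchlight' public/ | head -n 3 # [tl! .cmd]
+public/index.html # [tl! .nocopy:2]
+public/404/index.html
+public/about/index.html
+```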
+
+#### Stylization
+It's not strictly necessary for the basic functionality, but applying a little bit of extra CSS to match up with the classes leveraged by Torchlight can help to make things look a bit more polished. Fortunately for this _fake-it-til-you-make-it_ dev, Torchlight provides sample CSS snippets that work great for this:
+
+- [Basic CSS](https://torchlight.dev/docs/css) for generally making things look tidy
+- [Focus CSS](https://torchlight.dev/docs/annotations/focusing#css) for that slick blur/focus effect
+- [Collapse CSS](https://torchlight.dev/docs/annotations/collapsing#required-css) for some accordion action
+
+Put those blocks together (along with a few minor tweaks), and here's what I started with in `assets/css/torchlight.css`:
+```css
+// torchlight! {"lineNumbers": true}
+
+/*********************************************
+* Basic styling for Torchlight code blocks. *
+**********************************************/
+
+/*
+ Margin and rounding are personal preferences,
+ overflow-x-auto is recommended.
+*/
+pre {
+ border-radius: 0.25rem;
+ margin-top: 1rem;
+ margin-bottom: 1rem;
+ overflow-x: auto;
+}
+
+/*
+ Add some vertical padding and expand the width
+ to fill its container. The horizontal padding
+ comes at the line level so that background
+ colors extend edge to edge.
+*/
+pre.torchlight {
+ display: block;
+ min-width: -webkit-max-content;
+ min-width: -moz-max-content;
+ min-width: max-content;
+ padding-top: 1rem;
+ padding-bottom: 1rem;
+}
+
+/*
+ Horizontal line padding to match the vertical
+ padding from the code block above.
+*/
+pre.torchlight .line {
+ padding-left: 1rem;
+ padding-right: 1rem;
+}
+
+/*
+ Push the code away from the line numbers and
+ summary caret indicators.
+*/
+pre.torchlight .line-number,
+pre.torchlight .summary-caret {
+ margin-right: 1rem;
+}
+
+/*********************************************
+* Focus styling *
+**********************************************/
+
+/*
+ Blur and dim the lines that don't have the `.line-focus` class,
+ but are within a code block that contains any focus lines.
+*/
+.torchlight.has-focus-lines .line:not(.line-focus) {
+ transition: filter 0.35s, opacity 0.35s;
+ filter: blur(.095rem);
+ opacity: .65;
+}
+
+/*
+ When the code block is hovered, bring all the lines into focus.
+*/
+.torchlight.has-focus-lines:hover .line:not(.line-focus) {
+ filter: blur(0px);
+ opacity: 1;
+}
+
+/*********************************************
+* Collapse styling *
+**********************************************/
+
+.torchlight summary:focus {
+ outline: none;
+}
+
+/* Hide the default markers, as we provide our own */
+.torchlight details > summary::marker,
+.torchlight details > summary::-webkit-details-marker {
+ display: none;
+}
+
+.torchlight details .summary-caret::after {
+ pointer-events: none;
+}
+
+/* Add spaces to keep everything aligned */
+.torchlight .summary-caret-empty::after,
+.torchlight details .summary-caret-middle::after,
+.torchlight details .summary-caret-end::after {
+ content: " ";
+}
+
+/* Show a minus sign when the block is open. */
+.torchlight details[open] .summary-caret-start::after {
+ content: "-";
+}
+
+/* And a plus sign when the block is closed. */
+.torchlight details:not([open]) .summary-caret-start::after {
+ content: "+";
+}
+
+/* Hide the [...] indicator when open. */
+.torchlight details[open] .summary-hide-when-open {
+ display: none;
+}
+
+/* Show the [...] indicator when closed. */
+.torchlight details:not([open]) .summary-hide-when-open {
+ display: initial;
+}
+
+/*********************************************
+* Additional styling *
+**********************************************/
+
+/* Fix for disjointed horizontal scrollbars */
+.highlight div {
+ overflow-x: visible;
+}
+```
+
+I'll make sure that this CSS gets dynamically attached to any pages with a code block by adding this to the bottom of my `layouts/partials/head.html`:
+```html
+{{ if (findRE "<pre" .Content 1) }}
+  {{ $torchlightCss := resources.Get "css/torchlight.css" | minify }}
+  <link href="{{ $torchlightCss.RelPermalink }}" rel="stylesheet">
+{{ end }}
+```
+
+As a bit of housekeeping, I'm also going to remove the built-in highlighter configuration from my `config/_default/markup.toml` file to make sure it doesn't conflict with Torchlight:
+```toml
+# torchlight! {"lineNumbers": true}
+# config/_default/markup.toml
+[goldmark]
+ [goldmark.renderer]
+ hardWraps = false
+ unsafe = true
+ xhtml = false
+ [goldmark.extensions]
+ typographer = false
+
+[highlight] # [tl! --:start]
+ anchorLineNos = true
+ codeFences = true
+ guessSyntax = true
+ hl_Lines = ''
+ lineNos = false
+ lineNoStart = 1
+ lineNumbersInTable = false
+ noClasses = false
+ tabwidth = 2
+ style = 'monokai'
+# [tl! --:end]
+# Table of contents # [tl! reindex(10)]
+# Add toc = true to content front matter to enable
+[tableOfContents]
+ endLevel = 5
+ ordered = false
+ startLevel = 3
+```
+
+### Building
+Now that the pieces are in place, it's time to start building!
+
+#### Local
+I like to preview my blog as I work on it so that I know what it will look like before I hit `git push` and let Netlify do its magic. And Hugo has been fantastic for that! But since I'm offloading the syntax highlighting to the Torchlight API, I'll need to manually build the site instead of relying on Hugo's instant preview builds.
+
+There are a couple of steps I'll use for this:
+1. First, I'll `source .env` to load the `TORCHLIGHT_TOKEN` for the API.
+2. Then, I'll use `hugo --minify --environment local -D` to render my site into the `public/` directory.
+3. Next, I'll call `npx torchlight` to parse the HTML files in `public/`, extract the content of any `<pre>`/`<code>` blocks, send it to the Torchlight API to work the magic, and write the formatted code blocks back to the existing HTML files.
+4. Finally, I use `python3 -m http.server --directory public 1313` to serve the `public/` directory so I can view the content at `http://localhost:1313`.
+
+I'm lazy, though, so I'll even put that into a quick `build.sh` script to help me run local builds:
+```shell
+# torchlight! {"lineNumbers": true}
+#!/usr/bin/env bash
+# Quick script to run local builds
+source .env
+hugo --minify --environment local -D
+npx torchlight
+python3 -m http.server --directory public 1313
+```
+
+Now I can just make the script executable and fire it off:
+```shell
+chmod +x build.sh # [tl! focus:3 .cmd:1]
+./build.sh
+Start building sites … # [tl! .nocopy:start]
+hugo v0.111.3+extended linux/amd64 BuildDate=unknown VendorInfo=nixpkgs
+
+ | EN
+-------------------+------
+ Pages | 202
+ Paginator pages | 0
+ Non-page files | 553
+ Static files | 49
+ Processed images | 0
+ Aliases | 5
+ Sitemaps | 1
+ Cleaned | 0
+
+Total in 248 ms
+Highlighting index.html
+Highlighting 3d-modeling-and-printing-on-chrome-os/index.html
+Highlighting 404/index.html
+Highlighting about/index.html # [tl! collapse:start]
+
+ + + + O
+ o '
+ ________________ _
+ \__(=======/_=_/____.--'-`--.___
+ \ \ `,--,-.___.----'
+ .--`\\--'../ |
+ '---._____.|] -0- |o
+ * | -0- -O-
+ ' o 0 | '
+ . -0- . '
+
+Did you really want to see the full file list?
+
+Highlighting tags/vsphere/index.html # [tl! collapse:end]
+Highlighting tags/windows/index.html
+Highlighting tags/wireguard/index.html
+Highlighting tags/wsl/index.html # [tl! focus:1]
+Writing to /home/john/projects/runtimeterror/public/abusing-chromes-custom-search-engines-for-fun-and-profit/index.html
+Writing to /home/john/projects/runtimeterror/public/auto-connect-to-protonvpn-on-untrusted-wifi-with-tasker/index.html
+Writing to /home/john/projects/runtimeterror/public/cat-file-without-comments/index.html # [tl! collapse:start]
+
+ ' * + -O- |
+ o o .
+ ___________ 0 o .
+ +/-/_"/-/_/-/| -0- o -O- * *
+ /"-/-_"/-_//|| . -O-
+ /__________/|/| + | *
+ |"|_'='-]:+|/|| . o -0- . *
+ |-+-|.|_'-"||// + | | ' ' 0
+ |[".[:!+-'=|// | -0- 0 -O-
+ |='!+|-:]|-|/ -0- o |-0- 0 -O-
+ ---------- * | -O| + o
+ o -O- -0- -0- -O-
+ | + | -O- |
+ -0- -0- . O
+ -O- | -O- *
+your code will be assimilated
+
+Writing to /home/john/projects/runtimeterror/public/k8s-on-vsphere-node-template-with-packer/index.html # [tl! collapse:end]
+Writing to /home/john/projects/runtimeterror/public/tanzu-community-edition-k8s-homelab/index.html
+Serving HTTP on 0.0.0.0 port 1313 (http://0.0.0.0:1313/) ... # [tl! focus:1]
+127.0.0.1 - - [07/Nov/2023 20:34:29] "GET /spotlight-on-torchlight/ HTTP/1.1" 200 -
+```
+
+#### Netlify
+Setting up Netlify to leverage the Torchlight API is kind of similar. I'll start by logging in to the [Netlify dashboard](https://app.netlify.com) and navigating to **Site Configuration > Environment Variables**. There, I'll click on **Add a variable > Add a single variable**. I'll give the new variable a key of `TORCHLIGHT_TOKEN` and set its value to the token I obtained earlier.
+
+![](netlify-env-var.png)
+
+Once that's done, I edit the `netlify.toml` file at the root of my site repo to alter the build commands:
+```toml
+# torchlight! {"lineNumbers": true}
+[build]
+ publish = "public"
+
+[build.environment]
+ HUGO_VERSION = "0.111.3"
+
+[context.production] # [tl! focus:6]
+ command = "hugo" # [tl! -- ++:1,5 reindex(-1):1,1]
+ command = """
+ hugo --minify
+ npm i @torchlight-api/torchlight-cli
+ npx torchlight
+ """
+```
+
+Now when I `git push` new content, Netlify will use Hugo to build the site, then install and call Torchlight to `++fancy;` the code blocks before the site gets served. Very nice!
+
+### #Goals
+Of course, I. Just. Can't. leave well enough alone, so my work here isn't finished - not by a long shot.
+
+You see, I'm a sucker for handy "copy" buttons attached to code blocks, and that's not something that Torchlight does (it just returns rendered HTML, remember? No fancy JavaScript here). I also wanted to add informative prompt indicators (like `$` and `#`) to code blocks representing command-line inputs (rather than script files). And I'd like to flag text returned by a command so that *only* the commands get copied, effectively ignoring the returned text, diff-removed lines, diff markers, line numbers, and prompt indicators.
+
+I had previously implemented a solution based *heavily* on Justin James' blog post, [Hugo - Dynamically Add Copy Code Snippet Button](https://digitaldrummerj.me/hugo-add-copy-code-snippet-button/). Getting that Chroma-focused solution to work well with Torchlight-formatted code blocks took some work, particularly since I'm inept at web development and can barely spell "CSS" and "JavaScrapped".
+
+But I[^copilot] eventually fumbled through the changes required to meet my #goals, and I'm pretty happy with how it all works.
+
+[^copilot]: With a little help from my Copilot buddy...
+
+#### Custom classes
+Remember Torchlight's in-line annotations that I mentioned earlier? They're pretty capable out of the box, but can also be expanded through the use of [custom classes](https://torchlight.dev/docs/annotations/classes). This makes it easy to apply special handling to selected lines of code, something that's otherwise pretty dang tricky to do with Chroma.
+
+So, for instance, I could add a class `.cmd` for standard user-level command-line inputs:
+```shell
+# torchlight! {"torchlightAnnotations":false}
+sudo make me a sandwich # [tl! .cmd]
+```
+```shell
+sudo make me a sandwich # [tl! .cmd]
+```
+
+Or `.cmd_root` for a root prompt:
+```shell
+# torchlight! {"torchlightAnnotations": false}
+wall "Make your own damn sandwich." # [tl! .cmd_root]
+```
+```shell
+wall "Make your own damn sandwich." # [tl! .cmd_root]
+```
+
+And for deviants:
+```powershell
+# torchlight! {"torchlightAnnotations": false}
+Write-Host -ForegroundColor Green "A taco is a sandwich" # [tl! .cmd_pwsh]
+```
+```powershell
+Write-Host -ForegroundColor Green "A taco is a sandwich" # [tl! .cmd_pwsh]
+```
+
+I also came up with a cleverly-named `.nocopy` class for the returned lines that shouldn't be copyable:
+```shell
+# torchlight! {"torchlightAnnotations": false}
+copy this # [tl! .cmd]
+but not this # [tl! .nocopy]
+```
+```shell
+copy this # [tl! .cmd]
+but not this # [tl! .nocopy]
+```
+
+So that's how I'll tie my custom classes to individual lines of code[^ranges], but I still need to actually define those classes.
+
+I'll drop those at the bottom of the `assets/css/torchlight.css` file I created earlier:
+
+```css
+// torchlight! {"lineNumbers": true}
+/* [tl! collapse:start]
+/*********************************************
+* Basic styling for Torchlight code blocks. *
+**********************************************/
+
+/*
+ Margin and rounding are personal preferences,
+ overflow-x-auto is recommended.
+*/
+pre {
+ border-radius: 0.25rem;
+ margin-top: 1rem;
+ margin-bottom: 1rem;
+ overflow-x: auto;
+}
+
+/*
+ Add some vertical padding and expand the width
+ to fill its container. The horizontal padding
+ comes at the line level so that background
+ colors extend edge to edge.
+*/
+pre.torchlight {
+ display: block;
+ min-width: -webkit-max-content;
+ min-width: -moz-max-content;
+ min-width: max-content;
+ padding-top: 1rem;
+ padding-bottom: 1rem;
+}
+
+/*
+ Horizontal line padding to match the vertical
+ padding from the code block above.
+*/
+pre.torchlight .line {
+ padding-left: 1rem;
+ padding-right: 1rem;
+}
+
+/*
+ Push the code away from the line numbers and
+ summary caret indicators.
+*/
+pre.torchlight .line-number,
+pre.torchlight .summary-caret {
+ margin-right: 1rem;
+}
+
+/*********************************************
+* Focus styling *
+**********************************************/
+
+/*
+ Blur and dim the lines that don't have the `.line-focus` class,
+ but are within a code block that contains any focus lines.
+*/
+.torchlight.has-focus-lines .line:not(.line-focus) {
+ transition: filter 0.35s, opacity 0.35s;
+ filter: blur(.095rem);
+ opacity: .65;
+}
+
+/*
+ When the code block is hovered, bring all the lines into focus.
+*/
+.torchlight.has-focus-lines:hover .line:not(.line-focus) {
+ filter: blur(0px);
+ opacity: 1;
+}
+
+/*********************************************
+* Collapse styling *
+**********************************************/
+
+.torchlight summary:focus {
+ outline: none;
+}
+
+/* Hide the default markers, as we provide our own */
+.torchlight details > summary::marker,
+.torchlight details > summary::-webkit-details-marker {
+ display: none;
+}
+
+.torchlight details .summary-caret::after {
+ pointer-events: none;
+}
+
+/* Add spaces to keep everything aligned */
+.torchlight .summary-caret-empty::after,
+.torchlight details .summary-caret-middle::after,
+.torchlight details .summary-caret-end::after {
+ content: " ";
+}
+
+/* Show a minus sign when the block is open. */
+.torchlight details[open] .summary-caret-start::after {
+ content: "-";
+}
+
+/* And a plus sign when the block is closed. */
+.torchlight details:not([open]) .summary-caret-start::after {
+ content: "+";
+}
+
+/* Hide the [...] indicator when open. */
+.torchlight details[open] .summary-hide-when-open {
+ display: none;
+}
+
+/* Show the [...] indicator when closed. */
+.torchlight details:not([open]) .summary-hide-when-open {
+ display: initial;
+} /* [tl! collapse:end]
+
+/*********************************************
+* Additional styling *
+**********************************************/
+
+/* Fix for disjointed horizontal scrollbars */
+.highlight div {
+ overflow-x: visible;
+}
+
+/* [tl! focus:start]
+Insert prompt indicators on interactive shells.
+*/
+.cmd::before {
+ color: var(--base07);
+ content: "$ ";
+}
+
+.cmd_root::before {
+ color: var(--base08);
+ content: "# ";
+}
+
+.cmd_pwsh::before {
+ color: var(--base07);
+ content: "PS> ";
+}
+
+/*
+Don't copy shell outputs
+*/
+.nocopy {
+  -webkit-user-select: none;
+ user-select: none;
+} /* [tl! focus:end]
+```
+
+[^ranges]: Or ranges of lines, using the same syntax as before: `[tl! .nocopy:5]` will make this line and the following five uncopyable.
+
+The `.cmd` classes will simply insert the respective prompt _before_ each flagged line, and the `.nocopy` class will prevent those lines from being selected (and copied). Now for the tricky part...
+
+#### Copy that blocky
+There are two major pieces for the code-copy wizardry: the CSS to style/arrange the copy button and language label, and the JavaScript to make it work.
+
+I put the CSS in `assets/css/code-copy-button.css`:
+
+```css
+// torchlight! {"lineNumbers": true}
+/* adapted from https://digitaldrummerj.me/hugo-add-copy-code-snippet-button/ */
+
+.highlight {
+ position: relative;
+ z-index: 0;
+ padding: 0;
+ margin:40px 0 10px 0;
+ border-radius: 4px;
+}
+
+.copy-code-button {
+ position: absolute;
+ z-index: -1;
+ right: 0px;
+ top: -26px;
+ font-size: 13px;
+ font-weight: 700;
+ line-height: 14px;
+ letter-spacing: 0.5px;
+ width: 65px;
+ color: var(--fg);
+ background-color: var(--bg);
+ border: 1.25px solid var(--off-bg);
+ border-top-left-radius: 4px;
+ border-top-right-radius: 4px;
+ border-bottom-right-radius: 0px;
+ border-bottom-left-radius: 0px;
+ white-space: nowrap;
+ padding: 6px 6px 7px 6px;
+ margin: 0 0 0 1px;
+ cursor: pointer;
+ opacity: 0.6;
+}
+
+.copy-code-button:hover,
+.copy-code-button:focus,
+.copy-code-button:active,
+.copy-code-button:active:hover {
+ color: var(--off-bg);
+ background-color: var(--off-fg);
+ opacity: 0.8;
+}
+
+.copyable-text-area {
+ position: absolute;
+ height: 0;
+ z-index: -1;
+ opacity: .01;
+}
+
+.torchlight [data-lang]:before {
+ position: absolute;
+ z-index: -1;
+ top: -26px;
+ left: 0px;
+ content: attr(data-lang);
+ font-size: 13px;
+ font-weight: 700;
+ color: var(--fg);
+ background-color: var(--bg);
+ border-top-left-radius: 4px;
+ border-top-right-radius: 4px;
+ border-bottom-left-radius: 0;
+ border-bottom-right-radius: 0;
+ padding: 6px 6px 7px 6px;
+ line-height: 14px;
+ opacity: 0.6;
+ letter-spacing: 0.5px;
+ border: 1.25px solid var(--off-bg);
+ margin: 0 0 0 1px;
+}
+```
+
+And, as before, I'll link this from the bottom of my `layouts/partials/head.html` so it will get loaded on the appropriate pages:
+
+```html
+{{ if (findRE "<pre" .Content 1) }}
+  {{ $copyCss := resources.Get "css/code-copy-button.css" | minify }}
+  <link href="{{ $copyCss.RelPermalink }}" rel="stylesheet">
+{{ end }}
+```
+
+#### Code behind the copy
+That sure makes the code blocks and accompanying button / labels look pretty great, but I still need to actually make the button work. For that, I'll need some JavaScript that (again) largely comes from Justin's post.
+
+With all the different classes and things used with Torchlight, it took a lot of (generally misguided) tinkering for me to get the script to copy just the text I wanted (and nothing else). I learned a ton in the process, so I've highlighted the major deviations from Justin's script.
+
+Anyway, here's my `assets/js/code-copy-button.js`:
+```javascript
+// torchlight! {"lineNumbers": true}
+// adapted from https://digitaldrummerj.me/hugo-add-copy-code-snippet-button/
+
+function createCopyButton(highlightDiv) {
+ const button = document.createElement("button");
+ button.className = "copy-code-button";
+ button.type = "button";
+ button.innerText = "Copy";
+ button.addEventListener("click", () => copyCodeToClipboard(button, highlightDiv));
+ highlightDiv.insertBefore(button, highlightDiv.firstChild);
+ const wrapper = document.createElement("div");
+ wrapper.className = "highlight-wrapper";
+ highlightDiv.parentNode.insertBefore(wrapper, highlightDiv);
+ wrapper.appendChild(highlightDiv);
+}
+
+document.querySelectorAll(".highlight").forEach((highlightDiv) => createCopyButton(highlightDiv)); // [tl! focus:start]
+
+async function copyCodeToClipboard(button, highlightDiv) {
+ // capture all code lines in the selected block which aren't classed `nocopy` or `line-remove`
+ let codeToCopy = highlightDiv.querySelectorAll(":last-child > .torchlight > code > .line:not(.nocopy, .line-remove)");
+ // now remove the first-child of each line if it is of class `line-number`
+ codeToCopy = Array.from(codeToCopy).reduce((accumulator, line) => {
+ if (line.firstChild.className != "line-number") {
+ return accumulator + line.innerText + "\n"; }
+ else {
+ return accumulator + Array.from(line.children).filter(
+ (child) => child.className != "line-number").reduce(
+ (accumulator, child) => accumulator + child.innerText, "") + "\n";
+ }
+ }, ""); // [tl! focus:end]
+  try {
+    var result = await navigator.permissions.query({ name: "clipboard-write" });
+    if (result.state == "granted" || result.state == "prompt") {
+      await navigator.clipboard.writeText(codeToCopy);
+      button.innerText = "Copied!"; // only report success after the write actually happens
+    } else {
+      button.innerText = "Error!";
+    }
+  } catch (_) {
+    button.innerText = "Error!";
+  } finally {
+    button.blur(); // reset the button either way
+    setTimeout(function () {
+      button.innerText = "Copy";
+    }, 2000);
+  }
+}
+
+```
+
+And this script gets called from the bottom of my `layouts/partials/footer.html`:
+```html
+{{ if (findRE "<pre" .Content 1) }}
+  {{ $copyJs := resources.Get "js/code-copy-button.js" | minify }}
+  <script src="{{ $copyJs.RelPermalink }}"></script>
+{{ end }}
+```
+
+### Going live!
+
+And at this point, I can just run my `build.sh` script again to rebuild the site locally and verify that it works as well as I think it does.
+
+It looks pretty good to me, so I'll go ahead and push this up to Netlify. If all goes well, this post and the new code block styling will go live at the same time.
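+
+That push is nothing fancy, just the standard git routine (with a commit message of your choosing):
+```shell
+git add . # [tl! .cmd:2]
+git commit -m "spotlight on torchlight"
+git push
+```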
+
+See you on the other side!
\ No newline at end of file
diff --git a/content/posts/spotlight-on-torchlight/netlify-env-var.png b/content/posts/spotlight-on-torchlight/netlify-env-var.png
new file mode 100644
index 0000000..ab5204f
Binary files /dev/null and b/content/posts/spotlight-on-torchlight/netlify-env-var.png differ
diff --git a/content/posts/systemctl-edit-delay-service-startup/index.md b/content/posts/systemctl-edit-delay-service-startup/index.md
index 82462fe..ba3f025 100644
--- a/content/posts/systemctl-edit-delay-service-startup/index.md
+++ b/content/posts/systemctl-edit-delay-service-startup/index.md
@@ -2,7 +2,6 @@
title: "Using `systemctl edit` to Delay Service Startup"
date: 2023-10-15
# lastmod: 2023-10-15
-draft: true
description: "Quick notes on using `systemctl edit` to override a systemd service to delay its startup."
featured: false
toc: false
@@ -17,7 +16,7 @@ Following a recent update, I found that the [Linux development environment](http
Fortunately, it turns out that overriding the service to insert a short startup delay is really easy. I'll just use the `systemctl edit` command to create a quick override configuration:
```shell
-sudo systemctl edit tailscaled
+sudo systemctl edit tailscaled # [tl! .cmd]
```
This shows me the existing contents of the `tailscaled.service` definition so I can easily insert some overrides above. In this case, I just want to use `sleep 5` as the `ExecStartPre` command so that the service start will be delayed by 5 seconds:
diff --git a/content/posts/tailscale-golink-private-shortlinks-tailnet/index.md b/content/posts/tailscale-golink-private-shortlinks-tailnet/index.md
index f512f4b..bcbcf3c 100644
--- a/content/posts/tailscale-golink-private-shortlinks-tailnet/index.md
+++ b/content/posts/tailscale-golink-private-shortlinks-tailnet/index.md
@@ -36,36 +36,36 @@ Sounds great - but how do you actually make golink available on your tailnet? We
There are three things I'll need to do in the Tailscale admin portal before moving on:
#### Create an ACL tag
I assign ACL tags to devices in my tailnet based on their location and/or purpose, and I'm then able to use those in a policy to restrict access between certain devices. To that end, I'm going to create a new `tag:golink` tag for this purpose. Creating a new tag in Tailscale is really just going to the [Access Controls page of the admin console](https://login.tailscale.com/admin/acls) and editing the policy to specify a `tagOwner` who is permitted to assign the tag:
-```text {hl_lines=[11]}
- "groups":
- "group:admins": ["john@example.com"],
- },
- "tagOwners": {
- "tag:home": ["group:admins"],
- "tag:cloud": ["group:admins"],
- "tag:client": ["group:admins"],
- "tag:dns": ["group:admins"],
- "tag:rsync": ["group:admins"],
- "tag:funnel": ["group:admins"],
- "tag:golink": ["group:admins"],
- },
+```json
+  "groups": {
+ "group:admins": ["john@example.com"],
+ },
+ "tagOwners": {
+ "tag:home": ["group:admins"],
+ "tag:cloud": ["group:admins"],
+ "tag:client": ["group:admins"],
+ "tag:dns": ["group:admins"],
+ "tag:rsync": ["group:admins"],
+ "tag:funnel": ["group:admins"],
+ "tag:golink": ["group:admins"], // [tl! highlight]
+ },
```
#### Configure ACL access
This step is really only necessary since I've altered the default Tailscale ACL to prevent my nodes from communicating with each other unless specifically permitted. I want to make sure that everything on my tailnet can access golink:
-```text
+```json
"acls": [
- {
- // make golink accessible to everything
- "action": "accept",
- "users": ["*"],
- "ports": [
- "tag:golink:80",
- ],
- },
+ {
+ // make golink accessible to everything
+ "action": "accept",
+ "users": ["*"],
+ "ports": [
+ "tag:golink:80",
+ ],
+ },
...
- ],
+ ],
```
#### Create an auth key
@@ -81,19 +81,20 @@ After clicking the **Generate key** button, the key will be displayed. This is t
### Docker setup
The [golink repo](https://github.com/tailscale/golink) offers this command for running the container:
```shell
-docker run -it --rm ghcr.io/tailscale/golink:main
+docker run -it --rm ghcr.io/tailscale/golink:main # [tl! .cmd]
```
The doc also indicates that I can pass the auth key to the golink service via the `TS_AUTHKEY` environment variable, and that all the configuration will be stored in `/home/nonroot` (which will be owned by uid/gid `65532`). I'll take this knowledge and use it to craft a `docker-compose.yml` to simplify container management.
```shell
-mkdir -p golink/data
+mkdir -p golink/data # [tl! .cmd:3]
cd golink
chown 65532:65532 data
vi docker-compose.yaml
```
```yaml
+# torchlight! {"lineNumbers": true}
# golink docker-compose.yaml
version: '3'
services:
@@ -138,9 +139,7 @@ Some of my other golinks:
| `ipam` | `https://ipam.lab.bowdre.net/{{with .Path}}tools/search/{{.}}{{end}}` | searches my lab phpIPAM instance |
| `pdb` | `https://www.protondb.com/{{with .Path}}search?q={{.}}{{end}}` | searches [protondb](https://www.protondb.com/), super-handy for checking game compatibility when [Tailscale is installed on a Steam Deck](https://tailscale.com/blog/steam-deck/) |
| `tailnet` | `https://login.tailscale.com/admin/machines?q={{.Path}}` | searches my Tailscale admin panel for a machine name |
-| `vpot8` | `https://www.virtuallypotato.com/{{with .Path}}search?query={{.}}{{end}}` | searches this here site |
| `sho` | `https://www.shodan.io/{{with .Path}}search?query={{.}}{{end}}` | searches Shodan for interesting internet-connected systems |
-| `tools` | `https://neeva.com/spaces/m_Bhx8tPfYQbOmaW1UHz-3a_xg3h2amlogo2GzgD` | shortcut to my [Tech Toolkit space](https://neeva.com/spaces/m_Bhx8tPfYQbOmaW1UHz-3a_xg3h2amlogo2GzgD) on Neeva |
| `randpass` | `https://www.random.org/passwords/?num=1\u0026len=24\u0026format=plain\u0026rnd=new` | generates a random 24-character string suitable for use as a password (`curl`-friendly) |
| `wx` | `https://wttr.in/{{ .Path }}` | local weather report based on geolocation or weather for a designated city (`curl`-friendly) |
@@ -149,7 +148,7 @@ You can browse to `go/.export` to see a JSON-formatted listing of all configured
To restore, just pass `--snapshot /path/to/links.json` when starting golink. What I usually do is copy the file into the `data` folder that I'm mounting as a Docker volume, and then just run:
```shell
-sudo docker exec golink /golink --sqlitedb /home/nonroot/golink.db --snapshot /home/nonroot/links.json
+sudo docker exec golink /golink --sqlitedb /home/nonroot/golink.db --snapshot /home/nonroot/links.json # [tl! .cmd]
```
### Conclusion
diff --git a/content/posts/tailscale-on-vmware-photon/index.md b/content/posts/tailscale-on-vmware-photon/index.md
index 99b259a..d9fc3f7 100644
--- a/content/posts/tailscale-on-vmware-photon/index.md
+++ b/content/posts/tailscale-on-vmware-photon/index.md
@@ -31,20 +31,20 @@ Here's a condensed list of the [steps that I took to manually install Tailscale]
1. Visit [https://pkgs.tailscale.com/stable/#static](https://pkgs.tailscale.com/stable/#static) to see the latest stable version for your system architecture, and copy the URL. For instance, I'll be using `https://pkgs.tailscale.com/stable/tailscale_1.34.1_arm64.tgz`.
2. Download and extract it to the system:
```shell
-wget https://pkgs.tailscale.com/stable/tailscale_1.34.1_arm64.tgz
+wget https://pkgs.tailscale.com/stable/tailscale_1.34.1_arm64.tgz # [tl! .cmd:2]
tar xvf tailscale_1.34.1_arm64.tgz
cd tailscale_1.34.1_arm64/
```
3. Install the binaries and service files:
```shell
-sudo install -m 755 tailscale /usr/bin/
+sudo install -m 755 tailscale /usr/bin/ # [tl! .cmd:4]
sudo install -m 755 tailscaled /usr/sbin/
sudo install -m 644 systemd/tailscaled.defaults /etc/default/tailscaled
sudo install -m 644 systemd/tailscaled.service /usr/lib/systemd/system/
```
4. Start the service:
```shell
-sudo systemctl enable tailscaled
+sudo systemctl enable tailscaled # [tl! .cmd:1]
sudo systemctl start tailscaled
```
diff --git a/content/posts/tanzu-community-edition-k8s-homelab/index.md b/content/posts/tanzu-community-edition-k8s-homelab/index.md
index c0e2440..f63a9cc 100644
--- a/content/posts/tanzu-community-edition-k8s-homelab/index.md
+++ b/content/posts/tanzu-community-edition-k8s-homelab/index.md
@@ -68,9 +68,9 @@ I've already got Docker installed on this machine, but if I didn't I would follo
I also verify that my install is using `cgroup` version 1 as version 2 is not currently supported:
-```bash
-❯ docker info | grep -i cgroup
- Cgroup Driver: cgroupfs
+```shell
+docker info | grep -i cgroup # [tl! .cmd]
+ Cgroup Driver: cgroupfs # [tl! .nocopy:1]
Cgroup Version: 1
```
@@ -79,60 +79,49 @@ Next up, I'll install `kubectl` [as described here](https://kubernetes.io/docs/t
I can look at the [releases page on GitHub](https://github.com/kubernetes/kubernetes/releases) to see that the latest release for me is `1.22.5`. With this newfound knowledge I can follow the [Install kubectl binary with curl on Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#install-kubectl-binary-with-curl-on-linux) instructions to grab that specific version:
-```bash
-❯ curl -LO https://dl.k8s.io/release/v1.22.5/bin/linux/amd64/kubectl
-
- % Total % Received % Xferd Average Speed Time Time Time Current
- Dload Upload Total Spent Left Speed
-100 154 100 154 0 0 2298 0 --:--:-- --:--:-- --:--:-- 2298
-100 44.7M 100 44.7M 0 0 56.9M 0 --:--:-- --:--:-- --:--:-- 56.9M
-
-❯ sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
-
+```shell
+curl -sLO https://dl.k8s.io/release/v1.22.5/bin/linux/amd64/kubectl # [tl! .cmd:1]
+sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
+# [tl! .nocopy:2]
[sudo] password for john:
-❯ kubectl version --client
-Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.5", GitCommit:"5c99e2ac2ff9a3c549d9ca665e7bc05a3e18f07e", GitTreeState:"clean", BuildDate:"2021-12-16T08:38:33Z", GoVersion:"go1.16.12", Compiler:"gc", Platform:"linux/amd64"}
+kubectl version --client # [tl! .cmd]
+Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.5", # [tl! .nocopy:3]
+ GitCommit:"5c99e2ac2ff9a3c549d9ca665e7bc05a3e18f07e", GitTreeState:"clean",
+ BuildDate:"2021-12-16T08:38:33Z", GoVersion:"go1.16.12", Compiler:"gc",
+ Platform:"linux/amd64"}
```
#### `kind` binary
It's not strictly a requirement, but having the `kind` executable available will be handy for troubleshooting during the bootstrap process in case anything goes sideways. It can be installed in basically the same way as `kubectl`:
-```bash
-❯ curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.11.1/kind-linux-amd64
-
- % Total % Received % Xferd Average Speed Time Time Time Current
- Dload Upload Total Spent Left Speed
-100 98 100 98 0 0 513 0 --:--:-- --:--:-- --:--:-- 513
-100 655 100 655 0 0 2212 0 --:--:-- --:--:-- --:--:-- 10076
-100 6660k 100 6660k 0 0 11.8M 0 --:--:-- --:--:-- --:--:-- 11.8M
-
-❯ sudo install -o root -g root -m 0755 kind /usr/local/bin/kind
-
-❯ kind version
-kind v0.11.1 go1.16.5 linux/amd64
+```shell
+curl -sLo ./kind https://kind.sigs.k8s.io/dl/v0.11.1/kind-linux-amd64 # [tl! .cmd:2]
+sudo install -o root -g root -m 0755 kind /usr/local/bin/kind
+kind version
+kind v0.11.1 go1.16.5 linux/amd64 # [tl! .nocopy]
```
#### Tanzu CLI
The final bit of required software is the Tanzu CLI, which can be downloaded from the [project on GitHub](https://github.com/vmware-tanzu/community-edition/releases).
-```bash
-curl -H "Accept: application/vnd.github.v3.raw" \
- -L https://api.github.com/repos/vmware-tanzu/community-edition/contents/hack/get-tce-release.sh | \
- bash -s v0.9.1 linux
+```shell
+curl -H "Accept: application/vnd.github.v3.raw" \ # [tl! .cmd]
+ -L https://api.github.com/repos/vmware-tanzu/community-edition/contents/hack/get-tce-release.sh | \
+ bash -s v0.9.1 linux
```
And then unpack it and run the installer:
-```bash
-tar xf tce-linux-amd64-v0.9.1.tar.gz
+```shell
+tar xf tce-linux-amd64-v0.9.1.tar.gz # [tl! .cmd:2]
cd tce-linux-amd64-v0.9.1
./install.sh
```
I can then verify the installation is working correctly:
-```bash
-❯ tanzu version
-version: v0.2.1
+```shell
+tanzu version # [tl! .cmd]
+version: v0.2.1 # [tl! .nocopy:2]
buildDate: 2021-09-29
sha: ceaa474
```
@@ -142,15 +131,15 @@ Okay, now it's time for the good stuff - creating some shiny new Tanzu clusters!
#### Management cluster
I need to create a Management cluster first and I'd like to do that with the UI, so that's as simple as:
-```bash
-tanzu management-cluster create --ui
+```shell
+tanzu management-cluster create --ui # [tl! .cmd]
```
I should then be able to access the UI by pointing a web browser at `http://127.0.0.1:8080`... but I'm running this on a VM without a GUI, so I'll need to back up and tell it to bind on `0.0.0.0:8080` so the web installer will be accessible across the network. I can also include `--browser none` so that the installer doesn't bother with trying to launch a browser locally.
-```bash
-❯ tanzu management-cluster create --ui --bind 0.0.0.0:8080 --browser none
-
+```shell
+tanzu management-cluster create --ui --bind 0.0.0.0:8080 --browser none # [tl! .cmd]
+# [tl! .nocopy:2]
Validating the pre-requisites...
Serving kickstart UI at http://[::]:8080
```
@@ -186,20 +175,22 @@ I skip the Tanzu Mission Control piece (since I'm still waiting on access to [TM
See the option at the bottom to copy the CLI command? I'll need to use that since clicking the friendly **Deploy** button doesn't seem to work while connected to the web server remotely.
-```bash
-tanzu management-cluster create --file /home/john/.config/tanzu/tkg/clusterconfigs/dr94t3m2on.yaml -v 6
+```shell
+tanzu management-cluster create \ # [tl! .cmd]
+ --file /home/john/.config/tanzu/tkg/clusterconfigs/dr94t3m2on.yaml -v 6
```
In fact, I'm going to copy that file into my working directory and give it a more descriptive name so that I can re-use it in the future.
-```bash
-cp ~/.config/tanzu/tkg/clusterconfigs/dr94t3m2on.yaml ~/projects/tanzu-homelab/tce-mgmt.yaml
+```shell
+cp ~/.config/tanzu/tkg/clusterconfigs/dr94t3m2on.yaml \ # [tl! .cmd]
+ ~/projects/tanzu-homelab/tce-mgmt.yaml
```
Now I can run the install command:
-```bash
-tanzu management-cluster create --file ./tce-mgmt.yaml -v 6
+```shell
+tanzu management-cluster create --file ./tce-mgmt.yaml -v 6 # [tl! .cmd]
```
After a moment or two of verifying prerequisites, I'm met with a polite offer to enable Tanzu Kubernetes Grid Service in vSphere:
@@ -246,9 +237,9 @@ Some addons might be getting installed! Check their status by running the follow
I can run that last command to go ahead and verify that the addon installation has completed:
-```bash
-❯ kubectl get apps -A
-NAMESPACE NAME DESCRIPTION SINCE-DEPLOY AGE
+```shell
+kubectl get apps -A # [tl! .cmd]
+NAMESPACE NAME DESCRIPTION SINCE-DEPLOY AGE # [tl! .nocopy:5]
tkg-system antrea Reconcile succeeded 26s 6m49s
tkg-system metrics-server Reconcile succeeded 36s 6m49s
tkg-system tanzu-addons-manager Reconcile succeeded 22s 8m54s
@@ -257,9 +248,9 @@ tkg-system vsphere-csi Reconcile succeeded 36s 6m50s
```
And I can use the Tanzu CLI to get some other details about the new management cluster:
-```bash
-❯ tanzu management-cluster get tce-mgmt
- NAME NAMESPACE STATUS CONTROLPLANE WORKERS KUBERNETES ROLES
+```shell
+tanzu management-cluster get tce-mgmt # [tl! .cmd]
+ NAME NAMESPACE STATUS CONTROLPLANE WORKERS KUBERNETES ROLES # [tl! .nocopy:start]
tce-mgmt tkg-system running 1/1 1/1 v1.21.2+vmware.1 management
@@ -281,7 +272,7 @@ Providers:
capi-kubeadm-bootstrap-system bootstrap-kubeadm BootstrapProvider kubeadm v0.3.23
capi-kubeadm-control-plane-system control-plane-kubeadm ControlPlaneProvider kubeadm v0.3.23
capi-system cluster-api CoreProvider cluster-api v0.3.23
- capv-system infrastructure-vsphere InfrastructureProvider vsphere v0.7.10
+ capv-system infrastructure-vsphere InfrastructureProvider vsphere v0.7.10 # [tl! .nocopy:end]
```
@@ -292,8 +283,8 @@ Excellent! Things are looking good so I can move on to create the cluster which
#### Workload cluster
I won't use the UI for this but will instead take a copy of my `tce-mgmt.yaml` file and adapt it to suit the workload needs (as described [here](https://tanzucommunityedition.io/docs/latest/workload-clusters/)).
-```bash
-cp tce-mgmt.yaml tce-work.yaml
+```shell
+cp tce-mgmt.yaml tce-work.yaml # [tl! .cmd:1]
vi tce-work.yaml
```
@@ -310,9 +301,9 @@ I *could* change a few others if I wanted to[^i_wont]:
After saving my changes to the `tce-work.yaml` file, I'm ready to deploy the cluster:
-```bash
-❯ tanzu cluster create --file tce-work.yaml
-Validating configuration...
+```shell
+tanzu cluster create --file tce-work.yaml # [tl! .cmd]
+Validating configuration... # [tl! .nocopy:start]
Warning: Pinniped configuration not found. Skipping pinniped configuration in workload cluster. Please refer to the documentation to check if you can configure pinniped on workload cluster manually
Creating workload cluster 'tce-work'...
Waiting for cluster to be initialized...
@@ -320,13 +311,13 @@ Waiting for cluster nodes to be available...
Waiting for addons installation...
Waiting for packages to be up and running...
-Workload cluster 'tce-work' created
+Workload cluster 'tce-work' created # [tl! .nocopy:end]
```
Right on! I'll use `tanzu cluster get` to check out the workload cluster:
-```bash
-❯ tanzu cluster get tce-work
- NAME NAMESPACE STATUS CONTROLPLANE WORKERS KUBERNETES ROLES
+```shell
+tanzu cluster get tce-work # [tl! .cmd]
+ NAME NAMESPACE STATUS CONTROLPLANE WORKERS KUBERNETES ROLES # [tl! .nocopy:start]
tce-work default running 1/1 1/1 v1.21.2+vmware.1
ℹ
@@ -339,7 +330,7 @@ NAME READY SEVERITY RE
│ └─Machine/tce-work-control-plane-8km9m True 9m31s
└─Workers
└─MachineDeployment/tce-work-md-0
- └─Machine/tce-work-md-0-687444b744-cck4x True 8m31s
+ └─Machine/tce-work-md-0-687444b744-cck4x True 8m31s # [tl! .nocopy:end]
```
I can also go into vCenter and take a look at the VMs which constitute the two clusters:
@@ -356,9 +347,9 @@ Excellent, I've got a Tanzu management cluster and a Tanzu workload cluster. Wha
If I run `kubectl get nodes` right now, I'll only get information about the management cluster:
-```bash
-❯ kubectl get nodes
-NAME STATUS ROLES AGE VERSION
+```shell
+kubectl get nodes # [tl! .cmd]
+NAME STATUS ROLES AGE VERSION # [tl! .nocopy:2]
tce-mgmt-control-plane-xtdnx Ready control-plane,master 18h v1.21.2+vmware.1
tce-mgmt-md-0-745b858d44-4c9vv Ready 17h v1.21.2+vmware.1
```
@@ -366,28 +357,29 @@ tce-mgmt-md-0-745b858d44-4c9vv Ready 17h v1.21.2+v
#### Setting the right context
To be able to deploy stuff to the workload cluster, I need to tell `kubectl` how to talk to it. And to do that, I'll first need to use `tanzu` to capture the cluster's kubeconfig:
-```bash
-❯ tanzu cluster kubeconfig get tce-work --admin
-Credentials of cluster 'tce-work' have been saved
+```shell
+tanzu cluster kubeconfig get tce-work --admin # [tl! .cmd]
+Credentials of cluster 'tce-work' have been saved # [tl! .nocopy:1]
You can now access the cluster by running 'kubectl config use-context tce-work-admin@tce-work'
```
I can now run `kubectl config get-contexts` and see that I have access to contexts on both management and workload clusters:
-```bash
-❯ kubectl config get-contexts
-CURRENT NAME CLUSTER AUTHINFO NAMESPACE
+```shell
+kubectl config get-contexts # [tl! .cmd]
+CURRENT NAME CLUSTER AUTHINFO NAMESPACE # [tl! .nocopy:2]
* tce-mgmt-admin@tce-mgmt tce-mgmt tce-mgmt-admin
tce-work-admin@tce-work tce-work tce-work-admin
```
And I can switch to the `tce-work` cluster like so:
-```bash
-❯ kubectl config use-context tce-work-admin@tce-work
-Switched to context "tce-work-admin@tce-work".
-❯ kubectl get nodes
-NAME STATUS ROLES AGE VERSION
+```shell
+kubectl config use-context tce-work-admin@tce-work # [tl! .cmd]
+Switched to context "tce-work-admin@tce-work". # [tl! .nocopy]
+
+kubectl get nodes # [tl! .cmd]
+NAME STATUS ROLES AGE VERSION # [tl! .nocopy:2]
tce-work-control-plane-8km9m Ready control-plane,master 17h v1.21.2+vmware.1
tce-work-md-0-687444b744-cck4x   Ready    <none>                 17h   v1.21.2+vmware.1
```
@@ -399,12 +391,12 @@ Before I move on to deploying actually *useful* workloads, I'll start with deplo
I can check out the sample deployment that William put together [here](https://github.com/lamw/vmware-k8s-app-demo/blob/master/yelb.yaml), and then deploy it with:
-```bash
-❯ kubectl create ns yelb
-namespace/yelb created
+```shell
+kubectl create ns yelb # [tl! .cmd]
+namespace/yelb created # [tl! .nocopy:1]
-❯ kubectl apply -f https://raw.githubusercontent.com/lamw/vmware-k8s-app-demo/master/yelb.yaml
-service/redis-server created
+kubectl apply -f https://raw.githubusercontent.com/lamw/vmware-k8s-app-demo/master/yelb.yaml # [tl! .cmd]
+service/redis-server created # [tl! .nocopy:start]
service/yelb-db created
service/yelb-appserver created
service/yelb-ui created
@@ -412,9 +404,9 @@ deployment.apps/yelb-ui created
deployment.apps/redis-server created
deployment.apps/yelb-db created
deployment.apps/yelb-appserver created
-
-❯ kubectl -n yelb get pods
-NAME READY STATUS RESTARTS AGE
+# [tl! .nocopy:end]
+kubectl -n yelb get pods # [tl! .cmd]
+NAME READY STATUS RESTARTS AGE # [tl! .nocopy:4]
redis-server-74556bbcb7-r9jqc 1/1 Running 0 10s
yelb-appserver-d584bb889-2jspg 1/1 Running 0 10s
yelb-db-694586cd78-wb8tt 1/1 Running 0 10s
@@ -423,35 +415,35 @@ yelb-ui-8f54fd88c-k2dw9 1/1 Running 0 10s
Once the app is running, I can point my web browser at it to see it in action. But what IP do I use?
-```bash
-❯ kubectl -n yelb get svc/yelb-ui
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+```shell
+kubectl -n yelb get svc/yelb-ui # [tl! .cmd]
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE # [tl! .nocopy:1]
yelb-ui   NodePort   100.71.228.116   <none>        80:30001/TCP   84s
```
This demo is using a `NodePort` type service to expose the front end, which means it will be accessible on port `30001` on the node it's running on. I can find that IP by:
-```bash
-❯ kubectl -n yelb describe pod $(kubectl -n yelb get pods | grep yelb-ui | awk '{print $1}') | grep "Node:"
-Node: tce-work-md-0-687444b744-cck4x/192.168.1.145
+```shell
+kubectl -n yelb describe pod $(kubectl -n yelb get pods | grep yelb-ui | awk '{print $1}') | grep "Node:" # [tl! .cmd]
+Node: tce-work-md-0-687444b744-cck4x/192.168.1.145 # [tl! .nocopy]
```
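(A hypothetical shortcut I could have used instead: the `-o wide` output formats report each pod's assigned node and each node's internal IP directly.)
```shell
kubectl -n yelb get pods -o wide # [tl! .cmd:1]
kubectl get nodes -o wide
```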
So I can point my browser at `http://192.168.1.145:30001` and see the demo:
![yelb demo page](yelb_nodeport_demo.png)
After marveling at my own magnificence[^magnificence] for a few minutes, I'm ready to move on to something more interesting - but first, I'll just delete the `yelb` namespace to clean up the work I just did:
-```bash
-❯ kubectl delete ns yelb
-namespace "yelb" deleted
+```shell
+kubectl delete ns yelb # [tl! .cmd]
+namespace "yelb" deleted # [tl! .nocopy]
```
Now let's move on and try to deploy `yelb` behind a `LoadBalancer` service so it will get its own IP. William has a [deployment spec](https://github.com/lamw/vmware-k8s-app-demo/blob/master/yelb-lb.yaml) for that too.
-```bash
-❯ kubectl create ns yelb
-namespace/yelb created
+```shell
+kubectl create ns yelb # [tl! .cmd]
+namespace/yelb created # [tl! .nocopy:1]
-❯ kubectl apply -f https://raw.githubusercontent.com/lamw/vmware-k8s-app-demo/master/yelb-lb.yaml
-service/redis-server created
+kubectl apply -f https://raw.githubusercontent.com/lamw/vmware-k8s-app-demo/master/yelb-lb.yaml # [tl! .cmd]
+service/redis-server created # [tl! .nocopy:8]
service/yelb-db created
service/yelb-appserver created
service/yelb-ui created
@@ -460,8 +452,8 @@ deployment.apps/redis-server created
deployment.apps/yelb-db created
deployment.apps/yelb-appserver created
-❯ kubectl -n yelb get pods
-NAME READY STATUS RESTARTS AGE
+kubectl -n yelb get pods # [tl! .cmd]
+NAME READY STATUS RESTARTS AGE # [tl! .nocopy:4]
redis-server-74556bbcb7-q6l62 1/1 Running 0 7s
yelb-appserver-d584bb889-p5qgd 1/1 Running 0 7s
yelb-db-694586cd78-hjtn4 1/1 Running 0 7s
@@ -469,9 +461,9 @@ yelb-ui-8f54fd88c-pm9qw 1/1 Running 0 7s
```
And I can take a look at that service...
-```bash
-❯ kubectl -n yelb get svc/yelb-ui
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+```shell
+kubectl -n yelb get svc/yelb-ui # [tl! .cmd]
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE # [tl! .nocopy:1]
yelb-ui   LoadBalancer   100.67.177.185   <pending>     80:32339/TCP   15s
```
@@ -482,21 +474,23 @@ Wait a minute. That external IP is *still* `<pending>`. What gives? Oh yeah I ne
#### Deploying `kube-vip` as a load balancer
Fortunately, William Lam [wrote up some tips](https://williamlam.com/2021/10/quick-tip-install-kube-vip-as-service-load-balancer-with-tanzu-community-edition-tce.html) for handling that too. It's [based on work by Scott Rosenberg](https://github.com/vrabbi/tkgm-customizations). The quick-and-dirty steps needed to make this work are:
-```bash
-git clone https://github.com/vrabbi/tkgm-customizations.git
+```shell
+git clone https://github.com/vrabbi/tkgm-customizations.git # [tl! .cmd:3]
cd tkgm-customizations/carvel-packages/kube-vip-package
kubectl apply -n tanzu-package-repo-global -f metadata.yml
kubectl apply -n tanzu-package-repo-global -f package.yaml
-cat << EOF > values.yaml
+
+cat << EOF > values.yaml # [tl! .cmd]
vip_range: 192.168.1.64-192.168.1.80
EOF
-tanzu package install kubevip -p kubevip.terasky.com -v 0.3.9 -f values.yaml
+
+tanzu package install kubevip -p kubevip.terasky.com -v 0.3.9 -f values.yaml # [tl! .cmd]
```
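Before checking the service, it's worth confirming that the package actually reconciled; a quick way to do that (my addition, not part of the original steps):
```shell
tanzu package installed list -A # [tl! .cmd]
```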
Now I can check out the `yelb-ui` service again:
-```bash
-❯ kubectl -n yelb get svc/yelb-ui
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+```shell
+kubectl -n yelb get svc/yelb-ui # [tl! .cmd]
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE # [tl! .nocopy:1]
yelb-ui LoadBalancer 100.67.177.185 192.168.1.65 80:32339/TCP 4h35m
```
@@ -504,9 +498,9 @@ And it's got an IP! I can point my browser to `http://192.168.1.65` now and see:
![Successful LoadBalancer test!](yelb_loadbalancer_demo.png)
I'll keep the `kube-vip` load balancer since it'll come in handy, but I have no further use for `yelb`:
-```bash
-❯ kubectl delete ns yelb
-namespace "yelb" deleted
+```shell
+kubectl delete ns yelb # [tl! .cmd]
+namespace "yelb" deleted # [tl! .nocopy]
```
#### Persistent Volume Claims, Storage Classes, and Storage Policies
@@ -520,6 +514,7 @@ Then I create a new vSphere Storage Policy called `tkg-storage-policy` which sta
So that's the vSphere side of things sorted; now to map that back to the Kubernetes side. For that, I'll need to define a Storage Class tied to the vSphere Storage Policy, so I drop these details into a new file called `vsphere-sc.yaml`:
```yaml
+# torchlight! {"lineNumbers": true}
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
@@ -530,13 +525,14 @@ parameters:
```
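Filled out, the manifest is roughly this shape (a sketch using the vSphere CSI provisioner and the policy created above; the parameter name follows the vSphere CSI driver convention):
```yaml
# torchlight! {"lineNumbers": true}
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vsphere
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: tkg-storage-policy
```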
And then apply it with:
-```bash
-❯ kubectl apply -f vsphere-sc.yaml
-storageclass.storage.k8s.io/vsphere created
+```shell
+kubectl apply -f vsphere-sc.yaml # [tl! .cmd]
+storageclass.storage.k8s.io/vsphere created # [tl! .nocopy]
```
I can test the new `vsphere` Storage Class by creating a Persistent Volume Claim against it, defined in a new file called `demo-pvc.yaml`:
```yaml
+# torchlight! {"lineNumbers": true}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
@@ -553,15 +549,15 @@ spec:
```
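In full, the claim looks something like this (a sketch reconstructed from the bound claim shown below - the name, access mode, size, and storage class all match):
```yaml
# torchlight! {"lineNumbers": true}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vsphere-demo-1
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: vsphere
  resources:
    requests:
      storage: 5Gi
```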
And applying it:
-```bash
-❯ kubectl apply -f demo-pvc.yaml
-persistentvolumeclaim/vsphere-demo-1 created
+```shell
+kubectl apply -f demo-pvc.yaml # [tl! .cmd]
+persistentvolumeclaim/vsphere-demo-1 created # [tl! .nocopy]
```
I can see the new claim, and confirm that its status is `Bound`:
-```bash
-❯ kubectl get pvc
-NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
+```shell
+kubectl get pvc # [tl! .cmd]
+NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE # [tl! .nocopy:1]
vsphere-demo-1 Bound pvc-36cc7c01-a1b3-4c1c-ba0d-dff3fd47f93b 5Gi RWO vsphere 4m25s
```
@@ -569,9 +565,9 @@ And for bonus points, I can see that the container volume was created on the vSp
![Container Volume in vSphere](container_volume_in_vsphere.png)
So that's storage sorted. I'll clean up my test volume before moving on:
-```bash
-❯ kubectl delete -f demo-pvc.yaml
-persistentvolumeclaim "vsphere-demo-1" deleted
+```shell
+kubectl delete -f demo-pvc.yaml # [tl! .cmd]
+persistentvolumeclaim "vsphere-demo-1" deleted # [tl! .nocopy]
```
### A real workload - phpIPAM
@@ -583,9 +579,9 @@ So I set to work exploring some containerization options, and I found [phpipam-d
To start, I'll create a new namespace to keep things tidy:
-```bash
-❯ kubectl create ns ipam
-namespace/ipam created
+```shell
+kubectl create ns ipam # [tl! .cmd]
+namespace/ipam created # [tl! .nocopy]
```
I'm going to wind up with four pods:
@@ -601,6 +597,7 @@ I'll use each container's original `docker-compose` configuration and adapt that
#### phpipam-db
The phpIPAM database will live inside a MariaDB container. Here's the relevant bit from `docker-compose`:
```yaml
+# torchlight! {"lineNumbers": true}
services:
phpipam-db:
image: mariadb:latest
@@ -616,6 +613,7 @@ So it will need a `Service` exposing the container's port `3306` so that other p
It might look like this on the Kubernetes side:
```yaml
+# torchlight! {"lineNumbers": true}
# phpipam-db.yaml
apiVersion: v1
kind: Service
@@ -687,6 +685,7 @@ Moving on:
#### phpipam-www
This is the `docker-compose` excerpt for the web component:
```yaml
+# torchlight! {"lineNumbers": true}
services:
phpipam-web:
image: phpipam/phpipam-www:1.5x
@@ -705,6 +704,7 @@ Based on that, I can see that my `phpipam-www` pod will need a container running
Here's how I'd adapt that into a structure that Kubernetes will understand:
```yaml
+# torchlight! {"lineNumbers": true}
# phpipam-www.yaml
apiVersion: v1
kind: Service
@@ -753,7 +753,7 @@ spec:
labels:
app: phpipam-www
spec:
- containers:
+ containers: # [tl! focus:2]
- name: phpipam-www
image: phpipam/phpipam-www:1.5x
env:
@@ -779,6 +779,7 @@ spec:
#### phpipam-cron
This container has a pretty simple configuration in `docker-compose`:
```yaml
+# torchlight! {"lineNumbers": true}
services:
phpipam-cron:
image: phpipam/phpipam-cron:1.5x
@@ -792,6 +793,7 @@ services:
No exposed ports, no need for persistence - just a base image and a few variables to tell it how to connect to the database and how often to run the scans:
```yaml
+# torchlight! {"lineNumbers": true}
# phpipam-cron.yaml
apiVersion: apps/v1
kind: Deployment
@@ -825,6 +827,7 @@ spec:
#### phpipam-agent
And finally, my remote scan agent. Here's the `docker-compose`:
```yaml
+# torchlight! {"lineNumbers": true}
services:
phpipam-agent:
container_name: phpipam-agent
@@ -847,6 +850,7 @@ It's got a few additional variables to make it extra-configurable, but still no
For now, here's how I'd tell Kubernetes about it:
```yaml
+# torchlight! {"lineNumbers": true}
# phpipam-agent.yaml
apiVersion: apps/v1
kind: Deployment
@@ -891,32 +895,32 @@ spec:
#### Deployment and configuration of phpIPAM
I can now go ahead and start deploying these containers, starting with the database one (upon which all the others rely):
-```bash
-❯ kubectl apply -f phpipam-db.yaml
-service/phpipam-db created
+```shell
+kubectl apply -f phpipam-db.yaml # [tl! .cmd]
+service/phpipam-db created # [tl! .nocopy:2]
persistentvolumeclaim/phpipam-db-pvc created
deployment.apps/phpipam-db created
```
And the web server:
-```bash
-❯ kubectl apply -f phpipam-www.yaml
-service/phpipam-www created
+```shell
+kubectl apply -f phpipam-www.yaml # [tl! .cmd]
+service/phpipam-www created # [tl! .nocopy:2]
persistentvolumeclaim/phpipam-www-pvc created
deployment.apps/phpipam-www created
```
And the cron runner:
-```bash
-❯ kubectl apply -f phpipam-cron.yaml
-deployment.apps/phpipam-cron created
+```shell
+kubectl apply -f phpipam-cron.yaml # [tl! .cmd]
+deployment.apps/phpipam-cron created # [tl! .nocopy]
```
I'll hold off on the agent container for now since I'll need to adjust the configuration slightly after getting phpIPAM set up, but I will go ahead and check out my work so far:
-```bash
-❯ kubectl -n ipam get all
-NAME READY STATUS RESTARTS AGE
+```shell
+kubectl -n ipam get all # [tl! .cmd]
+NAME READY STATUS RESTARTS AGE # [tl! .nocopy:start]
pod/phpipam-cron-6c994897c4-6rsnp 1/1 Running 0 4m30s
pod/phpipam-db-5f4c47d4b9-sb5bd 1/1 Running 0 16m
pod/phpipam-www-769c95c68d-94klg 1/1 Running 0 5m59s
@@ -933,7 +937,7 @@ deployment.apps/phpipam-www 1/1 1 1 5m59s
NAME DESIRED CURRENT READY AGE
replicaset.apps/phpipam-cron-6c994897c4 1 1 1 4m30s
replicaset.apps/phpipam-db-5f4c47d4b9 1 1 1 16m
-replicaset.apps/phpipam-www-769c95c68d 1 1 1 5m59s
+replicaset.apps/phpipam-www-769c95c68d 1 1 1 5m59s # [tl! .nocopy:end]
```
And I can point my browser to the `EXTERNAL-IP` associated with the `phpipam-www` service to see the initial setup page:
@@ -963,9 +967,9 @@ I'll copy the agent code and plug it into my `phpipam-agent.yaml` file:
```
And then deploy that:
-```bash
-❯ kubectl apply -f phpipam-agent.yaml
-deployment.apps/phpipam-agent created
+```shell
+kubectl apply -f phpipam-agent.yaml # [tl! .cmd]
+deployment.apps/phpipam-agent created # [tl! .nocopy]
```
The scan agent isn't going to do anything until it's assigned to a subnet though, so now I head to **Administration > IP related management > Sections**. phpIPAM comes with a few default sections and ranges and such defined so I'll delete those and create a new one that I'll call `Lab`.
diff --git a/content/posts/upgrading-standalone-vsphere-host-with-esxcli/index.md b/content/posts/upgrading-standalone-vsphere-host-with-esxcli/index.md
index a3b42db..9e9957f 100644
--- a/content/posts/upgrading-standalone-vsphere-host-with-esxcli/index.md
+++ b/content/posts/upgrading-standalone-vsphere-host-with-esxcli/index.md
@@ -42,20 +42,20 @@ The host will need to be in maintenance mode in order to apply the upgrade, and
### 3. Place host in maintenance mode
I can do that by SSH'ing to the host and running:
```shell
-esxcli system maintenanceMode set -e true
+esxcli system maintenanceMode set -e true # [tl! .cmd]
```
And can confirm that it happened with:
```shell
-esxcli system maintenanceMode get
-Enabled
+esxcli system maintenanceMode get # [tl! .cmd]
+Enabled # [tl! .nocopy]
```
### 4. Identify the profile name
Because this is an *upgrade* from one major release to another rather than a simple *update*, I need to know the name of the profile which will be applied. I can identify that with:
```shell
-esxcli software sources profile list -d /vmfs/volumes/nuchost-local/_Patches/VMware-ESXi-8.0-20513097-depot.zip
-Name Vendor Acceptance Level Creation Time Modification Time
+esxcli software sources profile list -d /vmfs/volumes/nuchost-local/_Patches/VMware-ESXi-8.0-20513097-depot.zip # [tl! .cmd]
+Name Vendor Acceptance Level Creation Time Modification Time # [tl! .nocopy:3]
---------------------------- ------------ ---------------- ------------------- -----------------
ESXi-8.0.0-20513097-standard VMware, Inc. PartnerSupported 2022-09-23T18:59:28 2022-09-23T18:59:28
ESXi-8.0.0-20513097-no-tools VMware, Inc. PartnerSupported 2022-09-23T18:59:28 2022-09-23T18:59:28
@@ -69,13 +69,12 @@ In this case, I'll use the `ESXi-8.0.0-20513097-standard` profile.
### 5. Install the upgrade
Now for the moment of truth:
```shell
-esxcli software profile update -d /vmfs/volumes/nuchost-local/_Patches/VMware-ESXi-8.0-2051309
-7-depot.zip -p ESXi-8.0.0-20513097-standard
+esxcli software profile update -d /vmfs/volumes/nuchost-local/_Patches/VMware-ESXi-8.0-20513097-depot.zip -p ESXi-8.0.0-20513097-standard # [tl! .cmd]
```
When it finishes (successfully), it leaves a little message that the update won't be complete until the host is rebooted, so I'll go ahead and do that as well:
```shell
-reboot
+reboot # [tl! .cmd]
```
And then wait (oh-so-patiently) for the host to come back up.
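Once it's back, I shouldn't forget to take it out of maintenance mode and verify the new version - the natural bookend to step 3, though it wasn't spelled out above:
```shell
esxcli system maintenanceMode set -e false # [tl! .cmd:1]
vmware -vl
```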
diff --git a/content/posts/using-powershell-and-a-scheduled-task-to-apply-windows-updates/index.md b/content/posts/using-powershell-and-a-scheduled-task-to-apply-windows-updates/index.md
index ad58766..cdcc144 100644
--- a/content/posts/using-powershell-and-a-scheduled-task-to-apply-windows-updates/index.md
+++ b/content/posts/using-powershell-and-a-scheduled-task-to-apply-windows-updates/index.md
@@ -11,18 +11,19 @@ toc: false
In the same vein as [my script to automagically resize a Linux LVM volume to use up free space on a disk](/automatic-unattended-expansion-of-linux-root-lvm-volume-to-fill-disk), I wanted a way to automatically apply Windows updates for servers deployed by [my vRealize Automation environment](/series/vra8). I'm only really concerned with Windows Server 2019, which includes the [built-in Windows Update Provider PowerShell module](https://4sysops.com/archives/scan-download-and-install-windows-updates-with-powershell/). So this could be as simple as `Install-WUUpdates -Updates (Start-WUScan)` to scan for and install any available updates.
-Unfortunately, I found that this approach can take a long time to run and often exceeded the timeout limits imposed upon my ABX script, causing the PowerShell session to end and terminating the update process. I really needed a way to do this without requiring a persistent session.
+Unfortunately, I found that this approach can take a long time to run and often exceeded the timeout limits imposed upon my ABX script, causing the PowerShell session to end and terminating the update process. I really needed a way to do this without requiring a persistent session.
After further experimentation, I settled on using PowerShell to create a one-time scheduled task that would run the updates and reboot, if necessary. I also wanted the task to automatically delete itself after running to avoid cluttering up the task scheduler library - and that last item had me quite stumped until I found [this blog post with the solution](https://iamsupergeek.com/self-deleting-scheduled-task-via-powershell/).
So here's what I put together:
```powershell
-# This can be easily pasted into a remote PowerShell session to automatically install any available updates and reboot.
+# torchlight! {"lineNumbers": true}
+# This can be easily pasted into a remote PowerShell session to automatically install any available updates and reboot.
# It creates a scheduled task to start the update process after a one-minute delay so that you don't have to maintain
# the session during the process (or have the session timeout), and it also sets the task to automatically delete itself 2 hours later.
#
# This leverages the Windows Update Provider PowerShell module which is included in Windows 10 1709+ and Windows Server 2019.
-#
+#
# Adapted from https://iamsupergeek.com/self-deleting-scheduled-task-via-powershell/
$action = New-ScheduledTaskAction -Execute 'C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe' -Argument '-NoProfile -WindowStyle Hidden -Command "& {Install-WUUpdates -Updates (Start-WUScan); if (Get-WUIsPendingReboot) {shutdown.exe /f /r /d p:2:4 /t 120 /c `"Rebooting to apply updates`"}}"'
diff --git a/content/posts/using-vsphere-diagnostic-tool-fling/index.md b/content/posts/using-vsphere-diagnostic-tool-fling/index.md
index e3e4bb5..34d0c62 100644
--- a/content/posts/using-vsphere-diagnostic-tool-fling/index.md
+++ b/content/posts/using-vsphere-diagnostic-tool-fling/index.md
@@ -21,20 +21,20 @@ tags:
- python
comment: true # Disable comment if false.
---
-VMware vCenter does wonders for abstracting away the layers of complexity involved in managing a large virtual infrastructure, but when something goes wrong it can be challenging to find exactly where the problem lies. And it can be even harder to proactively address potential issues before they occur.
+VMware vCenter does wonders for abstracting away the layers of complexity involved in managing a large virtual infrastructure, but when something goes wrong it can be challenging to find exactly where the problem lies. And it can be even harder to proactively address potential issues before they occur.
Fortunately there's a super-handy utility which can make diagnosing vCenter significantly easier, and it comes in the form of the [vSphere Diagnostic Tool Fling](https://flings.vmware.com/vsphere-diagnostic-tool). VDT is a Python script which can be run directly on a vCenter Server appliance (version 6.5 and newer) to quickly check for problems and misconfigurations affecting:
- vCenter Basic Info
- Lookup Service
- Active Directory
-- vCenter Certificates
+- vCenter Certificates
- Core Files
- Disk Health
-- vCenter DNS
-- vCenter NTP
-- vCenter Port
-- Root Account
-- vCenter Services
+- vCenter DNS
+- vCenter NTP
+- vCenter Port
+- Root Account
+- vCenter Services
- VCHA
For any problems which are identified, VDT will provide simple instructions and/or links to Knowledge Base articles for more detailed instructions on how to proceed with resolving the issues. Sounds pretty useful, right? And yet, somehow, I keep forgetting that VDT is a thing. So here's a friendly reminder to myself of how to obtain and use VDT to fix vSphere woes. Let's get started.
@@ -55,29 +55,28 @@ This needs to be run directly on the vCenter appliance so you'll need to copy th
Once that's done, just execute this on your local workstation to copy the `.zip` from your `~/Downloads/` folder to the VCSA's `/tmp/` directory:
```shell
-scp ~/Downloads/vdt-v1.1.4.zip root@vcsa.lab.bowdre.net:/tmp/
+scp ~/Downloads/vdt-v1.1.4.zip root@vcsa.lab.bowdre.net:/tmp/ # [tl! .cmd]
```
### 3. Extract
Now pop back over to an SSH session to the VCSA, extract the `.zip`, and get ready for action:
```shell
-root@VCSA [ ~ ]# cd /tmp
-
-root@VCSA [ /tmp ]# unzip vdt-v1.1.4.zip
-Archive: vdt-v1.1.4.zip
+cd /tmp # [tl! .cmd_root:1]
+unzip vdt-v1.1.4.zip
+Archive: vdt-v1.1.4.zip # [tl! .nocopy:5]
3557676756cffd658fd61aab5a6673269104e83c
creating: vdt-v1.1.4/
...
inflating: vdt-v1.1.4/vdt.py
-root@VCSA [ /tmp ]# cd vdt-v1.1.4/
+cd vdt-v1.1.4/ # [tl! .cmd_root]
```
### 4. Execute
Now for the fun part:
```shell
-root@VCSA [ /tmp/vdt-v1.1.4 ]# python vdt.py
-_________________________
+python vdt.py # [tl! .cmd_root]
+_________________________ # [tl! .nocopy:7]
RUNNING PULSE CHECK
Today: Sunday, August 28 19:53:00
@@ -93,7 +92,7 @@ After entering the SSO password, VDT will run for a few minutes and generate an
Once the script has completed, it's time to look through the results and fix whatever can be found. As an example, here are some of the findings from my _deliberately-broken-for-the-purposes-of-this-post_ vCenter:
#### Hostname/PNID mismatch
-```log {hl_lines=[8,9,23,24]}
+```text
VCENTER BASIC INFO
BASIC:
Current Time: 2022-08-28 19:54:08.370889
@@ -101,7 +100,7 @@ BASIC:
vCenter Load Average: 0.26, 0.19, 0.12
Number of CPUs: 2
Total Memory: 11.71
- vCenter Hostname: VCSA
+ vCenter Hostname: VCSA # [tl! highlight:1]
vCenter PNID: vcsa.lab.bowdre.net
vCenter IP Address: 192.168.1.12
Proxy Configured: "no"
@@ -116,16 +115,16 @@ DETAILS:
Number of Clusters: 1
Disabled Plugins: None
-[FAIL] The hostname and PNID do not match!
+[FAIL] The hostname and PNID do not match! # [tl! highlight:1]
Please see https://kb.vmware.com/s/article/2130599 for more details.
```
Silly me - I must have changed the hostname at some point, which is not generally a Thing Which Should Be Done. I can quickly [consult the referenced KB](https://kb.vmware.com/s/article/2130599) to figure out how to fix my mistake using the `/opt/vmware/share/vami/vami_config_net` utility.
#### Missing DNS
-```log {hl_lines=[3,4,5,12,13]}
+```text
Nameserver Queries
192.168.1.5
- [FAIL] DNS with UDP - unable to resolve vcsa to 192.168.1.12
+ [FAIL] DNS with UDP - unable to resolve vcsa to 192.168.1.12 # [tl! highlight:2]
[FAIL] Reverse DNS - unable to resolve 192.168.1.12 to vcsa
[FAIL] DNS with TCP - unable to resolve vcsa to 192.168.1.12
@@ -134,13 +133,13 @@ Nameserver Queries
dig +noall +answer -x
dig +short +tcp
-RESULT: [FAIL]
+RESULT: [FAIL] # [tl! highlight:1]
Please see KB: https://kb.vmware.com/s/article/54682
```
Whoops - I guess I should go recreate the appropriate DNS records.
#### Old core files
-```log
+```text
CORE FILE CHECK
INFO:
These core files are older than 72 hours. consider deleting them
@@ -166,18 +165,18 @@ at your discretion to reduce the size of log bundles.
Those core files can be useful for investigating specific issues, but holding on to them long-term doesn't really do much good. _After checking to be sure I don't need them_, I can get rid of them all pretty easily like so:
```shell
-find /storage/core/ -name "core.*" -type f -mtime +3 -exec rm {} \;
+find /storage/core/ -name "core.*" -type f -mtime +3 -exec rm {} \; # [tl! .cmd_root]
```
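(If I wanted a dry run first, dropping the `-exec` bit just lists the matching files - a hypothetical safety step, not part of the original:)
```shell
find /storage/core/ -name "core.*" -type f -mtime +3 # [tl! .cmd_root]
```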
#### NTP status
-```log
+```text
VC NTP CHECK
[FAIL] NTP and Host time are both disabled!
```
Oh yeah, let's turn that back on with `systemctl start ntpd`.
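And, assuming I want that fix to survive a reboot, enabling the service too (my addition):
```shell
systemctl enable --now ntpd # [tl! .cmd_root]
```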
#### Account status
-```log
+```text
Root Account Check
[FAIL] Root password expires in 13 days
Please search for 'Change the Password of the Root User'
@@ -186,13 +185,13 @@ Oh yeah, let's turn that back on with `systemctl start ntpd`.
That's a good thing to know. I'll [take care of that](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vcenter.configuration.doc/GUID-48BAF973-4FD3-4FF3-B1B6-5F7286C9B59A.html) while I'm thinking about it.
```shell
-chage -M -1 -E -1 root
+chage -M -1 -E -1 root # [tl! .cmd_root]
```
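A quick sanity check that the change stuck (not in the original, just good hygiene):
```shell
chage -l root # [tl! .cmd_root]
```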
#### Recheck
Now that I've corrected these issues, I can run VDT again to confirm that everything is back in a good state:
-```log {hl_lines=[8,9,"25-27",32,35,"55-56",59]}
+```text
VCENTER BASIC INFO
BASIC:
Current Time: 2022-08-28 20:13:25.192503
@@ -200,7 +199,7 @@ Now that I've corrected these issues, I can run VDT again to confirm that everyt
vCenter Load Average: 0.28, 0.14, 0.10
Number of CPUs: 2
Total Memory: 11.71
- vCenter Hostname: vcsa.lab.bowdre.net
+ vCenter Hostname: vcsa.lab.bowdre.net # [tl! highlight:1]
vCenter PNID: vcsa.lab.bowdre.net
vCenter IP Address: 192.168.1.12
Proxy Configured: "no"
@@ -217,20 +216,20 @@ DETAILS:
[...]
Nameserver Queries
192.168.1.5
- [PASS] DNS with UDP - resolved vcsa.lab.bowdre.net to 192.168.1.12
+ [PASS] DNS with UDP - resolved vcsa.lab.bowdre.net to 192.168.1.12 # [tl! highlight:2]
[PASS] Reverse DNS - resolved 192.168.1.12 to vcsa.lab.bowdre.net
[PASS] DNS with TCP - resolved vcsa.lab.bowdre.net to 192.168.1.12
Commands used:
dig +short
dig +noall +answer -x
dig +short +tcp
-RESULT: [PASS]
+RESULT: [PASS] # [tl! highlight]
[...]
CORE FILE CHECK
-[PASS] Number of core files: 0
+[PASS] Number of core files: 0 # [tl! highlight:1]
[PASS] Number of hprof files: 0
[...]
-NTP Status Check
+NTP Status Check # [tl! collapse:start]
+-----------------------------------LEGEND-----------------------------------+
| remote: NTP peer server |
| refid: server that this peer gets its time from |
@@ -244,16 +243,16 @@ NTP Status Check
| + Peer selected for possible synchronization |
| - Peer is a candidate for selection                                        |
| ~ Peer is statically configured |
-+----------------------------------------------------------------------------+
++----------------------------------------------------------------------------+ # [tl! collapse:end]
remote refid st t when poll reach delay offset jitter
==============================================================================
*104.171.113.34 130.207.244.240 2 u 1 64 17 16.831 -34.597 0.038
-RESULT: [PASS]
+RESULT: [PASS] # [tl! highlight]
[...]
Root Account Check
-[PASS] Root password never expires
+[PASS] Root password never expires # [tl! highlight]
```
All better!
### Conclusion
-The vSphere Diagnostic Tool makes a great addition to your arsenal of troubleshooting skills and utilities. It makes it easy to troubleshoot errors which might occur in your vSphere environment, as well as to uncover dormant issues which could cause serious problems in the future.
\ No newline at end of file
+The vSphere Diagnostic Tool makes a great addition to your arsenal of troubleshooting skills and utilities. It makes it easy to troubleshoot errors which might occur in your vSphere environment, as well as to uncover dormant issues which could cause serious problems in the future.
\ No newline at end of file
diff --git a/content/posts/virtually-potato-migrated-to-github-pages/index.md b/content/posts/virtually-potato-migrated-to-github-pages/index.md
index d52966d..adedfa9 100644
--- a/content/posts/virtually-potato-migrated-to-github-pages/index.md
+++ b/content/posts/virtually-potato-migrated-to-github-pages/index.md
@@ -11,7 +11,7 @@ tags:
title: Virtually Potato migrated to GitHub Pages!
---
-After a bit less than a year of hosting my little technical blog with [Hashnode](https://hashnode.com), I spent a few days [migrating the content](/script-to-update-image-embed-links-in-markdown-files) over to a new format hosted with [GitHub Pages](https://pages.github.com/).
+After a bit less than a year of hosting my little technical blog with [Hashnode](https://hashnode.com), I spent a few days [migrating the content](/script-to-update-image-embed-links-in-markdown-files) over to a new format hosted with [GitHub Pages](https://pages.github.com/).
![Party!](20210720-party.gif)
@@ -25,36 +25,36 @@ I knew about GitHub Pages, but had never seriously looked into it. Once I did, t
I found that the quite-popular [Minimal Mistakes](https://mademistakes.com/work/minimal-mistakes-jekyll-theme/) theme for Jekyll offers a [remote theme starter](https://github.com/mmistakes/mm-github-pages-starter/generate) that can be used to quickly get things going. I just used that generator to spawn a new repository in my GitHub account ([`jbowdre.github.io`](https://github.com/jbowdre/jbowdre.github.io)). And that was it - I had a starter GitHub Pages-hosted Jekyll-powered static site with an elegant theme applied. I could even make changes to the various configuration and sample post files, point any browser to `https://jbowdre.github.io`, and see the results almost immediately. I got to work digging through the lengthy [configuration documentation](https://mmistakes.github.io/minimal-mistakes/docs/configuration/) to start making the site my own, like [connecting with my custom domain](https://docs.github.com/en/pages/configuring-a-custom-domain-for-your-github-pages-site/managing-a-custom-domain-for-your-github-pages-site) and enabling [GitHub Issue-based comments](https://github.com/apps/utterances).
#### Working locally
-A quick `git clone` operation was sufficient to create a local copy of my new site in my Lenovo Chromebook Duet's [Linux environment](/setting-up-linux-on-a-new-lenovo-chromebook-duet-bonus-arm64-complications). That lets me easily create and edit Markdown posts or configuration files with VS Code, commit them to the local copy of the repo, and then push them back to GitHub when I'm ready to publish the changes.
+A quick `git clone` operation was sufficient to create a local copy of my new site in my Lenovo Chromebook Duet's [Linux environment](/setting-up-linux-on-a-new-lenovo-chromebook-duet-bonus-arm64-complications). That lets me easily create and edit Markdown posts or configuration files with VS Code, commit them to the local copy of the repo, and then push them back to GitHub when I'm ready to publish the changes.
In order to view the local changes, I needed to install Jekyll locally as well. I started by installing Ruby and other prerequisites:
```shell
-sudo apt-get install ruby-full build-essential zlib1g-dev
+sudo apt-get install ruby-full build-essential zlib1g-dev # [tl! .cmd]
```
I added the following to my `~/.zshrc` file so that the gems would be installed under my home directory rather than somewhere more privileged:
```shell
-export GEM_HOME="$HOME/gems"
+export GEM_HOME="$HOME/gems" # [tl! .cmd:1]
export PATH="$HOME/gems/bin:$PATH"
```
-And then ran `source ~/.zshrc` so the change would take immediate effect.
+And then ran `source ~/.zshrc` so the change would take immediate effect.
I could then install Jekyll:
```shell
-gem install jekyll bundler
+gem install jekyll bundler # [tl! .cmd]
```
I then `cd`ed to the local repo and ran `bundle install` to also load up the components specified in the repo's `Gemfile`.
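In practice that's just (path per my setup, as seen in the output below):
```shell
cd ~/projects/jbowdre.github.io # [tl! .cmd:1]
bundle install
```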
And, finally, I can run this to start up the local Jekyll server instance:
```shell
-❯ bundle exec jekyll serve -l --drafts
-Configuration file: /home/jbowdre/projects/jbowdre.github.io/_config.yml
+bundle exec jekyll serve -l --drafts # [tl! .cmd]
+Configuration file: /home/jbowdre/projects/jbowdre.github.io/_config.yml # [tl! .nocopy:start]
Source: /home/jbowdre/projects/jbowdre.github.io
Destination: /home/jbowdre/projects/jbowdre.github.io/_site
Incremental build: enabled
- Generating...
+ Generating...
Remote Theme: Using theme mmistakes/minimal-mistakes
Jekyll Feed: Generating feed for posts
GitHub Metadata: No GitHub API authentication could be found. Some fields may be missing or have incorrect data.
@@ -62,7 +62,7 @@ Configuration file: /home/jbowdre/projects/jbowdre.github.io/_config.yml
Auto-regeneration: enabled for '/home/jbowdre/projects/jbowdre.github.io'
LiveReload address: http://0.0.0.0:35729
Server address: http://0.0.0.0:4000
- Server running... press ctrl-c to stop.
+ Server running... press ctrl-c to stop. # [tl! .nocopy:end]
```
And there it is!
@@ -71,4 +71,4 @@ And there it is!
### `git push` time
Alright that's enough rambling for now. I'm very happy with this new setup, particularly with the automatically-generated Table of Contents to help folks navigate some of my longer posts. (I can't believe I was having to piece those together manually in this blog's previous iteration!)
-I'll continue to make some additional tweaks in the coming weeks but for now I'll `git push` this post and get back to documenting my never-ending [vRA project](/series/vra8).
\ No newline at end of file
+I'll continue to make some additional tweaks in the coming weeks but for now I'll `git push` this post and get back to documenting my never-ending [vRA project](/series/vra8).
\ No newline at end of file
diff --git a/content/posts/virtuallypotato-runtimeterror/index.md b/content/posts/virtuallypotato-runtimeterror/index.md
index cfc1613..acafb9d 100644
--- a/content/posts/virtuallypotato-runtimeterror/index.md
+++ b/content/posts/virtuallypotato-runtimeterror/index.md
@@ -13,7 +13,7 @@ tags:
---
```shell
-cp -a virtuallypotato.com runtimeterror.dev
+cp -a virtuallypotato.com runtimeterror.dev # [tl! .cmd:2]
rm -rf virtuallypotato.com
ln -s virtuallypotato.com runtimeterror.dev
```
diff --git a/content/posts/vmware-home-lab-on-intel-nuc-9/index.md b/content/posts/vmware-home-lab-on-intel-nuc-9/index.md
index e3f1601..410cc54 100644
--- a/content/posts/vmware-home-lab-on-intel-nuc-9/index.md
+++ b/content/posts/vmware-home-lab-on-intel-nuc-9/index.md
@@ -12,7 +12,7 @@ title: VMware Home Lab on Intel NUC 9
featured: false
---
-I picked up an Intel NUC 9 Extreme kit a few months back (thanks, VMware!) and have been slowly tinkering with turning it into an extremely capable self-contained home lab environment. I'm pretty happy with where things sit right now so figured it was about time to start documenting and sharing what I've done.
+I picked up an Intel NUC 9 Extreme kit a few months back (thanks, VMware!) and have been slowly tinkering with turning it into an extremely capable self-contained home lab environment. I'm pretty happy with where things sit right now so figured it was about time to start documenting and sharing what I've done.
![But boy would I love some more RAM](SIDah-Lag.png)
@@ -26,7 +26,7 @@ I picked up an Intel NUC 9 Extreme kit a few months back (thanks, VMware!) and h
The NUC runs ESXi 7.0u1 and currently hosts the following:
- vCenter Server 7.0u1
- Windows 2019 domain controller
-- [VyOS router](https://vyos.io/)
+- [VyOS router](https://vyos.io/)
- [Home Assistant OS 5.9](https://www.home-assistant.io/hassio/installation/)
- vRealize Lifecycle Manager 8.2
- vRealize Identity Manager 3.3.2
@@ -41,7 +41,7 @@ The NUC connects to my home network through its onboard gigabit Ethernet interfa
I used the Chromebook Recovery Utility to write the ESXi installer ISO to *another* USB drive (how-to [here](/burn-an-iso-to-usb-with-the-chromebook-recovery-utility)), inserted that bootable drive to a port on the front of the NUC, and booted the NUC from the drive. Installing ESXi 7.0u1 was as easy as it could possibly be. All hardware was automatically detected and the appropriate drivers loaded. Once the host booted up, I used the DCUI to configure a static IP address (`192.168.1.11`). I then shut down the NUC, disconnected the keyboard and monitor, and moved it into the cabinet where it will live out its headless existence.
-I was then able to point my web browser to `https://192.168.1.11/ui/` to log in to the host and get down to business. First stop: networking. For now, I only need a single standard switch (`vSwitch0`) with two portgroups: one for the host's vmkernel interface, and the other for the VMs (including the nested ESXi appliances) that are going to run directly on this physical host. The one "gotcha" when working with a nested environment is that you'll need to edit the virtual switch's security settings to "Allow promiscuous mode" and "Allow forged transmits" (for reasons described [here](https://williamlam.com/2013/11/why-is-promiscuous-mode-forged.html)).
+I was then able to point my web browser to `https://192.168.1.11/ui/` to log in to the host and get down to business. First stop: networking. For now, I only need a single standard switch (`vSwitch0`) with two portgroups: one for the host's vmkernel interface, and the other for the VMs (including the nested ESXi appliances) that are going to run directly on this physical host. The one "gotcha" when working with a nested environment is that you'll need to edit the virtual switch's security settings to "Allow promiscuous mode" and "Allow forged transmits" (for reasons described [here](https://williamlam.com/2013/11/why-is-promiscuous-mode-forged.html)).
![Allowing promiscuous mode and forged transmits](w0HeFSi7Q.png)
I created a single datastore to span the entirety of that 1TB NVMe drive. The nested ESXi hosts will use VMDKs stored here to provide storage to the nested VMs.
@@ -77,7 +77,7 @@ My home network uses the generic `192.168.1.0/24` address space, with internet r
Of course, not everything that I'm going to deploy in the lab will need to be accessible from outside the lab environment. This goes for obvious things like the vMotion and vSAN networks of the nested ESXi hosts, but it will also be useful to have internal networks that can be used by VMs provisioned by vRA. So I'll be creating these networks:
| VLAN ID | Network | Purpose |
-| ---- | ---- | ---- |
+| ---- | ---- | ---- |
| 1610 | `172.16.10.0/24` | Management |
| 1620 | `172.16.20.0/24` | Servers-1 |
| 1630 | `172.16.30.0/24` | Servers-2 |
@@ -85,7 +85,7 @@ Of course, not everything that I'm going to deploy in the lab will need to be ac
| 1699 | `172.16.99.0/24` | vMotion |
#### vSwitch1
-I'll start by adding a second vSwitch to the physical host. It doesn't need a physical adapter assigned since this switch will be for internal traffic. I create two port groups: one tagged for the VLAN 1610 Management traffic, which will be useful for attaching VMs on the physical host to the internal network; and the second will use VLAN 4095 to pass all VLAN traffic to the nested ESXi hosts. And again, this vSwitch needs to have its security policy set to allow Promiscuous Mode and Forged Transmits. I also set the vSwitch to support an MTU of 9000 so I can use Jumbo Frames on the vMotion and vSAN networks.
+I'll start by adding a second vSwitch to the physical host. It doesn't need a physical adapter assigned since this switch will be for internal traffic. I create two port groups: one tagged for the VLAN 1610 Management traffic, which will be useful for attaching VMs on the physical host to the internal network; and the second will use VLAN 4095 to pass all VLAN traffic to the nested ESXi hosts. And again, this vSwitch needs to have its security policy set to allow Promiscuous Mode and Forged Transmits. I also set the vSwitch to support an MTU of 9000 so I can use Jumbo Frames on the vMotion and vSAN networks.
![Second vSwitch](7aNJa2Hlm.png)
@@ -95,15 +95,14 @@ Wouldn't it be great if the VMs that are going to be deployed on those `1610`, `
After logging in to the VM, I entered the router's configuration mode:
```shell
-vyos@vyos:~$ configure
-[edit]
-vyos@vyos#
+configure # [tl! .cmd]
+[edit] # [tl! .nocopy]
```
-I then started with setting up the interfaces - `eth0` for the `192.168.1.0/24` network, `eth1` on the trunked portgroup, and a number of VIFs on `eth1` to handle the individual VLANs I'm interested in using.
+I then started with setting up the interfaces - `eth0` for the `192.168.1.0/24` network, `eth1` on the trunked portgroup, and a number of VIFs on `eth1` to handle the individual VLANs I'm interested in using.
```shell
-set interfaces ethernet eth0 address '192.168.1.8/24'
+set interfaces ethernet eth0 address '192.168.1.8/24' # [tl! .cmd_root:start]
set interfaces ethernet eth0 description 'Outside'
set interfaces ethernet eth1 mtu '9000'
set interfaces ethernet eth1 vif 1610 address '172.16.10.1/24'
@@ -118,13 +117,13 @@ set interfaces ethernet eth1 vif 1630 mtu '1500'
set interfaces ethernet eth1 vif 1698 description 'VLAN 1698 for vSAN'
set interfaces ethernet eth1 vif 1698 mtu '9000'
set interfaces ethernet eth1 vif 1699 description 'VLAN 1699 for vMotion'
-set interfaces ethernet eth1 vif 1699 mtu '9000'
+set interfaces ethernet eth1 vif 1699 mtu '9000' # [tl! .cmd_root:end]
```
I also set up NAT for the networks that should be routable:
```shell
-set nat source rule 10 outbound-interface 'eth0'
+set nat source rule 10 outbound-interface 'eth0' # [tl! .cmd_root:start]
set nat source rule 10 source address '172.16.10.0/24'
set nat source rule 10 translation address 'masquerade'
set nat source rule 20 outbound-interface 'eth0'
@@ -135,13 +134,13 @@ set nat source rule 30 source address '172.16.30.0/24'
set nat source rule 30 translation address 'masquerade'
set nat source rule 100 outbound-interface 'eth0'
set nat source rule 100 translation address 'masquerade'
-set protocols static route 0.0.0.0/0 next-hop 192.168.1.1
+set protocols static route 0.0.0.0/0 next-hop 192.168.1.1 # [tl! .cmd_root:end]
```
And I configured DNS forwarding:
```shell
-set service dns forwarding allow-from '0.0.0.0/0'
+set service dns forwarding allow-from '0.0.0.0/0' # [tl! .cmd_root:start]
set service dns forwarding domain 10.16.172.in-addr.arpa. server '192.168.1.5'
set service dns forwarding domain 20.16.172.in-addr.arpa. server '192.168.1.5'
set service dns forwarding domain 30.16.172.in-addr.arpa. server '192.168.1.5'
@@ -149,13 +148,13 @@ set service dns forwarding domain lab.bowdre.net server '192.168.1.5'
set service dns forwarding listen-address '172.16.10.1'
set service dns forwarding listen-address '172.16.20.1'
set service dns forwarding listen-address '172.16.30.1'
-set service dns forwarding name-server '192.168.1.1'
+set service dns forwarding name-server '192.168.1.1' # [tl! .cmd_root:end]
```
Finally, I also configured VyOS's DHCP server so that I won't have to statically configure the networking for VMs deployed from vRA:
```shell
-set service dhcp-server shared-network-name SCOPE_10_MGMT authoritative
+set service dhcp-server shared-network-name SCOPE_10_MGMT authoritative # [tl! .cmd_root:start]
set service dhcp-server shared-network-name SCOPE_10_MGMT subnet 172.16.10.0/24 default-router '172.16.10.1'
set service dhcp-server shared-network-name SCOPE_10_MGMT subnet 172.16.10.0/24 dns-server '192.168.1.5'
set service dhcp-server shared-network-name SCOPE_10_MGMT subnet 172.16.10.0/24 domain-name 'lab.bowdre.net'
@@ -175,7 +174,7 @@ set service dhcp-server shared-network-name SCOPE_30_SERVERS subnet 172.16.30.0/
set service dhcp-server shared-network-name SCOPE_30_SERVERS subnet 172.16.30.0/24 domain-name 'lab.bowdre.net'
set service dhcp-server shared-network-name SCOPE_30_SERVERS subnet 172.16.30.0/24 lease '86400'
set service dhcp-server shared-network-name SCOPE_30_SERVERS subnet 172.16.30.0/24 range 0 start '172.16.30.100'
-set service dhcp-server shared-network-name SCOPE_30_SERVERS subnet 172.16.30.0/24 range 0 stop '172.16.30.200'
+set service dhcp-server shared-network-name SCOPE_30_SERVERS subnet 172.16.30.0/24 range 0 stop '172.16.30.200' # [tl! .cmd_root:end]
```
Satisfied with my work, I ran the `commit` and `save` commands. BOOM, this server jockey just configured a router!
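(For the record, from configuration mode that's simply:)
```shell
commit # [tl! .cmd_root:1]
save
```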
@@ -213,8 +212,8 @@ I migrated the physical NICs and `vmk0` to the new dvSwitch, and then created ne
I then ssh'd into the hosts and used `vmkping` to make sure they could talk to each other over these interfaces. I changed the vMotion interface to use the vMotion TCP/IP stack so needed to append the `-S vmotion` flag to the command:
```shell
-[root@esxi01:~] vmkping -I vmk1 172.16.98.22
-PING 172.16.98.22 (172.16.98.22): 56 data bytes
+vmkping -I vmk1 172.16.98.22 # [tl! .cmd_root]
+PING 172.16.98.22 (172.16.98.22): 56 data bytes # [tl! .nocopy:start]
64 bytes from 172.16.98.22: icmp_seq=0 ttl=64 time=0.243 ms
64 bytes from 172.16.98.22: icmp_seq=1 ttl=64 time=0.260 ms
64 bytes from 172.16.98.22: icmp_seq=2 ttl=64 time=0.262 ms
@@ -222,16 +221,16 @@ PING 172.16.98.22 (172.16.98.22): 56 data bytes
--- 172.16.98.22 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.243/0.255/0.262 ms
-
-[root@esxi01:~] vmkping -I vmk2 172.16.99.22 -S vmotion
-PING 172.16.99.22 (172.16.99.22): 56 data bytes
+# [tl! .nocopy:end]
+vmkping -I vmk2 172.16.99.22 -S vmotion # [tl! .cmd_root]
+PING 172.16.99.22 (172.16.99.22): 56 data bytes # [tl! .nocopy:start]
64 bytes from 172.16.99.22: icmp_seq=0 ttl=64 time=0.202 ms
64 bytes from 172.16.99.22: icmp_seq=1 ttl=64 time=0.312 ms
64 bytes from 172.16.99.22: icmp_seq=2 ttl=64 time=0.242 ms
--- 172.16.99.22 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
-round-trip min/avg/max = 0.202/0.252/0.312 ms
+round-trip min/avg/max = 0.202/0.252/0.312 ms # [tl! .nocopy:end]
```
Okay, time to throw some vSAN on these hosts. Select the cluster object, go to the configuration tab, scroll down to vSAN, and click "Turn on vSAN". This will be a single site cluster, and I don't need to enable any additional services. When prompted, I claim the 8GB drives for the cache tier and the 16GB drives for capacity.
@@ -253,7 +252,7 @@ Anyhoo, each of these VMs will need to be resolvable in DNS so I started by crea
|`idm.lab.bowdre.net`|`192.168.1.41`|
|`vra.lab.bowdre.net`|`192.168.1.42`|
-I then attached the installer ISO to my Windows VM and ran through the installation from there.
+I then attached the installer ISO to my Windows VM and ran through the installation from there.
![vRealize Easy Installer](42n3aMim5.png)
Similar to the vCenter deployment process, this one prompts you for all the information it needs up front and then takes care of everything from there. That's great news because this is a pretty long deployment; it took probably two hours from clicking the final "Okay, do it" button to being able to log in to my shiny new vRealize Automation environment.
diff --git a/content/posts/vra8-automatic-deployment-naming-another-take/index.md b/content/posts/vra8-automatic-deployment-naming-another-take/index.md
index cda6f0a..071f50e 100644
--- a/content/posts/vra8-automatic-deployment-naming-another-take/index.md
+++ b/content/posts/vra8-automatic-deployment-naming-another-take/index.md
@@ -25,7 +25,8 @@ So this will generate a name that looks something like `[user]_[catalog_item]_[s
That does mean that I'll need to add another vRO call, but I can set this up so that it only gets triggered once, when the form loads, instead of refreshing each time the inputs change.
So I hop over to vRO and create a new action, which I call `getTimestamp`. It doesn't require any inputs, and returns a single string. Here's the code:
-```js
+```javascript
+// torchlight! {"lineNumbers": true}
// JavaScript: getTimestamp action
// Inputs: None
// Returns: result (String)
diff --git a/content/posts/vra8-custom-provisioning-part-four/index.md b/content/posts/vra8-custom-provisioning-part-four/index.md
index b26ee24..dc93d30 100644
--- a/content/posts/vra8-custom-provisioning-part-four/index.md
+++ b/content/posts/vra8-custom-provisioning-part-four/index.md
@@ -12,14 +12,14 @@ tags:
title: 'vRA8 Custom Provisioning: Part Four'
---
-My [last post in this series](/vra8-custom-provisioning-part-three) marked the completion of the vRealize Orchestrator workflow that I use for pre-provisioning tasks, namely generating a unique *sequential* hostname which complies with a defined naming standard and doesn't conflict with any existing records in vSphere, Active Directory, or DNS. That takes care of many of the "back-end" tasks for a simple deployment.
+My [last post in this series](/vra8-custom-provisioning-part-three) marked the completion of the vRealize Orchestrator workflow that I use for pre-provisioning tasks, namely generating a unique *sequential* hostname which complies with a defined naming standard and doesn't conflict with any existing records in vSphere, Active Directory, or DNS. That takes care of many of the "back-end" tasks for a simple deployment.
-This post will add in some "front-end" operations, like creating a customized VM request form in Service Broker and dynamically populating a drop-down with a list of networks available at the user-selected deployment site. I'll also take care of some housekeeping items like automatically generating a unique deployment name.
+This post will add in some "front-end" operations, like creating a customized VM request form in Service Broker and dynamically populating a drop-down with a list of networks available at the user-selected deployment site. I'll also take care of some housekeeping items like automatically generating a unique deployment name.
### Getting started with Service Broker Custom Forms
-So far, I've been working either in the Cloud Assembly or Orchestrator UIs, both of which are really geared toward administrators. Now I'm going to be working with Service Broker which will provide the user-facing front-end. This is where "normal" users will be able to submit provisioning requests without having to worry about any of the underlying infrastructure or orchestration.
+So far, I've been working either in the Cloud Assembly or Orchestrator UIs, both of which are really geared toward administrators. Now I'm going to be working with Service Broker which will provide the user-facing front-end. This is where "normal" users will be able to submit provisioning requests without having to worry about any of the underlying infrastructure or orchestration.
-Before I can do anything with my Cloud Template in the Service Broker UI, though, I'll need to release it from Cloud Assembly. I do this by opening the template on the *Design* tab and clicking the *Version* button at the bottom of the screen. I'll label this as `1.0` and tick the checkbox to *Release this version to the catalog*.
+Before I can do anything with my Cloud Template in the Service Broker UI, though, I'll need to release it from Cloud Assembly. I do this by opening the template on the *Design* tab and clicking the *Version* button at the bottom of the screen. I'll label this as `1.0` and tick the checkbox to *Release this version to the catalog*.
![Releasing the Cloud Template to the Service Broker catalog](0-9BaWJqq.png)
I can then go to the Service Broker UI and add a new Content Source for my Cloud Assembly templates.
@@ -28,7 +28,7 @@ I can then go to the Service Broker UI and add a new Content Source for my Cloud
After hitting the *Create & Import* button, all released Cloud Templates in the selected Project will show up in the Service Broker *Content* section:
![New content!](Hlnnd_8Ed.png)
-In order for users to deploy from this template, I also need to go to *Content Sharing*, select the Project, and share the content. This can be done either at the Project level or by selecting individual content items.
+In order for users to deploy from this template, I also need to go to *Content Sharing*, select the Project, and share the content. This can be done either at the Project level or by selecting individual content items.
![Content sharing](iScnhmzVY.png)
That template now appears on the Service Broker *Catalog* tab:
@@ -48,7 +48,7 @@ How about that Deployment Name field? In my tests, I'd been manually creating a
### Automatic deployment naming
*[Update] I've since come up with what I think is a better approach to handling this. Check it out [here](/vra8-automatic-deployment-naming-another-take)!*
-That means it's time to dive back into the vRealize Orchestrator interface and whip up a new action for this purpose. I created a new action within my existing `net.bowdre.utility` module called `createDeploymentName`.
+That means it's time to dive back into the vRealize Orchestrator interface and whip up a new action for this purpose. I created a new action within my existing `net.bowdre.utility` module called `createDeploymentName`.
![createDeploymentName action](GMCWhns7u.png)
A good deployment name *must* be globally unique, and it would be great if it could also convey some useful information like who requested the deployment, which template it is being deployed from, and the purpose of the server. The `siteCode (String)`, `envCode (String)`, `functionCode (String)`, and `appCode (String)` variables from the request form will do a great job of describing the server's purpose. I can also pass in some additional information from the Service Broker form like `catalogItemName (String)` to get the template name and `requestedByName (String)` to identify the user making the request. So I'll set all those as inputs to my action:
@@ -58,9 +58,10 @@ I also went ahead and specified that the action will return a String.
And now for the code. I really just want to mash all those variables together into a long string, and I'll also add a timestamp to make sure each deployment name is truly unique.
-```js
+```javascript
+// torchlight! {"lineNumbers": true}
// JavaScript: createDeploymentName
-// Inputs: catalogItemName (String), requestedByName (String), siteCode (String),
+// Inputs: catalogItemName (String), requestedByName (String), siteCode (String),
// envCode (String), functionCode (String), appCode (String)
// Returns: deploymentName (String)
@@ -99,7 +100,7 @@ As a quick recap, I've got five networks available for vRA, split across my two
I'm going to add additional tags to these networks to further define their purpose.
|Name |Purpose |Tags |
-| --- | --- | --- |
+| --- | --- | --- |
| d1620-Servers-1 |Management | `net:bow`, `net:mgmt` |
| d1630-Servers-2 | Front-end | `net:bow`, `net:front` |
| d1640-Servers-3 | Back-end | `net:bow`, `net:back` |
@@ -109,7 +110,7 @@ I'm going to add additional tags to these networks to further define their purpo
I *could* just use those tags to let users pick the appropriate network, but I've found that a lot of times users don't know why they're picking a certain network, they just know the IP range they need to use. So I'll take it a step further and add a giant tag to include the Site, Purpose, and Subnet, and this is what will ultimately be presented to the users:
|Name |Tags |
-| --- | --- |
+| --- | --- |
| d1620-Servers-1 | `net:bow`, `net:mgmt`, `net:bow-mgmt-172.16.20.0` |
| d1630-Servers-2 | `net:bow`, `net:front`, `net:bow-front-172.16.30.0` |
| d1640-Servers-3 | `net:bow`, `net:back`, `net:bow-back-172.16.40.0` |
@@ -121,12 +122,13 @@ I *could* just use those tags to let users pick the appropriate network, but I'v
So I can now use a single tag to positively identify a single network, as long as I know its site and either its purpose or its IP space. I'll reference these tags in a vRO action that will populate a dropdown in the request form with the available networks for the selected site. Unfortunately I couldn't come up with an easy way to dynamically pull the tags into vRO so I create another Configuration Element to store them:
![networksPerSite configuration element](xfEultDM_.png)
-This gets filed under the existing `CustomProvisioning` folder, and I name it `networksPerSite`. Each site gets a new variable of type `Array/string`. The name of the variable matches the site ID, and the contents are just the tags minus the `net:` prefix.
+This gets filed under the existing `CustomProvisioning` folder, and I name it `networksPerSite`. Each site gets a new variable of type `Array/string`. The name of the variable matches the site ID, and the contents are just the tags minus the `net:` prefix.
I created a new action named (appropriately) `getNetworksForSite`. This will accept `siteCode (String)` as its input from the Service Broker request form, and will return an array of strings containing the available networks.
![getNetworksForSite action](IdrT-Un8H1.png)
-```js
+```javascript
+// torchlight! {"lineNumbers": true}
// JavaScript: getNetworksForSite
// Inputs: siteCode (String)
// Returns: site.value (Array/String)
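// --- Sketch of the body (my reconstruction): find the networksPerSite
// --- configuration element in the CustomProvisioning folder and return
// --- the Array/string attribute whose name matches the requested site code.
var category = Server.getConfigurationElementCategoryWithPath("CustomProvisioning");
var elements = category.configurationElements;
var networksPerSite;
for each (var element in elements) {
    if (element.name == "networksPerSite") { networksPerSite = element; }
}
var site = networksPerSite.getAttributeWithKey(siteCode);
return site.value;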
@@ -164,6 +166,7 @@ inputs:
and update the resource configuration for the network entity to constrain it based on `input.network` instead of `input.site` as before:
```yaml
+# torchlight! {"lineNumbers": true}
resources:
Cloud_vSphere_Machine_1:
type: Cloud.vSphere.Machine
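  # --- Sketch of the relevant change further down this resources block
  # --- (my reconstruction; the network resource name and type are
  # --- assumptions, but the tag format matches the tags defined above):
  Cloud_vSphere_Network_1:
    type: Cloud.vSphere.Network
    properties:
      constraints:
        - tag: 'net:${input.network}'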
@@ -194,7 +197,7 @@ Back on the Service Broker UI, I hit my `LAB` Content Source again to Save & Imp
Now I can just go back to the Catalog tab and request a new deployment to check out my--
![Ew, an ugly error](zWFTuOYOG.png)
-Oh yeah. That vRO action gets called as soon as the request form loads - before selecting the required site code as an input. I could modify the action so that returns an empty string if the site hasn't been selected yet, but I'm kind of lazy so I'll instead just modify the custom form so that the Site field defaults to the `BOW` site.
+Oh yeah. That vRO action gets called as soon as the request form loads - before selecting the required site code as an input. I could modify the action so that it returns an empty string if the site hasn't been selected yet, but I'm kind of lazy so I'll instead just modify the custom form so that the Site field defaults to the `BOW` site.
![BOW is default](yb77nH2Fp.png)
*Now* I can open up the request form and see how well it works:
@@ -214,4 +217,4 @@ And I can also confirm that the VM got named appropriately (based on the [naming
Very slick. And I think that's a great stopping point for today.
-Coming up, I'll describe how I create AD computer objects in site-specific OUs, add notes and custom attributes to the VM in vSphere, and optionally create static DNS records on a Windows DNS server.
\ No newline at end of file
+Coming up, I'll describe how I create AD computer objects in site-specific OUs, add notes and custom attributes to the VM in vSphere, and optionally create static DNS records on a Windows DNS server.
\ No newline at end of file
diff --git a/content/posts/vra8-custom-provisioning-part-one/index.md b/content/posts/vra8-custom-provisioning-part-one/index.md
index 352bb3f..a8a1699 100644
--- a/content/posts/vra8-custom-provisioning-part-one/index.md
+++ b/content/posts/vra8-custom-provisioning-part-one/index.md
@@ -47,7 +47,7 @@ Since each of my hosts only has 100GB of datastore and my Windows template speci
I created a few Flavor Mappings ranging from `micro` (1vCPU|1GB RAM) to `giant` (8vCPU|16GB) but for this resource-constrained lab I'll stick mostly to the `micro`, `tiny` (1vCPU|2GB), and `small` (2vCPU|2GB) sizes.
![T-shirt size Flavor Mappings](lodJlc8Hp.png)
-And I created an Image Mapping named `ws2019` which points to a Windows Server 2019 Core template I have stored in my lab's Content Library (cleverly-named "LABrary" for my own amusement).
+And I created an Image Mapping named `ws2019` which points to a Windows Server 2019 Core template I have stored in my lab's Content Library (cleverly-named "LABrary" for my own amusement).
![Windows Server Image Mapping](6k06ySON7.png)
And with that, my vRA infrastructure is ready for testing a *very* basic deployment.
@@ -58,6 +58,7 @@ Now it's time to leave the Infrastructure tab and visit the Design one, where I'
VMware's got a [pretty great document](https://docs.vmware.com/en/vRealize-Automation/8.3/Using-and-Managing-Cloud-Assembly/GUID-6BA1DA96-5C20-44BF-9C81-F8132B9B4872.html#list-of-input-properties-2) describing the syntax for these input properties, plus a lot of it is kind of self-explanatory. Let's step through this real quick:
```yaml
+# torchlight! {"lineNumbers": true}
formatVersion: 1
inputs:
# Image Mapping
@@ -69,11 +70,12 @@ inputs:
const: ws2019
default: ws2019
```
-`formatVersion` is always gonna be 1 so we'll skip right past that.
+`formatVersion` is always gonna be 1 so we'll skip right past that.
The first input is going to ask the user to select the desired Operating System for this deployment. The `oneOf` type will be presented as a dropdown (with only one option in this case, but I'll leave it this way for future flexibility); the user will see the friendly "Windows Server 2019" `title` which is tied to the `ws2019` `const` value. For now, I'll also set the `default` value of the field so I don't have to actually click the dropdown each time I test the deployment.
```yaml
+# torchlight! {"lineNumbers": true}
# Flavor Mapping
size:
title: Resource Size
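    # --- Sketch of the rest of this input (reconstructed from the flavor
    # --- names mentioned in this post; the display titles are my guess):
    type: string
    oneOf:
      - title: 'Micro (1vCPU|1GB)'
        const: micro
      - title: 'Tiny (1vCPU|2GB)'
        const: tiny
      - title: 'Small (2vCPU|2GB)'
        const: small
    default: small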
@@ -93,6 +95,7 @@ Now I'm asking the user to pick the t-shirt size of the VM. These will correspon
The `resources` section is where the data from the inputs gets applied to the deployment:
```yaml
+# torchlight! {"lineNumbers": true}
resources:
Cloud_vSphere_Machine_1:
type: Cloud.vSphere.Machine
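    # --- Sketch (my reconstruction): hand the chosen image and t-shirt
    # --- size straight through to the machine's properties.
    properties:
      image: '${input.image}'
      flavor: '${input.size}'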
@@ -113,6 +116,7 @@ So I'm connecting the selected `input.image` to the Image Mapping configured in
All together now:
```yaml
+# torchlight! {"lineNumbers": true}
formatVersion: 1
inputs:
# Image Mapping
@@ -180,7 +184,7 @@ And I can pop over to the IPAM interface to confirm that the IP has been marked
Fantastic! But one of my objectives from earlier was to let the user control where a VM gets provisioned. Fortunately it's pretty easy to implement thanks to vRA 8's use of tags.
### Using tags for resource placement
-Just about every entity within vRA 8 can have tags applied to it, and you can leverage those tags in some pretty creative and useful ways. For now, I'll start by applying tags to my compute resources; I'll use `comp:bow` for the "BOW Cluster" and `comp:dre` for the "DRE Cluster".
+Just about every entity within vRA 8 can have tags applied to it, and you can leverage those tags in some pretty creative and useful ways. For now, I'll start by applying tags to my compute resources; I'll use `comp:bow` for the "BOW Cluster" and `comp:dre` for the "DRE Cluster".
![Compute tags](oz1IAp-i0.png)
I'll also use the `net:bow` and `net:dre` tags to logically divide up the networks between my sites:
@@ -189,6 +193,7 @@ I'll also use the `net:bow` and `net:dre` tags to logically divide up the networ
I can now add an input to the Cloud Template so the user can pick which site they need to deploy to:
```yaml
+# torchlight! {"lineNumbers": true}
inputs:
# Datacenter location
site:
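    # --- Sketch of the rest of this input, using the two site codes
    # --- from this lab (BOW and DRE):
    type: string
    title: Site
    enum:
      - BOW
      - DRE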
@@ -205,6 +210,7 @@ I'm using the `enum` option now instead of `oneOf` since the site names shouldn'
And then I'll add some `constraints` to the `resources` section, making use of the `to_lower` function from the [cloud template expression syntax](https://docs.vmware.com/en/vRealize-Automation/8.3/Using-and-Managing-Cloud-Assembly/GUID-12F0BC64-6391-4E5F-AA48-C5959024F3EB.html) to automatically convert the selected site name from all-caps to lowercase so it matches the appropriate tag:
```yaml
+# torchlight! {"lineNumbers": true}
resources:
Cloud_vSphere_Machine_1:
type: Cloud.vSphere.Machine
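    # --- Sketch (my reconstruction): the constraint tag, lowercased to
    # --- match the comp:bow / comp:dre tags applied earlier.
    properties:
      constraints:
        - tag: 'comp:${to_lower(input.site)}'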
diff --git a/content/posts/vra8-custom-provisioning-part-three/index.md b/content/posts/vra8-custom-provisioning-part-three/index.md
index 6635d91..f4b0de6 100644
--- a/content/posts/vra8-custom-provisioning-part-three/index.md
+++ b/content/posts/vra8-custom-provisioning-part-three/index.md
@@ -35,12 +35,13 @@ Once it completes successfully, I can visit the Inventory section of the vRO int
![New AD endpoint](vlnle_ekN.png)
#### checkForAdConflict Action
-Since I try to keep things modular, I'm going to write a new vRO action within the `net.bowdre.utility` module called `checkForAdConflict` which can be called from the `Generate unique hostname` workflow. It will take in `computerName (String)` as an input and return a boolean `True` if a conflict is found or `False` if the name is available.
+Since I try to keep things modular, I'm going to write a new vRO action within the `net.bowdre.utility` module called `checkForAdConflict` which can be called from the `Generate unique hostname` workflow. It will take in `computerName (String)` as an input and return a boolean `True` if a conflict is found or `False` if the name is available.
![Action: checkForAdConflict](JT7pbzM-5.png)
It's basically going to loop through the Active Directory hosts defined in vRO and search each for a matching computer name. Here's the full code:
-```js
+```javascript
+// torchlight! {"lineNumbers": true}
// JavaScript: checkForAdConflict action
// Inputs: computerName (String)
// Outputs: (Boolean)
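// --- Rough sketch only: I'm reconstructing this from the description
// --- above, and the AD plugin calls (AD_HostManager.findAllHosts(),
// --- ActiveDirectory.getComputerAD()) are assumptions about the
// --- plugin's scripting API rather than verbatim code.
var adHosts = AD_HostManager.findAllHosts();
for each (var adHost in adHosts) {
    System.log("Searching AD host " + adHost.name + " for " + computerName);
    var computer = ActiveDirectory.getComputerAD(computerName, adHost);
    if (computer != null) {
        return true;  // conflict found
    }
}
return false;  // name is available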
@@ -65,7 +66,8 @@ Now I can pop back over to my massive `Generate unique hostname` workflow and dr
I'm using this as a scriptable task so that I can do a little bit of processing before I call the action I created earlier - namely, if `conflict (Boolean)` was already set, the task should skip any further processing. That does mean that I'll need to call the action by both its module and name using `System.getModule("net.bowdre.utility").checkForAdConflict(candidateVmName)`. So here's the full script:
-```js
+```javascript
+// torchlight! {"lineNumbers": true}
// JavaScript: check for AD conflict task
// Inputs: candidateVmName (String), conflict (Boolean)
// Outputs: conflict (Boolean)
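// --- Sketch: bail out early if a conflict was already flagged, otherwise
// --- call the action by module and name as described above.
if (conflict) {
    System.log("Existing conflict found, skipping AD check...");
} else {
    if (System.getModule("net.bowdre.utility").checkForAdConflict(candidateVmName)) {
        conflict = true;
        System.warn("Conflicting AD object found for " + candidateVmName);
    }
}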
@@ -91,7 +93,7 @@ Cool, so that's the AD check in the bank. Onward to DNS!
### DNS
**[Update]** Thanks to a [kind commenter](https://github.com/jbowdre/jbowdre.github.io/issues/10#issuecomment-932541245), I've learned that my DNS-checking solution detailed below is somewhat unnecessarily complicated. I overlooked it at the time I was putting this together, but vRO _does_ provide a `System.resolveHostName()` function to easily perform DNS lookups. I've updated the [Adding it to the workflow](#adding-it-to-the-workflow-1) section below with the simplified script which eliminates the need for building an external script with dependencies and importing that as a vRO action, but I'm going to leave those notes in place as well in case anyone else (or Future John) might need to leverage a similar approach to solve another issue.
-Seriously. Go ahead and skip to [here](#adding-it-to-the-workflow-1).
+Seriously. Go ahead and skip to [here](#adding-it-to-the-workflow-1).
#### The Challenge (Deprecated)
JavaScript can't talk directly to Active Directory on its own, but in the previous action I was able to leverage the AD plugin built into vRO to bridge that gap. Unfortunately ~~there isn't~~ _I couldn't find_ a corresponding pre-installed plugin that will work as a DNS client. vRO 8 does introduce support for using other languages like (cross-platform) PowerShell or Python instead of being restricted to just JavaScript... but I wasn't able to find an easy solution for querying DNS from those languages either without requiring external modules. (The cross-platform version of PowerShell doesn't include handy Windows-centric cmdlets like `Get-DnsServerResourceRecord`.)
@@ -104,21 +106,22 @@ Luckily, vRO does provide a way to import scripts bundled with their required mo
I start by creating a folder to store the script and needed module, and then I create the required `handler.ps1` file.
```shell
-❯ mkdir checkDnsConflicts
-❯ cd checkDnsConflicts
-❯ touch handler.ps1
+mkdir checkDnsConflicts # [tl! .cmd:2]
+cd checkDnsConflicts
+touch handler.ps1
```
I then create a `Modules` folder and install the DnsClient-PS module:
```shell
-❯ mkdir Modules
-❯ pwsh -c "Save-Module -Name DnsClient-PS -Path ./Modules/ -Repository PSGallery"
+mkdir Modules # [tl! .cmd:1]
+pwsh -c "Save-Module -Name DnsClient-PS -Path ./Modules/ -Repository PSGallery"
```
And then it's time to write the PowerShell script in `handler.ps1`:
```powershell
+# torchlight! {"lineNumbers": true}
# PowerShell: checkForDnsConflict script
# Inputs: $inputs.hostname (String), $inputs.domain (String)
# Outputs: $queryresult (String)
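# --- Sketch (my reconstruction): resolve the FQDN with the DnsClient-PS
# --- module's Resolve-Dns cmdlet; the response property names here are
# --- assumptions based on that module, not verbatim from the post.
function handler {
    Param($context, $inputs)
    $fqdn = $inputs.hostname + '.' + $inputs.domain
    $resolution = Resolve-Dns $fqdn
    if (-not $resolution.HasError -and $resolution.Answers.Count -gt 0) {
        $queryresult = "true"   # record exists - conflict!
    } else {
        $queryresult = "false"  # no record found - name is available
    }
    $queryresult
}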
@@ -148,8 +151,8 @@ function handler {
Now to package it up in a `.zip` which I can then import into vRO:
```shell
-❯ zip -r --exclude=\*.zip -X checkDnsConflicts.zip .
- adding: Modules/ (stored 0%)
+zip -r --exclude=\*.zip -X checkDnsConflicts.zip . # [tl! .cmd]
+ adding: Modules/ (stored 0%) # [tl! .nocopy:start]
adding: Modules/DnsClient-PS/ (stored 0%)
adding: Modules/DnsClient-PS/1.0.0/ (stored 0%)
adding: Modules/DnsClient-PS/1.0.0/Public/ (stored 0%)
@@ -170,8 +173,9 @@ Now to package it up in a `.zip` which I can then import into vRO:
adding: Modules/DnsClient-PS/1.0.0/DnsClient-PS.Format.ps1xml (deflated 80%)
adding: Modules/DnsClient-PS/1.0.0/DnsClient-PS.psd1 (deflated 59%)
adding: handler.ps1 (deflated 49%)
-❯ ls
-checkDnsConflicts.zip handler.ps1 Modules
+# [tl! .nocopy:end]
+ls # [tl! .cmd]
+checkDnsConflicts.zip handler.ps1 Modules # [tl! .nocopy]
```
#### checkForDnsConflict action (Deprecated)
@@ -188,7 +192,8 @@ Just like with the `check for AD conflict` action, I'll add this onto the workfl
_[Update] The below script has been altered to drop the unneeded call to my homemade `checkForDnsConflict` action and instead use the built-in `System.resolveHostName()`. Thanks @powertim!_
-```js
+```javascript
+// torchlight! {"lineNumbers": true}
// JavaScript: check for DNS conflict
// Inputs: candidateVmName (String), conflict (Boolean), requestProperties (Properties)
// Outputs: conflict (Boolean)
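// --- Sketch: skip if a conflict was already flagged; otherwise build the
// --- FQDN from the dnsDomain request property and test it with the
// --- built-in System.resolveHostName() (assuming a null return means
// --- the name doesn't resolve).
if (conflict) {
    System.log("Existing conflict found, skipping DNS check...");
} else {
    var fqdn = candidateVmName + "." + requestProperties.dnsDomain;
    if (System.resolveHostName(fqdn) != null) {
        conflict = true;
        System.warn("Conflicting DNS record found for " + fqdn);
    }
}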
@@ -212,7 +217,7 @@ if (conflict) {
Once that's all in place, I kick off another deployment to make sure that everything works correctly. After it completes, I can navigate to the **Extensibility > Workflow runs** section of the vRA interface to review the details:
![Workflow run success](GZKQbELfM.png)
-It worked!
+It worked!
But what if there *had* been conflicts? It's important to make sure that works too. I know that if I run that deployment again, the VM will get named `DRE-DTST-XXX008` and then `DRE-DTST-XXX009`. So I'm going to force conflicts by creating an AD object for one and a DNS record for the other.
![Making conflicts](6HBIUf6KE.png)
@@ -225,6 +230,6 @@ The workflow saw that the last VM was created as `-007` so it first grabbed `-00
### Next steps
So now I've got a pretty capable workflow for controlled naming of my deployed VMs. The names conform with my established naming scheme and increment predictably in response to naming conflicts in vSphere, Active Directory, and DNS.
-In the next post, I'll be enhancing my cloud template to let users pick which network to use for the deployed VM. That sounds simple, but I'll want the list of available networks to be filtered based on the selected site - that means using a Service Broker custom form to query another vRO action. I will also add the ability to create AD computer objects in a site-specific OU and automatically join the server to the domain. And I'll add notes to the VM to make it easier to remember why it was deployed.
+In the next post, I'll be enhancing my cloud template to let users pick which network to use for the deployed VM. That sounds simple, but I'll want the list of available networks to be filtered based on the selected site - that means using a Service Broker custom form to query another vRO action. I will also add the ability to create AD computer objects in a site-specific OU and automatically join the server to the domain. And I'll add notes to the VM to make it easier to remember why it was deployed.
Stay tuned!
diff --git a/content/posts/vra8-custom-provisioning-part-two/index.md b/content/posts/vra8-custom-provisioning-part-two/index.md
index fc0c8a3..2dbaf4b 100644
--- a/content/posts/vra8-custom-provisioning-part-two/index.md
+++ b/content/posts/vra8-custom-provisioning-part-two/index.md
@@ -38,6 +38,7 @@ I'll start by adding those fields as inputs on my cloud template.
I already have a `site` input at the top of the template, used for selecting the deployment location. I'll leave that there:
```yaml
+# torchlight! {"lineNumbers": true}
inputs:
site:
type: string
@@ -50,6 +51,7 @@ inputs:
I'll add the rest of the naming components below the prompts for image selection and size, starting with a dropdown of environments to pick from:
```yaml
+# torchlight! {"lineNumbers": true}
environment:
type: string
title: Environment
@@ -63,6 +65,7 @@ I'll add the rest of the naming components below the prompts for image selection
And a dropdown for those function options:
```yaml
+# torchlight! {"lineNumbers": true}
function:
type: string
title: Function Code
@@ -83,6 +86,7 @@ And a dropdown for those function options:
And finally a text entry field for the application descriptor. Note that this one includes the `minLength` and `maxLength` constraints to enforce the three-character format.
```yaml
+# torchlight! {"lineNumbers": true}
app:
type: string
title: Application Code
@@ -96,6 +100,7 @@ And finally a text entry field for the application descriptor. Note that this on
I then need to map these inputs to the resource entity at the bottom of the template so that they can be passed to vRO as custom properties. All of these are direct mappings except for `environment` since I only want the first letter. I use the `substring()` function to achieve that, but wrap it in a conditional so that it won't implode if the environment hasn't been picked yet. I'm also going to add in a `dnsDomain` property that will be useful later when I need to query for DNS conflicts.
```yaml
+# torchlight! {"lineNumbers": true}
resources:
Cloud_vSphere_Machine_1:
type: Cloud.vSphere.Machine
@@ -112,6 +117,7 @@ resources:
So here's the complete template:
```yaml
+# torchlight! {"lineNumbers": true}
formatVersion: 1
inputs:
site:
@@ -228,7 +234,8 @@ The first thing I'll want this workflow to do (particularly for testing) is to t
This action has a single input, a `Properties` object named `payload`. (By the way, vRO is pretty particular about variable typing, so going forward I'll reference variables as `variableName (type)`.) Here's the JavaScript that will basically loop through each element and write the contents to the vRO debug log:
-```js
+```javascript
+// torchlight! {"lineNumbers": true}
// JavaScript: logPayloadProperties
// Inputs: payload (Properties)
// Outputs: none
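// --- Flat sketch of the idea (the real thing may recurse into nested
// --- Properties objects):
System.log("==== Begin Event Broker Payload ====");
for each (var key in payload.keys) {
    System.log(key + ": " + payload.get(key));
}
System.log("==== End Event Broker Payload ====");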
@@ -291,7 +298,8 @@ Anyway, I drop a Scriptable Task item onto the workflow canvas to handle parsing
The script for this is pretty straight-forward:
-```js
+```javascript
+// torchlight! {"lineNumbers": true}
// JavaScript: parse payload
// Inputs: inputProperties (Properties)
// Outputs: requestProperties (Properties), originalNames (Array/string)
@@ -333,7 +341,8 @@ Select **Output** at the top of the *New Variable* dialog and the complete the f
And here's the script for that task:
-```js
+```javascript
+// torchlight! {"lineNumbers": true}
// JavaScript: Apply new names
// Inputs: inputProperties (Properties), newNames (Array/string)
// Outputs: resourceNames (Array/string)
@@ -363,7 +372,8 @@ Okay, on to the schema. This workflow may take a little while to execute, and it
The script is very short:
-```js
+```javascript
+// torchlight! {"lineNumbers": true}
// JavaScript: create lock
// Inputs: lockOwner (String), lockId (String)
// Outputs: none
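// --- Sketch using vRO's built-in LockingSystem; lockAndWait() blocks
// --- until the lock is free, so concurrent runs queue up behind it.
System.log("Attempting to acquire lock: " + lockId);
LockingSystem.lockAndWait(lockId, lockOwner);
System.log("Lock acquired: " + lockId);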
@@ -377,7 +387,8 @@ We're getting to the meat of the operation now - another scriptable task named `
![Task: generate hostnameBase](XATryy20y.png)
-```js
+```javascript
+// torchlight! {"lineNumbers": true}
// JavaScript: generate hostnameBase
// Inputs: nameFormat (String), requestProperties (Properties), baseFormat (String)
// Outputs: hostnameBase (String), digitCount (Number), hostnameSeq (Number)
@@ -415,7 +426,8 @@ I've only got the one vCenter in my lab. At work, I've got multiple vCenters so
Anyway, back to my "Generate unique hostname" workflow, where I'll add another scriptable task to prepare the vCenter SDK connection. This one doesn't require any inputs, but will output an array of `VC:SdkConnection` objects:
![Task: prepare vCenter SDK connection](ByIWO66PC.png)
-```js
+```javascript
+// torchlight! {"lineNumbers": true}
// JavaScript: prepare vCenter SDK connection
// Inputs: none
// Outputs: sdkConnections (Array/VC:SdkConnection)
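// --- Sketch: the vCenter plugin exposes every registered connection.
sdkConnections = VcPlugin.allSdkConnections;
System.log("Found " + sdkConnections.length + " vCenter SDK connection(s).");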
@@ -432,7 +444,8 @@ Next, I'm going to drop another ForEach element onto the canvas. For each vCente
That `vmsByHost (Array/array)` object contains any and all VMs which match `hostnameBase (String)`, but they're broken down by the host they're running on. So I use a scriptable task to convert that array-of-arrays into a new array-of-strings containing just the VM names.
![Task: unpack results for all hosts](gIEFRnilq.png)
-```js
+```javascript
+// torchlight! {"lineNumbers": true}
// JavaScript: unpack results for all hosts
// Inputs: vmsByHost (Array/Array)
// Outputs: vmNames (Array/string)
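// --- Sketch: flatten the per-host arrays first; the map() line is
// --- quoted from the post itself.
var vms = [];
for each (var hostVms in vmsByHost) {
    vms = vms.concat(hostVms);
}
vmNames = vms.map(function(i) {return (i.displayName).toUpperCase()});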
@@ -453,7 +466,8 @@ vmNames = vms.map(function(i) {return (i.displayName).toUpperCase()})
This scriptable task will check the `computerNames` configuration element we created earlier to see if we've already named a VM starting with `hostnameBase (String)`. If such a name exists, we'll increment the number at the end by one, and return that as a new `hostnameSeq (Number)` variable; if it's the first of its kind, `hostnameSeq (Number)` will be set to `1`. And then we'll combine `hostnameBase (String)` and `hostnameSeq (Number)` to create the new `candidateVmName (String)`. If things don't work out, this script will throw `errMsg (String)` so I need to add that as an output exception binding as well.
![Task: generate hostnameSeq & candidateVmName](fWlSrD56N.png)
-```js
+```javascript
+// torchlight! {"lineNumbers": true}
// JavaScript: generate hostnameSeq & candidateVmName
// Inputs: hostnameBase (String), digitCount (Number)
// Outputs: hostnameSeq (Number), computerNames (ConfigurationElement), candidateVmName (String)
@@ -500,7 +514,8 @@ System.log("Proposed VM name: " + candidateVmName)
Now that I know what I'd like to try to name this new VM, it's time to start checking for any potential conflicts. So this task will compare my `candidateVmName (String)` against the existing `vmNames (Array/string)` to see if there are any collisions. If there's a match, it will set a new variable called `conflict (Boolean)` to `true` and also report the issue through the `errMsg (String)` output exception binding. Otherwise it will move on to the next check.
![Task: check for VM name conflicts](qmHszypww.png)
-```js
+```javascript
+// torchlight! {"lineNumbers": true}
// JavaScript: check for VM name conflicts
// Inputs: candidateVmName (String), vmNames (Array/string)
// Outputs: conflict (Boolean)
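// --- Sketch: a simple linear scan; throwing errMsg feeds the output
// --- exception binding mentioned above.
for each (var vmName in vmNames) {
    if (vmName == candidateVmName) {
        conflict = true;
        errMsg = "Found a conflicting VM name: " + vmName;
        throw(errMsg);
    }
}
System.log("No VM name conflicts found for " + candidateVmName);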
@@ -527,7 +542,8 @@ I can then drag the new element away from the "everything is fine" flow, and con
All this task really does is clear the `conflict (Boolean)` flag so that's the only output.
-```js
+```javascript
+// torchlight! {"lineNumbers": true}
// JavaScript: conflict resolution
// Inputs: none
// Outputs: conflict (Boolean)
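// --- The whole task, per the description above:
conflict = false;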
@@ -542,7 +558,8 @@ So if `check VM name conflict` encounters a collision with an existing VM name i
Assuming that everything has gone according to plan and the workflow has avoided any naming conflicts, it will need to return `nextVmName (String)` back to the `VM Provisioning` workflow. That's as simple as setting it to the last value of `candidateVmName (String)`:
![Task: return nextVmName](5QFTPHp5H.png)
-```js
+```javascript
+// torchlight! {"lineNumbers": true}
// JavaScript: return nextVmName
// Inputs: candidateVmName (String)
// Outputs: nextVmName (String)
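// --- Sketch; the System.log() line is quoted from the post.
nextVmName = candidateVmName;
System.log(" ***** Selecting [" + nextVmName + "] as the next VM name ***** ");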
@@ -555,7 +572,8 @@ System.log(" ***** Selecting [" + nextVmName + "] as the next VM name ***** ")
And we should also remove that lock that we created at the start of this workflow.
![Task: remove lock](BhBnBh8VB.png)
-```js
+```javascript
+// torchlight! {"lineNumbers": true}
// JavaScript remove lock
// Inputs: lockId (String), lockOwner (String)
// Outputs: none
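// --- Sketch: release the lock acquired at the start of the workflow.
LockingSystem.unlock(lockId, lockOwner);
System.log("Lock released: " + lockId);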
diff --git a/layouts/partials/archive.html b/layouts/partials/archive.html
index b63bc39..fc7d6bd 100644
--- a/layouts/partials/archive.html
+++ b/layouts/partials/archive.html
@@ -11,7 +11,7 @@
{{ .Content }}
-{{ range $pages }}
+{{- range (.Paginate $pages).Pages }}
{{- $postDate := .Date.Format "2006-01-02" }}
{{- $updateDate := .Lastmod.Format "2006-01-02" }}
@@ -27,4 +27,5 @@
-{{ end }}
\ No newline at end of file
+{{ end }}
+{{- template "_internal/pagination.html" . }}
\ No newline at end of file
diff --git a/layouts/partials/footer.html b/layouts/partials/footer.html
index e833946..6ac089d 100644
--- a/layouts/partials/footer.html
+++ b/layouts/partials/footer.html
@@ -1,4 +1,9 @@
{{- partial "lang.html" . -}}