initial hugo import

.gitignore (vendored)
Normal file
@ -0,0 +1 @@
*.lock
config.yaml
Normal file
@ -0,0 +1,5 @@
baseURL: https://virtuallypotato.com
disablePathToLower: true
languageCode: en-us
title: Virtually Potato
theme: m10c
@ -0,0 +1,230 @@
---
categories:
- Projects
date: "2018-09-26T08:34:30Z"
header:
  teaser: assets/images/posts-2020/i0UKdXleC.png
tags:
- docker
- linux
- cloud
title: BitWarden password manager self-hosted on free Google Cloud instance
---

![Bitwarden login](/assets/images/posts-2020/i0UKdXleC.png)

A friend mentioned the [BitWarden](https://bitwarden.com/) password manager to me yesterday and I had to confess that I'd never heard of it. I started researching it and was impressed by what I found: it's free, [open-source](https://github.com/bitwarden), feature-packed, fully cross-platform (with Windows/Linux/MacOS desktop clients, Android/iOS mobile apps, and browser extensions for Chrome/Firefox/Opera/Safari/Edge/etc), and even offers a self-hosted option.

I wanted to try out the self-hosted setup, and I discovered that the [official distribution](https://help.bitwarden.com/article/install-on-premise/) works beautifully on an `n1-standard-1` 1-vCPU Google Compute Engine instance - but that would cost me an estimated $25/mo to run after my free Google Cloud Platform trial runs out. And I can't really scale that instance down further because the embedded database won't start with less than 2GB of RAM.

I then came across [this comment](https://www.reddit.com/r/Bitwarden/comments/8vmwwe/best_place_to_self_host_bitwarden/e1p2f71/) on Reddit which discussed in somewhat-vague terms the steps required to get BitWarden to run on the [free](https://cloud.google.com/free/docs/always-free-usage-limits#compute_name) `f1-micro` instance, and also introduced me to the community-built [bitwarden_rs](https://github.com/dani-garcia/bitwarden_rs) project which is specifically designed to run a BW-compatible server on resource-constrained hardware. So here are the steps I wound up taking to get this up and running.

### Spin up a VM
*Easier said than done, but head over to https://console.cloud.google.com/ and fumble through:*

1. Creating a new project (or just adding an instance to an existing one).
2. Creating a new Compute Engine instance, selecting `f1-micro` for the Machine Type and ticking the *Allow HTTPS traffic* box.
3. *(Optional)* Editing the instance to add an ssh-key for easier remote access.
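
If you'd rather not fumble through the console, the same result can be sketched with the `gcloud` CLI (the instance name and zone here are my own placeholders; the `https-server` tag is what the *Allow HTTPS traffic* checkbox applies):

```shell
$ gcloud compute instances create bitwarden \
    --machine-type=f1-micro \
    --zone=us-east1-b \
    --tags=https-server
```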

### Configure Dynamic DNS
*Because we're cheap and don't want to pay for a static IP.*

1. Log in to the [Google Domains admin portal](https://domains.google.com/registrar) and [create a new Dynamic DNS record](https://domains.google.com/registrar). This will provide a username and password specific for that record.
2. Log in to the GCE instance and run `sudo apt-get update` followed by `sudo apt-get install ddclient`. Part of the install process prompts you to configure things... just accept the defaults and move on.
3. Edit the `ddclient` config file to look like this, substituting the username, password, and FQDN from Google Domains:

```shell
$ sudo vi /etc/ddclient.conf
# Configuration file for ddclient generated by debconf
#
# /etc/ddclient.conf

protocol=googledomains,
ssl=yes,
syslog=yes,
use=web,
server=domains.google.com,
login='[USERNAME]',
password='[PASSWORD]',
[FQDN]
```

4. `sudo vi /etc/default/ddclient` and make sure that `run_daemon="true"`:

```shell
# Configuration for ddclient scripts
# generated from debconf on Sat Sep 8 21:58:02 UTC 2018
#
# /etc/default/ddclient

# Set to "true" if ddclient should be run every time DHCP client ('dhclient'
# from package isc-dhcp-client) updates the systems IP address.
run_dhclient="false"

# Set to "true" if ddclient should be run every time a new ppp connection is
# established. This might be useful, if you are using dial-on-demand.
run_ipup="false"

# Set to "true" if ddclient should run in daemon mode
# If this is changed to true, run_ipup and run_dhclient must be set to false.
run_daemon="true"

# Set the time interval between the updates of the dynamic DNS name in seconds.
# This option only takes effect if the ddclient runs in daemon mode.
daemon_interval="300"
```

5. Restart the `ddclient` service - twice for good measure (daemon mode only gets activated on the second go *because reasons*):

```shell
$ sudo systemctl restart ddclient
$ sudo systemctl restart ddclient
```

6. After a few moments, refresh the Google Domains page to verify that your instance's external IP address is showing up on the new DDNS record.
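
If the record stubbornly refuses to update, it can help to run `ddclient` once in the foreground with extra logging (these are standard `ddclient` flags) to see what it's actually doing:

```shell
$ sudo ddclient -daemon=0 -debug -verbose -noquiet
```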

### Install Docker
*Steps taken from [here](https://docs.docker.com/install/linux/docker-ce/debian/).*

1. Update the `apt` package index:

```shell
$ sudo apt-get update
```

2. Install package management prereqs:

```shell
$ sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg2 \
    software-properties-common
```

3. Add Docker's GPG key:

```shell
$ curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
```

4. Add the Docker repo:

```shell
$ sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/debian \
    $(lsb_release -cs) \
    stable"
```

5. Update the apt index again:

```shell
$ sudo apt-get update
```

6. Install Docker:

```shell
$ sudo apt-get install docker-ce
```

### Install Certbot and generate SSL cert
*Steps taken from [here](https://certbot.eff.org/lets-encrypt/debianstretch-other.html).*

1. Add the `stretch-backports` repo:

```shell
$ sudo add-apt-repository \
    "deb https://ftp.debian.org/debian \
    stretch-backports main"
```

2. Install Certbot:

```shell
$ sudo apt-get install certbot -t stretch-backports
```

3. Generate the certificate:

```shell
$ sudo certbot certonly --standalone -d [FQDN]
```

4. Create a directory to store the new certificates and copy them there:

```shell
$ sudo mkdir -p /ssl/keys/
$ sudo cp -p /etc/letsencrypt/live/[FQDN]/fullchain.pem /ssl/keys/
$ sudo cp -p /etc/letsencrypt/live/[FQDN]/privkey.pem /ssl/keys/
```
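
Those copies in `/ssl/keys/` will go stale when the cert renews in ~90 days, so it's worth automating the refresh. Here's a sketch of a deploy hook (Certbot runs executables in this directory after each successful renewal; the script name is my own invention, and the `docker restart` line assumes the container name we'll create below):

```shell
$ sudo vi /etc/letsencrypt/renewal-hooks/deploy/refresh-bitwarden-certs.sh
#!/bin/bash
# re-copy the renewed certs and bounce the container so it picks them up
cp -p /etc/letsencrypt/live/[FQDN]/fullchain.pem /ssl/keys/
cp -p /etc/letsencrypt/live/[FQDN]/privkey.pem /ssl/keys/
docker restart bitwarden
$ sudo chmod +x /etc/letsencrypt/renewal-hooks/deploy/refresh-bitwarden-certs.sh
```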

### Set up bitwarden_rs
*Using the container image available [here](https://github.com/dani-garcia/bitwarden_rs).*

1. Let's just get it up and running first:

```shell
$ sudo docker run -d --name bitwarden \
    -e ROCKET_TLS='{certs="/ssl/fullchain.pem",key="/ssl/privkey.pem"}' \
    -e ROCKET_PORT='8000' \
    -v /ssl/keys/:/ssl/ \
    -v /bw-data/:/data/ \
    -v /icon_cache/ \
    -p 0.0.0.0:443:8000 \
    mprasil/bitwarden:latest
```

2. At this point you should be able to point your web browser at `https://[FQDN]` and see the BitWarden login screen. Click on the Create button and set up a new account. Log in, look around, add some passwords, etc. Everything should basically work just fine.
3. Unless you want to host passwords for all of the Internet you'll probably want to disable signups at some point by adding the `env` option `SIGNUPS_ALLOWED=false`. And you'll need to set `DOMAIN=https://[FQDN]` if you want to use U2F authentication:

```shell
$ sudo docker stop bitwarden
$ sudo docker rm bitwarden
$ sudo docker run -d --name bitwarden \
    -e ROCKET_TLS='{certs="/ssl/fullchain.pem",key="/ssl/privkey.pem"}' \
    -e ROCKET_PORT='8000' \
    -e SIGNUPS_ALLOWED=false \
    -e DOMAIN=https://[FQDN] \
    -v /ssl/keys/:/ssl/ \
    -v /bw-data/:/data/ \
    -v /icon_cache/ \
    -p 0.0.0.0:443:8000 \
    mprasil/bitwarden:latest
```

### Install bitwarden_rs as a service
*So we don't have to keep manually firing this thing off.*

1. Create a script to stop, remove, update, and (re)start the `bitwarden_rs` container:

```shell
$ sudo vi /usr/local/bin/bitwarden-start.sh
#!/bin/bash

docker stop bitwarden
docker rm bitwarden
docker pull mprasil/bitwarden

docker run -d --name bitwarden \
    -e ROCKET_TLS='{certs="/ssl/fullchain.pem",key="/ssl/privkey.pem"}' \
    -e ROCKET_PORT='8000' \
    -e SIGNUPS_ALLOWED=false \
    -e DOMAIN=https://[FQDN] \
    -v /ssl/keys/:/ssl/ \
    -v /bw-data/:/data/ \
    -v /icon_cache/ \
    -p 0.0.0.0:443:8000 \
    mprasil/bitwarden:latest
$ sudo chmod 744 /usr/local/bin/bitwarden-start.sh
```

2. And add it as a `systemd` service:

```shell
$ sudo vi /etc/systemd/system/bitwarden.service
[Unit]
Description=BitWarden container
Requires=docker.service
After=docker.service

[Service]
Restart=always
ExecStart=/usr/local/bin/bitwarden-start.sh
ExecStop=/usr/bin/docker stop bitwarden

[Install]
WantedBy=default.target
$ sudo chmod 644 /etc/systemd/system/bitwarden.service
```
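
The status output below shows the unit as `enabled`, so be sure to reload `systemd` and enable the service for automatic startup before testing it:

```shell
$ sudo systemctl daemon-reload
$ sudo systemctl enable bitwarden
```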

3. Try it out:

```shell
$ sudo systemctl start bitwarden
$ sudo systemctl status bitwarden
● bitwarden.service - BitWarden container
   Loaded: loaded (/etc/systemd/system/bitwarden.service; enabled; vendor preset: enabled)
   Active: deactivating (stop) since Sun 2018-09-09 03:43:20 UTC; 1s ago
  Process: 13104 ExecStart=/usr/local/bin/bitwarden-start.sh (code=exited, status=0/SUCCESS)
 Main PID: 13104 (code=exited, status=0/SUCCESS); Control PID: 13229 (docker)
    Tasks: 5 (limit: 4915)
   Memory: 9.7M
      CPU: 375ms
   CGroup: /system.slice/bitwarden.service
           └─control
             └─13229 /usr/bin/docker stop bitwarden

Sep 09 03:43:20 bitwarden bitwarden-start.sh[13104]: Status: Image is up to date for mprasil/bitwarden:latest
Sep 09 03:43:20 bitwarden bitwarden-start.sh[13104]: ace64ca5294eee7e21be764ea1af9e328e944658b4335ce8721b99a33061d645
```

### Conclusion
If all went according to plan, you've now got a highly secure, open-source, full-featured, cross-platform password manager running on an Always Free Google Compute Engine instance, resolved by Google Domains dynamic DNS. Very slick!

@ -0,0 +1,52 @@
---
categories:
- Tips
date: "2020-09-13T08:34:30Z"
tags:
- linux
- shell
- logs
title: Finding the most popular IPs in a log file
---

I found myself with a sudden need to parse a Linux server's logs to figure out which host(s) had been slamming it with an unexpected burst of traffic. Sure, there are proper log analysis tools out there which would undoubtedly make short work of this, but none of those were installed on this hardened system. So this is what I came up with.

#### Find IP-ish strings
This will get you all occurrences of things which look vaguely like IPv4 addresses:

```shell
grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' ACCESS_LOG.TXT
```

(It's not a perfect IP address regex since it would match things like `987.654.321.555`, but it's close enough for my needs.)

#### Filter out `localhost`
The log likely includes a LOT of traffic to/from `127.0.0.1`, so let's toss out `localhost` by piping through `grep -v "127.0.0.1"` (`-v` does an inverse match - it only returns results which *don't* match the given expression):

```shell
grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' ACCESS_LOG.TXT | grep -v "127.0.0.1"
```

#### Count up the duplicates
Now we need to know how many times each IP shows up in the log. We can do that with `uniq -c` (`uniq` filters for unique entries, and the `-c` flag returns a count of how many times each result appears) - but since `uniq` only collapses *adjacent* duplicates, we first pass the output through `sort` to group identical addresses together:

```shell
grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' ACCESS_LOG.TXT | grep -v "127.0.0.1" | sort | uniq -c
```

#### Sort the results
We can use `sort` again to order the results by count. `-n` tells it to sort based on numeric rather than character values, and `-r` reverses the list so that the larger numbers appear at the top:

```shell
grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' ACCESS_LOG.TXT | grep -v "127.0.0.1" | sort | uniq -c | sort -n -r
```

#### Top 5
And, finally, let's use `head -n 5` to only get the first five results:

```shell
grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' ACCESS_LOG.TXT | grep -v "127.0.0.1" | sort | uniq -c | sort -n -r | head -n 5
```
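
As an aside: if your log follows the common format where the client IP is the first field on each line, you can skip the regex entirely and let `awk` grab that field instead (a hypothetical shortcut - check your log's layout first):

```shell
awk '{print $1}' ACCESS_LOG.TXT | grep -v "127.0.0.1" | sort | uniq -c | sort -n -r | head -n 5
```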

#### Bonus round!
You know how old log files get rotated and compressed into files like `logname.1.gz`? I *very* recently learned that there are versions of the standard Linux text manipulation tools which can work directly on compressed log files, without having to first extract them. I'd been doing things the hard way for years - no longer, now that I know about `zcat`, `zdiff`, `zgrep`, and `zless`!

So let's use a `for` loop to iterate through 20 of those compressed logs, and use `date -r [filename]` to get the timestamp for each log as we go:

```bash
for i in {1..20}; do date -r ACCESS_LOG.$i.gz; zgrep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' ACCESS_LOG.$i.gz | grep -v "127.0.0.1" | sort | uniq -c | sort -n -r | head -n 5; done
```

Nice!

@ -0,0 +1,96 @@
---
categories: null
date: "2020-09-14T08:34:30Z"
header:
  teaser: assets/images/posts-2020/qDTXt1jp3.png
tags:
- linux
- chromeos
- crostini
- 3dprinting
title: 3D Modeling and Printing on Chrome OS
---

I've got an Ender 3 Pro 3D printer, a Raspberry Pi 4, and a Pixel Slate. I can't interface directly with the printer over USB from the Slate (plus having to be physically connected to things is like so lame) so I installed [Octoprint on the Raspberry Pi](https://github.com/guysoft/OctoPi) and connected that to the printer's USB interface. This gave me a pretty web interface for controlling the printer - but it's only accessible over the local network. I also installed [The Spaghetti Detective](https://www.thespaghettidetective.com/) to allow secure remote control of the printer, with the added bonus of using AI magic and a cheap camera to detect and abort failing prints.

That's a pretty sweet setup, but I still needed a way to convert STL 3D models into GCODE files which the printer can actually understand. And what if I want to create my own designs?

Enter "Crostini," Chrome OS's [Linux (Beta) feature](https://chromium.googlesource.com/chromiumos/docs/+/master/containers_and_vms.md). It consists of a hardened Linux VM named `termina` which runs (by default) a Debian Buster LXD container named `penguin` (though you can spin up just about any container for which you can find an [image](https://us.images.linuxcontainers.org/)) and some fancy plumbing to let Chrome OS and Linux interact in specific clearly-defined ways. It's a brilliant balance between offering the flexibility of Linux while preserving Chrome OS's industry-leading security posture.

![Screenshot 2020-09-14 at 10.41.47.png](/assets/images/posts-2020/lhTnVwCO3.png)

There are plenty of great guides (like [this one](https://www.computerworld.com/article/3314739/linux-apps-on-chrome-os-an-easy-to-follow-guide.html)) on how to get started with Linux on Chrome OS so I won't rehash those steps here.

One additional step you will probably want to take is to make sure that your Chromebook is configured to enable hyperthreading, as it may have [hyperthreading disabled by default](https://support.google.com/chromebook/answer/9340236). Just plug `chrome://flags/#scheduler-configuration` into Chrome's address bar, set it to `Enables Hyper-Threading on relevant CPUs`, and then click the button to restart your Chromebook. You'll thank me later.
![Screenshot 2020-09-14 at 10.53.29.png](/assets/images/posts-2020/LHax6lAwh.png)

### The Software
I settled on using [FreeCAD](https://www.freecadweb.org/) for parametric modeling and [Ultimaker Cura](https://ultimaker.com/software/ultimaker-cura) for my GCODE slicer, but unfortunately getting them working cleanly wasn't entirely straightforward.

#### FreeCAD
Installing FreeCAD is as easy as:

```shell
$ sudo apt update
$ sudo apt install freecad
```

But launching `/usr/bin/freecad` caused me some weird graphical defects which rendered the application unusable. I found that I needed to pass the `LIBGL_DRI3_DISABLE=1` environment variable to eliminate these glitches:

```shell
$ env 'LIBGL_DRI3_DISABLE=1' /usr/bin/freecad &
```

To avoid having to type that every time I wished to launch the app, I inserted this alias at the bottom of my `~/.bashrc` file:

```shell
alias freecad="env 'LIBGL_DRI3_DISABLE=1' /usr/bin/freecad &"
```

To be able to start FreeCAD from the Chrome OS launcher with that environment variable intact, edit it into the `Exec` line of the `/usr/share/applications/freecad.desktop` file:

```shell
$ sudo vi /usr/share/applications/freecad.desktop
[Desktop Entry]
Version=1.0
Name=FreeCAD
Name[de]=FreeCAD
Comment=Feature based Parametric Modeler
Comment[de]=Feature-basierter parametrischer Modellierer
GenericName=CAD Application
GenericName[de]=CAD-Anwendung
Exec=env LIBGL_DRI3_DISABLE=1 /usr/bin/freecad %F
Path=/usr/lib/freecad
Terminal=false
Type=Application
Icon=freecad
Categories=Graphics;Science;Engineering
StartupNotify=true
GenericName[de_DE]=Feature-basierter parametrischer Modellierer
Comment[de_DE]=Feature-basierter parametrischer Modellierer
MimeType=application/x-extension-fcstd
```

That's it! Get on with your 3D-modeling bad self.
![Screenshot 2020-09-14 at 10.40.23.png](/assets/images/posts-2020/qDTXt1jp3.png)
Now that you've got a model, be sure to [export it as an STL mesh](https://wiki.freecadweb.org/Export_to_STL_or_OBJ) so you can import it into your slicer.

#### Ultimaker Cura
Cura isn't available from the default repos so you'll need to download the AppImage from https://github.com/Ultimaker/Cura/releases/tag/4.7.1. You can do this in Chrome and then use the built-in Files app to move the file into your 'My Files > Linux Files' directory. Feel free to put it in a subfolder if you want to keep things organized - I stash all my AppImages in `~/Applications/`.

To be able to actually execute the AppImage you'll need to adjust the permissions with `chmod +x`:

```shell
$ chmod +x ~/Applications/Ultimaker_Cura-4.7.1.AppImage
```

You can then start up the app by calling the file directly:

```shell
$ ~/Applications/Ultimaker_Cura-4.7.1.AppImage &
```

AppImages don't automatically appear in the Chrome OS launcher so you'll need to create a `.desktop` file for it. You can do this manually if you want (there's a sketch below), but I found it a lot easier to leverage `menulibre`:

```shell
$ sudo apt update && sudo apt install menulibre
$ menulibre
```

Just plug in the relevant details (you can grab the appropriate icon [here](https://github.com/Ultimaker/Cura/blob/master/icons/cura-128.png)), hit the filing cabinet Save icon, and you should then be able to search for Cura from the Chrome OS launcher.
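
For reference, a hand-rolled `.desktop` file would look something like this sketch (the Exec and Icon paths assume the AppImage location used above and a locally-saved copy of that icon; swap in your own username):

```shell
$ vi ~/.local/share/applications/cura.desktop
[Desktop Entry]
Version=1.0
Name=Ultimaker Cura
Comment=GCODE slicer
Exec=/home/[username]/Applications/Ultimaker_Cura-4.7.1.AppImage %F
Icon=/home/[username]/Applications/cura-128.png
Terminal=false
Type=Application
Categories=Graphics;Engineering
```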

![Screenshot 2020-09-14 at 11.00.47.png](/assets/images/posts-2020/VTISYOKHO.png)

![Screenshot 2020-09-14 at 10.40.38.png](/assets/images/posts-2020/f8nRJcyI6.png)

From there, just import the STL mesh, configure the appropriate settings, slice, and save the resulting GCODE. You can then upload the GCODE straight to The Spaghetti Detective and kick off the print.

![PXL_20200902_201747849.MP.jpg](/assets/images/posts-2020/2g57odtq2.jpeg)

Nice!

@ -0,0 +1,60 @@
---
categories:
- Scripts
date: "2020-09-16T08:34:30Z"
header:
  teaser: assets/images/posts-2020/LJOcy2oqc.png
tags:
- vmware
- powercli
title: Logging in to Multiple vCenter Servers at Once with PowerCLI
---

I manage a large VMware environment spanning several individual vCenters, and I often need to run [PowerCLI](https://code.vmware.com/web/tool/12.0.0/vmware-powercli) queries across the entire environment. I waste valuable seconds running `Connect-ViServer` and logging in for each and every vCenter I need to talk to. Wouldn't it be great if I could just log into all of them at once?

I can, and here's how I do it.

![Annotation 2020-09-16 142625.png](/assets/images/posts-2020/LJOcy2oqc.png)

### The Script
The following PowerShell script will let you define a list of vCenters to be accessed, securely store your credentials for each vCenter, log in to every vCenter with a single command, and also close the connections when they're no longer needed. It's also a great starting point for any other custom functions you'd like to incorporate into your PowerCLI sessions.

```powershell
# PowerCLI_Custom_Functions.ps1
# Usage:
#   0) Edit $vCenterList to reference the vCenters in your environment.
#   1) Call 'Update-Credentials' to create/update a ViCredentialStoreItem to securely store your username and password.
#   2) Call 'Connect-vCenters' to open connections to all the vCenters in your environment at once.
#   3) Do PowerCLI things.
#   4) Call 'Disconnect-vCenters' to cleanly close all ViServer connections because housekeeping.
Import-Module VMware.PowerCLI

$vCenterList = @("vcenter1", "vcenter2", "vcenter3", "vcenter4", "vcenter5")

function Update-Credentials {
    $newCredential = Get-Credential
    ForEach ($vCenter in $vCenterList) {
        New-ViCredentialStoreItem -Host $vCenter -User $newCredential.UserName -Password $newCredential.GetNetworkCredential().password
    }
}

function Connect-vCenters {
    ForEach ($vCenter in $vCenterList) {
        Connect-ViServer -Server $vCenter
    }
}

function Disconnect-vCenters {
    Disconnect-ViServer -Server * -Force -Confirm:$false
}
```

### The Setup
Edit whatever shortcut you use for launching PowerCLI (I use a tab in [Windows Terminal](https://github.com/microsoft/terminal) - I'll do another post on that setup later) to reference the custom init script. Here's the commandline I use:

```powershell
powershell.exe -NoExit -Command ". C:\Scripts\PowerCLI_Custom_Functions.ps1"
```

### The Usage
Now just use that shortcut to open up PowerCLI when you wish to do things. The custom functions will be loaded and waiting for you.

1. Start by running `Update-Credentials`. It will prompt you for the username and password needed to log into each vCenter listed in `$vCenterList`. These can be the same or different accounts, but you will need to enter the credentials for each vCenter since they get stored in separate `ViCredentialStoreItem`s. You'll also run this function again if you need to change the password(s) in the future.
2. Log in to all the things by running `Connect-vCenters`.
3. Do your work.
4. When you're finished, be sure to call `Disconnect-vCenters` so you don't leave sessions open in the background.
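
Put together, a session looks something like this (a sketch - `Get-VM` is just a stand-in for whatever query you actually need to run):

```powershell
PS> Update-Credentials    # one-time, or whenever a password changes
PS> Connect-vCenters      # opens a session to every vCenter in $vCenterList
PS> Get-VM                # queries now run against all connected vCenters
PS> Disconnect-vCenters   # clean up when you're done
```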

content/post/2020-09-22-docker-on-windows-10-with-wsl2.md
Normal file
@ -0,0 +1,83 @@
---
categories: null
date: "2020-09-22T08:34:30Z"
header:
  teaser: assets/images/posts-2020/8p-PSHx1R.png
tags:
- docker
- windows
- wsl
title: Docker on Windows 10 with WSL2
---

Microsoft's Windows Subsystem for Linux (WSL) 2 [was recently updated](https://devblogs.microsoft.com/commandline/wsl-2-support-is-coming-to-windows-10-versions-1903-and-1909/) to bring support for less-bleeding-edge Windows 10 versions (like 1903 and 1909). WSL2 is a big improvement over the first iteration (particularly with [better Docker support](https://www.docker.com/blog/docker-desktop-wsl-2-backport-update/)) so I was really looking forward to getting WSL2 loaded up on my work laptop.

Here's how.

### WSL2

#### Step Zero: Prereqs
You'll need Windows 10 1903 build 18362 or newer (on x64). You can check by running `ver` from a Command Prompt:

```powershell
C:\> ver
Microsoft Windows [Version 10.0.18363.1082]
```

We're interested in that third set of numbers. 18363 is bigger than 18362 so we're good to go!

#### Step One: Enable the WSL feature
*(Not needed if you've already been using WSL1.)*
You can do this by dropping the following into an elevated PowerShell prompt:

```powershell
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
```

#### Step Two: Enable the Virtual Machine Platform feature
Drop this in an elevated PowerShell:

```powershell
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
```

And then reboot (this is still Windows, after all).

#### Step Three: Install the WSL2 kernel update package
Download it from [here](https://wslstorestorage.blob.core.windows.net/wslblob/wsl_update_x64.msi), and double-click the downloaded file to install it.

#### Step Four: Set WSL2 as your default
Open a PowerShell window and run:

```powershell
wsl --set-default-version 2
```

#### Step Five: Install a Linux distro, or upgrade an existing one
If you're brand new to this WSL thing, head over to the [Microsoft Store](https://aka.ms/wslstore) and download your favorite Linux distribution. Once it's installed, launch it and you'll be prompted to set up a Linux username and password.

If you've already got a WSL1 distro installed, first run `wsl -l -v` in PowerShell to make sure you know the distro name:

```powershell
PS C:\Users\jbowdre> wsl -l -v
  NAME      STATE           VERSION
* Debian    Running         2
```

And then upgrade the distro to WSL2 with `wsl --set-version <distro_name> 2`:

```powershell
PS C:\Users\jbowdre> wsl --set-version Debian 2
Conversion in progress, this may take a few minutes...
```

Cool!

### Docker
#### Step One: Download
Download Docker Desktop for Windows from [here](https://hub.docker.com/editions/community/docker-ce-desktop-windows/), making sure to grab the "Edge" version since it includes support for the backported WSL2 bits.

#### Step Two: Install
Run the installer, and make sure to tick the box for installing the WSL2 engine.

#### Step Three: Configure Docker Desktop
Launch Docker Desktop from the Start menu, and you should be presented with this friendly prompt:
![2020-09-22.png](/assets/images/posts-2020/lY2FTflbK.png)

Hit that big friendly "gimme WSL2" button. Then open the Docker Settings from the system tray, make sure that **General > Use the WSL 2 based engine** is enabled, and navigate to **Resources > WSL Integration** to confirm that **Enable integration with my default WSL distro** is enabled as well. Smash the "Apply & Restart" button if you've made any changes.

### Test it!
Fire up a WSL session and confirm that everything is working with `docker run hello-world`:
![2020-09-22 (1).png](/assets/images/posts-2020/8p-PSHx1R.png)

It's beautiful!

@ -0,0 +1,73 @@
---
categories:
- Tips
date: "2020-09-24T08:34:30Z"
header:
  teaser: assets/images/posts-2020/fmLDUWjia.png
tags:
- chrome
title: Abusing Chrome's Custom Search Engines for Fun and Profit
---

Do you (like me) find yourself frequently searching for information within the same websites over and over? Wouldn't it be great if you could just type your query into your browser's address bar (AKA the Chrome Omnibox) and go straight to the results you need? Well you totally can - and probably already *are* for certain sites which have inserted themselves as search engines.

### The basics
Point your browser to `chrome://settings/searchEngines` to see which sites are registered as Custom Search Engines:
![Screenshot 2020-09-24 at 09.51.07.png](/assets/images/posts-2020/RuIrsHDqC.png)

Each of these search engine entries has three parts: a name ("Search engine"), a Keyword, and a Query URL. The "Search engine" title is just what will appear in the Omnibox when the search engine gets triggered, the Keyword is what you'll type in the Omnibox to trigger it, and the Query URL tells Chrome how to handle the search. All you have to do is type the keyword, hit your Tab key to activate the search, input your query, and hit Enter:
![recording.gif](/assets/images/posts-2020/o_o7rt4pA.gif)

For sites which register themselves automatically, the keyword is often set to something like `domain.tld` so it might make sense to assign it as something shorter or more descriptive.

The Query URL is basically just what appears in the address bar when you search the site directly, with `%s` placed where your query text would normally go. You can view these details for a given search entry by tapping the three-dot menu button and selecting "Edit", and you can manually create new entries by hitting that big friendly "Add" button:
![Screenshot 2020-09-24 at 10.16.01.png](/assets/images/posts-2020/fmLDUWjia.png)
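
For example, a hypothetical hand-made entry for searching Wikipedia might use the keyword `w` with this Query URL - it's just Wikipedia's own search URL with `%s` standing in for the query:

```
https://en.wikipedia.org/w/index.php?search=%s
```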

By searching the site directly, you might find that it supports additional search filters which get appended to the URL:
![Screenshot 2020-09-24 at 10.35.08.png](/assets/images/posts-2020/iHsYd7lbw.png)

You can add those filters to the Query URL to further customize your Custom Search Engine:
![Screenshot 2020-09-24 at 10.38.18.png](/assets/images/posts-2020/EBkQTGmNb.png)

I spend a lot of my free time helping out on Google's support forums as a part of their [Product Experts program](https://productexperts.withgoogle.com/what-it-is), and I often need to quickly look up a Help Center article or previous forum discussion to assist users. I created a set of Custom Search Engines to make that easier:
![Screenshot 2020-09-24 at 10.42.57.png](/assets/images/posts-2020/630ix7uVw.png)
![Screenshot 2020-09-24 at 10.45.54.png](/assets/images/posts-2020/V3qLmfi50.png)

------

### Creating search where there is none
Even if a site doesn't have a built-in native search, you can leverage Google's `sitesearch` operator to create one. I often want to look up a Linux command's `man` page, so I use this Query URL to search https://www.man7.org/linux/man-pages/:

```
http://google.com/search?q=%s&sitesearch=man7.org%2Flinux%2Fman-pages
```

![Screenshot 2020-09-24 at 10.51.17.png](/assets/images/posts-2020/EkmgtRYN4.png)
![recording (4).gif](/assets/images/posts-2020/YKADY8YQR.gif)

------

### Speak foreign to me
This works for pretty much any site which parses the URL to render certain content. I use this for getting words/phrases instantly translated:
![Screenshot 2020-09-24 at 11.21.58.png](/assets/images/posts-2020/ELly_F6x6.png)
![recording (2).gif](/assets/images/posts-2020/1LDP5zxCU.gif)

------

### Shorter shortcuts
Your Query URL doesn't even need to include a query at all! You can use Custom Search Engines as a sort of hyper-fast shortcut to pages you visit frequently. If I create a new entry with the Keyword `searchax` and `abusing-chromes-custom-search-engines-for-fun-and-profit` as the query URL, I can quickly open this page by typing `searchax[tab][enter]`:
![Screenshot 2020-09-24 at 12.10.28.png](/assets/images/posts-2020/YilNCaHil.png)

I use that trick pretty regularly for getting back to vCenter appliance management interfaces without having to type out the full FQDN and port number and all that.

------

### Scratchpad hack
You can do some other creative stuff too, like speedily accessing a temporary scratchpad for quickly jotting down notes, complete with spellcheck! Just drop this into the Query URL field:

```
data:text/html;charset=utf-8, <title>Scratchpad</title><style>body {padding: 5%; font-size: 1.5em; font-family: Arial;}</style><link rel="shortcut icon" href="https://ssl.gstatic.com/docs/documents/images/kix-favicon6.ico"/><body OnLoad='document.body.focus();' contenteditable spellcheck="true" >
```

And give it a nice short keyword - like the single letter `s`:
![recording (3).gif](/assets/images/posts-2020/h6dUCApdV.gif)

------

With just a bit of tweaking, you can really supercharge Chrome's Omnibox capabilities. Let me know if you come across any other clever uses for this!

@ -0,0 +1,29 @@
---
categories: null
date: "2020-10-07T08:34:30Z"
header:
  teaser: assets/images/posts-2020/MnmMuA0HC.png
tags:
- windows
- linux
- wsl
- vpn
title: Fixing WSL2 connectivity when connected to a VPN with wsl-vpnkit
toc: false
---

I was pretty excited to get [WSL2 and Docker working on my Windows 10 1909](docker-on-windows-10-with-wsl2) laptop a few weeks ago, but I quickly encountered a problem: WSL2 had no network connectivity when connected to my work VPN.

Well, that's not *entirely* true; Docker worked just fine, but nothing else could talk to anything outside of the WSL environment. I found a few open issues for this problem in the [WSL2 Github](https://github.com/microsoft/WSL/issues?q=is%3Aissue+is%3Aopen+VPN) with suggested workarounds including modifying Windows registry entries, adjusting the metrics assigned to various virtual network interfaces within Windows, and manually setting DNS servers in `/etc/resolv.conf`. None of these worked for me.

I eventually came across a solution [here](https://github.com/sakai135/wsl-vpnkit) which did the trick. This takes advantage of the fact that Docker for Windows is already utilizing `vpnkit` for connectivity - so you may also want to be sure Docker Desktop is configured to start at login.

The instructions worked well for me so I won't rehash them all here. When it came time to modify my `/etc/resolv.conf` file, I added in two of the internal DNS servers followed by the IP for my home router's DNS service. This allows me to use WSL2 both on and off the corporate network without having to reconfigure things.
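
For illustration, the end result looked something like this sketch (the addresses here are placeholders; note that the upstream instructions also have you set `generateResolvConf = false` under `[network]` in `/etc/wsl.conf` so WSL stops regenerating the file):

```shell
# /etc/resolv.conf
nameserver 10.1.2.3     # corporate DNS
nameserver 10.1.2.4     # corporate DNS
nameserver 192.168.1.1  # home router's DNS, for when I'm off the VPN
```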

All I need to do now is execute `sudo ./wsl-vpnkit` and leave that running in the background when I need to use WSL while connected to the corporate VPN.

![Annotation 2020-10-07 083947.png](/assets/images/posts-2020/MnmMuA0HC.png)

Whew! Okay, back to work.
@ -0,0 +1,182 @@
---
categories:
- Projects
date: "2020-10-27T08:34:30Z"
header:
  teaser: assets/images/posts-2020/XtmaR9Z0J.png
tags:
- chromeos
- linux
- crostini
- docker
- shell
title: Setting up Linux on a new Lenovo Chromebook Duet (bonus arm64 complications!)
---

I've [written in the past](3d-modeling-and-printing-on-chrome-os) about the Linux setup I've been using on my Pixel Slate. My Slate's keyboard stopped working over the weekend, though, and there don't seem to be any replacements (either Google or Brydge) to be found. And then I saw that [Walmart had the 64GB Lenovo Chromebook Duet temporarily marked down](https://twitter.com/johndotbowdre/status/1320733614426988544) to a mere $200 - just slightly more than the Slate's *keyboard* originally cost. So I jumped on that deal, and the little Chromeblet showed up today.

![PXL_20201027_154908725.PORTRAIT.jpg](/assets/images/posts-2020/kULHPeDuc.jpeg)

I'll be putting the Duet through its paces in the coming days to see if/how it can replace my now-tablet-only Slate, but first things first: I need Linux. And this may be a little bit different than the setup on the Slate since the Duet's Mediatek processor uses the aarch64/arm64 architecture instead of amd64.

So journey with me as I get this little guy set up!

### Installing Linux
This part is dead simple. Just head into **Settings > Linux (Beta)** and hit the **Turn on** button:
![Screenshot 2020-10-27 at 15.59.12.png](/assets/images/posts-2020/oLso9Wyzj.png)

Click **Next**, review the options for username and initial disk size (which can be easily increased later so there's no real need to change it right now), and then select **Install**:
![Screenshot 2020-10-27 at 16.01.19.png](/assets/images/posts-2020/ACUKsohq6.png)

It takes just a few minutes to download and initialize the `termina` VM and then create the default `penguin` container:
![Screenshot 2020-10-27 at 16.04.07.png](/assets/images/posts-2020/2LTaCEdWH.png)

You're ready to roll once the Terminal opens and gives you a prompt:
![Screenshot 2020-10-27 at 16.05.23.png](/assets/images/posts-2020/0-h1flLZs.png)

Your first action should be to go ahead and install any patches:

```shell
sudo apt update
sudo apt upgrade
```

### Zsh, Oh My Zsh, and powerlevel10k theme
I've been really getting into this shell setup recently so let's go on and make things comfortable before we move on too much further. Getting `zsh` is straightforward:

```shell
sudo apt install zsh
```

Go ahead and launch `zsh` (by typing '`zsh`') and go through the initial setup wizard to configure preferences for things like history, completion, and other settings. I leave history on the defaults, enable the default completion options, switch the command-line editor to `vi`-style, and enable both `autocd` and `appendhistory`. Once you're back at the (new) `penguin%` prompt we can move on to installing the [Oh My Zsh plugin framework](https://github.com/ohmyzsh/ohmyzsh).

Just grab the installer script like so:

```shell
wget https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh
```

Review it if you'd like, and then execute it:

```shell
sh install.sh
```

When asked if you'd like to change your default shell to `zsh` now, **say no**. This is because it will prompt for your password, but you probably don't have a password set on your brand-new Linux (Beta) account and that just makes things complicated. We'll clear this up later, but for now just check out that slick new prompt:
![Screenshot 2020-10-27 at 16.30.01.png](/assets/images/posts-2020/8q-WT0AyC.png)

Oh My Zsh is pretty handy because you can easily enable [additional plugins](https://github.com/ohmyzsh/ohmyzsh/tree/master/plugins) to make your prompt behave exactly the way you want it to. Let's spruce it up even more with the [powerlevel10k theme](https://github.com/romkatv/powerlevel10k)!

```shell
git clone --depth=1 https://github.com/romkatv/powerlevel10k.git ${ZSH_CUSTOM:-$HOME/.oh-my-zsh/custom}/themes/powerlevel10k
```

Now we just need to edit `~/.zshrc` to point to the new theme:

```shell
sed -i 's/^ZSH_THEME=.*$/ZSH_THEME="powerlevel10k\/powerlevel10k"/' ~/.zshrc
```

We'll need to launch another instance of `zsh` for the theme change to take effect, so first let's go ahead and manually set `zsh` as our default shell. We can use `sudo` to get around the whole "don't have a password set" inconvenience:

```shell
sudo chsh -s /bin/zsh [username]
```

Now close out the terminal and open it again, and you should be met by the powerlevel10k configurator which will walk you through getting things set up:
![Screenshot 2020-10-27 at 16.47.02.png](/assets/images/posts-2020/K1ScSuWcg.png)

This theme is crazy-configurable, but fortunately the configurator wizard does a great job of helping you choose the options that work best for you.
I pick the Classic prompt style, Unicode character set, Dark prompt color, 24-hour time, Angled separators, Sharp prompt heads, Flat prompt tails, 2-line prompt height, Dotted prompt connection, Right prompt frame, Sparse prompt spacing, Fluent prompt flow, Enabled transient prompt, Verbose instant prompt, and (finally) Yes to apply the changes.
![New P10k prompt](/assets/images/posts-2021/08/20210804_p10k_prompt.png)
Looking good!

### Visual Studio Code
I'll need to do some light development work so VS Code is next on the hit list. You can grab the installer [here](https://code.visualstudio.com/Download#) or just copy/paste the following to stay in the Terminal. Definitely be sure to get the arm64 version!

```shell
curl -L https://aka.ms/linux-arm64-deb > code_arm64.deb
sudo apt install ./code_arm64.deb
```

VS Code should automatically appear in the Chromebook's Launcher, or you can use it to open a file directly with `code [filename]`:
![Screenshot 2020-10-27 at 17.01.30.png](/assets/images/posts-2020/XtmaR9Z0J.png)
Nice!

### Android platform tools (adb and fastboot)
I sometimes don't want to wait for my Pixel to get updated naturally, so I love using `adb sideload` to manually update my phones. Here's what it takes to set that up. Installing adb is as simple as `sudo apt install adb`. To use it, enable the USB Debugging Developer Option on your phone, and then connect the phone to the Chromebook. You'll get a prompt to connect the phone to Linux:
![Screenshot 2020-10-27 at 18.02.17.png](/assets/images/posts-2020/MkGu29HKl.png)

Once you connect the phone to Linux, check the phone to approve the debugging connection. You can then issue `adb devices` to verify the phone is connected:
![Screenshot 2020-10-27 at 18.06.49.png](/assets/images/posts-2020/a0uqHkJiC.png)

*I've since realized that the platform-tools (adb/fastboot) available in the repos are much older than what are required for flashing a factory image or sideloading an OTA image to a modern Pixel phone. This'll do fine for installing APKs either to your Chromebook or your phone, but I had to pull out my trusty Pixelbook to flash GrapheneOS to my Pixel 4a.*
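
For reference, the sideload dance itself looks roughly like this sketch (the OTA filename is hypothetical; on a Pixel you'd boot to recovery and choose *Apply update from ADB* before the final step):

```shell
adb devices                   # confirm the phone is connected and authorized
adb reboot recovery           # then select 'Apply update from ADB' on the phone
adb sideload ota_update.zip   # push the OTA image to the phone
```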

### Microsoft PowerShell and VMware PowerCLI
*[Updated 5/20/2021 with Microsoft's newer instructions]*
I'm working on setting up a [VMware homelab on an Intel NUC 9](https://twitter.com/johndotbowdre/status/1317558182936563714) so being able to automate things with PowerCLI will be handy.

PowerShell for ARM is still in an early stage so while [it is supported](https://docs.microsoft.com/en-us/powershell/scripting/install/installing-powershell-core-on-linux?view=powershell-7.2#support-for-arm-processors) it must be installed manually. Microsoft has instructions for installing PowerShell from binary archives [here](https://docs.microsoft.com/en-us/powershell/scripting/install/installing-powershell-core-on-linux?view=powershell-7.2#linux), and I grabbed the latest `-linux-arm64.tar.gz` release I could find [here](https://github.com/PowerShell/PowerShell/releases).

```shell
curl -L -o /tmp/powershell.tar.gz https://github.com/PowerShell/PowerShell/releases/download/v7.2.0-preview.5/powershell-7.2.0-preview.5-linux-arm64.tar.gz
sudo mkdir -p /opt/microsoft/powershell/7
sudo tar zxf /tmp/powershell.tar.gz -C /opt/microsoft/powershell/7
sudo chmod +x /opt/microsoft/powershell/7/pwsh
sudo ln -s /opt/microsoft/powershell/7/pwsh /usr/bin/pwsh
```

You can then just run `pwsh`:
![Screenshot 2020-10-27 at 17.28.44.png](/assets/images/posts-2020/QRP4iyLnu.png)
That was the hard part. To install PowerCLI into your new PowerShell environment, just run `Install-Module -Name VMware.PowerCLI` at the `PS >` prompt, and accept the warning about installing a module from an untrusted repository.

I'm planning to use PowerCLI against my homelab without trusted SSL certificates so (note to self) I need to run `Set-PowerCLIConfiguration -InvalidCertificateAction Ignore` before I try to connect.
![Screenshot 2020-10-27 at 17.34.39.png](/assets/images/posts-2020/YaFNJJG_c.png)

Woot!

### Docker
The Linux (Beta) environment consists of a hardened virtual machine (named `termina`) running an LXC Debian container (named `penguin`). Know what would be even more fun? Let's run some other containers inside our container!

The docker installation has a few prerequisites:

```shell
sudo apt install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common
```

Then we need to grab the Docker repo key:

```shell
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
```

And then we can add the repo:

```shell
sudo add-apt-repository \
    "deb [arch=arm64] https://download.docker.com/linux/debian \
    $(lsb_release -cs) \
    stable"
```

And finally update the package cache and install `docker` and its friends:

```shell
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io
```
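
A quick way to verify that the daemon runs happily inside this container-in-a-VM setup: the official `hello-world` image is multi-arch, so it works on arm64 too.

```shell
sudo docker run hello-world
```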

![Screenshot 2020-10-27 at 18.48.34.png](/assets/images/posts-2020/k2uiYi5e8.png)
Xzibit would be proud!

### 3D printing utilities
Just like last time, I'll want to be sure I can do light 3D part design and slicing on this Chromebook. Once again, I can install FreeCAD with `sudo apt install freecad`, and this time I didn't have to implement any workarounds for graphical issues:
![Screenshot 2020-10-27 at 19.16.31.png](/assets/images/posts-2020/q1inyuUOb.png)

Unfortunately, though, I haven't found a slicer application compiled with support for aarch64/arm64. There's a *much* older version of Cura available in the default Debian repos but it crashes upon launch. Neither Cura nor PrusaSlicer (nor the Slic3r upstream) offers arm64 releases.

So while I can use the Duet for designing 3D models, I won't be able to actually prepare those models for printing without using another device. I'll need to keep looking for another solution here. (If you know of an option I've missed, please let me know!)

### Jupyter Notebook
I came across [a Reddit post](https://www.reddit.com/r/Crostini/comments/jnbqv3/successfully_running_jupyter_notebook_on_samsung/) today describing how to install `conda` and get a Jupyter Notebook running on arm64 so I had to give it a try. It actually wasn't that bad!

The key is to grab the appropriate version of [conda Miniforge](https://github.com/conda-forge/miniforge), make it executable, and run the installer:

```shell
wget https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Linux-aarch64.sh
chmod +x Miniforge3-Linux-aarch64.sh
./Miniforge3-Linux-aarch64.sh
```

Exit the terminal and relaunch it, and then install Jupyter:

```shell
conda install -c conda-forge notebook
```

You can then launch the notebook with `jupyter notebook` and it will automatically open up in a Chrome OS browser tab:

![Screenshot 2020-11-03 at 14.34.09.png](/assets/images/posts-2020/U5E556eXf.png)

Cool! Now I just need to learn what I'm doing with Jupyter - but at least I don't have an excuse about "my laptop won't run it".

### Wrap-up
I'm sure I'll be installing a few more utilities in the coming days but this covers most of my immediate must-have Linux needs. I'm eager to see how this little Chromeblet does now that I'm settled in.
@ -0,0 +1,62 @@
|
||||||
|
---
|
||||||
|
categories: null
|
||||||
|
date: "2020-11-06T08:34:30Z"
|
||||||
|
header:
|
||||||
|
teaser: assets/images/posts-2020/P-x5qEg_9.jpeg
|
||||||
|
tags:
|
||||||
|
- chromeos
|
||||||
|
title: 'Showdown: Lenovo Chromebook Duet vs. Google Pixel Slate'
|
||||||
|
---
|
||||||
|
|
||||||
|
Okay, okay, this isn't actually going to be a comparison review between the two wildly-mismatched-but-also-kind-of-similar [Chromeblets](https://www.reddit.com/r/chromeos/comments/bp1nwo/branding/), but rather a (hopefully) brief summary of my experience moving from an $800 Pixel Slate + $200 Google keyboard to a Lenovo Chromebook Duet I picked up on sale for just $200.

![PXL_20201104_160532096.MP.jpg](/assets/images/posts-2020/P-x5qEg_9.jpeg)

### Background

Up until last week, I'd been using the Slate as my primary personal computing device for the past 20 months or so, mainly in laptop mode (as opposed to tablet mode). I do a lot of casual web browsing, and I spend a significant portion of my free time helping other users on Google's product support forums as a part of the [Google Product Experts program](https://productexperts.withgoogle.com/what-it-is). I also work a lot with the [Chrome OS Linux (Beta) environment](setting-up-linux-on-a-new-lenovo-chromebook-duet-bonus-arm64-complications), but I avoid Android apps as much as I can. And I used the Slate for a bit of Stadia gaming when I wasn't near a Chromecast.

So the laptop experience is generally more important to me than the tablet one. I need to be able to work with a large number of browser tabs, but I don't typically need to do any heavy processing directly on the computer.

I was pretty happy with the Slate, but its expensive keyboard stopped working recently and replacements aren't really available anywhere. Remember, laptop mode is key for my use case, so the Pixel Slate instantly became unusable to me.

### Size

When you put these machines side by side, the first difference that jumps out is the size disparity. The 12.3" Pixel Slate is positively massive next to the 10.1" Lenovo Duet.

![PXL_20201104_160825979.MP (1).jpg](/assets/images/posts-2020/gVj7d_2Nu.jpeg)

The Duet is physically smaller, so the display itself is of course smaller. I had a brief moment of panic when I first logged in and the setup wizard completely filled the screen. Dialing Chrome OS's display scaling down to 80% strikes a good balance for me between keeping fonts legible and still displaying enough content to be worthwhile. It can get a bit tight when you've got windows docked side-by-side, but I'm getting by okay.

Of course, the smaller size of the Duet also makes it work better as a tablet in my mind. It's comfortable enough to hold with one hand while you interact with the other, whereas the Slate always felt a little too big for that to me.

![PXL_20201104_213309828.MP.jpg](/assets/images/posts-2020/qne9SybLi.jpeg)

### Keyboard

A far more impactful size difference is in the keyboards, though. The Duet keyboard gets a bit cramped, particularly over toward the right side (you know, those pesky braces and semicolons that are *never* needed when coding):

![PXL_20201104_160747877.MP.jpg](/assets/images/posts-2020/CBziPHD8A.jpeg)

Getting used to typing on this significantly smaller keyboard has been the biggest adjustment so far. The pad on my pinky finger is wider than the last few keys at the right edge of the keyboard, so I've struggled with accurately hitting the correct `[` or `]`, and also with smacking Return (and inevitably sending a malformed chat message) when trying to insert an apostrophe. I feel like I'm slowly getting the hang of it, but like I said, it's been an adjustment.

### Cover

![PXL_20201104_160703333._exported_1604610747029.jpg](/assets/images/posts-2020/yiCW6XZbF.jpeg)

The Pixel Slate's keyboard + folio cover is a single (floppy) piece. The keyboard connects to contacts on the bottom edge of the Slate, and magnets hold it in place. The rear cover then folds and sticks to the back of the Slate with magnets to prop up the tablet at different angles. The magnet setup means you can smoothly transition it through varying levels of tilt, which is pretty nice. But being a single piece means the keyboard might get in the way if you're trying to use it as just a propped-up tablet. And the extra folding in the back takes up a bit of space, so the Slate may not work well as a laptop on your actual lap.

![PXL_20201104_160949342.MP.jpg](/assets/images/posts-2020/9_Ze3zyBk.jpeg)

The Duet's rear cover has a fabric finish kind of similar to the cases Google offers for their phones, and it provides a great texture for holding the tablet. It sticks to the back of the Duet through the magic of magnets, and the lower half of it folds out to create a really sturdy kickstand. And it's completely separate from the keyboard, which is great for when you're using the Duet as a tablet (either handheld or propped up for watching a movie or gaming with Stadia).

![PXL_20201104_161022969.MP.jpg](/assets/images/posts-2020/nWRu2TB8i.jpeg)

And this little kickstand can go *low*, much lower than the Slate. This makes it perfect for my late-night Stadia sessions while sitting in bed. I definitely prefer this approach compared to what Google did with the Pixel Slate.

![PXL_20201104_161057794.MP.jpg](/assets/images/posts-2020/BAf7knBk5.jpeg)

### Performance

The Duet does struggle a bit here. It's basically got a [smartphone processor](https://www.notebookcheck.net/Mediatek-Helio-P60T-Processor-Benchmarks-and-Specs.470711.0.html) and half the RAM of the Slate. Switching between windows and tabs sometimes takes an extra moment or two to catch up (particularly if said tab has been silently suspended in the background). Similarly, working with Linux apps is just a bit slower than you'd like it to be. Still, I've spent a bit more than a week now with the Duet as my go-to computer, and it's never really been slow enough to bother me.

That arm64 processor does make finding compatible Linux packages a little more difficult than it's been on amd64 architectures, but a [little bit of digging](setting-up-linux-on-a-new-lenovo-chromebook-duet-bonus-arm64-complications) will get past that limitation in most cases.

The upside of that smartphone processor is that the battery life is *insane*. After about seven hours of light usage today I'm sitting at 63% - with an estimated nine hours remaining. This thing keeps going and going, even while Stadia-ing for hours. Being able to play Far Cry 5 without being tethered to a wall is so nice.

### Fingerprint sensor

The Duet doesn't have one, and that makes me sad.

### Conclusion

The Lenovo Chromebook Duet is an incredible little Chromeblet for the price. It clearly can't compete with the significantly-more-expensive Google Pixel Slate *on paper*, but I'm finding its size to be fantastic for the sort of on-the-go computing I do a lot of. It works better as a lap-top laptop, works better as a tablet in my hand, works better for playing Stadia in bed, and just feels more portable. It's a little sluggish at times and the squished keyboard takes some getting used to, but overall I'm pretty happy with this move.

@ -0,0 +1,46 @@
---
categories:
- Projects
date: "2020-11-14T08:34:30Z"
header:
  teaser: assets/images/posts-2020/aeIOr8w6k.png
tags:
- android
- tasker
- automation
- homeassistant
title: Safeguard your Android's battery with Tasker + Home Assistant
---

A few months ago, I started using the [Accubattery app](https://play.google.com/store/apps/details?id=com.digibites.accubattery) to keep a closer eye on how I'd been charging my phones. The app has a handy feature that notifies you once the battery level reaches a certain threshold so you can pull the phone off the charger and extend the lithium battery's service life, and it even offers an estimate for what that impact might be. For instance, right now the app indicates that charging my Pixel 5 from 51% to 100% would cause 0.92 wear cycles, while stopping the charge at 80% would impose just 0.17 cycles.

![Screenshot_20201114-135308.png](/assets/images/posts-2020/aeIOr8w6k.png)

But that depends on me being near my phone and conscious so I can take action when the notification goes off. That's often a big assumption to make - and, frankly, I'm lazy.

I'm fortunately also fairly crafty, so I came up with a way to combine my favorite Android automation app with my chosen home automation platform to take my laziness out of the picture.

### The Ingredients

- [Wemo Mini Smart Plug](https://amzn.to/32G75Nt)
- [Raspberry Pi 3](https://amzn.to/331ZHwb) with [Home Assistant](https://www.home-assistant.io/) installed
- [Tasker](https://play.google.com/store/apps/details?id=net.dinglisch.android.taskerm)
- [Home Assistant Plug-In for Tasker](https://play.google.com/store/apps/details?id=com.markadamson.taskerplugin.homeassistant)

I'm not going to go through how to install Home Assistant on the Pi or how to configure it beyond what's strictly necessary for this particular recipe. The official [getting started documentation](https://www.home-assistant.io/getting-started/) is a great place to start.

### The Recipe

1. Plug the Wemo into a wall outlet, and plug a phone charger into the Wemo. Add the Belkin Wemo integration in Home Assistant, and configure the device and entity. I named mine `switchy`. Make a note of the Entity ID: `switch.switchy`. We'll need that later.
![Screenshot 2020-11-14 at 15.28.53.png](/assets/images/posts-2020/Gu5I3LUep.png)
2. Either point your phone's browser to your [Home Assistant instance's local URL](http://homeassistant.local:8123/), or use the [Home Assistant app](https://play.google.com/store/apps/details?id=io.homeassistant.companion.android) to access it. Tap your username at the bottom of the menu and scroll all the way down to the Long-Lived Access Tokens section. Tap to create a new token. It doesn't matter what you name it, but be sure to copy the token data once it's generated since you won't be able to display it again.
3. Install the [Home Assistant Plug-In for Tasker](https://play.google.com/store/apps/details?id=com.markadamson.taskerplugin.homeassistant). Open Tasker, create a new Task called 'ChargeOff', and set the action to `Plugin > Home Assistant Plug-in for Tasker > Call Service`. Tap the pencil icon to edit the configuration, and then tap the plus sign to add a new server. Give it whatever name you like, and then enter your Home Assistant's IP address for the Base URL, followed by the port number `8123`. For example, `http://192.168.1.99:8123`. Paste in the Long-Lived Access Token you generated earlier. Go on and hit the Test Server button to make sure you got it right. It'll wind up looking something like this:
![Screenshot_20201114-160839.png](/assets/images/posts-2020/8Jg4zgrgB.png)
For the Service field, you need to tell HA what you want it to do. We want it to turn off a switch so enter `switch.turn_off`. We'll use the Service Data field to tell it which switch, in JSON format:
```json
{"entity_id": "switch.switchy"}
```
Tap Test Service to make sure it works - and verify that the switch does indeed turn off.
![Screenshot_20201114-164514.png](/assets/images/posts-2020/U3LfmEJ_7.png)
4. The hard part is over. Now we just need to set up a profile in Tasker to fire our new task. I named mine 'Charge Limiter'. I started with `State > Power > Battery Level` and set it to trigger between 81-100%, and also added `State > Power > Source: Any` so it will only be active while charging. I also only want this to trigger while my phone is charging at home, so I added `State > Net > Wifi Connected` and then specified my home SSID. Link this profile to the Task you created earlier, and never worry about overcharging your phone again.
![Screenshot_20201114-172454.png](/assets/images/posts-2020/h7tl6facr.png)

You can use a similar Task to turn the switch back on at a set time - or you could configure that automation directly in Home Assistant. I added an action to turn on the switch to my Google Assistant bedtime routine and that works quite well for my needs.
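
If you ever want to sanity-check that service call from outside of Tasker, the same thing can be done against Home Assistant's REST API with `curl`. A minimal sketch - the address and entity ID are just the example values from this post, and `$HA_TOKEN` stands in for your own Long-Lived Access Token:

```shell
# call the switch.turn_off service over HA's REST API
# (example address and entity from this post - substitute your own)
curl -X POST \
  -H "Authorization: Bearer $HA_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"entity_id": "switch.switchy"}' \
  http://192.168.1.99:8123/api/services/switch/turn_off
```
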
@ -0,0 +1,179 @@
---
categories:
- Projects
date: "2020-11-24T08:34:30Z"
header:
  teaser: assets/images/posts-2020/Ki7jo65t3.png
tags:
- android
- automation
- tasker
- vpn
title: Auto-connect to ProtonVPN on untrusted WiFi with Tasker [Update!]
---

*[Update 2021-03-12] This solution recently stopped working for me. While looking for a fix, I found that OpenVPN had published [some notes](https://openvpn.net/faq/how-do-i-use-tasker-with-openvpn-connect-for-android/) on controlling the [official OpenVPN Connect app](https://play.google.com/store/apps/details?id=net.openvpn.openvpn) from Tasker. Jump to the [Update](#update) below to learn how I adapted my setup with this new knowledge.*

I recently shared how I use [Tasker and Home Assistant to keep my phone from charging past 80%](safeguard-your-androids-battery-with-tasker-home-assistant). Today, I'm going to share the setup I use to automatically connect my phone to a VPN on networks I *don't* control.

![Tasker + OpenVPN](/assets/images/posts-2020/Ki7jo65t3.png)

### Background

Android has an option to [set a VPN as Always-On](https://support.google.com/android/answer/9089766#always-on_VPN), so for maximum security I could just use that. I'm not *overly* concerned (yet?) with my internet traffic being intercepted upstream of my ISP, though, and I often need to connect to other devices on my home network without passing through a VPN (or introducing split-tunnel complexity). But I do want to be sure that my traffic is protected whenever I'm connected to a WiFi network controlled by someone else.

I've recently started using [ProtonVPN](https://protonvpn.com/) in conjunction with my paid ProtonMail account, so these instructions are tailored to that particular VPN provider. I'm paying for the ProtonVPN Plus subscription, but these instructions should work for the [free tier](https://protonvpn.com/free-vpn) as well. (And this should work for any VPN which provides an OpenVPN config file - you'll just have to find that on your own.)

ProtonVPN does provide a quite excellent [Android app](https://play.google.com/store/apps/details?id=ch.protonvpn.android), but I couldn't find a way to automate it without root. (If your phone is rooted, you should be able to use a Tasker shell to run `cmd statusbar click-tile ch.protonvpn.android/com.protonvpn.android.components.QuickTileService` and avoid needing to use OpenVPN at all.)

### The apps

You'll need a few apps to make this work:

- [Tasker](https://play.google.com/store/apps/details?id=net.dinglisch.android.taskerm)
- [OpenVPN for Android](https://play.google.com/store/apps/details?id=de.blinkt.openvpn)
- [OpenVpn Tasker Plugin](https://play.google.com/store/apps/details?id=com.ffrog8.openVpnTaskerPlugin)

It's important to use the [open-source](https://github.com/schwabe/ics-openvpn) 'OpenVPN for Android' app by Arne Schwabe rather than the 'OpenVPN Connect' app as <s>the latter doesn't work with the Tasker plugin</s> that's what I used when I originally wrote this guide.

### OpenVPN config file

You can find instructions for configuring the OpenVPN client to work with ProtonVPN [here](https://protonvpn.com/support/android-vpn-setup/), but I'll go ahead and hit the highlights. You'll probably want to do all this from your phone so you don't have to fuss with transferring files around, but hey, *you do you*.

1. Log in to your ProtonVPN account (or sign up for a new free one) at https://account.protonvpn.com/login.
2. Use the panel on the left side to navigate to **[Downloads > OpenVPN configuration files](https://account.protonvpn.com/downloads#openvpn-configuration-files)**.
3. Select the **Android** platform and **UDP** as the protocol, unless you have a [particular reason to use TCP](https://protonvpn.com/support/udp-tcp/#:~:text=When%20to%20use%20UDP%20vs.%20TCP).
4. Select and download the desired config file:
- **Secure Core configs** utilize the [Secure Core](https://protonvpn.com/support/secure-core-vpn/) feature, which connects you to a VPN node in your target country by way of a Proton-owned-and-managed server in privacy-friendly Iceland, Sweden, or Switzerland
- **Country configs** connect to a random VPN node in your target country
- **Standard server configs** let you choose the specific VPN node to use
- **Free server configs** connect you to one of the VPN nodes available in the free tier

![Client config download page](/assets/images/posts-2020/vdIG0jHmk.png)

Feel free to download more than one if you'd like to have different profiles available within the OpenVPN app.

ProtonVPN automatically generates a set of user credentials to use with a third-party VPN client so that you don't have to share your personal creds. You'll want to make a note of that randomly-generated username and password so you can plug them in to the OpenVPN app later. You can find the details at **[Account > OpenVPN / IKEv2 username](https://account.protonvpn.com/account#openvpn)**.

**Now that you've got the profile file, skip on down to [The Update](#update) to import it into OpenVPN Connect.**

### Configuring OpenVPN for Android

Now that you've got the config file(s) and your client credentials, it's time to actually configure that client.

![OpenVPN connection list](/assets/images/posts-2020/9WdA6HRch.png)

1. Launch the OpenVPN for Android app and tap the little 'downvote-in-a-box' "Import" icon.
2. Browse to wherever you saved the `.ovpn` config files and select the one you'd like to use.
3. You can rename it if you'd like, but I feel that `us.protonvpn.com.udp` is pretty self-explanatory and will do just fine to distinguish between my profiles. Tap the check mark at the top-right or the floppy icon at the bottom right to confirm the import.
4. Now tap the pencil icon next to the new entry to edit its settings, and paste in the OpenVPN username and password where appropriate. Use your phone's back button/gesture to save the config and return to the list.
5. Repeat for any other configurations you'd like to import. We'll only use one for this particular Tasker profile, but you might come up with different needs for different scenarios.
6. And finally, tap on the config name to test the connection. The OpenVPN Log window will appear, and you want the line at the top to (eventually) display something like `Connected: SUCCESS`.

Success!

I don't like to have a bunch of persistent notification icons hanging around (and Android already shows a persistent status icon when a VPN connection is active). If you're like me, long-press the OpenVPN notification and tap the gear icon. Then tap on the **Connection statistics** category and activate the **Minimized** slider. The notification will still appear, but it will collapse to the bottom of your notification stack and you won't get bugged by the icon.

![Notification settings](/assets/images/posts-2020/WWuHwVvrk.png)

### Tasker profiles

Open up Tasker and get ready to automate! We're going to wind up with at least two new Tasker profiles so (depending on how many you already have) you might want to create a new project by long-pressing the Home icon at the bottom-left of the screen and selecting the **Add** option. I chose to group all my VPN-related profiles in a project named (oh-so-creatively) "VPN". Totally your call though.

Let's start with a profile to track whether or not we're connected to one of our preferred/trusted WiFi networks:

#### Trusted WiFi

1. Tap the '+' sign to create a new profile, and add a new **State > Net > Wifi Connected** context. This profile will become active whenever your phone connects to WiFi.
2. Tap the magnifying glass next to the **SSID** field, which will pop up a list of all detected nearby network identifiers. Tap to select whichever network(s) you'd like to be considered "safe". You can also manually enter the SSID names, separating multiple options with a `/` (e.g., `FBI Surveillance Van/TellMyWifiLoveHer/Pretty fly for a WiFi`). Or, for more security, identify the networks based on the MACs instead of the SSIDs - just be sure to capture the MACs for any extenders or mesh nodes too!
3. Once you've got your networks added, tap the back button to move *forward* to the next task (Ah, Android!): configuring the *action* which will occur when the context is satisfied.
4. Tap the **New Task** option and then tap the check mark to skip giving it a name (no need).
5. Hit the '+' button to add an action and select **Variables > Variable Set**.
6. For **Name**, enter `%TRUSTED_WIFI` (all caps to make it a "public" variable), and for the **To** field just enter `1`.
7. Hit back to save the action, and back again to save the profile.
8. Back at the profile list, long-press on the **Variable Set...** action and then select **Add Exit Task**.
9. We want to un-set the variable when no longer connected to a trusted WiFi network, so add a new **Variables > Variable Clear** action and set the name to `%TRUSTED_WIFI`.
10. Then back out again to admire your handiwork. Here's a recap of the profile:

```
Profile: Trusted Wifi
State: Wifi Connected [ SSID:FBI Surveillance Van/TellMyWifiLoveHer/Pretty fly for a WiFi MAC:* IP:* Active:Any ]
Enter: Anon
A1: Variable Set [ Name:%TRUSTED_WIFI To:1 Recurse Variables:Off Do Maths:Off Append:Off Max Rounding Digits:0 ]
Exit: Anon
A1: Variable Clear [ Name:%TRUSTED_WIFI Pattern Matching:Off Local Variables Only:Off Clear All Variables:Off ]
```

Onward!

#### VPN on Strange WiFi

This profile will kick in if the phone connects to a WiFi network which isn't on the "approved" list - that is, when the `%TRUSTED_WIFI` variable is not set.

1. It starts out the same way, by creating a new profile with the **State > Net > Wifi Connected** context, but this time don't add any network names to the list.
2. For the action, select **Plugin > OpenVpn Tasker Plugin**, tap the pencil icon to edit the configuration, and select your VPN profile from the list under **Connect using profile**.
3. Back at the Action Edit screen, tap the checkbox next to **If** and enter the variable name `%TRUSTED_WIFI`. Tap the '~' button to change the condition operator to **Isn't Set**. So while this profile will activate every time you connect to WiFi, the action which connects to the VPN will only fire if the WiFi isn't a trusted network.
4. Back out to the profile list and add a new Exit Task.
5. Add another **Plugin > OpenVpn Tasker Plugin** task, and this time configure it to **Disconnect VPN**.

To recap:

```
Profile: VPN on Strange Wifi
State: Wifi Connected [ SSID:* MAC:* IP:* Active:Any ]
Enter: Anon
A1: OpenVPN [ Configuration:Connect (us.protonvpn.com.udp) Timeout (Seconds):0 ] If [ %TRUSTED_WIFI !Set ]
Exit: Anon
A1: OpenVPN [ Configuration:Disconnect Timeout (Seconds):0 ]
```

### Conclusion

Give it a try - the VPN should automatically activate the next time you connect to a network that's not on your list. If you find that it's not working correctly, you might try adding a short 3-5 second **Task > Wait** action before the connect/disconnect actions just to give a brief cooldown between state changes.

### Epilogue: working with Google's VPN

My Google Pixel 5 has a neat option at **Settings > Network & internet > Wi-Fi > Wi-Fi preferences > Connect to public networks** which will automatically connect the phone to known-decent public WiFi networks and automatically tunnel the connection through a Google VPN. It doesn't provide quite as much privacy as ProtonVPN, of course, but it's enough to keep my traffic safe from prying eyes on those public networks, and the auto-connection option really comes in handy sometimes. Of course, my Tasker setup would see that I'm connected to an unknown network and try to connect to ProtonVPN at the same time the phone was trying to connect to the Google VPN. That wasn't ideal.

I came up with a workaround to treat any network with the Google VPN as "trusted" as long as that VPN was active. I inserted a 10-second Wait before the Connect and Disconnect actions to give the VPN time to stand up, and added two new profiles to detect the Google VPN connection and disconnection.

#### Google VPN On

This one uses an **Event > System > Logcat Entry** context. The first time you try to use that, you'll be prompted to use adb to grant Tasker the READ_LOGS permission, but the app actually does a great job of walking you through that setup. We'll watch the `Vpn` component and filter for `Established by com.google.android.apps.gcs on tun0`, and then set the `%TRUSTED_WIFI` variable:

```
Profile: Google VPN On
Event: Logcat Entry [ Output Variables:* Component:Vpn Filter:Established by com.google.android.apps.gcs on tun0 Grep Filter (Check Help):Off ]
Enter: Anon
A1: Variable Set [ Name:%TRUSTED_WIFI To:1 Recurse Variables:Off Do Maths:Off Append:Off Max Rounding Digits:3 ]
```
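
If you'd like a head start on that adb step, the grant is a single command run from a computer with adb installed (and USB debugging enabled on the phone); you can also tail the `Vpn` component yourself to confirm the filter text. A quick sketch:

```shell
# one-time grant so Tasker can read the system log
adb shell pm grant net.dinglisch.android.taskerm android.permission.READ_LOGS

# optional: watch log lines tagged 'Vpn' to verify the filter strings used above
adb logcat -s Vpn
```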

#### Google VPN Off

This one is pretty much the same, but in reverse:

```
Profile: Google VPN Off
Event: Logcat Entry [ Output Variables:* Component:Vpn Filter:setting state=DISCONNECTED, reason=agentDisconnect Grep Filter (Check Help):Off ]
Enter: Anon
A1: Variable Clear [ Name:%TRUSTED_WIFI Pattern Matching:Off Local Variables Only:Off Clear All Variables:Off ]
```

### Update

#### OpenVPN Connect app configuration

After installing and launching the official [OpenVPN Connect app](https://play.google.com/store/apps/details?id=net.openvpn.openvpn), tap the "+" button at the bottom right to create a new profile. Swipe over to the "File" tab and import the `*.ovpn` file you downloaded from ProtonVPN. Paste in the username, tick the "Save password" box, and paste in the password as well. I also chose to rename the profile to something a little bit more memorable - you'll need this name later. From there, hit the "Add" button and then go ahead and tap on your profile to test the connection.

![Creating a profile in OpenVPN Connect](/assets/images/posts-2020/KjGOX8Yiv.png)

#### Tasker profiles

Go ahead and create the [Trusted Wifi profile](#trusted-wifi) as described above.

The condition for the [VPN on Strange Wifi profile](#vpn-on-strange-wifi) will be the same, but the task will be different. This time, add a **System > Send Intent** action. You'll need to enter the following details, leaving the other fields blank/default:

```
Action: net.openvpn.openvpn.CONNECT
Cat: None
Extra: net.openvpn.openvpn.AUTOSTART_PROFILE_NAME:PC us.protonvpn.com.udp (replace with your profile name)
Extra: net.openvpn.openvpn.AUTOCONNECT:true
Extra: net.openvpn.openvpn.APP_SECTION:PC
Package: net.openvpn.openvpn
Class: net.openvpn.unified.MainActivity
Target: Activity
If: %TRUSTED_WIFI !Set
```

The Exit Task to disconnect from the VPN uses a similar intent:

```
Action: net.openvpn.openvpn.DISCONNECT
Cat: None
Extra: net.openvpn.openvpn.STOP:true
Package: net.openvpn.openvpn
Class: net.openvpn.unified.MainActivity
Target: Activity
```
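
If you want to verify those intents before wiring them into Tasker, you can fire roughly the same things from a computer with `adb`. This is just a sketch - the profile name is the example one from this post, so substitute your own:

```shell
# fire the CONNECT intent at the OpenVPN Connect app
adb shell am start -n net.openvpn.openvpn/net.openvpn.unified.MainActivity \
  -a net.openvpn.openvpn.CONNECT \
  --es net.openvpn.openvpn.AUTOSTART_PROFILE_NAME "PC us.protonvpn.com.udp" \
  --ez net.openvpn.openvpn.AUTOCONNECT true \
  --es net.openvpn.openvpn.APP_SECTION PC

# and the matching DISCONNECT intent
adb shell am start -n net.openvpn.openvpn/net.openvpn.unified.MainActivity \
  -a net.openvpn.openvpn.DISCONNECT \
  --ez net.openvpn.openvpn.STOP true
```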

All set! You can pop back up to the [Epilogue](#epilogue-working-with-googles-vpn) section to continue tweaking to avoid conflicts with Google's auto-connect VPN if you'd like.

@ -0,0 +1,34 @@

---
categories:
- Tips
date: "2020-12-23T08:34:30Z"
header:
  teaser: assets/images/posts-2020/-lp1-DGiM.png
tags:
- chromeos
title: Burn an ISO to USB with the Chromebook Recovery Utility
toc: false
---

There are a number of fantastic Windows applications for creating bootable USB drives from ISO images - but those don't work on a Chromebook. Fortunately there's an easily-available tool which will do the trick: Google's own [Chromebook Recovery Utility](https://chrome.google.com/webstore/detail/chromebook-recovery-utili/jndclpdbaamdhonoechobihbbiimdgai) app.

Normally that tool is used to create bootable media for [reinstalling Chrome OS on a broken Chromebook](https://support.google.com/chromebook/answer/1080595) (hence the name), but it also has the capability to write other arbitrary images as well. So if you find yourself needing to create a USB drive for installing ESXi on a computer in your [home lab](https://twitter.com/johndotbowdre/status/1341767090945077248) (more on that soon!) here's what you'll need to do:

1. Install the [Chromebook Recovery Utility](https://chrome.google.com/webstore/detail/chromebook-recovery-utili/pocpnlppkickgojjlmhdmidojbmbodfm).
2. Download the ISO you intend to use.
3. Rename the file to append `.bin` on the end, after the `.iso` bit:
![Screenshot 2020-12-23 at 15.42.40.png](/assets/images/posts-2020/uoTjgtbN1.png)
4. Plug in the USB drive you're going to sacrifice for this effort - remember that ALL data on the drive will be erased.
5. Open the recovery utility, click on the gear icon at the top right, and select the *Use local image* option:
![Screenshot 2020-12-23 at 15.44.04.png](/assets/images/posts-2020/vdTpW9t7Q.png)
6. Browse to and select the `*.iso.bin` file.
7. Choose the USB drive, and click *Continue*.
![Screenshot 2020-12-23 at 15.45.59.png](/assets/images/posts-2020/p_Ieqsw4p.png)
8. Click *Create now* to start the writing!
![Screenshot 2020-12-23 at 15.53.03.png](/assets/images/posts-2020/lhw5EEqSD.png)
9. All done! It probably won't work great for actually recovering your Chromebook but will do wonders for installing ESXi (or whatever) on another computer!
![Screenshot 2020-12-23 at 15.53.32.png](/assets/images/posts-2020/-lp1-DGiM.png)

You can also use the CRU to make a bootable USB from a `.zip` archive containing a single `.img` file, such as those commonly used to distribute [Raspberry Pi images](https://www.raspberrypi.org/documentation/installation/installing-images/chromeos.md).
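
If you've got the Linux (Beta) environment enabled, both the `.bin` rename and the `.img` zipping can be done from the terminal instead of the Files app. A quick sketch with made-up filenames:

```shell
# append .bin so the Recovery Utility will accept the ISO
mv installer.iso installer.iso.bin

# or wrap a single .img in a .zip archive for the CRU to consume
zip pi-image.zip pi-image.img
```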

Very cool!

@ -0,0 +1,72 @@

---
categories:
- Tips
date: "2021-01-30T08:34:30Z"
header:
  teaser: assets/images/posts-2020/XTaU9VDy8.png
tags:
- vmware
title: 'PSA: halt replication before snapshotting linked vCenters'
toc: false
---

It's a good idea to take a snapshot of your virtual appliances before applying any updates, just in case. When you have multiple vCenter appliances operating in Enhanced Link Mode, though, it's important to make sure that the snapshots are in a consistent state. The vCenter `vmdird` service is responsible for continuously syncing data between the vCenters within a vSphere Single Sign-On (SSO) domain. Reverting to a snapshot where `vmdird`'s knowledge of the environment dramatically differed from that of the other vCenters could cause significant problems down the road or even result in having to rebuild a vCenter from scratch.

*(Yes, that's a lesson I learned the hard way - and warnings about that are tragically hard to come by from what I've seen. So I'm sharing my notes so that you can avoid making the same mistake.)*

![Screenshot 2021-01-30 16.09.02.png](/assets/images/posts-2020/XTaU9VDy8.png)

Take these steps when you need to snapshot linked vCenters to avoid breaking replication:

1. Open an SSH session to *all* the vCenters within the SSO domain.
2. Log in and enter `shell` to access the shell on each vCenter.
3. Verify that replication is healthy by running `/usr/lib/vmware-vmdir/bin/vdcrepadmin -f showpartnerstatus -h localhost -u administrator -w [SSO_ADMIN_PASSWORD]` on each vCenter. You want to ensure that each host shows as available to all other hosts, and that each reports `Partner is 0 changes behind.`:

```shell
root@vcsa [ ~ ]# /usr/lib/vmware-vmdir/bin/vdcrepadmin -f showpartnerstatus -h localhost -u administrator -w $ssoPass
Partner: vcsa2.lab.bowdre.net
Host available: Yes
Status available: Yes
My last change number: 9346
Partner has seen my change number: 9346
Partner is 0 changes behind.

root@vcsa2 [ ~ ]# /usr/lib/vmware-vmdir/bin/vdcrepadmin -f showpartnerstatus -h localhost -u administrator -w $ssoPass
Partner: vcsa.lab.bowdre.net
Host available: Yes
Status available: Yes
My last change number: 9518
Partner has seen my change number: 9518
Partner is 0 changes behind.
```
4. Stop `vmdird` on each vCenter by running `/bin/service-control --stop vmdird`:
```shell
root@vcsa [ ~ ]# /bin/service-control --stop vmdird
Operation not cancellable. Please wait for it to finish...
Performing stop operation on service vmdird...
Successfully stopped service vmdird

root@vcsa2 [ ~ ]# /bin/service-control --stop vmdird
Operation not cancellable. Please wait for it to finish...
Performing stop operation on service vmdird...
Successfully stopped service vmdird
```
5. Snapshot the vCenter appliance VMs.
6. Start replication on each server again with `/bin/service-control --start vmdird`:
```shell
root@vcsa [ ~ ]# /bin/service-control --start vmdird
Operation not cancellable. Please wait for it to finish...
Performing start operation on service vmdird...
Successfully started service vmdird

root@vcsa2 [ ~ ]# /bin/service-control --start vmdird
Operation not cancellable. Please wait for it to finish...
Performing start operation on service vmdird...
Successfully started service vmdird
```
7. Check the replication status with `/usr/lib/vmware-vmdir/bin/vdcrepadmin -f showpartnerstatus -h localhost -u administrator -w [SSO_ADMIN_PASSWORD]` again just to be sure. Don't proceed with whatever else you were planning to do until you've confirmed that the vCenters are in sync.

You can learn more about the `vdcrepadmin` utility here:
https://kb.vmware.com/s/article/2127057
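
If you do this often, the stop/start steps lend themselves to a quick loop. A minimal sketch - it assumes SSH logins are enabled on the appliances and that their default login shell has been switched to bash (by default you'd land in the appliance shell instead):

```shell
# hypothetical hostnames - swap in your own linked vCenters
VCENTERS="vcsa.lab.bowdre.net vcsa2.lab.bowdre.net"

# halt replication everywhere
for vc in $VCENTERS; do
  ssh root@$vc "/bin/service-control --stop vmdird"
done

# ... take the snapshots ...

# then start replication back up
for vc in $VCENTERS; do
  ssh root@$vc "/bin/service-control --start vmdird"
done
```
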
263
content/post/2021-02-05-vmware-home-lab-on-intel-nuc-9.md
Normal file
@ -0,0 +1,263 @@

---
categories:
- vRA8
date: "2021-02-05T08:34:30Z"
header:
  teaser: assets/images/posts-2020/SIDah-Lag.png
tags:
- vmware
- homelab
- vra
title: VMware Home Lab on Intel NUC 9
---

I picked up an Intel NUC 9 Extreme kit a few months back (thanks, VMware!) and have been slowly tinkering with turning it into an extremely capable self-contained home lab environment. I'm pretty happy with where things sit right now, so I figured it was about time to start documenting and sharing what I've done.

![Screenshot 2020-12-23 at 12.30.07.png](/assets/images/posts-2020/SIDah-Lag.png)

### Hardware

*(Caution: here be affiliate links)*

- [Intel NUC 9 Extreme (NUC9i9QNX)](https://amzn.to/2JezeEH)
- [Crucial 64GB DDR4 SO-DIMM kit (CT2K32G4SFD8266)](https://amzn.to/34BtPPy)
- [Intel 665p 1TB NVMe SSD (SSDPEKNW010T9X1)](https://amzn.to/3nMi5kW)
- Random 8GB USB thumbdrive I found in a drawer somewhere

The NUC runs ESXi 7.0u1 and currently hosts the following:

- vCenter Server 7.0u1
- Windows 2019 domain controller
- [VyOS router](https://vyos.io/)
- [Home Assistant OS 5.9](https://www.home-assistant.io/hassio/installation/)
- vRealize Lifecycle Manager 8.2
- vRealize Identity Manager 3.3.2
- vRealize Automation 8.2
- 3-node [nested ESXi 7.0u1](https://williamlam.com/nested-virtualization/nested-esxi-virtual-appliance) vSAN cluster

I'm leveraging my $200 [VMUG Advantage subscription](https://www.vmug.com/membership/vmug-advantage-membership) to provide 365-day licenses for all the VMware bits (particularly vRA, which doesn't come with a built-in evaluation option).

### Basic Infrastructure

#### Setting up the NUC

The NUC connects to my home network through its onboard gigabit Ethernet interface (`vmnic0`). (The NUC does have a built-in WiFi adapter, but for some reason VMware hasn't yet allowed their hypervisor to connect over WiFi - weird, right?) I wanted to use a small 8GB thumbdrive as the host's boot device, so I installed that in one of the NUC's internal USB ports. For the purpose of installation, I connected a keyboard and monitor to the NUC, and I configured the BIOS to automatically boot up when power is restored after a power failure.

I used the Chromebook Recovery Utility to write the ESXi installer ISO to *another* USB drive (how-to [here](burn-an-iso-to-usb-with-the-chromebook-recovery-utility)), inserted that bootable drive into a port on the front of the NUC, and booted the NUC from the drive. Installing ESXi 7.0u1 was as easy as it could possibly be. All hardware was automatically detected and the appropriate drivers loaded. Once the host booted up, I used the DCUI to configure a static IP address (`192.168.1.11`). I then shut down the NUC, disconnected the keyboard and monitor, and moved it into the cabinet where it will live out its headless existence.
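
The DCUI is the easy button for that static assignment, but the same thing can be done from the ESXi shell too. A sketch, assuming `vmk0` is the management vmkernel interface:

```shell
# set a static IPv4 address on the management vmkernel interface
esxcli network ip interface ipv4 set --interface-name=vmk0 --type=static \
  --ipv4=192.168.1.11 --netmask=255.255.255.0
```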

I was then able to point my web browser to `https://192.168.1.11/ui/` to log in to the host and get down to business. First stop: networking. For now, I only need a single standard switch (`vSwitch0`) with two portgroups: one for the host's vmkernel interface, and the other for the VMs (including the nested ESXi appliances) that are going to run directly on this physical host. The one "gotcha" when working with a nested environment is that you'll need to edit the virtual switch's security settings to "Allow promiscuous mode" and "Allow forged transmits" (for reasons described [here](https://williamlam.com/2013/11/why-is-promiscuous-mode-forged.html)).

![ink (2).png](/assets/images/posts-2020/w0HeFSi7Q.png)
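
Those same security tweaks can also be made from the ESXi shell, if you prefer; a quick sketch for `vSwitch0`:

```shell
# allow promiscuous mode and forged transmits on the standard vSwitch
esxcli network vswitch standard policy security set --vswitch-name=vSwitch0 \
  --allow-promiscuous=true --allow-forged-transmits=true
```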

I created a single datastore to span the entirety of that 1TB NVMe drive. The nested ESXi hosts will use VMDKs stored here to provide storage to the nested VMs.

![Screenshot 2020-12-28 at 12.24.57.png](/assets/images/posts-2020/XDe98S4Fx.png)

#### Domain Controller

I created a new Windows VM with 2 vCPUs, 4GB of RAM, and a 90GB virtual hard drive, and I booted it off a [Server 2019 evaluation ISO](https://www.microsoft.com/en-US/evalcenter/evaluate-windows-server-2019?filetype=ISO). I gave it a name and a static IP address, and proceeded to install and configure the Active Directory Domain Services and DNS Server roles. I created static A and PTR records for the vCenter Server Appliance I'd be deploying next (`vcsa.`) and the physical host (`nuchost.`). I configured ESXi to use this new server for DNS resolutions, and confirmed that I could resolve the VCSA's name from the host.
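
That last check is quick from the ESXi shell; something like the below, pointing at the Windows DNS server:

```shell
# confirm the host can resolve the VCSA's name via the new DNS server
nslookup vcsa.lab.bowdre.net 192.168.1.5
```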

![Screenshot 2020-12-30 at 13.10.58.png](/assets/images/posts-2020/4o5bqRiTJ.png)

Before moving on, I installed the Chrome browser on this new Windows VM and also set up remote access via [Chrome Remote Desktop](https://remotedesktop.google.com/access/). This will let me remotely access and manage my lab environment without having to punch holes in the router firewall (or worry about securing said holes). And it's got "chrome" in the name so it will work just fine from my Chromebooks!

#### vCenter

I attached the vCSA installation ISO to the Windows VM and performed the vCenter deployment from there. (See, I told you that Chrome Remote Desktop would come in handy!)

![Screenshot 2020-12-30 at 14.51.09.png](/assets/images/posts-2020/OOP_lstyM.png)

After the vCenter was deployed and the basic configuration completed, I created a new cluster to contain the physical host. There's likely only ever going to be the one physical host, but I like being able to logically group hosts in this way, particularly when working with PowerCLI. I then added the host to the vCenter by its shiny new FQDN.

![Screenshot 2021-01-05 10.39.54.png](/assets/images/posts-2020/Wu3ZIIVTs.png)

I've now got a fully-functioning VMware lab, complete with a physical hypervisor to run the workloads, a vCenter server to manage the workloads, and a Windows DNS server to tell the workloads how to talk to each other. Since the goal is to ultimately simulate a (small) production environment, let's set up some additional networking before we add anything else.

### Networking

#### Overview

My home network uses the generic `192.168.1.0/24` address space, with the internet router providing DHCP addresses in the range `.100-.250`. I'm using the range `192.168.1.2-.99` for statically-configured IPs, particularly those within my lab environment. Here are the addresses being used by the lab so far:

| IP Address | Hostname | Purpose |
| ---- | ---- | ---- |
| `192.168.1.1` | | Gateway |
| `192.168.1.5` | `win01` | AD DC, DNS |
| `192.168.1.11` | `nuchost` | Physical ESXi host |
| `192.168.1.12` | `vcsa` | vCenter Server |

Of course, not everything that I'm going to deploy in the lab will need to be accessible from outside the lab environment. This goes for obvious things like the vMotion and vSAN networks of the nested ESXi hosts, but it will also be useful to have internal networks that can be used by VMs provisioned by vRA. So I'll be creating these networks:

| VLAN ID | Network | Purpose |
| ---- | ---- | ---- |
| 1610 | `172.16.10.0/24` | Management |
| 1620 | `172.16.20.0/24` | Servers-1 |
| 1630 | `172.16.30.0/24` | Servers-2 |
| 1698 | `172.16.98.0/24` | vSAN |
| 1699 | `172.16.99.0/24` | vMotion |

#### vSwitch1

I'll start by adding a second vSwitch to the physical host. It doesn't need a physical adapter assigned since this switch will be for internal traffic. I create two port groups: one tagged for the VLAN 1610 Management traffic, which will be useful for attaching VMs on the physical host to the internal network; and the second will use VLAN 4095 to pass all VLAN traffic to the nested ESXi hosts. And again, this vSwitch needs to have its security policy set to allow Promiscuous Mode and Forged Transmits. I also set the vSwitch to support an MTU of 9000 so I can use Jumbo Frames on the vMotion and vSAN networks.

![Screenshot 2021-01-05 16.37.57.png](/assets/images/posts-2020/7aNJa2Hlm.png)

#### VyOS

Wouldn't it be great if the VMs that are going to be deployed on those `1610`, `1620`, and `1630` VLANs could still have their traffic routed out of the internal networks? But doing routing requires a router (or so my network friends tell me)... so I deployed a VM running the open-source VyOS router platform. I used [William Lam's instructions for installing VyOS](https://williamlam.com/2020/02/how-to-automate-the-creation-multiple-routable-vlans-on-single-l2-network-using-vyos.html), making sure to attach the first network interface to the Home-Network portgroup and the second to the Isolated portgroup (VLAN 4095). I then set to work [configuring the router](https://docs.vyos.io/en/latest/quick-start.html).

After logging in to the VM, I entered the router's configuration mode:

```shell
vyos@vyos:~$ configure
[edit]
vyos@vyos#
```

I then started with setting up the interfaces - `eth0` for the `192.168.1.0/24` network, `eth1` on the trunked portgroup, and a number of VIFs on `eth1` to handle the individual VLANs I'm interested in using.

```shell
set interfaces ethernet eth0 address '192.168.1.8/24'
set interfaces ethernet eth0 description 'Outside'
set interfaces ethernet eth1 mtu '9000'
set interfaces ethernet eth1 vif 1610 address '172.16.10.1/24'
set interfaces ethernet eth1 vif 1610 description 'VLAN 1610 for Management'
set interfaces ethernet eth1 vif 1610 mtu '1500'
set interfaces ethernet eth1 vif 1620 address '172.16.20.1/24'
set interfaces ethernet eth1 vif 1620 description 'VLAN 1620 for Servers-1'
set interfaces ethernet eth1 vif 1620 mtu '1500'
set interfaces ethernet eth1 vif 1630 address '172.16.30.1/24'
set interfaces ethernet eth1 vif 1630 description 'VLAN 1630 for Servers-2'
set interfaces ethernet eth1 vif 1630 mtu '1500'
set interfaces ethernet eth1 vif 1698 description 'VLAN 1698 for vSAN'
set interfaces ethernet eth1 vif 1698 mtu '9000'
set interfaces ethernet eth1 vif 1699 description 'VLAN 1699 for vMotion'
set interfaces ethernet eth1 vif 1699 mtu '9000'
```

I also set up NAT for the networks that should be routable:

```shell
set nat source rule 10 outbound-interface 'eth0'
set nat source rule 10 source address '172.16.10.0/24'
set nat source rule 10 translation address 'masquerade'
set nat source rule 20 outbound-interface 'eth0'
set nat source rule 20 source address '172.16.20.0/24'
set nat source rule 20 translation address 'masquerade'
set nat source rule 30 outbound-interface 'eth0'
set nat source rule 30 source address '172.16.30.0/24'
set nat source rule 30 translation address 'masquerade'
set nat source rule 100 outbound-interface 'eth0'
set nat source rule 100 translation address 'masquerade'
set protocols static route 0.0.0.0/0 next-hop 192.168.1.1
```

And I configured DNS forwarding:

```shell
set service dns forwarding allow-from '0.0.0.0/0'
set service dns forwarding domain 10.16.172.in-addr.arpa. server '192.168.1.5'
set service dns forwarding domain 20.16.172.in-addr.arpa. server '192.168.1.5'
set service dns forwarding domain 30.16.172.in-addr.arpa. server '192.168.1.5'
set service dns forwarding domain lab.bowdre.net server '192.168.1.5'
set service dns forwarding listen-address '172.16.10.1'
set service dns forwarding listen-address '172.16.20.1'
set service dns forwarding listen-address '172.16.30.1'
set service dns forwarding name-server '192.168.1.1'
```
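
After the config eventually gets committed (below), a quick way to test the forwarder would be to query the router's listen address from a VM on one of the internal VLANs; a sketch using `dig` (assuming it's installed on that VM):

```shell
# ask the VyOS forwarder (from a VM on VLAN 1610) to resolve a lab record
dig @172.16.10.1 vcsa.lab.bowdre.net +short
```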

Finally, I also configured VyOS's DHCP server so that I won't have to statically configure the networking for VMs deployed from vRA:

```shell
set service dhcp-server shared-network-name SCOPE_10_MGMT authoritative
set service dhcp-server shared-network-name SCOPE_10_MGMT subnet 172.16.10.0/24 default-router '172.16.10.1'
set service dhcp-server shared-network-name SCOPE_10_MGMT subnet 172.16.10.0/24 dns-server '192.168.1.5'
set service dhcp-server shared-network-name SCOPE_10_MGMT subnet 172.16.10.0/24 domain-name 'lab.bowdre.net'
set service dhcp-server shared-network-name SCOPE_10_MGMT subnet 172.16.10.0/24 lease '86400'
set service dhcp-server shared-network-name SCOPE_10_MGMT subnet 172.16.10.0/24 range 0 start '172.16.10.100'
set service dhcp-server shared-network-name SCOPE_10_MGMT subnet 172.16.10.0/24 range 0 stop '172.16.10.200'
set service dhcp-server shared-network-name SCOPE_20_SERVERS authoritative
set service dhcp-server shared-network-name SCOPE_20_SERVERS subnet 172.16.20.0/24 default-router '172.16.20.1'
set service dhcp-server shared-network-name SCOPE_20_SERVERS subnet 172.16.20.0/24 dns-server '192.168.1.5'
set service dhcp-server shared-network-name SCOPE_20_SERVERS subnet 172.16.20.0/24 domain-name 'lab.bowdre.net'
set service dhcp-server shared-network-name SCOPE_20_SERVERS subnet 172.16.20.0/24 lease '86400'
set service dhcp-server shared-network-name SCOPE_20_SERVERS subnet 172.16.20.0/24 range 0 start '172.16.20.100'
set service dhcp-server shared-network-name SCOPE_20_SERVERS subnet 172.16.20.0/24 range 0 stop '172.16.20.200'
set service dhcp-server shared-network-name SCOPE_30_SERVERS authoritative
set service dhcp-server shared-network-name SCOPE_30_SERVERS subnet 172.16.30.0/24 default-router '172.16.30.1'
set service dhcp-server shared-network-name SCOPE_30_SERVERS subnet 172.16.30.0/24 dns-server '192.168.1.5'
set service dhcp-server shared-network-name SCOPE_30_SERVERS subnet 172.16.30.0/24 domain-name 'lab.bowdre.net'
set service dhcp-server shared-network-name SCOPE_30_SERVERS subnet 172.16.30.0/24 lease '86400'
set service dhcp-server shared-network-name SCOPE_30_SERVERS subnet 172.16.30.0/24 range 0 start '172.16.30.100'
set service dhcp-server shared-network-name SCOPE_30_SERVERS subnet 172.16.30.0/24 range 0 stop '172.16.30.200'
```

Satisfied with my work, I ran the `commit` and `save` commands. BOOM, this server jockey just configured a router!

### Nested vSAN Cluster

Alright, it's time to start building up the nested environment. To start, I grabbed the latest [Nested ESXi Virtual Appliance .ova](https://williamlam.com/nested-virtualization/nested-esxi-virtual-appliance), courtesy of William Lam. I went ahead and created DNS records for the hosts I'd be deploying, and I mapped out what IPs would be used on each VLAN:

|Hostname|1610-Management|1698-vSAN|1699-vMotion|
|----|----|----|----|
|`esxi01.lab.bowdre.net`|`172.16.10.21`|`172.16.98.21`|`172.16.99.21`|
|`esxi02.lab.bowdre.net`|`172.16.10.22`|`172.16.98.22`|`172.16.99.22`|
|`esxi03.lab.bowdre.net`|`172.16.10.23`|`172.16.98.23`|`172.16.99.23`|

Deploying the virtual appliances is just like any other "Deploy OVF Template" action. I placed the VMs on the `physical-cluster` compute resource, and chose to thin provision the VMDKs on the local datastore. I selected the "Isolated" VM network, which uses VLAN 4095 to make all the internal VLANs available on a single portgroup.

![Screenshot 2021-01-07 10.54.50.png](/assets/images/posts-2020/zOJp-jqVb.png)

And I set the networking properties accordingly:

![Screenshot 2021-01-07 11.09.36.png](/assets/images/posts-2020/PZ6FzmJcx.png)

These virtual appliances come with 3 hard drives. The first will be used as the boot device, the second for vSAN caching, and the third for vSAN capacity. I doubled the size of the second and third drives, to 8GB and 16GB respectively:

![Screenshot 2021-01-07 13.01.19.png](/assets/images/posts-2020/nkdH7Jfxw.png)

After booting the new host VMs, I created a new cluster in vCenter and then added the nested hosts:

![Screenshot 2021-01-07 13.28.03.png](/assets/images/posts-2020/z8fvzu4Km.png)

Next, I created a new Distributed Virtual Switch to break out the VLAN trunk on the nested host "physical" adapters into the individual VLANs I created on the VyOS router. Again, each port group will need to allow Promiscuous Mode and Forged Transmits, and I set the dvSwitch MTU size to 9000 (to support Jumbo Frames on the vSAN and vMotion portgroups).

![Screenshot 2021-01-08 10.04.24.png](/assets/images/posts-2020/arA7gurqh.png)

I migrated the physical NICs and `vmk0` to the new dvSwitch, and then created new vmkernel interfaces for vMotion and vSAN traffic on each of the nested hosts:

![Screenshot 2021-01-19 10.03.27.png](/assets/images/posts-2020/6-auEYd-W.png)

I then ssh'd into the hosts and used `vmkping` to make sure they could talk to each other over these interfaces. I had changed the vMotion interface to use the vMotion TCP/IP stack, so I needed to append the `-S vmotion` flag to that command:

```shell
[root@esxi01:~] vmkping -I vmk1 172.16.98.22
PING 172.16.98.22 (172.16.98.22): 56 data bytes
64 bytes from 172.16.98.22: icmp_seq=0 ttl=64 time=0.243 ms
64 bytes from 172.16.98.22: icmp_seq=1 ttl=64 time=0.260 ms
64 bytes from 172.16.98.22: icmp_seq=2 ttl=64 time=0.262 ms

--- 172.16.98.22 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.243/0.255/0.262 ms

[root@esxi01:~] vmkping -I vmk2 172.16.99.22 -S vmotion
PING 172.16.99.22 (172.16.99.22): 56 data bytes
64 bytes from 172.16.99.22: icmp_seq=0 ttl=64 time=0.202 ms
64 bytes from 172.16.99.22: icmp_seq=1 ttl=64 time=0.312 ms
64 bytes from 172.16.99.22: icmp_seq=2 ttl=64 time=0.242 ms

--- 172.16.99.22 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.202/0.252/0.312 ms
```

Okay, time to throw some vSAN on these hosts. Select the cluster object, go to the configuration tab, scroll down to vSAN, and click "Turn on vSAN". This will be a single site cluster, and I don't need to enable any additional services. When prompted, I claim the 8GB drives for the cache tier and the 16GB drives for capacity.

![Screenshot 2021-01-23 17.35.34.png](/assets/images/posts-2020/mw-rsq_1a.png)

It'll take a few minutes for vSAN to get configured on the cluster.

![Screenshot 2021-01-23 17.41.13.png](/assets/images/posts-2020/mye0LdtNj.png)
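
While waiting, you can also peek at the cluster state from any of the nested hosts; a quick check from the ESXi shell:

```shell
# show vSAN cluster membership and health state from this host's perspective
esxcli vsan cluster get
```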

Huzzah! Next stop:

### vRealize Automation 8.2

The [vRealize Easy Installer](https://docs.vmware.com/en/vRealize-Automation/8.2/installing-vrealize-automation-easy-installer/GUID-CEF1CAA6-AD6F-43EC-B249-4BA81AA2B056.html) makes it, well, *easy* to install vRealize Automation (and vRealize Orchestrator, on the same appliance) and its prerequisites, vRealize Suite Lifecycle Manager (LCM) and Workspace ONE Access (formerly VMware Identity Manager) - provided that you've got enough resources. The vRA virtual appliance deploys with a whopping **40GB** of memory allocated to it. Post-deployment, I found that I was able to trim that down to 30GB without seeming to break anything, but going much lower than that would result in services failing to start.

Anyhoo, each of these VMs will need to be resolvable in DNS, so I started by creating some A records:

|FQDN|IP|
|----|----|
|`lcm.lab.bowdre.net`|`192.168.1.40`|
|`idm.lab.bowdre.net`|`192.168.1.41`|
|`vra.lab.bowdre.net`|`192.168.1.42`|

I then attached the installer ISO to my Windows VM and ran through the installation from there.

![Screenshot 2021-02-05 16.28.41.png](/assets/images/posts-2020/42n3aMim5.png)

Similar to the vCenter deployment process, this one prompts you for all the information it needs up front and then takes care of everything from there. That's great news because this is a pretty long deployment; it took probably two hours from clicking the final "Okay, do it" button to being able to log in to my shiny new vRealize Automation environment.

### Wrap-up

So that's a glimpse into how I built my nested ESXi lab - all for the purpose of being able to develop and test vRealize Automation templates and vRealize Orchestrator workflows in a semi-realistic environment. I've used this setup to write a [vRA integration for using phpIPAM](https://github.com/jbowdre/phpIPAM-for-vRA8) to assign static IP addresses to deployed VMs. I wrote a complicated vRO workflow for generating unique hostnames which fit a corporate naming standard *and* don't conflict with any other names in vCenter, Active Directory, or DNS. I also developed a workflow for (optionally) creating AD objects under appropriate OUs based on properties generated on the cloud template; VMware [just announced](https://blogs.vmware.com/management/2021/02/whats-new-with-vrealize-automation-8-3-technical-overview.html#:~:text=New%20Active%20Directory%20Cloud%20Template%20Properties) similar functionality with vRA 8.3 and, honestly, my approach works much better for my needs anyway. And, most recently, I put the finishing touches on a solution for (optionally) creating static records in a Microsoft DNS server from vRO.

I'll post more about all that work soon but this post has already gone on long enough. Stay tuned!

@ -0,0 +1,33 @@
|
||||||
|
---
|
||||||
|
categories:
|
||||||
|
- Tips
|
||||||
|
date: "2021-02-18T08:34:30Z"
|
||||||
|
header:
|
||||||
|
teaser: assets/images/posts-2020/PPZu_UOGO.png
|
||||||
|
tags:
|
||||||
|
- logs
|
||||||
|
- vmware
|
||||||
|
title: Using VS Code to explore giant log bundles
|
||||||
|
toc: false
|
||||||
|
---
|
||||||
|
|
||||||
|
I recently ran into a peculiar issue after upgrading my vRealize Automation homelab to the new 8.3 release, and the error message displayed in the UI didn't give me a whole lot of information to work with:
|
||||||
|
![Screenshot 2021-02-18 10.27.41.png](/assets/images/posts-2020/IL29_Shlg.png)
|
||||||
|
|
||||||
|
I connected to the vRA appliance to try to find the relevant log excerpt, but [doing so isn't all that straightforward](https://www.stevenbright.com/2020/01/vmware-vrealize-automation-8-0-logs/#:~:text=Access%20Logs%20from%20the%20CLI) given the containerized nature of the services.
|
||||||
|
So instead I used the `vracli log-bundle` command to generate a bundle of all relevant logs, and I then transferred the resulting (2.2GB!) `log-bundle.tar` to my workstation for further investigation. I expanded the tar and ran `tree -P '*.log'` to get a quick idea of what I've got to deal with:
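
In case it's helpful, here's roughly what that looked like from the shell - just a sketch, with the hostname and paths assumed from my lab rather than copied verbatim:

```shell
# on the vRA appliance: generate the bundle
vracli log-bundle
# from the workstation: pull the bundle down and unpack it
scp root@vra.lab.bowdre.net:~/log-bundle.tar .
mkdir log-bundle && tar -xf log-bundle.tar -C log-bundle
# survey just the .log files
tree -P '*.log' log-bundle
```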

![Screenshot 2021-02-18 11.01.56.png](/assets/images/posts-2020/wAa9KjBHO.png)
Ugh. Even if I knew which logs I wanted to look at (and I don't) it would take ages to dig through all of this. There's got to be a better way.

And there is! Visual Studio Code lets you open an entire directory tree in the editor:
![Screenshot 2021-02-18 12.19.17.png](/assets/images/posts-2020/SBKtJ8K1p.png)

You can then "Find in Files" with `Ctrl`+`Shift`+`F`, and VS Code will *very* quickly search through all the files to find what you're looking for:
![Screenshot 2021-02-18 12.25.01.png](/assets/images/posts-2020/PPZu_UOGO.png)

You can also click the "Open in editor" link at the top of the search results to open the matching snippets in a single view:
![Screenshot 2021-02-18 12.31.46.png](/assets/images/posts-2020/kJ_l7gPD2.png)

Adjusting the number at the far top right of that view will dynamically tweak how many context lines are included with each line containing the search term.

In this case, the logs didn't actually tell me what was going wrong - but I felt much better for having explored them! Maybe this little trick will help you track down what's ailing you.

@@ -0,0 +1,721 @@
---
categories:
- vRA8
date: "2021-02-22T08:34:30Z"
header:
  teaser: assets/images/posts-2020/7_QI-Ti8g.png
tags:
- python
- rest
- vmware
title: Integrating {php}IPAM with vRealize Automation 8
---

In a [previous post](vmware-home-lab-on-intel-nuc-9), I described some of the steps I took to stand up a homelab including vRealize Automation (vRA) on an Intel NUC 9. One of my initial goals for that lab was to use it for developing and testing a way for vRA to leverage [phpIPAM](https://phpipam.net/) for static IP assignments. The homelab worked brilliantly for that purpose, and those extra internal networks were a big help when it came to testing. I was able to deploy and configure a new VM to host the phpIPAM instance, install the [VMware vRealize Third-Party IPAM SDK](https://code.vmware.com/web/sdk/1.1.0/vmware-vrealize-automation-third-party-ipam-sdk) on my [Chromebook's Linux environment](setting-up-linux-on-a-new-lenovo-chromebook-duet-bonus-arm64-complications), develop and build the integration component, import it to my vRA environment, and verify that deployments got addressed accordingly.

The resulting integration is available on Github [here](https://github.com/jbowdre/phpIPAM-for-vRA8). This was actually the second integration I'd worked on, having fumbled my way through a [Solarwinds integration](https://github.com/jbowdre/SWIPAMforvRA8) earlier last year. [VMware's documentation](https://docs.vmware.com/en/vRealize-Automation/8.3/Using-and-Managing-Cloud-Assembly/GUID-4A5A481C-FC45-47FB-A120-56B73EB28F01.html) on how to build these things is pretty good, but I struggled to find practical information on how a novice like me could actually go about developing the integration. So maybe these notes will be helpful to anyone seeking to write an integration for a different third-party IP Address Management solution.

If you'd just like to import a working phpIPAM integration into your environment without learning how the sausage is made, you can grab my latest compiled package [here](https://github.com/jbowdre/phpIPAM-for-vRA8/releases/latest). You'll probably still want to look through Steps 0-2 to make sure your IPAM instance is set up similarly to mine.

### Step 0: phpIPAM installation and base configuration
Before even worrying about the SDK, I needed to [get a phpIPAM instance ready](https://phpipam.net/documents/installation/). I started with a small (1vCPU/1GB RAM/16GB HDD) VM attached to my "Home" network (`192.168.1.0/24`). I installed Ubuntu 20.04.1 LTS, and then used [this guide](https://computingforgeeks.com/install-and-configure-phpipam-on-ubuntu-debian-linux/) to install phpIPAM.
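
For reference, the install boils down to something like the below - a sketch from memory rather than a copy of that guide, so the package list may not be exhaustive:

```shell
# LAMP stack plus the PHP modules phpIPAM needs (list abbreviated)
sudo apt update
sudo apt install -y apache2 mariadb-server php libapache2-mod-php \
  php-mysql php-curl php-gd php-gmp php-mbstring php-xml git
# drop the phpIPAM code into the web root and finish setup in the web installer
sudo git clone --recursive https://github.com/phpipam/phpipam.git /var/www/html/phpipam
```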

Once phpIPAM was running and accessible via the web interface, I then used `openssl` to generate a self-signed certificate to be used for the SSL API connection:
```shell
sudo mkdir /etc/apache2/certificate
cd /etc/apache2/certificate/
sudo openssl req -new -newkey rsa:4096 -x509 -sha256 -days 365 -nodes -out apache-certificate.crt -keyout apache.key
```
I edited the Apache config file to bind that new certificate on port 443, and to redirect requests on port 80 to port 443:
```xml
<VirtualHost *:80>
  ServerName ipam.lab.bowdre.net
  Redirect permanent / https://ipam.lab.bowdre.net
</VirtualHost>

<VirtualHost *:443>
  DocumentRoot "/var/www/html/phpipam"
  ServerName ipam.lab.bowdre.net
  <Directory "/var/www/html/phpipam">
    Options Indexes FollowSymLinks
    AllowOverride All
    Require all granted
  </Directory>
  ErrorLog "/var/log/apache2/phpipam-error_log"
  CustomLog "/var/log/apache2/phpipam-access_log" combined
  SSLEngine on
  SSLCertificateFile /etc/apache2/certificate/apache-certificate.crt
  SSLCertificateKeyFile /etc/apache2/certificate/apache.key
</VirtualHost>
```
After restarting Apache, I verified that hitting `http://ipam.lab.bowdre.net` redirected me to `https://ipam.lab.bowdre.net`, and that the connection was secured with the shiny new certificate.
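
A quick `curl` is an easy way to sanity-check the redirect without leaving the terminal (the output shown is what I'd expect from that config, not a pasted transcript):

```shell
curl -sI http://ipam.lab.bowdre.net | grep -E 'HTTP|Location'
# HTTP/1.1 301 Moved Permanently
# Location: https://ipam.lab.bowdre.net/
```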

Remember how I've got a "Home" network as well as [several internal networks](vmware-home-lab-on-intel-nuc-9#networking) which only exist inside the lab environment? I dropped the phpIPAM instance on the Home network to make it easy to connect to, but it doesn't know how to talk to the internal networks where vRA will actually be deploying the VMs. So I added a static route to let it know that traffic to `172.16.0.0/16` would have to go through the VyOS router at `192.168.1.100`.

This is Ubuntu, so I edited `/etc/netplan/99-netcfg-vmware.yaml` to add the `routes` section at the bottom:
```yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    ens160:
      dhcp4: no
      dhcp6: no
      addresses:
        - 192.168.1.14/24
      gateway4: 192.168.1.1
      nameservers:
        search:
          - lab.bowdre.net
        addresses:
          - 192.168.1.5
      routes:
        - to: 172.16.0.0/16
          via: 192.168.1.100
          metric: 100
```
I then ran `sudo netplan apply` so the change would take immediate effect and confirmed the route was working by pinging the vCenter's interface on the `172.16.10.0/24` network:
```shell
john@ipam:~$ sudo netplan apply
john@ipam:~$ ip route
default via 192.168.1.1 dev ens160 proto static
172.16.0.0/16 via 192.168.1.100 dev ens160 proto static metric 100
192.168.1.0/24 dev ens160 proto kernel scope link src 192.168.1.14
john@ipam:~$ ping 172.16.10.12
PING 172.16.10.12 (172.16.10.12) 56(84) bytes of data.
64 bytes from 172.16.10.12: icmp_seq=1 ttl=64 time=0.282 ms
64 bytes from 172.16.10.12: icmp_seq=2 ttl=64 time=0.256 ms
64 bytes from 172.16.10.12: icmp_seq=3 ttl=64 time=0.241 ms
^C
--- 172.16.10.12 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2043ms
rtt min/avg/max/mdev = 0.241/0.259/0.282/0.016 ms
```

Now would also be a good time to go ahead and enable cron jobs so that phpIPAM will automatically scan its defined subnets for changes in IP availability and device status. phpIPAM includes a pair of scripts in `INSTALL_DIR/functions/scripts/`: one for discovering new hosts, and the other for checking the status of previously discovered hosts. So I ran `sudo crontab -e` to edit root's crontab and pasted in these two lines to call both scripts every 15 minutes:
```shell
*/15 * * * * /usr/bin/php /var/www/html/phpipam/functions/scripts/discoveryCheck.php
*/15 * * * * /usr/bin/php /var/www/html/phpipam/functions/scripts/pingCheck.php
```

### Step 1: Configuring phpIPAM API access
Okay, let's now move on to the phpIPAM web-based UI to continue the setup. After logging in at `https://ipam.lab.bowdre.net/`, I clicked on the red **Administration** menu at the right side and selected **phpIPAM Settings**. Under the **Site Settings** section, I enabled the *Prettify links* option, and under the **Feature Settings** section I toggled on the *API* component. I then hit *Save* at the bottom of the page to apply the changes.

Next, I went to the **Users** item on the left-hand menu to create a new user account which will be used by vRA. I named it `vra`, set a password for the account, and made it a member of the `Operators` group, but didn't grant any special module access.
![Screenshot 2021-02-20 14.18.47.png](/assets/images/posts-2020/DiqyOlf5S.png)
![Screenshot 2021-02-20 14.20.49.png](/assets/images/posts-2020/QoxVKC11t.png)

The last step in configuring API access is to create an API key. This is done by clicking the **API** item on that left side menu and then selecting *Create API key*. I gave it the app ID `vra`, granted Read/Write permissions, and set the *App Security* option to "SSL with User token".
![Screenshot 2021-02-20 14.23.50.png](/assets/images/posts-2020/-aPGJhSvz.png)

Once we get things going, our API calls will authenticate with the username and password to get a token and bind that to the app ID.
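
To make that flow concrete, here's roughly what the exchange looks like with `curl` (the hostname, credentials, and token are the same lab examples used throughout this post):

```shell
# POST to the user controller with basic auth to request a token
curl -k -X POST -u vra:passw0rd https://ipam.lab.bowdre.net/api/vra/user/
# {"code":200,"success":true,"data":{"token":"Q66bVm8FTpnmBEJYhl5I4ITp",...}}

# pass that token in a 'token' header on subsequent calls
curl -k -H "token: Q66bVm8FTpnmBEJYhl5I4ITp" https://ipam.lab.bowdre.net/api/vra/subnets/
```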

### Step 2: Configuring phpIPAM subnets
Our fancy new IPAM solution is ready to go - except for the whole bit about managing IPs. We need to tell it about the network segments we'd like it to manage. phpIPAM uses "Sections" to group subnets together, so we start by creating a new Section at **Administration > IP related management > Sections**. I named my new section `Lab`, and pretty much left all the default options. Be sure that the `Operators` group has read/write access to this section and the subnets we're going to create inside it!
![Screenshot 2021-02-20 14.33.39.png](/assets/images/posts-2020/6yo39lXI7.png)

We should also go ahead and create a Nameserver set so that phpIPAM will be able to tell its clients (vRA) what server(s) to use for DNS. Do this at **Administration > IP related management > Nameservers**. I created a new entry called `Lab` and pointed it at my internal DNS server, `192.168.1.5`.
![Screenshot 2021-02-20 14.40.57.png](/assets/images/posts-2020/pDsEh18bx.png)

Okay, we're finally ready to start entering our subnets at **Administration > IP related management > Subnets**. For each one, I entered the Subnet in CIDR format, gave it a useful description, and associated it with my `Lab` section. I expanded the *VLAN* dropdown and used the *Add new VLAN* option to enter the corresponding VLAN information, and also selected the Nameserver I had just created.
![Screenshot 2021-02-20 14.44.20.png](/assets/images/posts-2020/-PHf9oUyM.png)
I also enabled the options *Mark as pool*, *Check hosts status*, *Discover new hosts*, and *Resolve DNS names*.
![Screenshot 2021-02-20 15.03.13.png](/assets/images/posts-2020/SR7oD0jsG.png)

I then used the *Scan subnets for new hosts* button to run a discovery scan against the new subnet.
![Screenshot 2021-02-20 15.06.41.png](/assets/images/posts-2020/4WQ8HWJ2N.png)

The scan only found a single host, `172.16.20.1`, which is the subnet's gateway address hosted by the VyOS router. I used the pencil icon to edit the IP and mark it as the gateway:
![Screenshot 2021-02-20 15.08.43.png](/assets/images/posts-2020/2otDJvqRP.png)

phpIPAM now knows the network address, mask, gateway, VLAN, and DNS configuration for this subnet - all things that will be useful for clients seeking an address. I then repeated these steps for the remaining subnets.
![Screenshot 2021-02-20 15.13.38.png](/assets/images/posts-2020/09RIXJc12.png)

Now for the *real* fun!

### Step 3: Testing the API
Before moving on to developing the integration, it would be good to first get a little bit familiar with the phpIPAM API - and, in the process, validate that everything is set up correctly. First I read through the [API documentation](https://phpipam.net/api/api_documentation/) and some [example API calls](https://phpipam.net/news/api_example_curl/) to get a feel for it. I then started by firing up a `python3` interpreter and defining a few variables as well as importing the `requests` module for interacting with the REST API:
```python
>>> username = 'vra'
>>> password = 'passw0rd'
>>> hostname = 'ipam.lab.bowdre.net'
>>> apiAppId = 'vra'
>>> uri = f'https://{hostname}/api/{apiAppId}/'
>>> auth = (username, password)
>>> import requests
```
Based on reading the API docs, I'll need to use the username and password for initial authentication which will provide me with a token to use for subsequent calls. So I'll construct the URI used for auth, submit a `POST` to authenticate, verify that the authentication was successful (`status_code == 200`), and take a look at the response to confirm that I got a token. (For testing, I'm calling `requests` with `verify=False`; we'll be able to use certificate verification when these calls are made from vRA.)
```python
>>> auth_uri = f'{uri}/user/'
>>> req = requests.post(auth_uri, auth=auth, verify=False)
/usr/lib/python3/dist-packages/urllib3/connectionpool.py:849: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
  InsecureRequestWarning)
>>> req.status_code
200
>>> req.json()
{'code': 200, 'success': True, 'data': {'token': 'Q66bVm8FTpnmBEJYhl5I4ITp', 'expires': '2021-02-22 00:52:35'}, 'time': 0.01}
```
Sweet! There's our token! Let's save it to `token` to make it easier to work with:
```python
>>> token = {"token": req.json()['data']['token']}
>>> token
{'token': 'Q66bVm8FTpnmBEJYhl5I4ITp'}
```
Let's see if we can use our new token against the `subnets` controller to get a list of subnets known to phpIPAM:
```python
>>> subnet_uri = f'{uri}/subnets/'
>>> subnets = requests.get(subnet_uri, headers=token, verify=False)
/usr/lib/python3/dist-packages/urllib3/connectionpool.py:849: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
  InsecureRequestWarning)
>>> subnets.status_code
200
>>> subnets.json()
{'code': 200, 'success': True, 'data': [{'id': '7', 'subnet': '192.168.1.0', 'mask': '24', 'sectionId': '1', 'description': 'Home Network', 'linked_subnet': None, 'firewallAddressObject': None, 'vrfId': None, 'masterSubnetId': '0', 'allowRequests': '0', 'vlanId': None, 'showName': '0', 'device': None, 'permissions': [{'group_id': 3, 'permission': '1', 'name': 'Guests', 'desc': 'default Guest group (viewers)', 'members': False}, {'group_id': 2, 'permission': '2', 'name': 'Operators', 'desc': 'default Operator group', 'members': [{'username': 'vra'}]}], 'pingSubnet': '1', 'discoverSubnet': '1', 'resolveDNS': '1', 'DNSrecursive': '0', 'DNSrecords': '0', 'nameserverId': '1', 'scanAgent': '1', 'customer_id': None, 'isFolder': '0', 'isFull': '0', 'isPool': '0', 'tag': '2', 'threshold': '0', 'location': [], 'editDate': '2021-02-21 22:45:01', 'lastScan': '2021-02-21 22:45:01', 'lastDiscovery': '2021-02-21 22:45:01', 'nameservers': {'id': '1', 'name': 'Google NS', 'namesrv1': '8.8.8.8;8.8.4.4', 'description': 'Google public nameservers', 'permissions': '1;2', 'editDate': None}},...
```
Nice! Let's make it a bit more friendly:
```python
>>> subnets = subnets.json()['data']
>>> for subnet in subnets:
...     print("Found subnet: " + subnet['description'])
...
Found subnet: Home Network
Found subnet: 1610-Management
Found subnet: 1620-Servers-1
Found subnet: 1630-Servers-2
Found subnet: VPN Subnet
Found subnet: 1640-Servers-3
Found subnet: 1650-Servers-4
Found subnet: 1660-Servers-5
```
We're in business!

Now that I know how to talk to phpIPAM via its REST API, it's time to figure out how to get vRA to speak that language.

### Step 4: Getting started with the vRA Third-Party IPAM SDK
I downloaded the SDK from [here](https://code.vmware.com/web/sdk/1.1.0/vmware-vrealize-automation-third-party-ipam-sdk). It's got a pretty good [README](https://github.com/jbowdre/phpIPAM-for-vRA8/blob/main/README_VMware.md) which describes the requirements (Java 8+, Maven 3, Python3, Docker, internet access) as well as how to build the package. I also consulted [this white paper](https://docs.vmware.com/en/vRealize-Automation/8.2/ipam_integration_contract_reqs.pdf) which describes the inputs provided by vRA and the outputs expected from the IPAM integration.

The README tells you to extract the .zip and make a simple modification to the `pom.xml` file to "brand" the integration:
```xml
<properties>
  <provider.name>phpIPAM</provider.name>
  <provider.description>phpIPAM integration for vRA</provider.description>
  <provider.version>1.0.3</provider.version>

  <provider.supportsAddressSpaces>false</provider.supportsAddressSpaces>
  <provider.supportsUpdateRecord>true</provider.supportsUpdateRecord>
  <provider.supportsOnDemandNetworks>false</provider.supportsOnDemandNetworks>

  <user.id>1000</user.id>
</properties>
```
You can then kick off the build with `mvn package -PcollectDependencies -Duser.id=${UID}`, which will (eventually) spit out `./target/phpIPAM.zip`. You can then [import the package to vRA](https://docs.vmware.com/en/vRealize-Automation/8.3/Using-and-Managing-Cloud-Assembly/GUID-410899CA-1B02-4507-96AD-DFE622D2DD47.html) and test it against the `httpbin.org` hostname to validate that the build process works correctly.

You'll notice that the form includes fields for Username, Password, and Hostname; we'll also need to specify the API app ID. This can be done by editing `./src/main/resources/endpoint-schema.json`. I added an `apiAppId` field:
```json
{
  "layout":{
    "pages":[
      {
        "id":"Sample IPAM",
        "title":"Sample IPAM endpoint",
        "sections":[
          {
            "id":"section_1",
            "fields":[
              {
                "id":"apiAppId",
                "display":"textField"
              },
              {
                "id":"privateKeyId",
                "display":"textField"
              },
              {
                "id":"privateKey",
                "display":"passwordField"
              },
              {
                "id":"hostName",
                "display":"textField"
              }
            ]
          }
        ]
      }
    ]
  },
  "schema":{
    "apiAppId":{
      "type":{
        "dataType":"string"
      },
      "label":"API App ID",
      "constraints":{
        "required":true
      }
    },
    "privateKeyId":{
      "type":{
        "dataType":"string"
      },
      "label":"Username",
      "constraints":{
        "required":true
      }
    },
    "privateKey":{
      "label":"Password",
      "type":{
        "dataType":"secureString"
      },
      "constraints":{
        "required":true
      }
    },
    "hostName":{
      "type":{
        "dataType":"string"
      },
      "label":"Hostname",
      "constraints":{
        "required":true
      }
    }
  },
  "options":{
  }
}
```
We've now got the framework in place so let's move on to the first operation we'll need to write. Each operation has its own subfolder under `./src/main/python/`, and each contains (among other things) a `requirements.txt` file which will tell Maven what modules need to be imported and a `source.py` file which is where the magic happens.
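
The directory layout ends up looking roughly like this (abbreviated to just the operations covered in this post):

```
src/main/python/
├── allocate_ip/
│   ├── requirements.txt
│   └── source.py
├── deallocate_ip/
│   ├── requirements.txt
│   └── source.py
├── get_ip_ranges/
│   ├── requirements.txt
│   └── source.py
└── validate_endpoint/
    ├── requirements.txt
    └── source.py
```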

### Step 5: 'Validate Endpoint' action
We basically already wrote this earlier with the manual tests against the phpIPAM API. This operation just needs to receive the endpoint details and credentials from vRA, test the connection against the API, and let vRA know whether or not it was able to authenticate successfully. So let's open `./src/main/python/validate_endpoint/source.py` and get to work.

It's always a good idea to start by reviewing the example payload section so that we'll know what data we have to work with:
```python
'''
Example payload:

"inputs": {
  "authCredentialsLink": "/core/auth/credentials/13c9cbade08950755898c4b89c4a0",
  "endpointProperties": {
    "hostName": "sampleipam.sof-mbu.eng.vmware.com"
  }
}
'''
```

The `do_validate_endpoint` function has a handy comment letting us know that's where we'll drop in our code:
```python
def do_validate_endpoint(self, auth_credentials, cert):
    # Your implementation goes here

    username = auth_credentials["privateKeyId"]
    password = auth_credentials["privateKey"]

    try:
        response = requests.get("https://" + self.inputs["endpointProperties"]["hostName"], verify=cert, auth=(username, password))
```
The example code gives us a nice start at how we'll get our inputs from vRA. So let's expand that a bit:
```python
def do_validate_endpoint(self, auth_credentials, cert):
    # Build variables
    username = auth_credentials["privateKeyId"]
    password = auth_credentials["privateKey"]
    hostname = self.inputs["endpointProperties"]["hostName"]
    apiAppId = self.inputs["endpointProperties"]["apiAppId"]
```
As before, we'll construct the "base" URI by inserting the `hostname` and `apiAppId`, and we'll combine the `username` and `password` into our `auth` variable:
```python
    uri = f'https://{hostname}/api/{apiAppId}/'
    auth = (username, password)
```
I realized that I'd be needing to do the same authentication steps for each one of these operations, so I created a new `auth_session()` function to do the heavy lifting. Other operations will also need to return the authorization token but for this run we really just need to know whether the authentication was successful, which we can do by checking `req.status_code`.
```python
def auth_session(uri, auth, cert):
    auth_uri = f'{uri}/user/'
    req = requests.post(auth_uri, auth=auth, verify=cert)
    return req
```
And we'll call that function from `do_validate_endpoint()`:
```python
    # Test auth connection
    try:
        response = auth_session(uri, auth, cert)

        if response.status_code == 200:
            return {
                "message": "Validated successfully",
                "statusCode": "200"
            }
```
You can view the full code [here](https://github.com/jbowdre/phpIPAM-for-vRA8/blob/main/src/main/python/validate_endpoint/source.py).

After completing each operation, run `mvn package -PcollectDependencies -Duser.id=${UID}` to build again, and then import the package to vRA again. This time, you'll see the new "API App ID" field on the form:
![Screenshot 2021-02-21 16.30.33.png](/assets/images/posts-2020/bpx8iKUHF.png)

Confirm that everything worked correctly by hopping over to the **Extensibility** tab, selecting **Action Runs** on the left, and changing the **User Runs** filter to say *Integration Runs*.
![Screenshot 2021-02-21 19.18.43.png](/assets/images/posts-2020/e4PTJxfqH.png)
Select the newest `phpIPAM_ValidateEndpoint` action and make sure it has a happy green *Completed* status. You can also review the Inputs to make sure they look like what you expected:
```json
{
  "__metadata": {
    "headers": {
      "tokenId": "c/FqqI+i9WF47JkCxBsy8uoQxjyq+nlH0exxLYDRzTk="
    },
    "sourceType": "ipam"
  },
  "endpointProperties": {
    "dcId": "onprem",
    "apiAppId": "vra",
    "hostName": "ipam.lab.bowdre.net",
    "properties": "[{\"prop_key\":\"phpIPAM.IPAM.apiAppId\",\"prop_value\":\"vra\"}]",
    "providerId": "301de00f-d267-4be2-8065-fabf48162dc1",
```
And we can see that the Outputs reflect our successful result:
```json
{
  "message": "Validated successfully",
  "statusCode": "200"
}
```

That's one operation in the bank!

### Step 6: 'Get IP Ranges' action
So vRA can authenticate against phpIPAM; next, let's actually query to get a list of available IP ranges. This happens in `./src/main/python/get_ip_ranges/source.py`. We'll start by pulling over our `auth_session()` function and flesh it out a bit more to return the authorization token:
```python
def auth_session(uri, auth, cert):
    auth_uri = f'{uri}/user/'
    req = requests.post(auth_uri, auth=auth, verify=cert)
    if req.status_code != 200:
        raise requests.exceptions.RequestException('Authentication Failure!')
    token = {"token": req.json()['data']['token']}
    return token
```
We'll then modify `do_get_ip_ranges()` with our needed variables, and then call `auth_session()` to get the necessary token:
```python
def do_get_ip_ranges(self, auth_credentials, cert):
    # Build variables
    username = auth_credentials["privateKeyId"]
    password = auth_credentials["privateKey"]
    hostname = self.inputs["endpoint"]["endpointProperties"]["hostName"]
    apiAppId = self.inputs["endpoint"]["endpointProperties"]["apiAppId"]
    uri = f'https://{hostname}/api/{apiAppId}/'
    auth = (username, password)

    # Auth to API
    token = auth_session(uri, auth, cert)
```
We can then query for the list of subnets, just like we did earlier:
```python
    # Request list of subnets
    subnet_uri = f'{uri}/subnets/'
    ipRanges = []
    subnets = requests.get(f'{subnet_uri}?filter_by=isPool&filter_value=1', headers=token, verify=cert)
    subnets = subnets.json()['data']
```
I decided to add the extra `filter_by=isPool&filter_value=1` argument to the query so that it will only return subnets marked as a pool in phpIPAM. This way I can use phpIPAM for monitoring address usage on a much larger set of subnets while only presenting a handful of those to vRA.
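
If you want to eyeball what that filtered query returns before wiring it into the action, the equivalent `curl` request would look something like this (token and hostname from my earlier testing):

```shell
curl -k -H "token: Q66bVm8FTpnmBEJYhl5I4ITp" \
  "https://ipam.lab.bowdre.net/api/vra/subnets/?filter_by=isPool&filter_value=1"
```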

Now is a good time to consult [that white paper](https://docs.vmware.com/en/VMware-Cloud-services/1.0/ipam_integration_contract_reqs.pdf) to confirm what fields I'll need to return to vRA. That lets me know that I'll need to return `ipRanges` which is a list of `IpRange` objects. `IpRange` requires `id`, `name`, `startIPAddress`, `endIPAddress`, `ipVersion`, and `subnetPrefixLength` properties. It can also accept `description`, `gatewayAddress`, and `dnsServerAddresses` properties, among others. Some of these properties are returned directly by the phpIPAM API, but others will need to be computed on the fly.

For instance, these are pretty direct matches:
```python
    ipRange['id'] = str(subnet['id'])
    ipRange['description'] = str(subnet['description'])
    ipRange['subnetPrefixLength'] = str(subnet['mask'])
```
phpIPAM doesn't return a `name` field but I can construct one that will look like `172.16.20.0/24`:
```python
    ipRange['name'] = f"{str(subnet['subnet'])}/{str(subnet['mask'])}"
```

Working with IP addresses in Python can be greatly simplified by use of the `ipaddress` module, so I added an `import ipaddress` statement near the top of the file. I also added it to `requirements.txt` to make sure it gets picked up by the Maven build. I can then use that to figure out the IP version as well as computing reasonable start and end IP addresses:
```python
    network = ipaddress.ip_network(str(subnet['subnet']) + '/' + str(subnet['mask']))
    ipRange['ipVersion'] = 'IPv' + str(network.version)
    ipRange['startIPAddress'] = str(network[1])
    ipRange['endIPAddress'] = str(network[-2])
```
I'd like to try to get the DNS servers from phpIPAM if they're defined, but I also don't want the whole thing to puke if a subnet doesn't have that defined. phpIPAM returns the DNS servers as a semicolon-delimited string; I need them to look like a Python list:
```python
    try:
        ipRange['dnsServerAddresses'] = [server.strip() for server in str(subnet['nameservers']['namesrv1']).split(';')]
    except:
        ipRange['dnsServerAddresses'] = []
```
I can also nest another API request to find which address is marked as the gateway for a given subnet:
```python
    gw_req = requests.get(f"{subnet_uri}/{subnet['id']}/addresses/?filter_by=is_gateway&filter_value=1", headers=token, verify=cert)
    if gw_req.status_code == 200:
        gateway = gw_req.json()['data'][0]['ip']
        ipRange['gatewayAddress'] = gateway
```
And then I merge each of these `ipRange` objects into the `ipRanges` list which will be returned to vRA:
```python
    ipRanges.append(ipRange)
```
After rearranging a bit and tossing in some logging, here's what I've got:
```python
    for subnet in subnets:
        ipRange = {}
        ipRange['id'] = str(subnet['id'])
        ipRange['name'] = f"{str(subnet['subnet'])}/{str(subnet['mask'])}"
        ipRange['description'] = str(subnet['description'])
        logging.info(f"Found subnet: {ipRange['name']} - {ipRange['description']}.")
        network = ipaddress.ip_network(str(subnet['subnet']) + '/' + str(subnet['mask']))
        ipRange['ipVersion'] = 'IPv' + str(network.version)
        ipRange['startIPAddress'] = str(network[1])
        ipRange['endIPAddress'] = str(network[-2])
        ipRange['subnetPrefixLength'] = str(subnet['mask'])
        # return empty set if no nameservers are defined in IPAM
        try:
            ipRange['dnsServerAddresses'] = [server.strip() for server in str(subnet['nameservers']['namesrv1']).split(';')]
        except:
            ipRange['dnsServerAddresses'] = []
        # try to get the address marked as the gateway in IPAM
        gw_req = requests.get(f"{subnet_uri}/{subnet['id']}/addresses/?filter_by=is_gateway&filter_value=1", headers=token, verify=cert)
        if gw_req.status_code == 200:
            gateway = gw_req.json()['data'][0]['ip']
            ipRange['gatewayAddress'] = gateway
        logging.debug(ipRange)
        ipRanges.append(ipRange)
    # Return results to vRA
    result = {
        "ipRanges" : ipRanges
    }
    return result
```
The full code can be found [here](https://github.com/jbowdre/phpIPAM-for-vRA8/blob/main/src/main/python/get_ip_ranges/source.py). You may notice that I removed all the bits which were in the VMware-provided skeleton about paginating the results. I honestly wasn't entirely sure how to implement that, and I also figured that since I'm already limiting the results by the `is_pool` filter I shouldn't have a problem with the IPAM server returning an overwhelming number of IP ranges. That could be an area for future improvement though.

In any case, it's time to once again use `mvn package -PcollectDependencies -Duser.id=${UID}` to fire off the build, and then import `phpIPAM.zip` into vRA.

vRA runs the `phpIPAM_GetIPRanges` action about every ten minutes so keep checking back on the **Extensibility > Action Runs** view until it shows up. You can then select the action and review the Log to see which IP ranges got picked up:
```log
[2021-02-21 23:14:04,026] [INFO] - Querying for auth credentials
[2021-02-21 23:14:04,051] [INFO] - Credentials obtained successfully!
[2021-02-21 23:14:04,089] [INFO] - Found subnet: 172.16.10.0/24 - 1610-Management.
[2021-02-21 23:14:04,101] [INFO] - Found subnet: 172.16.20.0/24 - 1620-Servers-1.
[2021-02-21 23:14:04,114] [INFO] - Found subnet: 172.16.30.0/24 - 1630-Servers-2.
[2021-02-21 23:14:04,126] [INFO] - Found subnet: 172.16.40.0/24 - 1640-Servers-3.
[2021-02-21 23:14:04,138] [INFO] - Found subnet: 172.16.50.0/24 - 1650-Servers-4.
[2021-02-21 23:14:04,149] [INFO] - Found subnet: 172.16.60.0/24 - 1660-Servers-5.
```
Note that it *did not* pick up my "Home Network" range since it wasn't set to be a pool.

We can also navigate to **Infrastructure > Networks > IP Ranges** to view them in all their glory:
![Screenshot 2021-02-21 17.49.12.png](/assets/images/posts-2020/7_QI-Ti8g.png)

You can then follow [these instructions](https://docs.vmware.com/en/vRealize-Automation/8.3/Using-and-Managing-Cloud-Assembly/GUID-410899CA-1B02-4507-96AD-DFE622D2DD47.html) to associate the external IP ranges with networks available for vRA deployments.

Next, we need to figure out how to allocate an IP.

### Step 7: 'Allocate IP' action
I think we've got a rhythm going now. So we'll dive in to `./src/main/python/allocate_ip/source.py`, create our `auth_session()` function, and add our variables to the `do_allocate_ip()` function. I also created a new `bundle` object to hold the `uri`, `token`, and `cert` items so that I don't have to keep typing those over and over and over.
```python
def auth_session(uri, auth, cert):
    auth_uri = f'{uri}/user/'
    req = requests.post(auth_uri, auth=auth, verify=cert)
    if req.status_code != 200:
        raise requests.exceptions.RequestException('Authentication Failure!')
    token = {"token": req.json()['data']['token']}
    return token

def do_allocate_ip(self, auth_credentials, cert):
    # Build variables
    username = auth_credentials["privateKeyId"]
    password = auth_credentials["privateKey"]
    hostname = self.inputs["endpoint"]["endpointProperties"]["hostName"]
    apiAppId = self.inputs["endpoint"]["endpointProperties"]["apiAppId"]
    uri = f'https://{hostname}/api/{apiAppId}/'
    auth = (username, password)

    # Auth to API
    token = auth_session(uri, auth, cert)
    bundle = {
        'uri': uri,
        'token': token,
        'cert': cert
    }
```
I left the remainder of `do_allocate_ip()` intact but modified its calls to other functions so that my new `bundle` would be included:
```python
    allocation_result = []
    try:
        resource = self.inputs["resourceInfo"]
        for allocation in self.inputs["ipAllocations"]:
            allocation_result.append(allocate(resource, allocation, self.context, self.inputs["endpoint"], bundle))
    except Exception as e:
        try:
            rollback(allocation_result, bundle)
        except Exception as rollback_e:
            logging.error(f"Error during rollback of allocation result {str(allocation_result)}")
            logging.error(rollback_e)
        raise e
```
I also added `bundle` to the `allocate()` function:
```python
def allocate(resource, allocation, context, endpoint, bundle):

    last_error = None
    for range_id in allocation["ipRangeIds"]:

        logging.info(f"Allocating from range {range_id}")
        try:
            return allocate_in_range(range_id, resource, allocation, context, endpoint, bundle)
        except Exception as e:
            last_error = e
            logging.error(f"Failed to allocate from range {range_id}: {str(e)}")

    logging.error("No more ranges. Raising last error")
    raise last_error
```
The heavy lifting is actually handled in `allocate_in_range()`. Right now, my implementation only supports doing a single allocation so I added an escape in case someone asks to do something crazy like allocate *2* IPs. I then set up my variables:
```python
def allocate_in_range(range_id, resource, allocation, context, endpoint, bundle):
    if int(allocation['size']) == 1:
        vmName = resource['name']
        owner = resource['owner']
        uri = bundle['uri']
        token = bundle['token']
        cert = bundle['cert']
    else:
        # TODO: implement allocation of contiguous block of IPs
        raise Exception("Not implemented")
```
I construct a `payload` that will be passed to the phpIPAM API when an IP gets allocated to a VM:
```python
    payload = {
        'hostname': vmName,
        'description': f'Reserved by vRA for {owner} at {datetime.now()}'
    }
```
That timestamp will be handy when reviewing the reservations from the phpIPAM side of things. Be sure to add an appropriate import statement (`from datetime import datetime`) at the top of this file, and include `datetime` in `requirements.txt`.
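
Just to illustrate what ends up in phpIPAM (with a made-up owner value and timestamp), the description looks something like this:

```python
>>> from datetime import datetime
>>> f'Reserved by vRA for jbowdre at {datetime.now()}'
'Reserved by vRA for jbowdre at 2021-02-22 01:31:41.790000'
```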

So now we'll construct the URI and post the allocation request to phpIPAM. We tell it which `range_id` to use and it will return the first available IP.
```python
    allocate_uri = f'{uri}/addresses/first_free/{str(range_id)}/'
    allocate_req = requests.post(allocate_uri, data=payload, headers=token, verify=cert)
    allocate_req = allocate_req.json()
```
Per the white paper, we'll need to return `ipAllocationId`, `ipAddresses`, `ipRangeId`, and `ipVersion` to vRA in an `AllocationResult`. Once again, I'll leverage the `ipaddress` module for figuring the version (and, once again, I'll add it as an import and to the `requirements.txt` file).
```python
    if allocate_req['success']:
        version = ipaddress.ip_address(allocate_req['data']).version
        result = {
            "ipAllocationId": allocation['id'],
            "ipRangeId": range_id,
            "ipVersion": "IPv" + str(version),
            "ipAddresses": [allocate_req['data']]
        }
        logging.info(f"Successfully reserved {str(result['ipAddresses'])} for {vmName}.")
    else:
        raise Exception("Unable to allocate IP!")

    return result
```
I also implemented a hasty `rollback()` in case something goes wrong and we need to undo the allocation:
```python
def rollback(allocation_result, bundle):
    uri = bundle['uri']
    token = bundle['token']
    cert = bundle['cert']
    for allocation in reversed(allocation_result):
        logging.info(f"Rolling back allocation {str(allocation)}")
        ipAddresses = allocation.get("ipAddresses", None)
        for ipAddress in ipAddresses:
            rollback_uri = f'{uri}/addresses/{allocation.get("id")}/'
            requests.delete(rollback_uri, headers=token, verify=cert)

    return
```
The full `allocate_ip` code is [here](https://github.com/jbowdre/phpIPAM-for-vRA8/blob/main/src/main/python/allocate_ip/source.py). Once more, run `mvn package -PcollectDependencies -Duser.id=${UID}` and import the new `phpIPAM.zip` package into vRA. You can then open a Cloud Assembly Cloud Template associated with one of the specified networks and hit the "Test" button to see if it works. You should see a new `phpIPAM_AllocateIP` action run appear on the **Extensibility > Action runs** tab. Check the Log for something like this:
```log
[2021-02-22 01:31:41,729] [INFO] - Querying for auth credentials
[2021-02-22 01:31:41,757] [INFO] - Credentials obtained successfully!
[2021-02-22 01:31:41,773] [INFO] - Allocating from range 12
[2021-02-22 01:31:41,790] [INFO] - Successfully reserved ['172.16.40.2'] for BOW-VLTST-XXX41.
```
You can also check for a reserved address in phpIPAM:
![Screenshot 2021-02-21 19.32.38.png](/assets/images/posts-2020/3BQnEd0bY.png)

Almost done!

### Step 8: 'Deallocate IP' action
The last step is to remove the IP allocation when a vRA deployment gets destroyed. It starts just like the `allocate_ip` action with our `auth_session()` function and variable initialization:
```python
def auth_session(uri, auth, cert):
    auth_uri = f'{uri}/user/'
    req = requests.post(auth_uri, auth=auth, verify=cert)
    if req.status_code != 200:
        raise requests.exceptions.RequestException('Authentication Failure!')
    token = {"token": req.json()['data']['token']}
    return token

def do_deallocate_ip(self, auth_credentials, cert):
    # Build variables
    username = auth_credentials["privateKeyId"]
    password = auth_credentials["privateKey"]
    hostname = self.inputs["endpoint"]["endpointProperties"]["hostName"]
    apiAppId = self.inputs["endpoint"]["endpointProperties"]["apiAppId"]
    uri = f'https://{hostname}/api/{apiAppId}/'
    auth = (username, password)

    # Auth to API
    token = auth_session(uri, auth, cert)
    bundle = {
        'uri': uri,
        'token': token,
        'cert': cert
    }

    deallocation_result = []
    for deallocation in self.inputs["ipDeallocations"]:
        deallocation_result.append(deallocate(self.inputs["resourceInfo"], deallocation, bundle))

    assert len(deallocation_result) > 0
    return {
        "ipDeallocations": deallocation_result
    }
```
And the `deallocate()` function is basically a prettier version of the `rollback()` function from the `allocate_ip` action:
```python
def deallocate(resource, deallocation, bundle):
    uri = bundle['uri']
    token = bundle['token']
    cert = bundle['cert']
    ip_range_id = deallocation["ipRangeId"]
    ip = deallocation["ipAddress"]

    logging.info(f"Deallocating ip {ip} from range {ip_range_id}")

    deallocate_uri = f'{uri}/addresses/{ip}/{ip_range_id}/'
    requests.delete(deallocate_uri, headers=token, verify=cert)
    return {
        "ipDeallocationId": deallocation["id"],
        "message": "Success"
    }
```
You can review the full code [here](https://github.com/jbowdre/phpIPAM-for-vRA8/blob/main/src/main/python/deallocate_ip/source.py). Build the package with Maven, import to vRA, and run another test deployment. The `phpIPAM_DeallocateIP` action should complete successfully. Something like this will be in the log:
```log
[2021-02-22 01:36:29,438] [INFO] - Querying for auth credentials
[2021-02-22 01:36:29,461] [INFO] - Credentials obtained successfully!
[2021-02-22 01:36:29,476] [INFO] - Deallocating ip 172.16.40.3 from range 12
```
And the Outputs section of the Details tab will show:
```json
{
  "ipDeallocations": [
    {
      "message": "Success",
      "ipDeallocationId": "/resources/network-interfaces/8e149a2c-d7aa-4e48-b6c6-153ed288aef3"
    }
  ]
}
```

### Success!
That's it! You can now use phpIPAM for assigning IP addresses to VMs deployed from vRealize Automation 8.x. VMware provides a few additional operations that could be added to this integration in the future (like updating existing records or allocating entire ranges rather than individual IPs) but what I've written so far satisfies the basic requirements, and it works well for my needs.

And maybe, *just maybe*, the steps I went through developing this integration might help with integrating another IPAM solution.

245
content/post/2021-03-29-vra8-custom-provisioning-part-one.md
Normal file
@@ -0,0 +1,245 @@
---
categories:
- vRA8
date: "2021-03-29T08:34:30Z"
header:
  teaser: assets/images/posts-2020/VZaK4btzl.png
tags:
- vmware
- vra
title: 'vRA8 Custom Provisioning: Part One'
---

I recently shared [some details about my little self-contained VMware homelab](vmware-home-lab-on-intel-nuc-9) as well as how I [integrated {php}IPAM into vRealize Automation 8 for assigning IPs to deployed VMs](integrating-phpipam-with-vrealize-automation-8). For my next trick, I'll be crafting a flexible Cloud Template and accompanying vRealize Orchestrator workflow that will help to deploy and configure virtual machines based on a vRA user's input. Buckle up, this is going to be A Ride.

### Objectives
Before getting into the *how*, it would be good to start with the *what* - what exactly are we hoping to accomplish here? For my use case, I'll need a solution which can:
- use a single Cloud Template to provision Windows VMs to one of several designated compute resources across multiple sites.
- generate a unique VM name which matches a defined naming standard.
- allow the requester to specify which site-specific network should be used, and leverage {php}IPAM to assign a static IP.
- pre-stage a computer account in Active Directory in a site-specific Organizational Unit and automatically join the new computer to the domain.
- create a static record in Microsoft DNS for non-domain systems.
- expand the VM's virtual disk *and extend the volume inside the guest* based on request input.
- add specified domain accounts to the guest's local Administrators group based on request input.
- annotate the VM in vCenter with a note to describe the server's purpose and custom attributes to identify the responsible party and originating ticket number.

Looking back, that's kind of a lot. I can see why I've been working on this for months!

### vSphere setup
In production, I'll want to be able to deploy to different compute clusters spanning multiple vCenters. That's a bit difficult to do on a single physical server, but I still wanted to be able to simulate that sort of dynamic resource selection. So for development and testing in my lab, I'll be using two sites - `BOW` and `DRE`. I ditched the complicated "just because I can" vSAN I'd built previously and instead spun up two single-host nested clusters, one for each of my sites:
![vCenter showing the BOW and DRE clusters](/assets/images/posts-2020/KUCwEgEhN.png)

Those hosts have one virtual NIC each on a standard switch connected to my home network, and a second NIC each connected to the ["isolated" internal lab network](vmware-home-lab-on-intel-nuc-9#networking) with all the VLANs for the guests to run on:
![dvSwitch showing attached hosts and dvPortGroups](/assets/images/posts-2020/y8vZEnWqR.png)

### vRA setup
On the vRA side of things, I logged in to the Cloud Assembly portion and went to the Infrastructure tab. I first created a Project named `LAB`, added the vCenter as a Cloud Account, and then created a Cloud Zone for the vCenter as well. On the Compute tab of the Cloud Zone properties, I manually added both the `BOW` and `DRE` clusters.
![BOW and DRE Clusters added to Cloud Zone](/assets/images/posts-2020/sCQKUH07e.png)

I also created a Network Profile and added each of the nested dvPortGroups I had created for this purpose.
![Network Profile with added vSphere networks](/assets/images/posts-2020/LST4LisFl.png)

Each network also gets associated with the related IP Range which was [imported from {php}IPAM](integrating-phpipam-with-vrealize-automation-8).
![IP Range bound to a network](/assets/images/posts-2020/AZsVThaRO.png)

Since each of my hosts only has 100GB of datastore and my Windows template specifies a 60GB VMDK, I went ahead and created a Storage Profile so that deployments would default to being Thin Provisioned.
![Thin-provision storage profile](/assets/images/posts-2020/3vQER.png)

I created a few Flavor Mappings ranging from `micro` (1vCPU|1GB RAM) to `giant` (8vCPU|16GB RAM) but for this resource-constrained lab I'll stick mostly to the `micro`, `tiny` (1vCPU|2GB), and `small` (2vCPU|2GB) sizes.
![T-shirt size Flavor Mappings](/assets/images/posts-2020/lodJlc8Hp.png)

And I created an Image Mapping named `ws2019` which points to a Windows Server 2019 Core template I have stored in my lab's Content Library (cleverly-named "LABrary" for my own amusement).
![Windows Server Image Mapping](/assets/images/posts-2020/6k06ySON7.png)

And with that, my vRA infrastructure is ready for testing a *very* basic deployment.
|
||||||
|
|
||||||
|
### My First Cloud Template
|
||||||
|
Now it's time to leave the Infrastructure tab and visit the Design one, where I'll create a new Cloud Template (what previous versions of vRA called "Blueprints"). I start by dragging one each of the **vSphere > Machine** and **vSphere > Network** entities onto the workspace. I then pop over to the Code tab on the right to throw together some simple YAML statements:
|
||||||
|
![My first Cloud Template!](/assets/images/posts-2020/RtMljqM9x.png)
|
||||||
|
|
||||||
|
VMware's got a [pretty great document](https://docs.vmware.com/en/vRealize-Automation/8.3/Using-and-Managing-Cloud-Assembly/GUID-6BA1DA96-5C20-44BF-9C81-F8132B9B4872.html#list-of-input-properties-2) describing the syntax for these input properties, plus a lot of it is kind of self-explanatory. Let's step through this real quick:
|
||||||
|
```yaml
|
||||||
|
formatVersion: 1
|
||||||
|
inputs:
|
||||||
|
# Image Mapping
|
||||||
|
image:
|
||||||
|
type: string
|
||||||
|
title: Operating System
|
||||||
|
oneOf:
|
||||||
|
- title: Windows Server 2019
|
||||||
|
const: ws2019
|
||||||
|
default: ws2019
|
||||||
|
```
|
||||||
|
`formatVersion` is always gonna be 1 so we'll skip right past that.
|
||||||
|
|
||||||
|
The first input is going to ask the user to select the desired Operating System for this deployment. The `oneOf` type will be presented as a dropdown (with only one option in this case, but I'll leave it this way for future flexibility); the user will see the friendly "Windows Server 2019" `title` which is tied to the `ws2019` `const` value. For now, I'll also set the `default` value of the field so I don't have to actually click the dropdown each time I test the deployment.
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
# Flavor Mapping
|
||||||
|
size:
|
||||||
|
title: Resource Size
|
||||||
|
type: string
|
||||||
|
oneOf:
|
||||||
|
- title: 'Micro [1vCPU|1GB]'
|
||||||
|
const: micro
|
||||||
|
- title: 'Tiny [1vCPU|2GB]'
|
||||||
|
const: tiny
|
||||||
|
- title: 'Small [2vCPU|2GB]'
|
||||||
|
const: small
|
||||||
|
default: small
|
||||||
|
```
|
||||||
|
|
||||||
|
Now I'm asking the user to pick the t-shirt size of the VM. These will correspond to the Flavor Mappings I defined earlier. I again chose to use a `oneOf` data type so that I can show the user more information for each option than is embedded in the name. And I'm setting a `default` value to avoid unnecessary clicking.
|
||||||
|
|
||||||
|
The `resources` section is where the data from the inputs gets applied to the deployment:
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
resources:
|
||||||
|
Cloud_vSphere_Machine_1:
|
||||||
|
type: Cloud.vSphere.Machine
|
||||||
|
properties:
|
||||||
|
image: '${input.image}'
|
||||||
|
flavor: '${input.size}'
|
||||||
|
networks:
|
||||||
|
- network: '${resource.Cloud_vSphere_Network_1.id}'
|
||||||
|
assignment: static
|
||||||
|
Cloud_vSphere_Network_1:
|
||||||
|
type: Cloud.vSphere.Network
|
||||||
|
properties:
|
||||||
|
networkType: existing
|
||||||
|
```
|
||||||
|
|
||||||
|
So I'm connecting the selected `input.image` to the Image Mapping configured in vRA, and the selected `input.size` goes back to the Flavor Mapping that will be used for the deployment. I also specify that `Cloud_vSphere_Machine_1` should be connected to `Cloud_vSphere_Network_1` and that it should use a `static` (as opposed to `dynamic`) IP address. Finally, vRA is told that the `Cloud_vSphere_Network_1` should be an existing vSphere network.
|
||||||
|
|
||||||
|
All together now:
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
formatVersion: 1
|
||||||
|
inputs:
|
||||||
|
# Image Mapping
|
||||||
|
image:
|
||||||
|
type: string
|
||||||
|
title: Operating System
|
||||||
|
oneOf:
|
||||||
|
- title: Windows Server 2019
|
||||||
|
const: ws2019
|
||||||
|
default: ws2019
|
||||||
|
# Flavor Mapping
|
||||||
|
size:
|
||||||
|
title: Resource Size
|
||||||
|
type: string
|
||||||
|
oneOf:
|
||||||
|
- title: 'Micro [1vCPU|1GB]'
|
||||||
|
const: micro
|
||||||
|
- title: 'Tiny [1vCPU|2GB]'
|
||||||
|
const: tiny
|
||||||
|
- title: 'Small [2vCPU|2GB]'
|
||||||
|
const: small
|
||||||
|
default: small
|
||||||
|
resources:
|
||||||
|
Cloud_vSphere_Machine_1:
|
||||||
|
type: Cloud.vSphere.Machine
|
||||||
|
properties:
|
||||||
|
image: '${input.image}'
|
||||||
|
flavor: '${input.size}'
|
||||||
|
networks:
|
||||||
|
- network: '${resource.Cloud_vSphere_Network_1.id}'
|
||||||
|
assignment: static
|
||||||
|
Cloud_vSphere_Network_1:
|
||||||
|
type: Cloud.vSphere.Network
|
||||||
|
properties:
|
||||||
|
networkType: existing
|
||||||
|
```
|
||||||
|
|
||||||
|
Cool! But does it work? Hitting the **Test** button at the bottom right is a great way to validate a template before actually running a deployment. That will confirm that the template syntax, infrastructure, and IPAM configuration are all set up correctly to support this particular deployment.
|
||||||
|
![Test inputs](/assets/images/posts-2020/lNmduGWr1.png)
|
||||||
|
![Test results](/assets/images/posts-2020/BA2BWCd6K.png)
|
||||||
|
|
||||||
|
Looks good! I like to click on the **Provisioning Diagram** link to see a bit more detail about where components were placed and why. That's also an immensely helpful troubleshooting option if the test *isn't* successful.
|
||||||
|
![Provisioning diagram](/assets/images/posts-2020/PIeW8xA2j.png)
|
||||||
|
|
||||||
|
And finally, I can hit that **Deploy** button to actually spin up this VM.
|
||||||
|
![Deploy this sucker](/assets/images/posts-2020/XmtEm51h2.png)
|
||||||
|
|
||||||
|
Each deployment has to have a *unique* deployment name. I got tired of trying to keep up with which names I had already used, so I kind of settled on a [DATE]_[TIME] format for my test deployments. I'll automate this tedious step away in the future.
|
||||||
|
|
||||||
|
I then confirm that the (automatically-selected default) inputs are correct and kick it off.
|
||||||
|
![Deployment inputs](/assets/images/posts-2020/HC6vQMeVT.png)
|
||||||
|
|
||||||
|
The deployment will take a few minutes. I like to click over to the **History** tab to see a bit more detail as things progress.
|
||||||
|
![Deployment history](/assets/images/posts-2020/uklHiv46Y.png)
|
||||||
|
|
||||||
|
It doesn't take too long for activity to show up on the vSphere side of things:
|
||||||
|
![vSphere is cloning the source template](/assets/images/posts-2020/4dNwfNNDY.png)
|
||||||
|
|
||||||
|
And there's the completed VM - notice the statically-applied IP address courtesy of {php}IPAM!
|
||||||
|
![Completed test VM](/assets/images/posts-2020/3-UIo1Ykn.png)
|
||||||
|
|
||||||
|
And I can pop over to the IPAM interface to confirm that the IP has been marked as reserved as well:
|
||||||
|
![Newly-created IPAM reservation](/assets/images/posts-2020/mAfdPLKnp.png)
|
||||||
|
|
||||||
|
Fantastic! But one of my objectives from earlier was to let the user control where a VM gets provisioned. Fortunately it's pretty easy to implement thanks to vRA 8's use of tags.
|
||||||
|
|
||||||
|
### Using tags for resource placement
|
||||||
|
Just about every entity within vRA 8 can have tags applied to it, and you can leverage those tags in some pretty creative and useful ways. For now, I'll start by applying tags to my compute resources; I'll use `comp:bow` for the "BOW Cluster" and `comp:dre` for the "DRE Cluster".
|
||||||
|
![Compute tags](/assets/images/posts-2020/oz1IAp-i0.png)
|
||||||
|
|
||||||
|
I'll also use the `net:bow` and `net:dre` tags to logically divide up the networks between my sites:
|
||||||
|
![Network tags](/assets/images/posts-2020/ngSWbVI4Y.png)
|
||||||
|
|
||||||
|
I can now add an input to the Cloud Template so the user can pick which site they need to deploy to:
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
inputs:
|
||||||
|
# Datacenter location
|
||||||
|
site:
|
||||||
|
type: string
|
||||||
|
title: Site
|
||||||
|
enum:
|
||||||
|
- BOW
|
||||||
|
- DRE
|
||||||
|
# Image Mapping
|
||||||
|
```
|
||||||
|
|
||||||
|
I'm using the `enum` option now instead of `oneOf` since the site names shouldn't require further explanation.
|
||||||
|
|
||||||
|
And then I'll add some `constraints` to the `resources` section, making use of the `to_lower` function from the [cloud template expression syntax](https://docs.vmware.com/en/vRealize-Automation/8.3/Using-and-Managing-Cloud-Assembly/GUID-12F0BC64-6391-4E5F-AA48-C5959024F3EB.html) to automatically convert the selected site name from all-caps to lowercase so it matches the appropriate tag:
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
resources:
|
||||||
|
Cloud_vSphere_Machine_1:
|
||||||
|
type: Cloud.vSphere.Machine
|
||||||
|
properties:
|
||||||
|
image: '${input.image}'
|
||||||
|
flavor: '${input.size}'
|
||||||
|
networks:
|
||||||
|
- network: '${resource.Cloud_vSphere_Network_1.id}'
|
||||||
|
assignment: static
|
||||||
|
constraints:
|
||||||
|
- tag: 'comp:${to_lower(input.site)}'
|
||||||
|
Cloud_vSphere_Network_1:
|
||||||
|
type: Cloud.vSphere.Network
|
||||||
|
properties:
|
||||||
|
networkType: existing
|
||||||
|
constraints:
|
||||||
|
- tag: 'net:${to_lower(input.site)}'
|
||||||
|
```
|
||||||
|
|
||||||
|
So the VM will now only be deployed to the compute resource and networks which are tagged to match the selected Site identifier. I ran another test to make sure I didn't break anything:
|
||||||
|
![Testing against the DRE site](/assets/images/posts-2020/Q-2ZQg_ji.png)
|
||||||
|
|
||||||
|
It came back successful, so I clicked through to see the provisioning diagram. On the network tab, I see that only the last two networks (`d1650-Servers-4` and `d1660-Servers-5`) were considered since the first three didn't match the required `net:dre` tag:
|
||||||
|
![Network provisioning diagram](/assets/images/posts-2020/XVD9QVU-S.png)
|
||||||
|
|
||||||
|
And it's a similar story on the compute tab:
|
||||||
|
![Compute provisioning diagram](/assets/images/posts-2020/URW7vc1ih.png)
|
||||||
|
|
||||||
|
As a final test for this change, I kicked off one deployment to each site to make sure things worked as expected.
|
||||||
|
![vSphere showing one VM at each site](/assets/images/posts-2020/VZaK4btzl.png)
|
||||||
|
|
||||||
|
Nice!
|
||||||
|
|
||||||
|
### Conclusion
|
||||||
|
This was kind of an easy introduction to what I've been doing with vRA 8 these past several months. The basic infrastructure (both in vSphere and vRA) will support far more interesting and flexible deployments as I dig deeper. For now, being able to leverage vRA tags for placing workloads on specific compute resources is a great start.
|
||||||
|
|
||||||
|
Things will get *much* more interesting in the next post, where I'll dig into how I'm using vRealize Orchestrator to generate unique computer names which fit a defined naming standard.
|
603
content/post/2021-04-02-vra8-custom-provisioning-part-two.md
Normal file
|
@ -0,0 +1,603 @@
|
||||||
|
---
|
||||||
|
categories:
|
||||||
|
- vRA8
|
||||||
|
date: "2021-04-02T08:34:30Z"
|
||||||
|
header:
|
||||||
|
teaser: assets/images/posts-2020/HXrAMJrH.png
|
||||||
|
tags:
|
||||||
|
- vmware
|
||||||
|
- vra
|
||||||
|
- vro
|
||||||
|
- javascript
|
||||||
|
title: 'vRA8 Custom Provisioning: Part Two'
|
||||||
|
---
|
||||||
|
|
||||||
|
We [last left off this series](vra8-custom-provisioning-part-one) after I'd set up vRA, performed a test deployment off of a minimal cloud template, and then enhanced the simple template to use vRA tags to let the user specify where a VM should be provisioned. But these VMs have kind of dumb names; right now, they're just getting named after the user who requested them plus a couple of random digits, courtesy of a simple [naming template defined on the project's Provisioning page](https://docs.vmware.com/en/vRealize-Automation/8.3/Using-and-Managing-Cloud-Assembly/GUID-AD400ED7-EB3A-4D36-B9A7-81E100FB3003.html?hWord=N4IghgNiBcIHZgLYEs4HMQF8g):
|
||||||
|
![Naming template](/assets/images/posts-2020/zAF26KJnO.png)
|
||||||
|
|
||||||
|
I could use this naming template to *almost* accomplish what I need from a naming solution, but I don't like that the numbers are random rather than a sequence (I want to deploy `server001` followed by `server002` rather than `server343` followed by `server718`). And it's not enough for me that a VM's name be unique just within the scope of vRA - the hostname should be unique across my entire environment.
|
||||||
|
|
||||||
|
So I'm going to have to get my hands dirty and develop a new solution using vRealize Orchestrator. For right now, it should create a name for a VM that fits a defined naming schema, while also ensuring that the name doesn't already exist within vSphere. (I'll add checks against Active Directory and DNS in the next post.)
|
||||||
|
|
||||||
|
### What's in a name?
|
||||||
|
For my environment, servers should be named like `BOW-DAPP-WEB001` where:
|
||||||
|
- `BOW` indicates the site code.
|
||||||
|
- `D` describes the environment, Development in this case.
|
||||||
|
- `APP` designates the server's function; this one is an application server.
|
||||||
|
- `WEB` describes the primary application running on the server; this one hosts a web server.
|
||||||
|
- `001` is a three-digit sequential designation to differentiate between similar servers.
|
||||||
|
|
||||||
|
So in vRA's custom naming template syntax, this could look something like:
|
||||||
|
- `${site}-${environment}${function}-${application}${###}`
|
||||||
|
|
||||||
|
Okay, this plan is coming together.
|
||||||
|
|
||||||
|
### Adding more inputs to the cloud template
|
||||||
|
I'll start by adding those fields as inputs on my cloud template.
|
||||||
|
|
||||||
|
I already have a `site` input at the top of the template, used for selecting the deployment location. I'll leave that there:
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
inputs:
|
||||||
|
site:
|
||||||
|
type: string
|
||||||
|
title: Site
|
||||||
|
enum:
|
||||||
|
- BOW
|
||||||
|
- DRE
|
||||||
|
```
|
||||||
|
|
||||||
|
I'll add the rest of the naming components below the prompts for image selection and size, starting with a dropdown of environments to pick from:
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
environment:
|
||||||
|
type: string
|
||||||
|
title: Environment
|
||||||
|
enum:
|
||||||
|
- Development
|
||||||
|
- Testing
|
||||||
|
- Production
|
||||||
|
default: Development
|
||||||
|
```
|
||||||
|
|
||||||
|
And a dropdown for those function options:
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
function:
|
||||||
|
type: string
|
||||||
|
title: Function Code
|
||||||
|
oneOf:
|
||||||
|
- title: Application (APP)
|
||||||
|
const: APP
|
||||||
|
- title: Desktop (DSK)
|
||||||
|
const: DSK
|
||||||
|
- title: Network (NET)
|
||||||
|
const: NET
|
||||||
|
- title: Service (SVS)
|
||||||
|
const: SVS
|
||||||
|
- title: Testing (TST)
|
||||||
|
const: TST
|
||||||
|
default: TST
|
||||||
|
```
|
||||||
|
|
||||||
|
And finally a text entry field for the application descriptor. Note that this one includes the `minLength` and `maxLength` constraints to enforce the three-character format.
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
app:
|
||||||
|
type: string
|
||||||
|
title: Application Code
|
||||||
|
minLength: 3
|
||||||
|
maxLength: 3
|
||||||
|
default: xxx
|
||||||
|
```
|
||||||
|
|
||||||
|
*We won't discuss what kind of content this server is going to host...*
|
||||||
|
|
||||||
|
I then need to map these inputs to the resource entity at the bottom of the template so that they can be passed to vRO as custom properties. All of these are direct mappings except for `environment` since I only want the first letter. I use the `substring()` function to achieve that, but wrap it in a conditional so that it won't implode if the environment hasn't been picked yet. I'm also going to add in a `dnsDomain` property that will be useful later when I need to query for DNS conflicts.
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
resources:
|
||||||
|
Cloud_vSphere_Machine_1:
|
||||||
|
type: Cloud.vSphere.Machine
|
||||||
|
properties:
|
||||||
|
image: '${input.image}'
|
||||||
|
flavor: '${input.size}'
|
||||||
|
site: '${input.site}'
|
||||||
|
environment: '${input.environment != "" ? substring(input.environment,0,1) : ""}'
|
||||||
|
function: '${input.function}'
|
||||||
|
app: '${input.app}'
|
||||||
|
dnsDomain: lab.bowdre.net
|
||||||
|
```
|
||||||
|
|
||||||
|
So here's the complete template:
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
formatVersion: 1
|
||||||
|
inputs:
|
||||||
|
site:
|
||||||
|
type: string
|
||||||
|
title: Site
|
||||||
|
enum:
|
||||||
|
- BOW
|
||||||
|
- DRE
|
||||||
|
image:
|
||||||
|
type: string
|
||||||
|
title: Operating System
|
||||||
|
oneOf:
|
||||||
|
- title: Windows Server 2019
|
||||||
|
const: ws2019
|
||||||
|
default: ws2019
|
||||||
|
size:
|
||||||
|
title: Resource Size
|
||||||
|
type: string
|
||||||
|
oneOf:
|
||||||
|
- title: 'Micro [1vCPU|1GB]'
|
||||||
|
const: micro
|
||||||
|
- title: 'Tiny [1vCPU|2GB]'
|
||||||
|
const: tiny
|
||||||
|
- title: 'Small [2vCPU|2GB]'
|
||||||
|
const: small
|
||||||
|
default: small
|
||||||
|
environment:
|
||||||
|
type: string
|
||||||
|
title: Environment
|
||||||
|
enum:
|
||||||
|
- Development
|
||||||
|
- Testing
|
||||||
|
- Production
|
||||||
|
default: Development
|
||||||
|
function:
|
||||||
|
type: string
|
||||||
|
title: Function Code
|
||||||
|
oneOf:
|
||||||
|
- title: Application (APP)
|
||||||
|
const: APP
|
||||||
|
- title: Desktop (DSK)
|
||||||
|
const: DSK
|
||||||
|
- title: Network (NET)
|
||||||
|
const: NET
|
||||||
|
- title: Service (SVS)
|
||||||
|
const: SVS
|
||||||
|
- title: Testing (TST)
|
||||||
|
const: TST
|
||||||
|
default: TST
|
||||||
|
app:
|
||||||
|
type: string
|
||||||
|
title: Application Code
|
||||||
|
minLength: 3
|
||||||
|
maxLength: 3
|
||||||
|
default: xxx
|
||||||
|
resources:
|
||||||
|
Cloud_vSphere_Machine_1:
|
||||||
|
type: Cloud.vSphere.Machine
|
||||||
|
properties:
|
||||||
|
image: '${input.image}'
|
||||||
|
flavor: '${input.size}'
|
||||||
|
site: '${input.site}'
|
||||||
|
environment: '${input.environment != "" ? substring(input.environment,0,1) : ""}'
|
||||||
|
function: '${input.function}'
|
||||||
|
app: '${input.app}'
|
||||||
|
dnsDomain: lab.bowdre.net
|
||||||
|
networks:
|
||||||
|
- network: '${resource.Cloud_vSphere_Network_1.id}'
|
||||||
|
assignment: static
|
||||||
|
constraints:
|
||||||
|
- tag: 'comp:${to_lower(input.site)}'
|
||||||
|
Cloud_vSphere_Network_1:
|
||||||
|
type: Cloud.vSphere.Network
|
||||||
|
properties:
|
||||||
|
networkType: existing
|
||||||
|
constraints:
|
||||||
|
- tag: 'net:${to_lower(input.site)}'
|
||||||
|
```
|
||||||
|
|
||||||
|
Great! Here's what it looks like on the deployment request:
|
||||||
|
![Deployment request with naming elements](/assets/images/posts-2020/iqsHm5zQR.png)
|
||||||
|
|
||||||
|
...but the deployed VM got named `john-329`. Why?
|
||||||
|
![VM deployed with a lame name](/assets/images/posts-2020/lUo1odZ03.png)
|
||||||
|
|
||||||
|
Oh yeah, I need to create a thing that will take these naming elements, mash them together, check for any conflicts, and then apply the new name to the VM. vRealize Orchestrator, it's your time!
|
||||||
|
|
||||||
|
### Setting up vRO config elements
|
||||||
|
When I first started looking for a naming solution, I found a [really handy blog post from Michael Poore](https://blog.v12n.io/custom-naming-in-vrealize-automation-8x-1/) that described his solution for custom naming. I wound up following his general approach but had to adapt it a bit to make the code work in vRO 8 and to add the additional checks I wanted. So credit to Michael for getting me pointed in the right direction!
|
||||||
|
|
||||||
|
I start by hopping over to the Orchestrator interface and navigating to the Configurations section. I'm going to create a new configuration folder named `CustomProvisioning` that will store all the Configuration Elements I'll use to configure my workflows on this project.
|
||||||
|
![Configuration Folder](/assets/images/posts-2020/y7JKSxsqE.png)
|
||||||
|
|
||||||
|
Defining certain variables within configurations separates those from the workflows themselves, making the workflows much more portable. That will allow me to transfer the same code between multiple environments (like my homelab and my lab environment at work) without having to rewrite a bunch of hardcoded values.
|
||||||
|
|
||||||
|
Now I'll create a new configuration within the new folder. This will hold information about the naming schema so I name it `namingSchema`. In it, I create two strings to define the base naming format (up to the numbers on the end) and full name format (including the numbers). I define `baseFormat` and `nameFormat` as templates based on what I put together earlier.
|
||||||
|
![The namingSchema configuration](/assets/images/posts-2020/zLec-3X_D.png)
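The actual strings are only legible in that screenshot, but based on how the `generate hostnameBase` task below substitutes `{{key}}` placeholders and counts `#` characters to size the sequence number, they presumably look something like this (my assumption of the values, built from the naming elements defined earlier):

```
baseFormat: {{site}}-{{environment}}{{function}}-{{app}}
nameFormat: {{site}}-{{environment}}{{function}}-{{app}}###
```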
|
||||||
|
|
||||||
|
I also create another configuration named `computerNames`. When vRO picks a name for a VM, it will record it here as a `number` variable named for the "base name" (`BOW-DAPP-WEB`), with the last-used sequence as the value (`001`). This will make it quick-and-easy to see what the next VM should be named. For now, though, I just need the configuration to not be empty so I add a single variable named `sample` just to take up space.
|
||||||
|
![The computerNames configuration](/assets/images/posts-2020/pqrvUNmsj.png)
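To make the bookkeeping concrete, here's a rough sketch of what I'd expect this element to contain after a couple of deployments (illustrative, not a capture of my actual lab):

```
# computerNames configuration element (illustrative contents)
sample        (string) = sample   # placeholder just so the element isn't empty
BOW-DAPP-WEB  (number) = 2        # last-used sequence; the next VM would be BOW-DAPP-WEB003
```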
|
||||||
|
|
||||||
|
Okay, now it's time to get messy.
|
||||||
|
|
||||||
|
### The vRO workflow
|
||||||
|
Just like with the configuration elements, I create a new workflow folder named `CustomProvisioning` to keep all my workflows together. And then I make a `VM Provisioning` workflow that will be used for pre-provisioning tasks.
|
||||||
|
![Workflow organization](/assets/images/posts-2020/qt-D1mJFE.png)
|
||||||
|
|
||||||
|
On the Inputs/Outputs tab of the workflow, I create a single input named `inputProperties` of type `Properties` which will hold all the information about the deployment coming from the vRA side of things.
|
||||||
|
![inputProperties](/assets/images/posts-2020/tiBJVKYdf.png)
|
||||||
|
|
||||||
|
#### Logging the input properties
|
||||||
|
The first thing I'll want this workflow to do (particularly for testing) is to tell me about the input data from vRA. That will help to eliminate a lot of guesswork. I could just write a script within the workflow to do that, but creating it as a separate action will make it easier to reuse in other workflows. Behold, the `logPayloadProperties` action (nested within the `net.bowdre.utility` module which contains some spoilers for what else is to come!):
|
||||||
|
![image.png](/assets/images/posts-2020/0fSl55whe.png)
|
||||||
|
|
||||||
|
This action has a single input, a `Properties` object named `payload`. (By the way, vRO is pretty particular about variable typing so going forward I'll reference variables as `variableName (type)`.) Here's the JavaScript that will basically loop through each element and write the contents to the vRO debug log:
|
||||||
|
|
||||||
|
```js
|
||||||
|
// JavaScript: logPayloadProperties
|
||||||
|
// Inputs: payload (Properties)
|
||||||
|
// Outputs: none
|
||||||
|
|
||||||
|
System.debug("==== Begin: vRA Event Broker Payload Properties ====");
|
||||||
|
logAllProperties(payload,0);
|
||||||
|
System.debug("==== End: vRA Event Broker Payload Properties ====");
|
||||||
|
|
||||||
|
function logAllProperties(props,indent) {
|
||||||
|
var keys = (props.keys).sort();
|
||||||
|
for each (var key in keys) {
|
||||||
|
var prop = props.get(key);
|
||||||
|
var type = System.getObjectType(prop);
|
||||||
|
if (type == "Properties") {
|
||||||
|
logSingleProperty(key,prop,indent);
|
||||||
|
logAllProperties(prop,indent+1);
|
||||||
|
} else {
|
||||||
|
logSingleProperty(key,prop,indent);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
function logSingleProperty(name,value,i) {
|
||||||
|
var prefix = "";
|
||||||
|
if (i > 0) {
|
||||||
|
var prefix = Array(i+1).join("-") + " ";
|
||||||
|
}
|
||||||
|
System.debug(prefix + name + " :: " + System.getObjectType(value) + " :: " + value);
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
Going back to my VM Provisioning workflow, I drag an Action Element onto the canvas and tie it to my new action, passing in `inputProperties (Properties)` as the input:
|
||||||
|
![image.png](/assets/images/posts-2020/o8CgTjSYm.png)
|
||||||
|
|
||||||
|
#### Event Broker Subscription
|
||||||
|
And at this point I save the workflow. I'm not finished with it - not by a long shot! - but this is a great place to get the workflow plumbed up to vRA and run a quick test. So I go to the vRA interface, hit up the Extensibility tab, and create a new subscription. I name it "VM Provisioning" and set it to fire on the "Compute allocation" event, which will happen right before the VM starts getting created. I link in my VM Provisioning workflow, and also set this as a blocking execution so that no other/future workflows will run until this one completes.
|
||||||
|
![VM Provisioning subscription](/assets/images/posts-2020/IzaMb39C-.png)
|
||||||
|
|
||||||
|
Alrighty, let's test this and see if it works. I head back to the Design tab and kick off another deployment.
|
||||||
|
![Test deployment](/assets/images/posts-2020/6PA6lIOcP.png)
|
||||||
|
|
||||||
|
I'm going to go grab some more coffee while this runs.
|
||||||
|
![Successful deployment](/assets/images/posts-2020/Zyha7vAwi.png)
|
||||||
|
|
||||||
|
And we're back! Now that the deployment completed successfully, I can go back to the Orchestrator view and check the Workflow Runs section to confirm that the VM Provisioning workflow did fire correctly. I can click on it to get more details, and the Logs tab will show me all the lovely data logged by the `logPayloadProperties` action running from the workflow.
|
||||||
|
![Logged payload properties](/assets/images/posts-2020/AiFwzSpWS.png)
|
||||||
|
|
||||||
|
That information can also be seen on the Variables tab:
|
||||||
|
![So many variables](/assets/images/posts-2020/65ECa7nej.png)
|
||||||
|
|
||||||
|
A really handy thing about capturing the data this way is that I can use the Run Again or Debug button to execute the vRO workflow again without having to actually deploy a new VM from vRA. This will be great for testing as I press onward.
|
||||||
|
|
||||||
|
(Note that I can't actually edit the workflow directly from this workflow run; I have to go back into the workflow itself to edit it, but then I can re-run this same workflow run and it will execute the new code with the already-stored variables.)
|
||||||
|
|
||||||
|
#### Wrapper workflow
|
||||||
|
I'm going to use this VM Provisioning workflow as a sort of top-level wrapper. This workflow will have a task to parse the payload and grab the variables that will be needed for naming a VM, and it will also have a task to actually rename the VM, but it's going to delegate the name generation to another nested workflow. Making the workflows somewhat modular will make it easier to make changes in the future if needed.
|
||||||
|
|
||||||
|
Anyway, I drop a Scriptable Task item onto the workflow canvas to handle parsing the payload - I'll call it `parse payload` - and pass it `inputProperties (Properties)` as its input.
|
||||||
|
![parse payload task](/assets/images/posts-2020/aQg91t93a.png)
|
||||||
|
|
||||||
|
The script for this is pretty straightforward:
|
||||||
|
|
||||||
|
```js
|
||||||
|
// JavaScript: parse payload
|
||||||
|
// Inputs: inputProperties (Properties)
|
||||||
|
// Outputs: requestProperties (Properties), originalNames (Array/string)
|
||||||
|
|
||||||
|
var customProperties = inputProperties.customProperties || new Properties();
|
||||||
|
var requestProperties = new Properties();
|
||||||
|
|
||||||
|
requestProperties.site = customProperties.site;
|
||||||
|
requestProperties.environment = customProperties.environment;
|
||||||
|
requestProperties.function = customProperties.function;
|
||||||
|
requestProperties.app = customProperties.app;
|
||||||
|
requestProperties.dnsDomain = customProperties.dnsDomain;
|
||||||
|
|
||||||
|
System.debug("requestProperties: " + requestProperties)
|
||||||
|
|
||||||
|
originalNames = inputProperties.resourceNames || new Array();
|
||||||
|
System.debug("Original names: " + originalNames)
|
||||||
|
```
|
||||||
|
|
||||||
|
It creates a new `requestProperties (Properties)` variable to store the limited set of properties that will be needed for naming - `site`, `environment`, `function`, and `app`. It also stores a copy of the original `resourceNames (Array/string)`, which will be useful when we need to replace the old name with the new one. To make those two new variables accessible to other parts of the workflow, I'll need to also create the variables at the workflow level and map them as outputs of this task:
|
||||||
|
![outputs mapped](/assets/images/posts-2020/4B6wN8QeG.png)
|
||||||
|
|
||||||
|
I'll also drop in a "Foreach Element" item, which will run a linked workflow once for each item in an input array (`originalNames (Array/string)` in this case). I haven't actually created that nested workflow yet so I'm going to skip selecting that for now.
|
||||||
|
![Nested workflow placeholder](/assets/images/posts-2020/UIafeShcv.png)
|
||||||
|
|
||||||
|
The final step of this workflow will be to replace the existing contents of `resourceNames (Array/string)` with the new name. I'll do that with another scriptable task element, named `Apply new names`, which takes `inputProperties (Properties)` and `newNames (Array/string)` as inputs and returns `resourceNames (Array/string)` as a workflow output back to vRA. vRA will see that `resourceNames` has changed and it will update the name of the deployed resource (the VM) accordingly.
|
||||||
|
![Apply new names task](/assets/images/posts-2020/h_PHeT6af.png)
|
||||||
|
|
||||||
|
And here's the script for that task:
|
||||||
|
|
||||||
|
```js
|
||||||
|
// JavaScript: Apply new names
|
||||||
|
// Inputs: inputProperties (Properties), newNames (Array/string)
|
||||||
|
// Outputs: resourceNames (Array/string)
|
||||||
|
|
||||||
|
resourceNames = inputProperties.get("resourceNames");
|
||||||
|
for (var i = 0; i < newNames.length; i++) {
|
||||||
|
System.log("Replacing resourceName '" + resourceNames[i] + "' with '" + newNames[i] + "'");
|
||||||
|
resourceNames[i] = newNames[i];
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
Now's a good time to save this workflow (ignoring the warning about it failing validation for now), and create a new workflow that the VM Provisioning workflow can call to actually generate unique hostnames.
|
||||||
|
|
||||||
|
### Nested workflow
|
||||||
|
I'm a creative person so I'm going to call this workflow "Generate unique hostname". It's going to receive `requestProperties (Properties)` as its sole input, and will return `nextVmName (String)` as its sole output.
|
||||||
|
![Workflow input and output](/assets/images/posts-2020/5bWfqh4ZSE.png)
|
||||||
|
|
||||||
|
I will also need to bind a couple of workflow variables to those configuration elements I created earlier. This is done by creating the variable as usual (`baseFormat (string)` in this case), toggling the "Bind to configuration" option, and then searching for the appropriate configuration. It's important to make sure the selected type matches that of the configuration element - otherwise it won't show up in the list.
|
||||||
|
![Binding a variable to a configuration](/assets/images/posts-2020/PubHnv_jM.png)
|
||||||
|
|
||||||
|
I do the same for the `nameFormat (string)` variable as well.
|
||||||
|
![Configuration variables added to the workflow](/assets/images/posts-2020/7Sb3j2PS3.png)
|
||||||
|
|
||||||
|
#### Task: create lock
|
||||||
|
Okay, on to the schema. This workflow may take a little while to execute, and it would be bad if another deployment came in while it was running - the two runs might both assign the same hostname without realizing it. Fortunately vRO has a locking system which can be used to avoid that. Accordingly, I'll add a scriptable task element to the canvas and call it `create lock`. It will have two inputs used for identifying the lock so that it can be easily removed later on; I create a new variable `lockOwner (string)` with the value `eventBroker` and another named `lockId (string)` set to `namingLock`.
|
||||||
|
![Task: create lock](/assets/images/posts-2020/G0TEJ30003.png)
|
||||||
|
|
||||||
|
The script is very short:
|
||||||
|
|
||||||
|
```js
|
||||||
|
// JavaScript: create lock
|
||||||
|
// Inputs: lockOwner (String), lockId (String)
|
||||||
|
// Outputs: none
|
||||||
|
|
||||||
|
System.debug("Creating lock...")
|
||||||
|
LockingSystem.lockAndWait(lockId, lockOwner)
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Task: generate hostnameBase
|
||||||
|
We're getting to the meat of the operation now - another scriptable task named `generate hostnameBase` which will take the naming components from the deployment properties and stick them together in the form defined in the `nameFormat (String)` configuration. The inputs will be the existing `nameFormat (String)`, `requestProperties (Properties)`, and `baseFormat (String)` variables, and it will output new `hostnameBase (String)` ("`BOW-DAPP-WEB`") and `digitCount (Number)` ("`3`", one for each `#` in the format) variables. I'll also go ahead and initialize `hostnameSeq (Number)` to `0` to prepare for a later step.
|
||||||
|
![Task: generate hostnameBase](/assets/images/posts-2020/XATryy20y.png)
|
||||||
|
|
||||||
|
|
||||||
|
```js
|
||||||
|
// JavaScript: generate hostnameBase
|
||||||
|
// Inputs: nameFormat (String), requestProperties (Properties), baseFormat (String)
|
||||||
|
// Outputs: hostnameBase (String), digitCount (Number), hostnameSeq (Number)
|
||||||
|
|
||||||
|
hostnameBase = baseFormat;
|
||||||
|
digitCount = nameFormat.match(/(#)/g).length;
|
||||||
|
hostnameSeq = 0;
|
||||||
|
|
||||||
|
// Get request keys and drop them into the template
|
||||||
|
for each (var key in requestProperties.keys) {
|
||||||
|
var propValue = requestProperties.get(key);
|
||||||
|
hostnameBase = hostnameBase.replace("{{" + key + "}}", propValue);
|
||||||
|
}
|
||||||
|
|
||||||
|
// Remove leading/trailing special characters from hostname base
|
||||||
|
hostnameBase = hostnameBase.toUpperCase();
|
||||||
|
hostnameBase = hostnameBase.replace(/([-_]$)/g, "");
|
||||||
|
hostnameBase = hostnameBase.replace(/^([-_])/g, "");
|
||||||
|
System.debug("Hostname base: " + hostnameBase)
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Interlude: connecting vRO to vCenter
|
||||||
|
Coming up, I'm going to want to connect to vCenter so I can find out if there are any existing VMs with a similar name. I'll use the vSphere vCenter Plug-in which is included with vRO to facilitate that, but that means I'll first need to set up that connection. So I'll save the workflow I've been working on (save early, save often) and then go run the preloaded "Add a vCenter Server instance" workflow. The first page of required inputs is pretty self-explanatory:
|
||||||
|
![Add a vCenter Server instance - vCenter properties](/assets/images/posts-2020/6Gpxapzd3.png)
|
||||||
|
|
||||||
|
On the connection properties page, I unchecked the per-user connection in favor of using a single service account, the same one that I'm already using for vRA's connection to vCenter.
|
||||||
|
![Add a vCenter Server instance - Connection properties](/assets/images/posts-2020/RuqJhj00_.png)
|
||||||
|
|
||||||
|
After successful completion of the workflow, I can go to Administration > Inventory and confirm that the new endpoint is there:
|
||||||
|
![vCenter plugin endpoint](/assets/images/posts-2020/rUmGPdz2I.png)
|
||||||
|
|
||||||
|
I've only got the one vCenter in my lab. At work, I've got multiple vCenters so I would need to repeat these steps to add each of them as an endpoint.
|
||||||
|
|
||||||
|
#### Task: prepare vCenter SDK connection
|
||||||
|
Anyway, back to my "Generate unique hostname" workflow, where I'll add another scriptable task to prepare the vCenter SDK connection. This one doesn't require any inputs, but will output an array of `VC:SdkConnection` objects:
|
||||||
|
![Task: prepare vCenter SDK connection](/assets/images/posts-2020/ByIWO66PC.png)
|
||||||
|
|
||||||
|
```js
|
||||||
|
// JavaScript: prepare vCenter SDK connection
|
||||||
|
// Inputs: none
|
||||||
|
// Outputs: sdkConnections (Array/VC:SdkConnection)
|
||||||
|
|
||||||
|
sdkConnections = VcPlugin.allSdkConnections
|
||||||
|
System.log("Preparing vCenter SDK connection...")
|
||||||
|
```
|
||||||
|
|
||||||
|
#### ForEach element: search VMs by name
|
||||||
|
Next, I'm going to drop another ForEach element onto the canvas. For each vCenter endpoint in `sdkConnections (Array/VC:SdkConnection)`, it will execute the workflow titled "Get virtual machines by name with PC". I map the required `vc` input to `*sdkConnections (VC:SdkConnection)`, `filter` to `hostnameBase (String)`, and skip `rootVmFolder` since I don't care where a VM resides. And I create a new `vmsByHost (Array/Array)` variable to hold the output.
|
||||||
|
![ForEach: search VMs by name](/assets/images/posts-2020/mnOxV2udH.png)
|
||||||
|
|
||||||
|
#### Task: unpack results for all hosts
|
||||||
|
That `vmsByHost (Array/Array)` object contains any and all VMs which match `hostnameBase (String)`, but they're broken down by the host they're running on. So I use a scriptable task to convert that array-of-arrays into a new array-of-strings containing just the VM names.
|
||||||
|
![Task: unpack results for all hosts](/assets/images/posts-2020/gIEFRnilq.png)
|
||||||
|
|
||||||
|
```js
|
||||||
|
// JavaScript: unpack results for all hosts
|
||||||
|
// Inputs: vmsByHost (Array/Array)
|
||||||
|
// Outputs: vmNames (Array/string)
|
||||||
|
|
||||||
|
var vms = new Array();
|
||||||
|
vmNames = new Array();
|
||||||
|
|
||||||
|
for (host in vmsByHost) {
|
||||||
|
var a = vmsByHost[host]
|
||||||
|
for (vm in a) {
|
||||||
|
vms.push(a[vm])
|
||||||
|
}
|
||||||
|
}
|
||||||
|
vmNames = vms.map(function(i) {return (i.displayName).toUpperCase()})
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Task: generate hostnameSeq & candidateVmName
|
||||||
|
This scriptable task will check the `computerNames` configuration element we created earlier to see if we've already named a VM starting with `hostnameBase (String)`. If such a name exists, we'll increment the number at the end by one, and return that as a new `hostnameSeq (Number)` variable; if it's the first of its kind, `hostnameSeq (Number)` will be set to `1`. And then we'll combine `hostnameBase (String)` and `hostnameSeq (Number)` to create the new `candidateVmName (String)`. If things don't work out, this script will throw `errMsg (String)` so I need to add that as an output exception binding as well.
|
||||||
|
![Task: generate hostnameSeq & candidateVmName](/assets/images/posts-2020/fWlSrD56N.png)
|
||||||
|
|
||||||
|
```js
|
||||||
|
// JavaScript: generate hostnameSeq & candidateVmName
|
||||||
|
// Inputs: hostnameBase (String), digitCount (Number)
|
||||||
|
// Outputs: hostnameSeq (Number), computerNames (ConfigurationElement), candidateVmName (String)
|
||||||
|
|
||||||
|
// Get computerNames configurationElement, which lives in the 'CustomProvisioning' folder
|
||||||
|
// Specify a different path if the CE lives somewhere else
|
||||||
|
var category = Server.getConfigurationElementCategoryWithPath("CustomProvisioning")
|
||||||
|
var elements = category.configurationElements
|
||||||
|
for (var i in elements) {
|
||||||
|
if (elements[i].name == "computerNames") {
|
||||||
|
computerNames = elements[i]
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Lookup hostnameBase and increment sequence value
|
||||||
|
try {
|
||||||
|
var attribute = computerNames.getAttributeWithKey(hostnameBase);
|
||||||
|
hostnameSeq = attribute.value;
|
||||||
|
System.debug("Found " + attribute.name + " with sequence " + attribute.value)
|
||||||
|
} catch (e) {
|
||||||
|
System.debug("Hostname base " + hostnameBase + " does not exist, it will be created.")
|
||||||
|
} finally {
|
||||||
|
hostnameSeq++;
|
||||||
|
if (hostnameSeq.toString().length > digitCount) {
|
||||||
|
errMsg = 'All out of potential VM names, aborting...';
|
||||||
|
throw(errMsg);
|
||||||
|
}
|
||||||
|
System.debug("Adding " + hostnameBase + " with sequence " + hostnameSeq)
|
||||||
|
computerNames.setAttributeWithKey(hostnameBase, hostnameSeq)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Convert number to string and add leading zeroes
|
||||||
|
var hostnameNum = hostnameSeq.toString();
|
||||||
|
var leadingZeroes = new Array(digitCount - hostnameNum.length + 1).join("0");
|
||||||
|
hostnameNum = leadingZeroes + hostnameNum;
|
||||||
|
|
||||||
|
// Produce our candidate VM name
|
||||||
|
candidateVmName = hostnameBase + hostnameNum;
|
||||||
|
candidateVmName = candidateVmName.toUpperCase();
|
||||||
|
System.log("Proposed VM name: " + candidateVmName)
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Task: check for VM name conflicts
|
||||||
|
Now that I know what I'd like to try to name this new VM, it's time to start checking for any potential conflicts. So this task will compare my `candidateVmName (String)` against the existing `vmNames (Array/string)` to see if there are any collisions. If there's a match, it will set a new variable called `conflict (Boolean)` to `true` and also report the issue through the `errMsg (String)` output exception binding. Otherwise it will move on to the next check.
|
||||||
|
![Task: check for VM name conflicts](/assets/images/posts-2020/qmHszypww.png)
|
||||||
|
|
||||||
|
```js
|
||||||
|
// JavaScript: check for VM name conflicts
|
||||||
|
// Inputs: candidateVmName (String), vmNames (Array/string)
|
||||||
|
// Outputs: conflict (Boolean)
|
||||||
|
|
||||||
|
for (i in vmNames) {
|
||||||
|
if (vmNames[i] == candidateVmName) {
|
||||||
|
conflict = true;
|
||||||
|
errMsg = "Found a conflicting VM name!"
|
||||||
|
System.warn(errMsg)
|
||||||
|
throw(errMsg)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
System.log("No VM name conflicts found for " + candidateVmName)
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Conflict resolution
|
||||||
|
So what happens if there *is* a naming conflict? This solution wouldn't be very flexible if it just gave up as soon as it encountered a problem. Fortunately, I planned for this - all I need to do in the event of a conflict is to run the `generate hostnameSeq & candidateVmName` task again to increment `hostnameSeq (Number)` by one, use that to create a new `candidateVmName (String)`, and then continue on with the checks.
|
||||||
|
|
||||||
|
So far, all of the workflow elements have been connected with happy blue lines which show the flow when everything is going according to the plan. Remember that `errMsg (String)` from the last task? When that gets thrown, the flow will switch to follow an angry dashed red line (if there is one). After dropping a new scriptable task onto the canvas, I can click on the blue line connecting it to the previous item and then click the red X to make it go away.
|
||||||
|
![So long, Blue Line!](/assets/images/posts-2020/BOIwhMxKy.png)
|
||||||
|
|
||||||
|
I can then drag the new element away from the "everything is fine" flow, and connect it to the `check for VM name conflicts` element with that angry dashed red line. Once `conflict resolution` completes (successfully), a happy blue line will direct the flow back to `generate hostnameSeq & candidateVmName` so that the sequence can be incremented and the checks performed again. And finally, a blue line will connect the `check for VM name conflicts` task's successful completion to the end of the workflow:
|
||||||
|
![Error -> fix it -> try again](/assets/images/posts-2020/dhcjdDo-E.png)
|
||||||
|
|
||||||
|
All this task really does is clear the `conflict (Boolean)` flag so that's the only output.
|
||||||
|
|
||||||
|
```js
|
||||||
|
// JavaScript: conflict resolution
|
||||||
|
// Inputs: none
|
||||||
|
// Outputs: conflict (Boolean)
|
||||||
|
|
||||||
|
System.log("Conflict encountered, trying a new name...")
|
||||||
|
conflict = false;
|
||||||
|
```
|
||||||
|
|
||||||
|
So if `check for VM name conflicts` encounters a collision with an existing VM name it will set `conflict (Boolean) = true;` and throw `errMsg (String)`, which will divert the flow to the `conflict resolution` task. That task will clear the `conflict (Boolean)` flag and return flow to `generate hostnameSeq & candidateVmName`, which will attempt to increment `hostnameSeq (Number)`. Note that this task doesn't have a dashed red line escape route; if it needs to throw `errMsg (String)` because it has exhausted the number pool, it will abort the workflow entirely.
|
||||||
|
|
||||||
|
#### Task: return nextVmName
|
||||||
|
Assuming that everything has gone according to plan and the workflow has avoided any naming conflicts, it will need to return `nextVmName (String)` back to the `VM Provisioning` workflow. That's as simple as setting it to the last value of `candidateVmName (String)`:
|
||||||
|
![Task: return nextVmName](/assets/images/posts-2020/5QFTPHp5H.png)
|
||||||
|
|
||||||
|
```js
|
||||||
|
// JavaScript: return nextVmName
|
||||||
|
// Inputs: candidateVmName (String)
|
||||||
|
// Outputs: nextVmName (String)
|
||||||
|
|
||||||
|
nextVmName = candidateVmName;
|
||||||
|
System.log(" ***** Selecting [" + nextVmName + "] as the next VM name ***** ")
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Task: remove lock
|
||||||
|
And we should also remove that lock that we created at the start of this workflow.
|
||||||
|
![Task: remove lock](/assets/images/posts-2020/BhBnBh8VB.png)
|
||||||
|
|
||||||
|
```js
|
||||||
|
// JavaScript: remove lock
|
||||||
|
// Inputs: lockId (String), lockOwner (String)
|
||||||
|
// Outputs: none
|
||||||
|
|
||||||
|
System.debug("Releasing lock...")
|
||||||
|
LockingSystem.unlock(lockId, lockOwner)
|
||||||
|
```
|
||||||
|
|
||||||
|
Done! Well, mostly. Right now the workflow only actually releases the lock if it completes successfully. Which brings me to:
|
||||||
|
|
||||||
|
#### Default error handler
|
||||||
|
I can use a default error handler to capture an abort due to running out of possible names, release the lock (with an exact copy of the `remove lock` task), and return (failed) control back to the parent workflow.
|
||||||
|
![Default error handler](/assets/images/posts-2020/afDacKjVx.png)
|
||||||
|
|
||||||
|
Because the error handler will only fire when the workflow has failed catastrophically, I'll want to make sure the parent workflow knows about it. So I'll set the end mode to "Error, throw an exception" and bind it to that `errMsg (String)` variable to communicate the problem back to the parent.
|
||||||
|
![End Mode](/assets/images/posts-2020/R9d8edeFP.png)
|
||||||
|
|
||||||
|
#### Finalizing the VM Provisioning workflow
|
||||||
|
When I had dropped the foreach workflow item into the VM Provisioning workflow earlier, I hadn't configured anything but the name. Now that the nested workflow is complete, I need to fill in the blanks:
|
||||||
|
![Generate unique hostname](/assets/images/posts-2020/F0IZHRj-J.png)
|
||||||
|
|
||||||
|
So for each item in `originalNames (Array/string)`, this will run the workflow named `Generate unique hostname`. The input to the workflow will be `requestProperties (Properties)`, and the output will be `newNames (Array/string)`.
|
||||||
|
|
||||||
|
|
||||||
|
### Putting it all together now
|
||||||
|
Hokay, so. I've got configuration elements which hold the template for how I want servers to be named and also track which names have been used. My cloud template asks the user to input certain details which will be used to create a useful computer name. And I've added an extensibility subscription in Cloud Assembly which will call this vRealize Orchestrator workflow before the VM gets created:
|
||||||
|
![Workflow: VM Provisioning](/assets/images/posts-2020/cONrdrbb6.png)
|
||||||
|
|
||||||
|
This workflow first logs all the properties obtained from the vRA side of things, then parses the properties to grab the necessary details. It then passes that information to a nested workflow to actually generate the hostname. Once it gets a result, it updates the deployment properties with the new name so that vRA can configure the VM accordingly.
|
||||||
|
|
||||||
|
The nested workflow is a bit more complicated:
|
||||||
|
![Workflow: Generate unique hostname](/assets/images/posts-2020/siEJSdeDE.png)
|
||||||
|
|
||||||
|
It first creates a lock to ensure there won't be multiple instances of this workflow running simultaneously, and then processes data coming from the "parent" workflow to extract the details needed for this workflow. It smashes together the naming elements (site, environment, function, etc) to create a naming base, then connects to each defined vCenter to compile a list of any VMs with the same base. The workflow then consults a configuration element to see which (if any) similar names have already been used, and generates a suggested VM name based on that. It then consults the existing VM list to see if there might be any collisions; if so, it flags the conflict and loops back to generate a new name and try again. Once the conflicts are all cleared, the suggested VM name is made official and returned back up to the VM Provisioning workflow.
|
||||||
|
|
||||||
|
Cool. But does it actually work?
|
||||||
|
|
||||||
|
### Testing
|
||||||
|
Remember how I had tested my initial workflow just to see which variables got captured? Now that the workflow has a bit more content, I can just re-run it without having to actually provision a new VM. After doing so, the logging view reports that it worked!
|
||||||
|
![Sweet success!](/assets/images/posts-2020/eZ1BUfesQ.png)
|
||||||
|
|
||||||
|
I can also revisit the `computerNames` configuration element to see the new name reflected there:
|
||||||
|
![More success](/assets/images/posts-2020/krx8rZMmh.png)
|
||||||
|
|
||||||
|
If I run the workflow again, I should see `DRE-DTST-XXX002` assigned, and I do!
|
||||||
|
![Twice as nice](/assets/images/posts-2020/SOPs3mzTs.png)
|
||||||
|
|
||||||
|
And, finally, I can go back to vRA and request a new VM and confirm that the name gets correctly applied to the VM.
|
||||||
|
![#winning](/assets/images/posts-2020/HXrAMJrH.png)
|
||||||
|
|
||||||
|
It's so beautiful!
|
||||||
|
|
||||||
|
### Wrap-up
|
||||||
|
At this point, I'm tired of typing and I'm sure you're tired of reading. In the next installment, I'll go over how I modify this workflow to also check for naming conflicts in Active Directory and DNS. That sounds like it should be pretty simple but, well, you'll see.
|
||||||
|
|
||||||
|
See you then!
|
231
content/post/2021-04-19-vra8-custom-provisioning-part-three.md
Normal file
|
@ -0,0 +1,231 @@
|
||||||
|
---
|
||||||
|
categories:
|
||||||
|
- vRA8
|
||||||
|
date: "2021-04-19T08:34:30Z"
|
||||||
|
header:
|
||||||
|
teaser: assets/images/posts-2020/K6vcxpDj8.png
|
||||||
|
last_modified_at: "2021-10-01"
|
||||||
|
tags:
|
||||||
|
- vmware
|
||||||
|
- vra
|
||||||
|
- vro
|
||||||
|
- javascript
|
||||||
|
- powershell
|
||||||
|
title: 'vRA8 Custom Provisioning: Part Three'
|
||||||
|
---
|
||||||
|
|
||||||
|
Picking up after [Part Two](vra8-custom-provisioning-part-two), I now have a pretty handy vRealize Orchestrator workflow to generate unique hostnames according to a defined naming standard. It even checks against the vSphere inventory to validate the uniqueness. Now I'm going to take it a step (or two, rather) further and extend those checks against Active Directory and DNS.
|
||||||
|
|
||||||
|
### Active Directory
|
||||||
|
#### Adding an AD endpoint
|
||||||
|
Remember how I [used the built-in vSphere plugin](vra8-custom-provisioning-part-two#interlude-connecting-vro-to-vcenter) to let vRO query my vCenter(s) for VMs with a specific name? And how that required first configuring the vCenter endpoint(s) in vRO? I'm going to take a very similar approach here.
|
||||||
|
|
||||||
|
So as before, I'll first need to run the preinstalled "Add an Active Directory server" workflow:
|
||||||
|
![Add an Active Directory server workflow](/assets/images/posts-2020/uUDJXtWKz.png)
|
||||||
|
|
||||||
|
I fill out the Connection tab like so:
|
||||||
|
![Connection tab](/assets/images/posts-2020/U6oMWDal2.png)
|
||||||
|
*I don't have SSL enabled on my homelab AD server so I left that unchecked.*
|
||||||
|
|
||||||
|
On the Authentication tab, I tick the box to use a shared session and specify the service account I'll use to connect to AD. It would be great for later steps if this account has the appropriate privileges to create/delete computer accounts at least within designated OUs.
|
||||||
|
![Authentication tab](/assets/images/posts-2020/7MfV-1uiO.png)
|
||||||
|
|
||||||
|
If you've got multiple AD servers, you can use the options on the Alternative Hosts tab to specify those, saving you from having to create a new configuration for each. I've just got the one AD server in my lab, though, so at this point I just hit Run.
|
||||||
|
|
||||||
|
Once it completes successfully, I can visit the Inventory section of the vRO interface to confirm that the new Active Directory endpoint shows up:
|
||||||
|
![New AD endpoint](/assets/images/posts-2020/vlnle_ekN.png)
|
||||||
|
|
||||||
|
#### checkForAdConflict Action
|
||||||
|
Since I try to keep things modular, I'm going to write a new vRO action within the `net.bowdre.utility` module called `checkForAdConflict` which can be called from the `Generate unique hostname` workflow. It will take in `computerName (String)` as an input and return a boolean `True` if a conflict is found or `False` if the name is available.
|
||||||
|
![Action: checkForAdConflict](/assets/images/posts-2020/JT7pbzM-5.png)
|
||||||
|
|
||||||
|
It's basically going to loop through the Active Directory hosts defined in vRO and search each for a matching computer name. Here's the full code:
|
||||||
|
|
||||||
|
```js
|
||||||
|
// JavaScript: checkForAdConflict action
|
||||||
|
// Inputs: computerName (String)
|
||||||
|
// Outputs: (Boolean)
|
||||||
|
|
||||||
|
var adHosts = AD_HostManager.findAllHosts();
for each (var adHost in adHosts) {
    var computer = ActiveDirectory.getComputerAD(computerName,adHost);
    System.log("Searched AD for: " + computerName);
    if (computer) {
        System.log("Found: " + computer.name);
        return true;
    }
}
// only report "no conflict" after every defined AD host has been searched
System.log("No AD objects found.");
return false;
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Adding it to the workflow
|
||||||
|
Now I can pop back over to my massive `Generate unique hostname` workflow and drop in a new scriptable task between the `check for VM name conflicts` and `return nextVmName` tasks. It will bring in `candidateVmName (String)` as well as `conflict (Boolean)` as inputs, return `conflict (Boolean)` as an output, and `errMsg (String)` will be used for exception handling. If `errMsg (String)` is thrown, the flow will follow the dashed red line back to the `conflict resolution` action.
|
||||||
|
![Action: check for AD conflict](/assets/images/posts-2020/iB1bjdC8C.png)
|
||||||
|
|
||||||
|
I'm using this as a scriptable task so that I can do a little bit of processing before I call the action I created earlier - namely, if `conflict (Boolean)` was already set, the task should skip any further processing. That does mean that I'll need to call the action by both its module and name using `System.getModule("net.bowdre.utility").checkForAdConflict(candidateVmName)`. So here's the full script:
|
||||||
|
|
||||||
|
```js
|
||||||
|
// JavaScript: check for AD conflict task
|
||||||
|
// Inputs: candidateVmName (String), conflict (Boolean)
|
||||||
|
// Outputs: conflict (Boolean)
|
||||||
|
|
||||||
|
if (conflict) {
|
||||||
|
System.log("Existing conflict found, skipping AD check...")
|
||||||
|
} else {
|
||||||
|
var result = System.getModule("net.bowdre.utility").checkForAdConflict(candidateVmName);
|
||||||
|
// remember this returns 'true' if a conflict is encountered
|
||||||
|
if (result == true) {
|
||||||
|
conflict = true;
|
||||||
|
errMsg = "Conflicting AD object found!"
|
||||||
|
System.warn(errMsg)
|
||||||
|
throw(errMsg)
|
||||||
|
} else {
|
||||||
|
System.log("No AD conflict found for " + candidateVmName)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
Cool, so that's the AD check in the bank. Onward to DNS!
|
||||||
|
|
||||||
|
### DNS
|
||||||
|
**[Update]** Thanks to a [kind commenter](https://github.com/jbowdre/jbowdre.github.io/issues/10#issuecomment-932541245), I've learned that my DNS-checking solution detailed below is somewhat unnecessarily complicated. I overlooked it at the time I was putting this together, but vRO _does_ provide a `System.resolveHostName()` function to easily perform DNS lookups. I've updated the [Adding it to the workflow](#adding-it-to-the-workflow-1) section below with the simplified script which eliminates the need for building an external script with dependencies and importing that as a vRO action, but I'm going to leave those notes in place as well in case anyone else (or Future John) might need to leverage a similar approach to solve another issue.
|
||||||
|
|
||||||
|
Seriously. Go ahead and skip to [here](#adding-it-to-the-workflow-1).
|
||||||
|
|
||||||
|
#### The Challenge (Deprecated)
|
||||||
|
JavaScript can't talk directly to Active Directory on its own, but in the previous action I was able to leverage the AD plugin built into vRO to bridge that gap. Unfortunately ~~there isn't~~ _I couldn't find_ a corresponding pre-installed plugin that will work as a DNS client. vRO 8 does introduce support for using other languages like (cross-platform) PowerShell or Python instead of being restricted to just JavaScript... but I wasn't able to find an easy solution for querying DNS from those languages either without requiring external modules. (The cross-platform version of PowerShell doesn't include handy Windows-centric cmdlets like `Get-DnsServerResourceRecord`.)
|
||||||
|
|
||||||
|
So I'll have to get creative here.
|
||||||
|
|
||||||
|
#### The Solution (Deprecated)
|
||||||
|
Luckily, vRO does provide a way to import scripts bundled with their required modules, and I found the necessary clues for doing that [here](https://docs.vmware.com/en/vRealize-Orchestrator/8.3/com.vmware.vrealize.orchestrator-using-client-guide.doc/GUID-3C0CEB11-4079-43DF-B134-08C1D62EE3A4.html). And I found a DNS client written for cross-platform PowerShell in the form of the [DnsClient-PS](https://github.com/rmbolger/DnsClient-PS) module. So I'll write a script locally, package it up with the DnsClient-PS module, and import it as a vRO action.
|
||||||
|
|
||||||
|
I start by creating a folder to store the script and needed module, and then I create the required `handler.ps1` file.
|
||||||
|
|
||||||
|
```shell
|
||||||
|
❯ mkdir checkDnsConflicts
|
||||||
|
❯ cd checkDnsConflicts
|
||||||
|
❯ touch handler.ps1
|
||||||
|
```
|
||||||
|
|
||||||
|
I then create a `Modules` folder and install the DnsClient-PS module:
|
||||||
|
|
||||||
|
```shell
|
||||||
|
❯ mkdir Modules
|
||||||
|
❯ pwsh -c "Save-Module -Name DnsClient-PS -Path ./Modules/ -Repository PSGallery"
|
||||||
|
```
|
||||||
|
|
||||||
|
And then it's time to write the PowerShell script in `handler.ps1`:
|
||||||
|
|
||||||
|
```powershell
|
||||||
|
# PowerShell: checkForDnsConflict script
|
||||||
|
# Inputs: $inputs.hostname (String), $inputs.domain (String)
|
||||||
|
# Outputs: $queryresult (String)
|
||||||
|
#
|
||||||
|
# Returns true if a conflicting record is found in DNS.
|
||||||
|
|
||||||
|
Import-Module DnsClient-PS
|
||||||
|
|
||||||
|
function handler {
|
||||||
|
Param($context, $inputs)
|
||||||
|
$hostname = $inputs.hostname
|
||||||
|
$domain = $inputs.domain
|
||||||
|
$fqdn = $hostname + '.' + $domain
|
||||||
|
Write-Host "Querying DNS for $fqdn..."
|
||||||
|
$resolution = (Resolve-DNS $fqdn)
|
||||||
|
If (-not $resolution.HasError) {
|
||||||
|
Write-Host "Record found:" ($resolution | Select-Object -Expand Answers).ToString()
|
||||||
|
$queryresult = "true"
|
||||||
|
} Else {
|
||||||
|
Write-Host "No record found."
|
||||||
|
$queryresult = "false"
|
||||||
|
}
|
||||||
|
return $queryresult
|
||||||
|
}
|
||||||
|
```
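Note that the handler passes its result back as the *string* `"true"` or `"false"` rather than a boolean, so whatever consumes this action's output will need to compare against those string values.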
|
||||||
|
|
||||||
|
Now to package it up in a `.zip` which I can then import into vRO:
|
||||||
|
|
||||||
|
```shell
|
||||||
|
❯ zip -r --exclude=\*.zip -X checkDnsConflicts.zip .
|
||||||
|
adding: Modules/ (stored 0%)
|
||||||
|
adding: Modules/DnsClient-PS/ (stored 0%)
|
||||||
|
adding: Modules/DnsClient-PS/1.0.0/ (stored 0%)
|
||||||
|
adding: Modules/DnsClient-PS/1.0.0/Public/ (stored 0%)
|
||||||
|
adding: Modules/DnsClient-PS/1.0.0/Public/Set-DnsClientSetting.ps1 (deflated 67%)
|
||||||
|
adding: Modules/DnsClient-PS/1.0.0/Public/Resolve-Dns.ps1 (deflated 67%)
|
||||||
|
adding: Modules/DnsClient-PS/1.0.0/Public/Get-DnsClientSetting.ps1 (deflated 65%)
|
||||||
|
adding: Modules/DnsClient-PS/1.0.0/lib/ (stored 0%)
|
||||||
|
adding: Modules/DnsClient-PS/1.0.0/lib/DnsClient.1.3.1-netstandard2.0.xml (deflated 91%)
|
||||||
|
adding: Modules/DnsClient-PS/1.0.0/lib/DnsClient.1.3.1-netstandard2.0.dll (deflated 57%)
|
||||||
|
adding: Modules/DnsClient-PS/1.0.0/lib/System.Buffers.4.4.0-netstandard2.0.xml (deflated 72%)
|
||||||
|
adding: Modules/DnsClient-PS/1.0.0/lib/System.Buffers.4.4.0-netstandard2.0.dll (deflated 44%)
|
||||||
|
adding: Modules/DnsClient-PS/1.0.0/DnsClient-PS.psm1 (deflated 56%)
|
||||||
|
adding: Modules/DnsClient-PS/1.0.0/Private/ (stored 0%)
|
||||||
|
adding: Modules/DnsClient-PS/1.0.0/Private/Get-NameserverList.ps1 (deflated 68%)
|
||||||
|
adding: Modules/DnsClient-PS/1.0.0/Private/Resolve-QueryOptions.ps1 (deflated 63%)
|
||||||
|
adding: Modules/DnsClient-PS/1.0.0/Private/MockWrappers.ps1 (deflated 47%)
|
||||||
|
adding: Modules/DnsClient-PS/1.0.0/PSGetModuleInfo.xml (deflated 73%)
|
||||||
|
adding: Modules/DnsClient-PS/1.0.0/DnsClient-PS.Format.ps1xml (deflated 80%)
|
||||||
|
adding: Modules/DnsClient-PS/1.0.0/DnsClient-PS.psd1 (deflated 59%)
|
||||||
|
adding: handler.ps1 (deflated 49%)
|
||||||
|
❯ ls
|
||||||
|
checkDnsConflicts.zip handler.ps1 Modules
|
||||||
|
```
|
||||||
|
|
||||||
|
#### checkForDnsConflict action (Deprecated)
|
||||||
|
And now I can go into vRO and create a new action called `checkForDnsConflict` inside my `net.bowdre.utilities` module. This time, I change the Language to `PowerCLI 12 (PowerShell 7.0)` and switch the Type to `Zip` to reveal the Import button.
|
||||||
|
![Preparing to import the zip](/assets/images/posts-2020/sjCtvoZA0.png)
|
||||||
|
|
||||||
|
Clicking that button lets me browse to the file I need to import. I can also set up the two input variables that the script requires, `hostname (String)` and `domain (String)`.
|
||||||
|
![Package imported and variables defined](/assets/images/posts-2020/xPvBx3oVX.png)
|
||||||
|
|
||||||
|
#### Adding it to the workflow
|
||||||
|
Just like with the `check for AD conflict` action, I'll add this onto the workflow as a scriptable task, this time between that action and the `return nextVmName` one. This will take `candidateVmName (String)`, `conflict (Boolean)`, and `requestProperties (Properties)` as inputs, and will return `conflict (Boolean)` as its sole output. The task will use `errMsg (String)` as its exception binding, which will divert flow via the dashed red line back to the `conflict resolution` task.
|
||||||
|
|
||||||
|
![Task: check for DNS conflict](/assets/images/posts-2020/uSunGKJfH.png)
|
||||||
|
|
||||||
|
_[Update] The below script has been altered to drop the unneeded call to my homemade `checkForDnsConflict` action and instead use the built-in `System.resolveHostName()`. Thanks @powertim!_
|
||||||
|
|
||||||
|
```js
|
||||||
|
// JavaScript: check for DNS conflict
|
||||||
|
// Inputs: candidateVmName (String), conflict (Boolean), requestProperties (Properties)
|
||||||
|
// Outputs: conflict (Boolean)
|
||||||
|
|
||||||
|
var domain = requestProperties.dnsDomain
|
||||||
|
if (conflict) {
|
||||||
|
System.log("Existing conflict found, skipping DNS check...")
|
||||||
|
} else {
|
||||||
|
if (System.resolveHostName(candidateVmName + "." + domain)) {
|
||||||
|
conflict = true;
|
||||||
|
errMsg = "Conflicting DNS record found!"
|
||||||
|
System.warn(errMsg)
|
||||||
|
throw(errMsg)
|
||||||
|
} else {
|
||||||
|
System.log("No DNS conflict for " + candidateVmName)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### Testing
|
||||||
|
Once that's all in place, I kick off another deployment to make sure that everything works correctly. After it completes, I can navigate to the **Extensibility > Workflow runs** section of the vRA interface to review the details:
|
||||||
|
![Workflow run success](/assets/images/posts-2020/GZKQbELfM.png)
|
||||||
|
|
||||||
|
It worked!
|
||||||
|
|
||||||
|
But what if there *had* been conflicts? It's important to make sure that works too. I know that if I run that deployment again, the VM will get named `DRE-DTST-XXX008` and then `DRE-DTST-XXX009`. So I'm going to force conflicts by creating an AD object for one and a DNS record for the other.
|
||||||
|
![Making conflicts](/assets/images/posts-2020/6HBIUf6KE.png)
|
||||||
|
|
||||||
|
And I'll kick off another deployment and see what happens.
|
||||||
|
![Workflow success even with conflicts](/assets/images/posts-2020/K6vcxpDj8.png)
|
||||||
|
|
||||||
|
The workflow saw that the last VM was created as `-007` so it first grabbed `-008`. It saw that `-008` already existed in AD so incremented up to try `-009`. The workflow then found that a record for `-009` was present in DNS so bumped it up to `-010`. That name finally passed through the checks and so the VM was deployed with the name `DRE-DTST-XXX010`. Success!
|
||||||
|
|
||||||
|
### Next steps
|
||||||
|
So now I've got a pretty capable workflow for controlled naming of my deployed VMs. The names conform with my established naming scheme and increment predictably in response to naming conflicts in vSphere, Active Directory, and DNS.
|
||||||
|
|
||||||
|
In the next post, I'll be enhancing my cloud template to let users pick which network to use for the deployed VM. That sounds simple, but I'll want the list of available networks to be filtered based on the selected site - that means using a Service Broker custom form to query another vRO action. I will also add the ability to create AD computer objects in a site-specific OU and automatically join the server to the domain. And I'll add notes to the VM to make it easier to remember why it was deployed.
|
||||||
|
|
||||||
|
Stay tuned!
|
|
@ -0,0 +1,92 @@
|
||||||
|
---
|
||||||
|
categories:
|
||||||
|
- Scripts
|
||||||
|
date: "2021-04-29T08:34:30Z"
|
||||||
|
tags:
|
||||||
|
- linux
|
||||||
|
- shell
|
||||||
|
title: Automatic unattended expansion of Linux root LVM volume to fill disk
|
||||||
|
toc: false
|
||||||
|
---
|
||||||
|
|
||||||
|
While working on my [vRealize Automation 8 project](series/vra8), I wanted to let users specify how large a VM's system drive should be and have vRA apply that without any further user intervention. For instance, if the template has a 60GB C: drive and the user specifies that they want it to be 80GB, vRA will embiggen the new VM's VMDK to 80GB and then expand the guest file system to fill up the new free space.
|
||||||
|
|
||||||
|
I'll get into the details of how that's implemented from the vRA side #soon, but first I needed to come up with simple scripts to extend the guest file system to fill the disk.
|
||||||
|
|
||||||
|
This was pretty straight-forward on Windows with a short PowerShell script to grab the appropriate volume and resize it to its full capacity:
|
||||||
|
```powershell
|
||||||
|
$Partition = Get-Volume -DriveLetter C | Get-Partition
|
||||||
|
$Partition | Resize-Partition -Size ($Partition | Get-PartitionSupportedSize).sizeMax
|
||||||
|
```
|
||||||
|
|
||||||
|
It was a bit trickier for Linux systems though. My Linux templates all use LVM to abstract the file systems away from the physical disks, but they may have a different number of physical partitions or different names for the volume groups and logical volumes. So I needed to be able to automagically determine which logical volume was mounted as `/`, which volume group it was a member of, and which partition on which disk is used for that physical volume. I could then expand the physical partition to fill the disk, expand the volume group to fill the now-larger physical volume, grow the logical volume to fill the volume group, and (finally) extend the file system to fill the logical volume.
|
||||||
|
|
||||||
|
I found a great script [here](https://github.com/alpacacode/Homebrewn-Scripts/blob/master/linux-scripts/partresize.sh) that helped with most of those operations, but it required the user to specify the physical and logical volumes. I modified it to auto-detect those, and here's what I came up with:
|
||||||
|
|
||||||
|
```shell
|
||||||
|
#!/bin/bash
|
||||||
|
# This will attempt to automatically detect the LVM logical volume where / is mounted and then
|
||||||
|
# expand the underlying physical partition, LVM physical volume, LVM volume group, LVM logical
|
||||||
|
# volume, and Linux filesystem to consume new free space on the disk.
|
||||||
|
# Adapted from https://github.com/alpacacode/Homebrewn-Scripts/blob/master/linux-scripts/partresize.sh
|
||||||
|
|
||||||
|
extenddisk() {
|
||||||
|
echo -e "\n+++Current partition layout of $disk:+++"
|
||||||
|
parted $disk --script unit s print
|
||||||
|
if [ $logical == 1 ]; then
|
||||||
|
parted $disk --script rm $ext_partnum
|
||||||
|
parted $disk --script "mkpart extended ${ext_startsector}s -1s"
|
||||||
|
parted $disk --script "set $ext_partnum lba off"
|
||||||
|
parted $disk --script "mkpart logical ext2 ${startsector}s -1s"
|
||||||
|
else
|
||||||
|
parted $disk --script rm $partnum
|
||||||
|
parted $disk --script "mkpart primary ext2 ${startsector}s -1s"
|
||||||
|
fi
|
||||||
|
parted $disk --script set $partnum lvm on
|
||||||
|
echo -e "\n\n+++New partition layout of $disk:+++"
|
||||||
|
parted $disk --script unit s print
|
||||||
|
partx -v -a $disk
|
||||||
|
pvresize $pvname
|
||||||
|
lvextend --extents +100%FREE --resize $lvpath
|
||||||
|
echo -e "\n+++New root partition size:+++"
|
||||||
|
df -h / | grep -v Filesystem
|
||||||
|
}
|
||||||
|
export LVM_SUPPRESS_FD_WARNINGS=1
|
||||||
|
mountpoint=$(df --output=source / | grep -v Filesystem) # /dev/mapper/centos-root
|
||||||
|
lvdisplay $mountpoint > /dev/null
|
||||||
|
if [ $? != 0 ]; then
|
||||||
|
echo "Error: $mountpoint does not look like a LVM logical volume. Aborting."
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
echo -e "\n+++Current root partition size:+++"
|
||||||
|
df -h / | grep -v Filesystem
|
||||||
|
lvname=$(lvs --noheadings $mountpoint | awk '{print($1)}') # root
|
||||||
|
vgname=$(lvs --noheadings $mountpoint | awk '{print($2)}') # centos
|
||||||
|
lvpath="/dev/${vgname}/${lvname}" # /dev/centos/root
|
||||||
|
pvname=$(pvs | grep $vgname | tail -n1 | awk '{print($1)}') # /dev/sda2
|
||||||
|
disk=$(echo $pvname | rev | cut -c 2- | rev) # /dev/sda
|
||||||
|
diskshort=$(echo $disk | grep -Po '[^\/]+$') # sda
|
||||||
|
partnum=$(echo $pvname | grep -Po '\d$') # 2
|
||||||
|
startsector=$(fdisk -u -l $disk | grep $pvname | awk '{print $2}') # 2099200
|
||||||
|
layout=$(parted $disk --script unit s print) # Model: VMware Virtual disk (scsi) Disk /dev/sda: 83886080s Sector size (logical/physical): 512B/512B Partition Table: msdos Disk Flags: Number Start End Size Type File system Flags 1 2048s 2099199s 2097152s primary xfs boot 2 2099200s 62914559s 60815360s primary lvm
|
||||||
|
if grep -Pq "^\s$partnum\s+.+?logical.+$" <<< "$layout"; then
|
||||||
|
logical=1
|
||||||
|
ext_partnum=$(parted $disk --script unit s print | grep extended | grep -Po '^\s\d\s' | tr -d ' ')
|
||||||
|
ext_startsector=$(parted $disk --script unit s print | grep extended | awk '{print $2}' | tr -d 's')
|
||||||
|
else
|
||||||
|
logical=0
|
||||||
|
fi
|
||||||
|
parted $disk --script unit s print | if ! grep -Pq "^\s$partnum\s+.+?[^,]+?lvm\s*$"; then
|
||||||
|
echo -e "Error: $pvname seems to have some flags other than 'lvm' set."
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
if ! (fdisk -u -l $disk | grep $disk | tail -1 | grep $pvname | grep -q "Linux LVM"); then
|
||||||
|
echo -e "Error: $pvname is not the last LVM volume on disk $disk."
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
ls /sys/class/scsi_device/*/device/rescan | while read path; do echo 1 > $path; done
|
||||||
|
ls /sys/class/scsi_host/host*/scan | while read path; do echo "- - -" > $path; done
|
||||||
|
extenddisk
|
||||||
|
```
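A couple of notes on the flow: the sanity checks up front will bail out if `/` isn't actually on LVM, if the physical volume's partition has unexpected flags, or if the PV isn't the last partition on the disk (the script can only grow a partition into free space that comes *after* it). The two `/sys/class/scsi_*` loops near the end force the guest to rescan its storage so that the kernel notices the enlarged disk before `extenddisk` rewrites the partition table.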
|
||||||
|
|
||||||
|
And it works beautifully within my environment. Hopefully it'll work in yours too if you have a similar need!
|
|
@ -0,0 +1,42 @@
|
||||||
|
---
|
||||||
|
categories:
|
||||||
|
- Scripts
|
||||||
|
date: "2021-04-29T08:34:30Z"
|
||||||
|
tags:
|
||||||
|
- windows
|
||||||
|
- powershell
|
||||||
|
title: Using PowerShell and a Scheduled Task to apply Windows Updates
|
||||||
|
toc: false
|
||||||
|
---
|
||||||
|
|
||||||
|
In the same vein as [my script to automagically resize a Linux LVM volume to use up free space on a disk](automatic-unattended-expansion-of-linux-root-lvm-volume-to-fill-disk), I wanted a way to automatically apply Windows updates for servers deployed by [my vRealize Automation environment](series/vra8). I'm only really concerned with Windows Server 2019, which includes the [built-in Windows Update Provider PowerShell module](https://4sysops.com/archives/scan-download-and-install-windows-updates-with-powershell/). So this could be as simple as `Install-WUUpdates -Updates (Start-WUScan)` to scan for and install any available updates.
|
||||||
|
|
||||||
|
Unfortunately, I found that this approach can take a long time to run and often exceeded the timeout limits imposed upon my ABX script, causing the PowerShell session to end and terminating the update process. I really needed a way to do this without requiring a persistent session.
|
||||||
|
|
||||||
|
After further experimentation, I settled on using PowerShell to create a one-time scheduled task that would run the updates and reboot, if necessary. I also wanted the task to automatically delete itself after running to avoid cluttering up the task scheduler library - and that last item had me quite stumped until I found [this blog post with the solution](https://iamsupergeek.com/self-deleting-scheduled-task-via-powershell/).
|
||||||
|
|
||||||
|
So here's what I put together:
|
||||||
|
```powershell
|
||||||
|
# This can be easily pasted into a remote PowerShell session to automatically install any available updates and reboot.
|
||||||
|
# It creates a scheduled task to start the update process after a one-minute delay so that you don't have to maintain
|
||||||
|
# the session during the process (or have the session timeout), and it also sets the task to automatically delete itself 2 hours later.
|
||||||
|
#
|
||||||
|
# This leverages the Windows Update Provider PowerShell module which is included in Windows 10 1709+ and Windows Server 2019.
|
||||||
|
#
|
||||||
|
# Adapted from https://iamsupergeek.com/self-deleting-scheduled-task-via-powershell/
|
||||||
|
|
||||||
|
$action = New-ScheduledTaskAction -Execute 'C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe' -Argument '-NoProfile -WindowStyle Hidden -Command "& {Install-WUUpdates -Updates (Start-WUScan); if (Get-WUIsPendingReboot) {shutdown.exe /f /r /d p:2:4 /t 120 /c `"Rebooting to apply updates`"}}"'
|
||||||
|
$trigger = New-ScheduledTaskTrigger -Once -At ([DateTime]::Now.AddMinutes(1))
|
||||||
|
$settings = New-ScheduledTaskSettingsSet -Compatibility Win8 -Hidden
|
||||||
|
Register-ScheduledTask -Action $action -Trigger $trigger -Settings $settings -TaskName "Initial_Updates" -User "NT AUTHORITY\SYSTEM" -RunLevel Highest
|
||||||
|
$task = Get-ScheduledTask -TaskName "Initial_Updates"
|
||||||
|
$task.Triggers[0].StartBoundary = [DateTime]::Now.AddMinutes(1).ToString("yyyy-MM-dd'T'HH:mm:ss")
|
||||||
|
$task.Triggers[0].EndBoundary = [DateTime]::Now.AddHours(2).ToString("yyyy-MM-dd'T'HH:mm:ss")
|
||||||
|
$task.Settings.AllowHardTerminate = $True
|
||||||
|
$task.Settings.DeleteExpiredTaskAfter = 'PT0S'
|
||||||
|
$task.Settings.ExecutionTimeLimit = 'PT2H'
|
||||||
|
$task.Settings.Volatile = $False
|
||||||
|
$task | Set-ScheduledTask
|
||||||
|
```
|
||||||
|
|
||||||
|
It creates the task, sets it to run in one minute, and then updates the task's configuration to make it auto-expire and delete two hours later. When triggered, the task installs all available updates and (if necessary) reboots the system after a 2-minute countdown (which an admin could cancel with `shutdown /a`, if needed). This could be handy for pasting in from a remote PowerShell session and works great when called from a vRA ABX script too!
|
217
content/post/2021-05-18-vra8-custom-provisioning-part-four.md
Normal file
|
@ -0,0 +1,217 @@
|
||||||
|
---
|
||||||
|
categories:
|
||||||
|
- vRA8
|
||||||
|
date: "2021-05-18T08:34:30Z"
|
||||||
|
header:
|
||||||
|
teaser: assets/images/posts-2020/hFPeakMxn.png
|
||||||
|
tags:
|
||||||
|
- vmware
|
||||||
|
- vra
|
||||||
|
- vro
|
||||||
|
- javascript
|
||||||
|
title: 'vRA8 Custom Provisioning: Part Four'
|
||||||
|
---
|
||||||
|
|
||||||
|
My [last post in this series](vra8-custom-provisioning-part-three) marked the completion of the vRealize Orchestrator workflow that I use for pre-provisioning tasks, namely generating a unique *sequential* hostname which complies with a defined naming standard and doesn't conflict with any existing records in vSphere, Active Directory, or DNS. That takes care of many of the "back-end" tasks for a simple deployment.
|
||||||
|
|
||||||
|
This post will add in some "front-end" operations, like creating a customized VM request form in Service Broker and dynamically populating a drop-down with a list of networks available at the user-selected deployment site. I'll also take care of some housekeeping items like automatically generating a unique deployment name.
|
||||||
|
|
||||||
|
### Getting started with Service Broker Custom Forms
|
||||||
|
So far, I've been working either in the Cloud Assembly or Orchestrator UIs, both of which are really geared toward administrators. Now I'm going to be working with Service Broker which will provide the user-facing front-end. This is where "normal" users will be able to submit provisioning requests without having to worry about any of the underlying infrastructure or orchestration.
|
||||||
|
|
||||||
|
Before I can do anything with my Cloud Template in the Service Broker UI, though, I'll need to release it from Cloud Assembly. I do this by opening the template on the *Design* tab and clicking the *Version* button at the bottom of the screen. I'll label this as `1.0` and tick the checkbox to *Release this version to the catalog*.
|
||||||
|
![Releasing the Cloud Template to the Service Broker catalog](/assets/images/posts-2020/0-9BaWJqq.png)
|
||||||
|
|
||||||
|
I can then go to the Service Broker UI and add a new Content Source for my Cloud Assembly templates.
|
||||||
|
![Add a new Content Source](/assets/images/posts-2020/4X1dPG_Rq.png)
|
||||||
|
![Adding a new Content Source](/assets/images/posts-2020/af-OEP5Tu.png)
|
||||||
|
After hitting the *Create & Import* button, all released Cloud Templates in the selected Project will show up in the Service Broker *Content* section:
|
||||||
|
![New content!](/assets/images/posts-2020/Hlnnd_8Ed.png)
|
||||||
|
|
||||||
|
In order for users to deploy from this template, I also need to go to *Content Sharing*, select the Project, and share the content. This can be done either at the Project level or by selecting individual content items.
|
||||||
|
![Content sharing](/assets/images/posts-2020/iScnhmzVY.png)
|
||||||
|
|
||||||
|
That template now appears on the Service Broker *Catalog* tab:
|
||||||
|
![Catalog items](/assets/images/posts-2020/09faF5-Fm.png)
|
||||||
|
|
||||||
|
That's cool and all, and I could go ahead and request a deployment off of that catalog item right now - but I'm really interested in being able to customize the request form. I do that by clicking on the little three-dot menu icon next to the Content entry and selecting the *Customize form* option.
|
||||||
|
![Customize form](/assets/images/posts-2020/ZPsS0oZuc.png)
|
||||||
|
|
||||||
|
When you start out, the custom form kind of jumbles up the available fields. So I'm going to start by dragging-and-dropping the fields to resemble the order defined in the Cloud Template:
|
||||||
|
![image.png](/assets/images/posts-2020/oLwUg1k6T.png)
|
||||||
|
|
||||||
|
In addition to rearranging the request form fields, Custom Forms also provide significant control over how the form behaves. You can change how a field is displayed, define default values, make fields dependent upon other fields and more. For instance, all of my templates and resources belong to a single project so making the user select the project (from a set of 1) is kind of redundant. Every deployment has to be tied to a project so I can't just remove that field, but I can select the "Project" field on the canvas and change its *Visibility* to "No" to hide it. It will silently pass along the correct project ID in the background without cluttering up the form.
|
||||||
|
![Hiding the Project field](/assets/images/posts-2020/4flvfGC54.png)
|
||||||
|
|
||||||
|
How about that Deployment Name field? In my tests, I'd been manually creating a string of numbers to uniquely identify the deployment, but I'm not going to ask my users to do that. Instead, I'll leverage another great capability of Custom Forms - tying a field value to a result of a custom vRO action!
|
||||||
|
|
||||||
|
### Automatic deployment naming
|
||||||
|
*[Update] I've since come up with what I think is a better approach to handling this. Check it out [here](vra8-automatic-deployment-naming-another-take)!*
|
||||||
|
|
||||||
|
That means it's time to dive back into the vRealize Orchestrator interface and whip up a new action for this purpose. I created a new action within my existing `net.bowdre.utility` module called `createDeploymentName`.
|
||||||
|
![createDeploymentName action](/assets/images/posts-2020/GMCWhns7u.png)
|
||||||
|
|
||||||
|
A good deployment name *must* be globally unique, and it would be great if it could also convey some useful information like who requested the deployment, which template it is being deployed from, and the purpose of the server. The `siteCode (String)`, `envCode (String)`, `functionCode (String)`, and `appCode (String)` variables from the request form will do a great job of describing the server's purpose. I can also pass in some additional information from the Service Broker form like `catalogItemName (String)` to get the template name and `requestedByName (String)` to identify the user making the request. So I'll set all those as inputs to my action:
|
||||||
|
![createDeploymentName inputs](/assets/images/posts-2020/bCKrtn05o.png)
|
||||||
|
|
||||||
|
I also went ahead and specified that the action will return a String.
|
||||||
|
|
||||||
|
And now for the code. I really just want to mash all those variables together into a long string, and I'll also add a timestamp to make sure each deployment name is truly unique.
|
||||||
|
|
||||||
|
```js
|
||||||
|
// JavaScript: createDeploymentName
|
||||||
|
// Inputs: catalogItemName (String), requestedByName (String), siteCode (String),
|
||||||
|
// envCode (String), functionCode (String), appCode (String)
|
||||||
|
// Returns: deploymentName (String)
|
||||||
|
|
||||||
|
var deploymentName = ''
|
||||||
|
|
||||||
|
// we don't want to evaluate this until all requested fields have been completed
|
||||||
|
if (catalogItemName != '' && requestedByName != null && siteCode != null && envCode != null && functionCode != null && appCode != null) {
|
||||||
|
var date = new Date()
|
||||||
|
deploymentName = requestedByName + "_" + catalogItemName + "_" + siteCode + "-" + envCode.substring(0,1) + functionCode + "-" + appCode.toUpperCase() + "-(" + date.toISOString() + ")"
|
||||||
|
System.debug("Returning deploymentName: " + deploymentName)
|
||||||
|
}
|
||||||
|
return deploymentName
|
||||||
|
```
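To illustrate with some hypothetical inputs: a request from user `jack` for the `WindowsDemo` template at site `DRE` with environment `Development`, function code `TST`, and app code `xxx` would produce a name like `jack_WindowsDemo_DRE-DTST-XXX-(2021-05-18T08:34:30.000Z)`.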
|
||||||
|
|
||||||
|
With that sorted, I can go back to the Service Broker interface to modify the custom form a bit more. I select the "Deployment Name" field and click over to the Values tab on the right. There, I set the *Value source* to "External source" and *Select action* to the new action I just created, `net.bowdre.utility/createDeploymentName`. (If the action doesn't appear in the search field, go to *Infrastructure > Integrations > Embedded-VRO* and click the "Start Data Collection" button to force vRA to update its inventory of vRO actions and workflows.) I then map all the action's inputs to properties available on the request form.
|
||||||
|
![Linking the action](/assets/images/posts-2020/mpbPukEeB.png)
|
||||||
|
|
||||||
|
The last step before testing is to click that *Enable* button to activate the custom form, and then the *Save* button to save my work. So did it work? Let's head to the *Catalog* tab and open the request:
|
||||||
|
![Screen recording 2021-05-10 17.01.37.gif](/assets/images/posts-2020/tybyj-5dG.gif)
|
||||||
|
|
||||||
|
Cool! So it's dynamically generating the deployment name based on selections made on the form. Now that it works, I can go back to the custom form and set the "Deployment Name" field to be invisible just like the "Project" one.
|
||||||
|
|
||||||
|
### Per-site network selection
|
||||||
|
So far, vRA has been automatically placing VMs on networks based solely on [which networks are tagged as available](vra8-custom-provisioning-part-one#using-tags-for-resource-placement) for the selected site. I'd like to give my users a bit more control over which network their VMs get attached to, particularly as some networks may be set aside for different functions or have different firewall rules applied.
|
||||||
|
|
||||||
|
As a quick recap, I've got five networks available for vRA, split across my two sites using tags:
|
||||||
|
|
||||||
|
|Name |Subnet |Site |Tags |
|
||||||
|
| --- | --- | --- | --- |
|
||||||
|
| d1620-Servers-1 | 172.16.20.0/24 | BOW | `net:bow` |
|
||||||
|
| d1630-Servers-2 | 172.16.30.0/24 | BOW | `net:bow` |
|
||||||
|
| d1640-Servers-3 | 172.16.40.0/24 | BOW | `net:bow` |
|
||||||
|
| d1650-Servers-4 | 172.16.50.0/24 | DRE | `net:dre` |
|
||||||
|
| d1660-Servers-5 | 172.16.60.0/24 | DRE | `net:dre` |
|
||||||
|
|
||||||
|
I'm going to add additional tags to these networks to further define their purpose.
|
||||||
|
|
||||||
|
|Name |Purpose |Tags |
|
||||||
|
| --- | --- | --- |
|
||||||
|
| d1620-Servers-1 |Management | `net:bow`, `net:mgmt` |
|
||||||
|
| d1630-Servers-2 | Front-end | `net:bow`, `net:front` |
|
||||||
|
| d1640-Servers-3 | Back-end | `net:bow`, `net:back` |
|
||||||
|
| d1650-Servers-4 | Front-end | `net:dre`, `net:front` |
|
||||||
|
| d1660-Servers-5 | Back-end | `net:dre`, `net:back` |
|
||||||
|
|
||||||
|
I *could* just use those tags to let users pick the appropriate network, but I've found that a lot of times users don't know why they're picking a certain network, they just know the IP range they need to use. So I'll take it a step further and add a giant tag to include the Site, Purpose, and Subnet, and this is what will ultimately be presented to the users:
|
||||||
|
|
||||||
|
|Name |Tags |
|
||||||
|
| --- | --- |
|
||||||
|
| d1620-Servers-1 | `net:bow`, `net:mgmt`, `net:bow-mgmt-172.16.20.0` |
|
||||||
|
| d1630-Servers-2 | `net:bow`, `net:front`, `net:bow-front-172.16.30.0` |
|
||||||
|
| d1640-Servers-3 | `net:bow`, `net:back`, `net:bow-back-172.16.40.0` |
|
||||||
|
| d1650-Servers-4 | `net:dre`, `net:front`, `net:dre-front-172.16.50.0` |
|
||||||
|
| d1660-Servers-5 | `net:dre`, `net:back`, `net:dre-back-172.16.60.0` |
|
||||||
|
|
||||||
|
![Tagged networks](/assets/images/posts-2020/J_RG9JNPz.png)
|
||||||
|
|
||||||
|
So I can now use a single tag to positively identify a single network, as long as I know its site and either its purpose or its IP space. I'll reference these tags in a vRO action that will populate a dropdown in the request form with the available networks for the selected site. Unfortunately I couldn't come up with an easy way to dynamically pull the tags into vRO so I create another Configuration Element to store them:
|
||||||
|
![networksPerSite configuration element](/assets/images/posts-2020/xfEultDM_.png)
|
||||||
|
|
||||||
|
This gets filed under the existing `CustomProvisioning` folder, and I name it `networksPerSite`. Each site gets a new variable of type `Array/string`. The name of the variable matches the site ID, and the contents are just the tags minus the `net:` prefix.
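Based on the tags I defined above, the two variables would hold values along these lines:

```
BOW: [bow-mgmt-172.16.20.0, bow-front-172.16.30.0, bow-back-172.16.40.0]
DRE: [dre-front-172.16.50.0, dre-back-172.16.60.0]
```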
|
||||||
|
|
||||||
|
I created a new action named (appropriately) `getNetworksForSite`. This will accept `siteCode (String)` as its input from the Service Broker request form, and will return an array of strings containing the available networks.
|
||||||
|
![getNetworksForSite action](/assets/images/posts-2020/IdrT-Un8H1.png)
|
||||||
|
|
||||||
|
```js
|
||||||
|
// JavaScript: getNetworksForSite
|
||||||
|
// Inputs: siteCode (String)
|
||||||
|
// Returns: site.value (Array/String)
|
||||||
|
|
||||||
|
// Get networksPerSite configurationElement
|
||||||
|
var category = Server.getConfigurationElementCategoryWithPath("CustomProvisioning")
|
||||||
|
var elements = category.configurationElements
|
||||||
|
for (var i in elements) {
|
||||||
|
if (elements[i].name == "networksPerSite") {
|
||||||
|
var networksPerSite = elements[i]
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Lookup siteCode and find available networks
|
||||||
|
try {
|
||||||
|
var site = networksPerSite.getAttributeWithKey(siteCode)
|
||||||
|
} catch (e) {
|
||||||
|
System.debug("Invalid site.");
|
||||||
|
} finally {
|
||||||
|
return site.value
|
||||||
|
}
|
||||||
|
```
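So calling this with `siteCode` set to `BOW` should return the three `bow-*` strings from the configuration element, which is exactly the list I want to show up in the request form's dropdown.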
|
||||||
|
|
||||||
|
Back in Cloud Assembly, I edit the Cloud Template to add an input field called `network`:
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
inputs:
|
||||||
|
[...]
|
||||||
|
network:
|
||||||
|
type: string
|
||||||
|
title: Network
|
||||||
|
[...]
|
||||||
|
```
|
||||||
|
|
||||||
|
and update the resource configuration for the network entity to constrain it based on `input.network` instead of `input.site` as before:
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
resources:
|
||||||
|
Cloud_vSphere_Machine_1:
|
||||||
|
type: Cloud.vSphere.Machine
|
||||||
|
properties:
|
||||||
|
<...>
|
||||||
|
networks:
|
||||||
|
- network: '${resource.Cloud_vSphere_Network_1.id}'
|
||||||
|
assignment: static
|
||||||
|
constraints:
|
||||||
|
- tag: 'comp:${to_lower(input.site)}'
|
||||||
|
Cloud_vSphere_Network_1:
|
||||||
|
type: Cloud.vSphere.Network
|
||||||
|
properties:
|
||||||
|
networkType: existing
|
||||||
|
constraints:
|
||||||
|
# - tag: 'net:${to_lower(input.site)}'
|
||||||
|
- tag: 'net:${input.network}'
|
||||||
|
```
|
||||||
|
|
||||||
|
Remember that the `networksPerSite` configuration element contains the portion of the tags *after* the `net:` prefix so that's why I include the prefix in the constraint tag here. I just didn't want it to appear in the selection dropdown.
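So if a user picks `dre-back-172.16.60.0` from the dropdown, the rendered constraint becomes `net:dre-back-172.16.60.0`, which matches exactly one network.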
|
||||||
|
|
||||||
|
After making this change to the Cloud Template I use the "Create Version" button again to create a new version and tick the option to release it so that it can be picked up by Service Broker.
|
||||||
|
![Another new version](/assets/images/posts-2020/REZ08yA2E.png)
|
||||||
|
|
||||||
|
Back on the Service Broker UI, I hit my `LAB` Content Source again to Save & Import the new change, and then go to customize the form for `WindowsDemo` again. After dragging-and-dropping the new `Network` field onto the request form blueprint, I kind of repeat the steps I used for adjusting the Deployment Name field earlier. On the Appearance tab I set it to be a DropDown, and on the Values tab I set it to an external source, `net.bowdre.utility/getNetworksForSite`. This action only needs a single input so I map `Site` on the request form to the `siteCode` input.
|
||||||
|
![Linking the Network field to the getNetworksForSite action](/assets/images/posts-2020/CDy518peA.png)
|
||||||
|
|
||||||
|
Now I can just go back to the Catalog tab and request a new deployment to check out my--
|
||||||
|
![Ew, an ugly error](/assets/images/posts-2020/zWFTuOYOG.png)
|
||||||
|
|
||||||
|
Oh yeah. That vRO action gets called as soon as the request form loads - before selecting the required site code as an input. I could modify the action so that it returns an empty string if the site hasn't been selected yet, but I'm kind of lazy so I'll instead just modify the custom form so that the Site field defaults to the `BOW` site.
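For reference, the non-lazy fix would just be a short guard at the top of the `getNetworksForSite` action - something like this hypothetical sketch (not what I actually deployed):

```js
// Hypothetical guard: bail out with an empty array until the user
// has actually picked a site, so the form load doesn't throw an error
if (siteCode == null || siteCode == "") {
    return []
}
```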
|
||||||
|
![BOW is default](/assets/images/posts-2020/yb77nH2Fp.png)
|
||||||
|
|
||||||
|
*Now* I can open up the request form and see how well it works:
|
||||||
|
![Network selection in action](/assets/images/posts-2020/fh37T__nb.gif)
|
||||||
|
|
||||||
|
Noice!
|
||||||
|
|
||||||
|
### Putting it all together now
|
||||||
|
At this point, I'll actually kick off a deployment and see how everything works out.
|
||||||
|
![The request](/assets/images/posts-2020/hFPeakMxn.png)
|
||||||
|
|
||||||
|
After hitting Submit, I can see that this deployment has a much more friendly name than the previous ones:
|
||||||
|
![Auto generated deployment name!](/assets/images/posts-2020/TQGyrUqIx.png)
|
||||||
|
|
||||||
|
And I can also confirm that the VM got named appropriately (based on the [naming standard I implemented earlier](vra8-custom-provisioning-part-two)), and it also got placed on the `172.16.60.0/24` network I selected.
|
||||||
|
![Network placement - check!](/assets/images/posts-2020/1NJvDeA7r.png)
|
||||||
|
|
||||||
|
Very slick. And I think that's a great stopping point for today.
|
||||||
|
|
||||||
|
Coming up, I'll describe how I create AD computer objects in site-specific OUs, add notes and custom attributes to the VM in vSphere, and optionally create static DNS records on a Windows DNS server.
|
|
@ -0,0 +1,56 @@
|
||||||
|
---
|
||||||
|
categories:
|
||||||
|
- vRA8
|
||||||
|
date: "2021-05-20T08:34:30Z"
|
||||||
|
header:
|
||||||
|
teaser: assets/images/posts-2020/wl-WPQpEl.png
|
||||||
|
tags:
|
||||||
|
- vmware
|
||||||
|
- vra
|
||||||
|
- vro
|
||||||
|
- javascript
|
||||||
|
title: vRA8 Automatic Deployment Naming - Another Take
|
||||||
|
toc: false
|
||||||
|
---
|
||||||
|
|
||||||
|
A [few days ago](vra8-custom-provisioning-part-four#automatic-deployment-naming), I shared how I combined a Service Broker Custom Form with a vRO action to automatically generate a unique and descriptive deployment name based on user inputs. That approach works *fine*, but while testing some other components I realized that calling that action each time a user makes a selection isn't necessarily ideal. After a bit of experimentation, I settled on what I believe to be a better solution.
|
||||||
|
|
||||||
|
Instead of setting the "Deployment Name" field to use an External Source (vRO), I'm going to configure it to use a Computed Value. This is a bit less flexible, but all the magic happens right there in the form without having to make an expensive vRO call.
|
||||||
|
![Computed Value option](/assets/images/posts-2020/Ivv0ia8oX.png)
|
||||||
|
|
||||||
|
After setting `Value source` to `Computed value`, I also set the `Operation` to `Concatenate` (since it is, after all, the only operation choice). I can then use the **Add Value** button to add some fields. Each can be either a *Constant* (like a separator) or linked to a *Field* on the request form. By combining those, I can basically reconstruct the same arrangement that I was previously generating with vRO:
|
||||||
|
![Fields and Constants!](/assets/images/posts-2020/zN3EN6lrG.png)
|
||||||
|
|
||||||
|
So this will generate a name that looks something like `[user]_[catalog_item]_[site]-[env][function]-[app]`, all without having to call vRO! That gets me pretty close to what I want... but there's always the chance that the generated name won't be truly unique. Being able to append a timestamp on to the end would be a big help here.
|
||||||
|
|
||||||
|
That does mean that I'll need to add another vRO call, but I can set this up so that it only gets triggered once, when the form loads, instead of refreshing each time the inputs change.
|
||||||
|
|
||||||
|
So I hop over to vRO and create a new action, which I call `getTimestamp`. It doesn't require any inputs, and returns a single string. Here's the code:
|
||||||
|
```js
|
||||||
|
// JavaScript: getTimestamp action
|
||||||
|
// Inputs: None
|
||||||
|
// Returns: result (String)
|
||||||
|
|
||||||
|
var date = new Date();
|
||||||
|
var result = date.toISOString();
|
||||||
|
return result
|
||||||
|
```
|
||||||
|
|
||||||
|
I then drag a Text Field called `Timestamp` onto the Service Broker Custom Form canvas, and set it to not be visible:
|
||||||
|
![Invisible timestamp](/assets/images/posts-2020/rtTeG3ZoR.png)
|
||||||
|
|
||||||
|
And I set it to pull its value from my new `net.bowdre.utility/getTimestamp` action:
|
||||||
|
![Calling the action](/assets/images/posts-2020/NoN-72Qf6.png)
|
||||||
|
|
||||||
|
Now when the form loads, this field will store a timestamp with thousandths-of-a-second precision, something like `2021-05-20T13:05:45.123Z`.
|
||||||
|
|
||||||
|
The last step is to return to the Deployment Name field and link in the new Timestamp field so that it will get tacked on to the end of the generated name.
|
||||||
|
![Linked!](/assets/images/posts-2020/wl-WPQpEl.png)
|
||||||
|
|
||||||
|
The old way looked like this, where it had to churn a bit after each selection:
|
||||||
|
![The Churn](/assets/images/posts-2020/vH-npyz9s.gif)
|
||||||
|
|
||||||
|
Here's the newer approach, which feels much snappier:
|
||||||
|
![Snappy!](/assets/images/posts-2020/aumfETl1l.gif)
|
||||||
|
|
||||||
|
Not bad! Now I can make the Deployment Name field hidden again and get back to work!
|
182
content/post/2021-05-27-adguard-home-in-docker-on-photon-os.md
Normal file
|
@ -0,0 +1,182 @@
|
||||||
|
---
|
||||||
|
categories:
|
||||||
|
- Projects
|
||||||
|
date: "2021-05-27T08:34:30Z"
|
||||||
|
header:
|
||||||
|
teaser: assets/images/posts-2020/HRRpFOKuN.png
|
||||||
|
tags:
|
||||||
|
- docker
|
||||||
|
- vmware
|
||||||
|
title: AdGuard Home in Docker on Photon OS
|
||||||
|
---
|
||||||
|
|
||||||
|
I was recently introduced to [AdGuard Home](https://adguard.com/en/adguard-home/overview.html) by way of its very slick [Home Assistant Add-On](https://github.com/hassio-addons/addon-adguard-home/blob/main/adguard/DOCS.md). Compared to the relatively-complicated [Pi-hole](https://pi-hole.net/) setup that I had implemented several months back, AdGuard Home was *much* simpler to deploy (particularly since I basically just had to click the "Install" button from the Home Assistant add-ons manager). It also has a more modern UI with options arranged more logically (to me, at least), and it just feels easier to use overall. It worked great for a time... until my Home Assistant instance crashed, taking down AdGuard Home (and my internet access) with it. Maybe bundling these services isn't the best move.
|
||||||
|
|
||||||
|
I'd like to use AdGuard Home, but the system it runs on needs to be rock-solid. With that in mind, I thought it might be fun to instead run AdGuard Home in a Docker container on a VM running VMware's container-optimized [Photon OS](https://github.com/vmware/photon), primarily because I want an excuse to play more with Docker and Photon (but also the thing I just mentioned about stability). So here's what it took to get that running.
|
||||||
|
|
||||||
|
### Deploy Photon
|
||||||
|
First up: getting Photon. There are a variety of delivery formats available [here](https://github.com/vmware/photon/wiki/Downloading-Photon-OS), and I opted for the HW13 OVA version. I copied that download URL:
|
||||||
|
```
|
||||||
|
https://packages.vmware.com/photon/4.0/GA/ova/photon-hw13-uefi-4.0-1526e30ba0.ova
|
||||||
|
```
|
||||||
|
|
||||||
|
Then I went into vCenter, hit the **Deploy OVF Template** option, and pasted in the URL:
|
||||||
|
![Deploying the OVA straight from the internet](/assets/images/posts-2020/Es90-kFW9.png)
|
||||||
|
This lets me skip the kind of tedious "download file from internet and then upload file to vCenter" dance, and I can then proceed to click through the rest of the deployment options.
|
||||||
|
![Ready to deploy](/assets/images/posts-2020/rCpaTbPX5.png)
|
||||||
|
|
||||||
|
Once the VM is created, I power it on and hop into the web console. The default root password is `changeme`, and I'll of course be forced to change it the first time I log in.
|
||||||
|
|
||||||
|
|
||||||
|
### Configure Networking
|
||||||
|
My next step was to configure a static IP address by creating `/etc/systemd/network/10-static-en.network` and entering the following contents:
|
||||||
|
|
||||||
|
```conf
|
||||||
|
[Match]
|
||||||
|
Name=eth0
|
||||||
|
|
||||||
|
[Network]
|
||||||
|
Address=192.168.1.2/24
|
||||||
|
Gateway=192.168.1.1
|
||||||
|
DNS=192.168.1.5
|
||||||
|
```
|
||||||
|
|
||||||
|
By the way, that `192.168.1.5` address is my Windows DC/DNS server that I use for [my homelab environment](vmware-home-lab-on-intel-nuc-9#basic-infrastructure). That's the DNS server that's configured on my Google Wifi router, and it will continue to handle resolution for local addresses.
|
||||||
|
|
||||||
|
I also disabled DHCP by setting `DHCP=no` in `/etc/systemd/network/99-dhcp-en.network`:
|
||||||
|
|
||||||
|
```conf
|
||||||
|
[Match]
|
||||||
|
Name=e*
|
||||||
|
|
||||||
|
[Network]
|
||||||
|
DHCP=no
|
||||||
|
IPv6AcceptRA=no
|
||||||
|
```
|
||||||
|
|
||||||
|
I set the required permissions on my new network configuration file with `chmod 644 /etc/systemd/network/10-static-en.network` and then restarted `networkd` with `systemctl restart systemd-networkd`.
|
||||||
|
|
||||||
|
I then ran `networkctl` a couple of times until the `eth0` interface went fully green, and did an `ip a` to confirm that the address had been applied.
|
||||||
|
![Verifying networking](/assets/images/posts-2020/qOw7Ysj3O.png)
|
||||||
|
|
||||||
|
One last little bit of housekeeping is to change the hostname with `hostnamectl set-hostname adguard` and then reboot for good measure. I can then log in via SSH to continue the setup.
|
||||||
|
![SSH login](/assets/images/posts-2020/NOyfgjjUy.png)
|
||||||
|
|
||||||
|
Now that I'm in, I run `tdnf update` to make sure the VM is fully up to date.
|
||||||
|
|
||||||
|
### Install docker-compose
|
||||||
|
Photon OS ships with Docker preinstalled, but I need to install `docker-compose` on my own to simplify container deployment. Per the [install instructions](https://docs.docker.com/compose/install/#install-compose), I run:
|
||||||
|
|
||||||
|
```shell
|
||||||
|
curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
|
||||||
|
chmod +x /usr/local/bin/docker-compose
|
||||||
|
```
|
||||||
|
|
||||||
|
And then verify that it works:
|
||||||
|
```shell
|
||||||
|
root@adguard [ ~]# docker-compose --version
|
||||||
|
docker-compose version 1.29.2, build 5becea4c
|
||||||
|
```
|
||||||
|
|
||||||
|
I'll also want to enable and start Docker:
|
||||||
|
```shell
|
||||||
|
systemctl enable docker
|
||||||
|
systemctl start docker
|
||||||
|
```
|
||||||
|
|
||||||
|
### Disable DNSStubListener
|
||||||
|
By default, the `resolved` daemon is listening on `127.0.0.53:53` and will prevent docker from binding to that port. Fortunately it's [pretty easy](https://github.com/pi-hole/docker-pi-hole#installing-on-ubuntu) to disable the `DNSStubListener` and free up the port:
|
||||||
|
```shell
|
||||||
|
sed -r -i.orig 's/#?DNSStubListener=yes/DNSStubListener=no/g' /etc/systemd/resolved.conf
|
||||||
|
rm /etc/resolv.conf && ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf
|
||||||
|
systemctl restart systemd-resolved
|
||||||
|
```
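The `sed` line flips `DNSStubListener=yes` to `DNSStubListener=no` (stashing a `.orig` backup of the file first), and the symlink points `/etc/resolv.conf` at the full resolver configuration so the system itself can still resolve names after the stub listener goes away.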
|
||||||
|
|
||||||
|
### Deploy AdGuard Home container
|
||||||
|
Okay, now for the fun part.
|
||||||
|
|
||||||
|
I create a directory for AdGuard to live in, and then create a `docker-compose.yaml` therein:
|
||||||
|
```shell
|
||||||
|
mkdir ~/adguard
|
||||||
|
cd ~/adguard
|
||||||
|
vi docker-compose.yaml
|
||||||
|
```
|
||||||
|
|
||||||
|
And I define the container:
|
||||||
|
```yaml
|
||||||
|
version: "3"
|
||||||
|
|
||||||
|
services:
|
||||||
|
adguard:
|
||||||
|
container_name: adguard
|
||||||
|
restart: unless-stopped
|
||||||
|
image: adguard/adguardhome:latest
|
||||||
|
ports:
|
||||||
|
- "53:53/tcp"
|
||||||
|
- "53:53/udp"
|
||||||
|
- "67:67/udp"
|
||||||
|
- "68:68/tcp"
|
||||||
|
- "68:68/udp"
|
||||||
|
- "80:80/tcp"
|
||||||
|
- "443:443/tcp"
|
||||||
|
- "853:853/tcp"
|
||||||
|
- "3000:3000/tcp"
|
||||||
|
volumes:
|
||||||
|
- './workdir:/opt/adguardhome/work'
|
||||||
|
- './confdir:/opt/adguardhome/conf'
|
||||||
|
cap_add:
|
||||||
|
- NET_ADMIN
|
||||||
|
```
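These port mappings match AdGuard Home's documented defaults: `53` is for DNS itself, `67`/`68` would be for DHCP (which I'm not using here), `80`/`443` serve the web interface, `853` is for DNS-over-TLS, and `3000` hosts the first-run setup wizard that I'll hit in a moment.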
|
||||||
|
|
||||||
|
Then I can fire it up with `docker-compose up --detach`:
|
||||||
|
|
||||||
|
```shell
|
||||||
|
root@adguard [ ~/adguard ]# docker-compose up --detach
|
||||||
|
Creating network "adguard_default" with the default driver
|
||||||
|
Pulling adguard (adguard/adguardhome:latest)...
|
||||||
|
latest: Pulling from adguard/adguardhome
|
||||||
|
339de151aab4: Pull complete
|
||||||
|
4db4be09618a: Pull complete
|
||||||
|
7e918e810e4e: Pull complete
|
||||||
|
bfad96428d01: Pull complete
|
||||||
|
Digest: sha256:de7d791b814560663fe95f9812fca2d6dd9d6507e4b1b29926cc7b4a08a676ad
|
||||||
|
Status: Downloaded newer image for adguard/adguardhome:latest
|
||||||
|
Creating adguard ... done
|
||||||
|
```
|
||||||
|
|
||||||
|
|
||||||
|
### Post-deploy configuration
|
||||||
|
Next, I point a web browser to `http://adguard.lab.bowdre.net:3000` to perform the initial (minimal) setup:
|
||||||
|
![Initial config screen](/assets/images/posts-2020/UHvtv1DrT.png)
|
||||||
|
|
||||||
|
Once that's done, I can log in to the dashboard at `http://adguard.lab.bowdre.net/login.html`:
|
||||||
|
![Login page](/assets/images/posts-2020/34xD8tbli.png)
|
||||||
|
|
||||||
|
AdGuard Home ships with pretty sensible defaults so there's not really a huge need to actually do a lot of configuration. Any changes that I *do* make will be saved in `~/adguard/confdir/AdGuardHome.yaml` so they will be preserved across container changes.
|
||||||
|
|
||||||
|
|
||||||
|
### Getting requests to AdGuard Home
|
||||||
|
Normally, you'd tell your Wifi router what DNS server you want to use, and it would relay that information to the connected DHCP clients. Google Wifi is a bit funny, in that it wants to function as a DNS proxy for the network. When you configure a custom DNS server for Google Wifi, it still tells the DHCP clients to send the requests to the router, and the router then forwards the queries on to the configured DNS server.
|
||||||
|
|
||||||
|
I already have Google Wifi set up to use my Windows DC (at `192.168.1.5`) for DNS. That lets me easily access systems on my internal `lab.bowdre.net` domain without having to manually configure DNS, and the DC forwards resolution requests it can't handle on to the upstream (internet) DNS servers.
|
||||||
|
|
||||||
|
To easily insert my AdGuard Home instance into the flow, I pop in to my Windows DC and configure the AdGuard Home address (`192.168.1.2`) as the primary DNS forwarder. The DC will continue to handle internal resolutions, and anything it can't handle will now get passed up the chain to AdGuard Home. And this also gives me a bit of a failsafe, in that queries will fail back to the previously-configured upstream DNS if AdGuard Home doesn't respond within a few seconds.
|
||||||
|
![Setting AdGuard Home as a forwarder](/assets/images/posts-2020/bw09OXG7f.png)
|
||||||
|
|
||||||
|
It's working!
|
||||||
|
![Requests!](/assets/images/posts-2020/HRRpFOKuN.png)
|
||||||
|
|
||||||
|
|
||||||
|
### Caveat
|
||||||
|
Chaining my DNS configurations in this way (router -> DC -> AdGuard Home -> internet) does have a bit of a limitation, in that all queries will appear to come from the Windows server:
|
||||||
|
![Only client](/assets/images/posts-2020/OtPGufxlP.png)
|
||||||
|
I won't be able to do any per-client filtering as a result, but honestly I'm okay with that as I already use the "Pause Internet" option in Google Wifi to block outbound traffic from certain devices anyway. And using the Windows DNS as an intermediary makes it significantly quicker and easier to switch things up if I run into problems later; changing the forwarder here takes effect instantly rather than having to manually update all of my clients or wait for DHCP to distribute the change.
|
||||||
|
|
||||||
|
I have worked around this in the past by [bypassing Google Wifi's DHCP](https://www.mbreviews.com/pi-hole-google-wifi-raspberry-pi/) but I think it was actually more trouble than it was worth to me.
|
||||||
|
|
||||||
|
|
||||||
|
### One last thing...
|
||||||
|
I'm putting a lot of responsibility on both of these VMs, my Windows DC and my new AdGuard Home instance. If they aren't up, I won't have internet access, and that would be a shame. I already have my ESXi host configured to automatically start up when power is (re)applied, so I also adjust the VM Startup/Shutdown Configuration so that AdGuard Home will automatically boot after ESXi is loaded, followed closely by the Windows DC (and the rest of my virtualized infrastructure):
|
||||||
|
![Auto Start-up Options](/assets/images/posts-2020/clE6OVmjp.png)
|
||||||
|
|
||||||
|
So there you have it. Simple DNS-based ad-blocking running on a minimal container-optimized VM that *should* be more stable than the add-on tacked on to my Home Assistant instance. Enjoy!
|
|
@ -0,0 +1,135 @@
|
||||||
|
---
|
||||||
|
categories:
|
||||||
|
- vRA8
|
||||||
|
date: "2021-06-01T08:34:30Z"
|
||||||
|
header:
|
||||||
|
teaser: assets/images/posts-2020/-Fuvz-GmF.png
|
||||||
|
tags:
|
||||||
|
- vmware
|
||||||
|
- vra
|
||||||
|
- vro
|
||||||
|
- javascript
|
||||||
|
title: Adding VM Notes and Custom Attributes with vRA8
|
||||||
|
---
|
||||||
|
|
||||||
|
*In [past posts](series/vra8), I started by [creating a basic deployment infrastructure](vra8-custom-provisioning-part-one) in Cloud Assembly and using tags to group those resources. I then [wrote an integration](integrating-phpipam-with-vrealize-automation-8) to let vRA8 use phpIPAM for static address assignments. I [implemented a vRO workflow](vra8-custom-provisioning-part-two) for generating unique VM names which fit an organization's established naming standard, and then [extended the workflow](vra8-custom-provisioning-part-three) to avoid any naming conflicts in Active Directory and DNS. And, finally, I [created an intelligent provisioning request form in Service Broker](vra8-custom-provisioning-part-four) to make it easy for users to get the servers they need. That's got the core functionality pretty well sorted, so moving forward I'll be detailing additions that enable new capabilities and enhance the experience.*
|
||||||
|
|
||||||
|
In this post, I'll describe how to get certain details from the Service Broker request form and into the VM's properties in vCenter. The obvious application of this is adding descriptive notes so I can remember what purpose a VM serves, but I will also be using [Custom Attributes](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vcenterhost.doc/GUID-73606C4C-763C-4E27-A1DA-032E4C46219D.html) to store the server's Point of Contact information and a record of which ticketing system request resulted in the server's creation.
|
||||||
|
|
||||||
|
### New inputs
|
||||||
|
I'll start this by adding a few new inputs to the cloud template in Cloud Assembly.
|
||||||
|
![New inputs in Cloud Assembly](/assets/images/posts-2020/F3Wkd3VT.png)
|
||||||
|
|
||||||
|
I'm using a basic regex on the `poc_email` field to make sure that the user's input is *probably* a valid email address in the format `[some string]@[some string].[some string]`.
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
inputs:
|
||||||
|
[...]
|
||||||
|
description:
|
||||||
|
type: string
|
||||||
|
title: Description
|
||||||
|
description: Server function/purpose
|
||||||
|
default: Testing and evaluation
|
||||||
|
poc_name:
|
||||||
|
type: string
|
||||||
|
title: Point of Contact Name
|
||||||
|
default: Jack Shephard
|
||||||
|
poc_email:
|
||||||
|
type: string
|
||||||
|
title: Point of Contact Email
|
||||||
|
default: jack.shephard@virtuallypotato.com
|
||||||
|
pattern: '^[^\s@]+@[^\s@]+\.[^\s@]+$'
|
||||||
|
ticket:
|
||||||
|
type: string
|
||||||
|
title: Ticket/Request Number
|
||||||
|
default: 4815162342
|
||||||
|
[...]
|
||||||
|
```
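As a quick sanity check on that pattern: `jack.shephard@virtuallypotato.com` matches, while something like `jack shephard@example` gets rejected (whitespace isn't allowed, and there must be an `@` followed by at least one dot).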
|
||||||
|
|
||||||
|
I'll also need to add these to the `resources` section of the template so that they will get passed along with the deployment properties.
|
||||||
|
![New resource properties](/assets/images/posts-2020/N7YllJkxS.png)
|
||||||
|
|
||||||
|
I'm actually going to combine the `poc_name` and `poc_email` fields into a single `poc` string.
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
resources:
|
||||||
|
Cloud_vSphere_Machine_1:
|
||||||
|
type: Cloud.vSphere.Machine
|
||||||
|
properties:
|
||||||
|
<...>
|
||||||
|
poc: '${input.poc_name + " (" + input.poc_email + ")"}'
|
||||||
|
ticket: '${input.ticket}'
|
||||||
|
description: '${input.description}'
|
||||||
|
<...>
|
||||||
|
```
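With the default values from the inputs above, that `poc` property would render as `Jack Shephard (jack.shephard@virtuallypotato.com)`.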
|
||||||
|
|
||||||
|
I'll save this as a new version so that the changes will be available in the Service Broker front-end.
|
||||||
|
![New template version](/assets/images/posts-2020/Z2aKLsLou.png)
|
||||||
|
|
||||||
|
### Service Broker custom form
|
||||||
|
I can then go to Service Broker and drag the new fields onto the Custom Form canvas. (If the new fields don't show up, hit up the Content Sources section of Service Broker, select the content source, and click the "Save and Import" button to sync the changes.) While I'm at it, I set the Description field to display as a text area (encouraging more detailed input), and I also set all the fields on the form to be required.
|
||||||
|
![Service Broker form](/assets/images/posts-2020/unhgNySSzz.png)
|
||||||
|
|
||||||
|
### vRO workflow
|
||||||
|
Okay, so I've got the information I want to pass on to vCenter. Now I need to whip up a new workflow in vRO that will actually do that (after [telling vRO how to connect to the vCenter](vra8-custom-provisioning-part-two#interlude-connecting-vro-to-vcenter), of course). I'll want to call this after the VM has been provisioned, so I'll cleverly call the workflow "VM Post-Provisioning".
|
||||||
|
![image.png](/assets/images/posts-2020/X9JhgWx8x.png)
|
||||||
|
|
||||||
|
The workflow will have a single input from vRA, `inputProperties` of type `Properties`.
|
||||||
|
![image.png](/assets/images/posts-2020/zHrp6GPcP.png)
|
||||||
|
|
||||||
|
The first thing this workflow needs to do is parse `inputProperties (Properties)` to get the name of the VM, and it will then use that information to query vCenter and grab the corresponding VM object. So I'll add a scriptable task item to the workflow canvas and call it `Get VM Object`. It will take `inputProperties (Properties)` as its sole input, and output a new variable called `vm` of type `VC:VirtualMachine`.
|
||||||
|
![image.png](/assets/images/posts-2020/5ATk99aPW.png)
|
||||||
|
|
||||||
|
The script for this task is fairly straightforward:
|
||||||
|
```js
|
||||||
|
// JavaScript: Get VM Object
|
||||||
|
// Inputs: inputProperties (Properties)
|
||||||
|
// Outputs: vm (VC:VirtualMachine)
|
||||||
|
|
||||||
|
var name = inputProperties.resourceNames[0]
|
||||||
|
|
||||||
|
var vms = VcPlugin.getAllVirtualMachines(null, name)
|
||||||
|
System.log("Found VM object: " + vms[0])
|
||||||
|
vm = vms[0]
|
||||||
|
```

I'll add another scriptable task item to the workflow to actually apply the notes to the VM - I'll call it `Set Notes`, and it will take both `vm (VC:VirtualMachine)` and `inputProperties (Properties)` as its inputs.
![image.png](/assets/images/posts-2020/w24V6YVOR.png)

The first part of the script creates a new VM config spec, inserts the description into the spec, and then reconfigures the selected VM with the new spec.

The second part uses a built-in action to set the `Point of Contact` and `Ticket` custom attributes accordingly.

```js
// JavaScript: Set Notes
// Inputs: vm (VC:VirtualMachine), inputProperties (Properties)
// Outputs: None

var notes = inputProperties.customProperties.description
var poc = inputProperties.customProperties.poc
var ticket = inputProperties.customProperties.ticket

var spec = new VcVirtualMachineConfigSpec()
spec.annotation = notes
vm.reconfigVM_Task(spec)

System.getModule("com.vmware.library.vc.customattribute").setOrCreateCustomField(vm, "Point of Contact", poc)
System.getModule("com.vmware.library.vc.customattribute").setOrCreateCustomField(vm, "Ticket", ticket)
```

### Extensibility subscription
Now I need to return to Cloud Assembly and create a new extensibility subscription that will call this new workflow at the appropriate time. I'll call it "VM Post-Provisioning" and attach it to the "Compute Post Provision" topic.
![image.png](/assets/images/posts-2020/PmhVOWJsUn.png)

And then I'll link it to my new workflow:
![image.png](/assets/images/posts-2020/cEbWSOg00.png)

### Testing
And then back to Service Broker to request a VM and see if it works:

![image.png](/assets/images/posts-2020/Lq9DBCK_Y.png)

It worked!
![image.png](/assets/images/posts-2020/-Fuvz-GmF.png)

In the future, I'll be exploring more features that I can add on to this "VM Post-Provisioning" workflow, like creating static DNS records as needed.

@ -0,0 +1,435 @@
---
categories:
- Projects
date: "2021-06-28T00:00:00Z"
header:
  teaser: assets/images/posts-2020/2xe34VJym.png
last_modified_at: "2021-09-17"
tags:
- docker
- linux
- cloud
title: Federated Matrix Server (Synapse) on Oracle Cloud's Free Tier
---

I've heard a lot lately about how generous [Oracle Cloud's free tier](https://www.oracle.com/cloud/free/) is, particularly when [compared with the free offerings](https://github.com/cloudcommunity/Cloud-Service-Providers-Free-Tier-Overview) from other public cloud providers. Signing up for an account was fairly straight-forward, though I did have to wait a few hours for an actual human to call me on an actual telephone to verify my account. Once in, I thought it would be fun to try building my own [Matrix](https://matrix.org/) homeserver to really benefit from the network's decentralized-but-federated model for secure end-to-end encrypted communications.

There are two primary projects for Matrix homeservers: [Synapse](https://github.com/matrix-org/synapse/) and [Dendrite](https://github.com/matrix-org/dendrite). Dendrite is the newer, more efficient server, but it's not quite feature-complete. I'll be using Synapse for my build to make sure that everything works right off the bat, and I will be running the server in a Docker container to make it (relatively) easy to replace if I feel more comfortable about Dendrite in the future.

As usual, it took quite a bit of fumbling about before I got everything working correctly. Here I'll share the steps I used to get up and running.

### Instance creation
Getting a VM spun up on Oracle Cloud was a pretty simple process. I logged into my account, navigated to *Menu -> Compute -> Instances*, and clicked on the big blue **Create Instance** button.
![Create Instance](/assets/images/posts-2020/8XAB60aqk.png)

I'll be hosting this for my `bowdre.net` domain, so I start by naming the instance accordingly: `matrix.bowdre.net`. Naming it isn't strictly necessary, but it does help with keeping track of things. The instance defaults to using an Oracle Linux image. I'd rather use an Ubuntu one for this, simply because I was able to find more documentation on getting Synapse going on Debian-based systems. So I hit the **Edit** button next to *Image and Shape*, select the **Change Image** option, pick **Canonical Ubuntu** from the list of available images, and finally click **Select Image** to confirm my choice.
![Image Selection](/assets/images/posts-2020/OSbsiOw8E.png)

This will be an Ubuntu 20.04 image running on a `VM.Standard.E2.1.Micro` instance, which gets a single AMD EPYC 7551 CPU with 2.0GHz base frequency and 1GB of RAM. It's not much, but it's free - and it should do just fine for this project.

I can leave the rest of the options as their defaults, making sure that the instance will be allotted a public IPv4 address.
![Other default selections](/assets/images/posts-2020/Ki0z1C3g.png)

Scrolling down a bit to the *Add SSH Keys* section, I leave the default **Generate a key pair for me** option selected, and click the very-important **Save Private Key** button to download the private key to my computer so that I'll be able to connect to the instance via SSH.
![Download Private Key](/assets/images/posts-2020/dZkZUIFum.png)

Now I can finally click the blue **Create Instance** button at the bottom of the screen, and just wait a few minutes for it to start up. Once the status shows a big green "Running" square, I'm ready to connect! I'll copy the listed public IP and make a note of the default username (`ubuntu`). I can then plug the IP, username, and the private key I downloaded earlier into my SSH client (the [Secure Shell extension](https://chrome.google.com/webstore/detail/secure-shell/iodihamcpbpeioajjeobimgagajmlibd) for Google Chrome since I'm doing this from my Pixelbook), and log in to my new VM in The Cloud.
![Logged in!](/assets/images/posts-2020/5PD1H7b1O.png)

### DNS setup
According to [Oracle's docs](https://docs.oracle.com/en-us/iaas/Content/Network/Tasks/managingpublicIPs.htm), the public IP assigned to my instance is mine until I terminate the instance. It should even remain assigned if I stop or restart the instance, just as long as I don't delete the virtual NIC attached to it. So I'll skip the [`ddclient`-based dynamic DNS configuration I've used in the past](bitwarden-password-manager-self-hosted-on-free-google-cloud-instance#configure-dynamic-dns) and instead go straight to my registrar's DNS management portal and create a new `A` record for `matrix.bowdre.net` with the instance's public IP.

While I'm managing DNS, it might be good to take a look at the requirements for [federating my new server](https://github.com/matrix-org/synapse/blob/master/docs/federate.md#setting-up-federation) with the other Matrix servers out there. I'd like for user identities on my server to be identified by the `bowdre.net` domain (`@user:bowdre.net`) rather than the full `matrix.bowdre.net` FQDN (`@user:matrix.bowdre.net` is kind of cumbersome). The standard way to do this is to leverage [`.well-known` delegation](https://github.com/matrix-org/synapse/blob/master/docs/delegate.md#well-known-delegation), where the URL at `http://bowdre.net/.well-known/matrix/server` would return a JSON structure telling other Matrix servers how to connect to mine:
```json
{
  "m.server": "matrix.bowdre.net:8448"
}
```

I don't *currently* have another server already handling requests to `bowdre.net`, so for now I'll add another `A` record with the same public IP address to my DNS configuration. Requests for both `bowdre.net` and `matrix.bowdre.net` will reach the same server instance, but those requests will be handled differently. More on that later.
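
In zone-file terms, the end result looks something like this (a sketch, assuming the `150.136.6.180` public IP that shows up in the `nmap` scans later in this post):
```
; both names point at the same Oracle Cloud instance
matrix.bowdre.net.    300    IN    A    150.136.6.180
bowdre.net.           300    IN    A    150.136.6.180
```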

An alternative to this `.well-known` delegation would be to use [`SRV` DNS record delegation](https://github.com/matrix-org/synapse/blob/master/docs/delegate.md#srv-dns-record-delegation) to accomplish the same thing. I'd create an `SRV` record for `_matrix._tcp.bowdre.net` with the data `0 10 8448 matrix.bowdre.net` (priority=`0`, weight=`10`, port=`8448`, target=`matrix.bowdre.net`) which would again let other Matrix servers know where to send the federation traffic for my server. This approach has an advantage of not needing to make any changes on the `bowdre.net` web server, but it would require the delegated `matrix.bowdre.net` server to *also* [return a valid certificate for `bowdre.net`](https://matrix.org/docs/spec/server_server/latest#:~:text=If%20the%20/.well-known%20request%20resulted,present%20a%20valid%20certificate%20for%20%3Chostname%3E.). Trying to get a Let's Encrypt certificate for a server name that doesn't resolve authoritatively in DNS sounds more complicated than I want to get into with this project, so I'll move forward with my plan to use the `.well-known` delegation instead.
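
(For reference, that unused `SRV` record would look something like this in zone-file form:)
```
_matrix._tcp.bowdre.net.    300    IN    SRV    0 10 8448 matrix.bowdre.net.
```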

But first, I need to make sure that the traffic reaches the server to begin with.

### Firewall configuration
Synapse listens on port `8008` for connections from messaging clients, and typically uses port `8448` for federation traffic from other Matrix servers. Rather than expose those ports directly, I'm going to put Synapse behind a reverse proxy on HTTPS port `443`. I'll also need to allow inbound traffic on HTTP port `80` for ACME certificate challenges. I've got two firewalls to contend with: the Oracle Cloud one which blocks traffic from getting into my virtual cloud network, and the host firewall running inside the VM.

I'll tackle the cloud firewall first. From the page showing my instance details, I click on the subnet listed under the *Primary VNIC* heading:
![Click on subnet](/assets/images/posts-2020/lBjINolYq.png)

I then look in the *Security Lists* section and click on the Default Security List:
![Click on default security list](/assets/images/posts-2020/nnQ7aQrpm.png)

The *Ingress Rules* section lists the existing inbound firewall exceptions, which by default is basically just SSH. I click on **Add Ingress Rules** to create a new one.
![Ingress rules](/assets/images/posts-2020/dMPHvLHkH.png)

I want this to apply to traffic from any source IP so I enter the CIDR `0.0.0.0/0`, and I enter the *Destination Port Range* as `80,443`. I also add a brief description and click **Add Ingress Rules**.
![Adding an ingress rule](/assets/images/posts-2020/2fbKJc5Y6.png)

Success! My new ingress rules appear at the bottom of the list.
![New rules added](/assets/images/posts-2020/s5Y0rycng.png)

That gets traffic from the internet to my instance, but the OS is still going to drop the traffic at its own firewall. I'll need to work with `iptables` to change that. (You typically use `ufw` to manage firewalls more easily on Ubuntu, but it isn't included on this minimal image and seemed to butt heads with `iptables` when I tried adding it. I eventually decided it was better to just interact with `iptables` directly.) I'll start by listing the existing rules on the `INPUT` chain:
```
$ sudo iptables -L INPUT --line-numbers
Chain INPUT (policy ACCEPT)
num  target  prot opt source    destination
1    ACCEPT  all  --  anywhere  anywhere     state RELATED,ESTABLISHED
2    ACCEPT  icmp --  anywhere  anywhere
3    ACCEPT  all  --  anywhere  anywhere
4    ACCEPT  udp  --  anywhere  anywhere     udp spt:ntp
5    ACCEPT  tcp  --  anywhere  anywhere     state NEW tcp dpt:ssh
6    REJECT  all  --  anywhere  anywhere     reject-with icmp-host-prohibited
```

Note the `REJECT all` statement at line `6`. I'll need to insert my new `ACCEPT` rules for ports `80` and `443` above that catch-all rejection:
```
sudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 80 -j ACCEPT
sudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 443 -j ACCEPT
```

And then I'll confirm that the order is correct:
```
$ sudo iptables -L INPUT --line-numbers
Chain INPUT (policy ACCEPT)
num  target  prot opt source    destination
1    ACCEPT  all  --  anywhere  anywhere     state RELATED,ESTABLISHED
2    ACCEPT  icmp --  anywhere  anywhere
3    ACCEPT  all  --  anywhere  anywhere
4    ACCEPT  udp  --  anywhere  anywhere     udp spt:ntp
5    ACCEPT  tcp  --  anywhere  anywhere     state NEW tcp dpt:ssh
6    ACCEPT  tcp  --  anywhere  anywhere     state NEW tcp dpt:https
7    ACCEPT  tcp  --  anywhere  anywhere     state NEW tcp dpt:http
8    REJECT  all  --  anywhere  anywhere     reject-with icmp-host-prohibited
```

I can use `nmap` running from my local Linux environment to confirm that I can now reach those ports on the VM. (They're still "closed" since nothing is listening on the ports yet, but the connections aren't being rejected.)
```
$ nmap -Pn matrix.bowdre.net
Starting Nmap 7.70 ( https://nmap.org ) at 2021-06-27 12:49 CDT
Nmap scan report for matrix.bowdre.net (150.136.6.180)
Host is up (0.086s latency).
Other addresses for matrix.bowdre.net (not scanned): 2607:7700:0:1d:0:1:9688:6b4
Not shown: 997 filtered ports
PORT     STATE   SERVICE
22/tcp   open    ssh
80/tcp   closed  http
443/tcp  closed  https

Nmap done: 1 IP address (1 host up) scanned in 8.44 seconds
```

Cool! Before I move on, I'll be sure to make the rules persistent so they'll be re-applied whenever `iptables` starts up:
```
$ sudo netfilter-persistent save
run-parts: executing /usr/share/netfilter-persistent/plugins.d/15-ip4tables save
run-parts: executing /usr/share/netfilter-persistent/plugins.d/25-ip6tables save
```

### Reverse proxy setup
I had initially planned on using `certbot` to generate Let's Encrypt certificates, and then reference the certs as needed from an `nginx` or Apache reverse proxy configuration. While researching how the [proxy would need to be configured to front Synapse](https://github.com/matrix-org/synapse/blob/master/docs/reverse_proxy.md), I found this sample `nginx` configuration:
```conf
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    # For the federation port
    listen 8448 ssl http2 default_server;
    listen [::]:8448 ssl http2 default_server;

    server_name matrix.example.com;

    location ~* ^(\/_matrix|\/_synapse\/client) {
        proxy_pass http://localhost:8008;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;

        # Nginx by default only allows file uploads up to 1M in size
        # Increase client_max_body_size to match max_upload_size defined in homeserver.yaml
        client_max_body_size 50M;
    }
}
```

And this sample Apache one:
```conf
<VirtualHost *:443>
    SSLEngine on
    ServerName matrix.example.com

    RequestHeader set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME}
    AllowEncodedSlashes NoDecode
    ProxyPreserveHost on
    ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
    ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
    ProxyPass /_synapse/client http://127.0.0.1:8008/_synapse/client nocanon
    ProxyPassReverse /_synapse/client http://127.0.0.1:8008/_synapse/client
</VirtualHost>

<VirtualHost *:8448>
    SSLEngine on
    ServerName example.com

    RequestHeader set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME}
    AllowEncodedSlashes NoDecode
    ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
    ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
</VirtualHost>
```

I also found this sample config for another web server called [Caddy](https://caddyserver.com):
```
matrix.example.com {
    reverse_proxy /_matrix/* http://localhost:8008
    reverse_proxy /_synapse/client/* http://localhost:8008
}

example.com:8448 {
    reverse_proxy http://localhost:8008
}
```

One of these looks much simpler than the other two. I'd never heard of Caddy so I did some quick digging, and I found that it would actually [handle the certificates entirely automatically](https://caddyserver.com/docs/automatic-https) - in addition to having a much easier config. [Installing Caddy](https://caddyserver.com/docs/install#debian-ubuntu-raspbian) wasn't too bad, either:

```sh
sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo apt-key add -
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update
sudo apt install caddy
```

Then I just need to put my configuration into the default `Caddyfile`, including the required `.well-known` delegation piece from earlier.
```
$ sudo vi /etc/caddy/Caddyfile
matrix.bowdre.net {
    reverse_proxy /_matrix/* http://localhost:8008
    reverse_proxy /_synapse/client/* http://localhost:8008
}

bowdre.net {
    route {
        respond /.well-known/matrix/server `{"m.server": "matrix.bowdre.net:443"}`
        redir https://virtuallypotato.com
    }
}
```

There's a lot happening in that 11-line `Caddyfile`, but it's not complicated by any means. The `matrix.bowdre.net` section is pretty much exactly yanked from the sample config, and it's going to pass any requests that start like `matrix.bowdre.net/_matrix/` or `matrix.bowdre.net/_synapse/client/` through to the Synapse server listening locally on port `8008`. Caddy will automatically request and apply a Let's Encrypt or ZeroSSL cert for any server names spelled out in the config - very slick!

I set up the `bowdre.net` section to return the appropriate JSON string to tell other Matrix servers to connect to `matrix.bowdre.net` on port `443` (so that I don't have to open port `8448` through the firewalls), and to redirect all other traffic to one of my favorite technical blogs (maybe you've heard of it?). I had to wrap the `respond` and `redir` directives in a [`route { }` block](https://caddyserver.com/docs/caddyfile/directives/route) because otherwise Caddy's [implicit precedence](https://caddyserver.com/docs/caddyfile/directives#directive-order) would execute the redirect for *all* traffic and never hand out the necessary `.well-known` data.

(I wouldn't need that section at all if I were using a separate web server for `bowdre.net`; instead, I'd basically just add that `respond /.well-known/matrix/server` line to that other server's config.)

Now to enable the `caddy` service, start it, and restart it so that it loads the new config:
```
sudo systemctl enable caddy
sudo systemctl start caddy
sudo systemctl restart caddy
```

If I repeat my `nmap` scan from earlier, I'll see that the HTTP and HTTPS ports are now open. The server still isn't actually serving anything on those ports yet, but at least it's listening.
```
$ nmap -Pn matrix.bowdre.net
Starting Nmap 7.70 ( https://nmap.org ) at 2021-06-27 13:44 CDT
Nmap scan report for matrix.bowdre.net (150.136.6.180)
Host is up (0.034s latency).
Not shown: 997 filtered ports
PORT     STATE  SERVICE
22/tcp   open   ssh
80/tcp   open   http
443/tcp  open   https

Nmap done: 1 IP address (1 host up) scanned in 5.29 seconds
```

Browsing to `https://matrix.bowdre.net` shows a blank page - but a valid and trusted certificate that I did absolutely nothing to configure!
![Valid cert!](/assets/images/posts-2020/GHVqVOTAE.png)

The `.well-known` URL also returns the expected JSON:
![.well-known](/assets/images/posts-2020/6IRPHhr6u.png)

And trying to hit anything else at `https://bowdre.net` brings me right back here.

And again, the config to do all this (including getting valid certs for two server names!) is just 11 lines long. Caddy is seriously and magically cool.

Okay, let's actually serve something up now.

### Synapse installation
#### Docker setup
Before I can get on with [deploying Synapse in Docker](https://hub.docker.com/r/matrixdotorg/synapse), I first need to [install Docker](https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository) on the system:

```sh
sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg \
    lsb-release

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

echo \
    "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt update

sudo apt install docker-ce docker-ce-cli containerd.io
```

I'll also [install Docker Compose](https://docs.docker.com/compose/install/#install-compose):
```sh
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

sudo chmod +x /usr/local/bin/docker-compose
```

And I'll add my `ubuntu` user to the `docker` group so that I won't have to run every docker command with `sudo`:
```
sudo usermod -G docker -a ubuntu
```

I'll log out and back in so that the membership change takes effect, and then test both `docker` and `docker-compose` to make sure they're working:
```
$ docker --version
Docker version 20.10.7, build f0df350

$ docker-compose --version
docker-compose version 1.29.2, build 5becea4c
```

#### Synapse setup
Now I'll make a place for the Synapse installation to live, including a `data` folder that will be mounted into the container:
```
sudo mkdir -p /opt/matrix/synapse/data
cd /opt/matrix/synapse
```

And then I'll create the compose file (`docker-compose.yml`) to define the deployment:
```yaml
services:
  synapse:
    container_name: "synapse"
    image: "matrixdotorg/synapse"
    restart: "unless-stopped"
    ports:
      - "127.0.0.1:8008:8008"
    volumes:
      - "./data/:/data/"
```

Before I can fire this up, I'll need to generate an initial configuration as [described in the documentation](https://hub.docker.com/r/matrixdotorg/synapse). Here I'll specify the server name that I'd like other Matrix servers to know mine by (`bowdre.net`):

```sh
$ docker run -it --rm \
    -v "/opt/matrix/synapse/data:/data" \
    -e SYNAPSE_SERVER_NAME=bowdre.net \
    -e SYNAPSE_REPORT_STATS=yes \
    matrixdotorg/synapse generate

Unable to find image 'matrixdotorg/synapse:latest' locally
latest: Pulling from matrixdotorg/synapse
69692152171a: Pull complete
66a3c154490a: Pull complete
3e35bdfb65b2: Pull complete
f2c4c4355073: Pull complete
65d67526c337: Pull complete
5186d323ad7f: Pull complete
436afe4e6bba: Pull complete
c099b298f773: Pull complete
50b871f28549: Pull complete
Digest: sha256:5ccac6349f639367fcf79490ed5c2377f56039ceb622641d196574278ed99b74
Status: Downloaded newer image for matrixdotorg/synapse:latest
Creating log config /data/bowdre.net.log.config
Generating config file /data/homeserver.yaml
Generating signing key file /data/bowdre.net.signing.key
A config file has been generated in '/data/homeserver.yaml' for server name 'bowdre.net'. Please review this file and customise it to your needs.
```

As instructed, I'll use `sudo vi data/homeserver.yaml` to review/modify the generated config. I'll leave
```yaml
server_name: "bowdre.net"
```
since that's how I'd like other servers to know my server, and I'll uncomment/edit in:
```yaml
public_baseurl: https://matrix.bowdre.net
```
since that's what users (namely, me) will put into their Matrix clients to connect.

And for now, I'll temporarily set:
```yaml
enable_registration: true
```
so that I can create a user account without fumbling with the CLI. I'll be sure to set `enable_registration: false` again once I've registered the account(s) I need to have on my server. The instance has limited resources so it's probably not a great idea to let just anybody create an account on it.
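
(For the curious, the CLI route I'm avoiding would look something like this - a sketch using the `register_new_matrix_user` tool that ships with Synapse, run inside the container once it's up:)
```sh
# prompts for a username/password; add -a to make the account an admin
docker exec -it synapse register_new_matrix_user http://localhost:8008 -c /data/homeserver.yaml
```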

There are a bunch of other useful configurations that can be made here, but these will do to get things going for now.

Time to start it up:
```
$ docker-compose up -d
Creating network "synapse_default" with the default driver
Creating synapse ... done
```

And use `docker ps` to confirm that it's running:
```
$ docker ps
CONTAINER ID  IMAGE                 COMMAND      CREATED         STATUS                   PORTS                                         NAMES
573612ec5735  matrixdotorg/synapse  "/start.py"  25 seconds ago  Up 23 seconds (healthy)  8009/tcp, 127.0.0.1:8008->8008/tcp, 8448/tcp  synapse
```

### Testing
And I can point my browser to `https://matrix.bowdre.net/_matrix/static/` and see the Matrix landing page:
![Synapse is running!](/assets/images/posts-2020/-9apQIUci.png)

Before I start trying to connect with a client, I'm going to plug the server address in to the [Matrix Federation Tester](https://federationtester.matrix.org/) to make sure that other servers will be able to talk to it without any problems:
![Good to go](/assets/images/posts-2020/xqOt3SydX.png)

And I can view the JSON report at the bottom of the page to confirm that it's correctly pulling my `.well-known` delegation:
```json
{
  "WellKnownResult": {
    "m.server": "matrix.bowdre.net:443",
    "CacheExpiresAt": 0
  },
```

Now I can fire up my [Matrix client of choice](https://element.io/get-started), specify my homeserver using its full FQDN, and [register](https://app.element.io/#/register) a new user account:
![image.png](/assets/images/posts-2020/2xe34VJym.png)

(Once my account gets created, I go back to edit `/opt/matrix/synapse/data/homeserver.yaml` again and set `enable_registration: false`, then fire a `docker-compose restart` command to restart the Synapse container.)
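
(A quick sketch of those two steps:)
```sh
sudo vi /opt/matrix/synapse/data/homeserver.yaml  # set enable_registration: false
docker-compose restart
```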

### Wrap-up
And that's it! I now have my own Matrix server, and I can use my new account for secure chats with Matrix users on any other federated homeserver. It works really well for directly messaging other individuals, and also for participating in small group chats. The server *does* kind of fall on its face if I try to join a massively-populated (like 500+ users) room, but I'm not going to complain about that too much on a free-tier server.

All in, I'm pretty pleased with how this little project turned out, and I learned quite a bit along the way. I'm tremendously impressed by Caddy's power and simplicity, and I look forward to using it more in future projects.

If you're on Matrix, hit me up: **[@john:bowdre.net](https://matrix.to/#/@john:bowdre.net)**

### Update: Updating
After a while, it's probably a good idea to update both the Ubuntu server and the Synapse container running on it. Updating the server itself is as easy as:
```sh
sudo apt update
sudo apt upgrade
# And, if needed:
sudo reboot
```

Here's what I do to update the container:
```sh
# Move to the working directory
cd /opt/matrix/synapse
# Pull a new version of the synapse image
docker-compose pull
# Stop the container
docker-compose down
# Start it back up without the old version
docker-compose up -d --remove-orphans
# Periodically remove the old docker images
docker image prune
```

@ -0,0 +1,64 @@
---
categories:
- Scripts
date: "2021-07-19T16:03:30Z"
tags:
- linux
- shell
- regex
- jekyll
- meta
title: Script to update image embed links in Markdown files
toc: false
---

I'm preparing to migrate this blog thingy from Hashnode (which has been great!) to a [GitHub Pages site with Jekyll](https://docs.github.com/en/pages/setting-up-a-github-pages-site-with-jekyll/creating-a-github-pages-site-with-jekyll) so that I can write posts locally and then just do a `git push` to publish them - and get some more practice using `git` in the process. Of course, I've written some admittedly-great content here and I don't want to abandon that.

Hashnode helpfully automatically backs up my posts in Markdown format to a private GitHub repo so it was easy to clone those into a local working directory, but all the embedded images were still hosted on Hashnode:

```markdown

![Clever image title](https://cdn.hashnode.com/res/hashnode/image/upload/v1600098180227/lhTnVwCO3.png)

```

I wanted to download those images to `./assets/images/posts-2020/` within my local Jekyll working directory, and then update the `*.md` files to reflect the correct local path... without doing it all manually. It took a bit of trial and error to get the regex working just right (and the result is neither pretty nor elegant), but here's what I came up with:

```bash
#!/bin/bash
# Hasty script to process a blog post markdown file, capture the URL for embedded images,
# download the image locally, and modify the markdown file with the relative image path.
#
# Run it from the top level of a Jekyll blog directory for best results, and pass the
# filename of the blog post you'd like to process.
#
# Ex: ./imageMigration.sh 2021-07-19-Bulk-migrating-images-in-a-blog-post.md

postfile="_posts/$1"

imageUrls=($(grep -o -P '(?<=!\[)(?:[^\]]+)\]\(([^\)]+)' $postfile | grep -o -P 'http.*'))
imageNames=($(for name in ${imageUrls[@]}; do echo $name | grep -o -P '[^\/]+\.[[:alnum:]]+$'; done))
imagePaths=($(for name in ${imageNames[@]}; do echo "assets/images/posts-2020/${name}"; done))
echo -e "\nProcessing $postfile...\n"
for index in ${!imageUrls[@]}; do
    echo -e "${imageUrls[index]}\n => ${imagePaths[index]}"
    curl ${imageUrls[index]} --output ${imagePaths[index]}
    sed -i "s|${imageUrls[index]}|${imagePaths[index]}|" $postfile
done
```

I could then run that against all of the Markdown posts under `./_posts/` with:

```bash
for post in $(ls _posts/); do ~/scripts/imageMigration.sh $post; done
```

And the image embeds in the local copy of my posts now all look like this:

```markdown

![Clever image title](/assets/images/posts-2020/lhTnVwCO3.png)

```

Brilliant!

@ -0,0 +1,75 @@
---
categories: null
date: "2021-07-20T22:20:00Z"
header:
  teaser: assets/images/posts-2021/07/20210720-jekyll.png
tags:
- linux
- meta
- chromeos
- crostini
- jekyll
title: Virtually Potato migrated to GitHub Pages!
---

After a bit less than a year of hosting my little technical blog with [Hashnode](https://hashnode.com), I spent a few days [migrating the content](script-to-update-image-embed-links-in-markdown-files) over to a new format hosted with [GitHub Pages](https://pages.github.com/).

![Party!](/assets/images/posts-2021/07/20210720-party.gif)

### So long, Hashnode
Hashnode served me well for the most part, but it was never really a great fit for me. Hashnode's focus is on developer content, and I'm not really a developer; I'm a sysadmin who occasionally develops solutions to solve my needs, but the code is never the end goal for me. As a result, I didn't spend much time in the (large and extremely active) community associated with Hashnode. It's a perfectly adequate blogging platform apart from the community, but it's really built to prop up that community aspect and I found that to be a bit limiting - particularly once Hashnode stopped letting you create tags to be used within your blog and instead only allowed you to choose from [the tags](https://hashnode.com/tags) already popular in the community. There are hundreds of tags for different coding languages, but not any that would cover the infrastructure virtualization or other technical projects that I tend to write about.

### Hello, GitHub Pages
I knew about GitHub Pages, but had never seriously looked into it. Once I did, though, it seemed like a much better fit for v{:potato:} - particularly when combined with [Jekyll](https://jekyllrb.com/) to take in Markdown posts and render them into static HTML. This approach would provide me more flexibility (and the ability to use whatever [tags](tags) I want!), while still letting me easily compose my posts with Markdown. And I can now do my composition locally (and even offline!), and just do a `git push` to publish. Very cool!

#### Getting started
I found that the quite-popular [Minimal Mistakes](https://mademistakes.com/work/minimal-mistakes-jekyll-theme/) theme for Jekyll offers a [remote theme starter](https://github.com/mmistakes/mm-github-pages-starter/generate) that can be used to quickly get things going. I just used that generator to spawn a new repository in my GitHub account ([`jbowdre.github.io`](https://github.com/jbowdre/jbowdre.github.io)). And that was it - I had a starter GitHub Pages-hosted Jekyll-powered static site with an elegant theme applied. I could even make changes to the various configuration and sample post files, point any browser to `https://jbowdre.github.io`, and see the results almost immediately. I got to work digging through the lengthy [configuration documentation](https://mmistakes.github.io/minimal-mistakes/docs/configuration/) to start making the site my own, like [connecting with my custom domain](https://docs.github.com/en/pages/configuring-a-custom-domain-for-your-github-pages-site/managing-a-custom-domain-for-your-github-pages-site) and enabling [GitHub Issue-based comments](https://github.com/apps/utterances).

#### Working locally
A quick `git clone` operation was sufficient to create a local copy of my new site in my Lenovo Chromebook Duet's [Linux environment](setting-up-linux-on-a-new-lenovo-chromebook-duet-bonus-arm64-complications). That lets me easily create and edit Markdown posts or configuration files with VS Code, commit them to the local copy of the repo, and then push them back to GitHub when I'm ready to publish the changes.
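
(That one-liner, for completeness - assuming the repository created by the starter above:)
```shell
git clone https://github.com/jbowdre/jbowdre.github.io.git
```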

In order to view the local changes, I needed to install Jekyll locally as well. I started by installing Ruby and other prerequisites:
```shell
sudo apt-get install ruby-full build-essential zlib1g-dev
```

I added the following to my `~/.zshrc` file so that the gems would be installed under my home directory rather than somewhere more privileged:
```shell
export GEM_HOME="$HOME/gems"
export PATH="$HOME/gems/bin:$PATH"
```

And then ran `source ~/.zshrc` so the change would take immediate effect.

I could then install Jekyll:
```shell
gem install jekyll bundler
```

I then `cd`ed to the local repo and ran `bundle install` to also load up the components specified in the repo's `Gemfile`.
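
(A sketch of that step, assuming the clone landed in `./jbowdre.github.io`:)
```shell
cd jbowdre.github.io
bundle install
```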

And, finally, I can run this to start up the local Jekyll server instance:
```shell
❯ bundle exec jekyll serve -l --drafts
Configuration file: /home/jbowdre/projects/jbowdre.github.io/_config.yml
            Source: /home/jbowdre/projects/jbowdre.github.io
       Destination: /home/jbowdre/projects/jbowdre.github.io/_site
 Incremental build: enabled
      Generating...
      Remote Theme: Using theme mmistakes/minimal-mistakes
       Jekyll Feed: Generating feed for posts
   GitHub Metadata: No GitHub API authentication could be found. Some fields may be missing or have incorrect data.
                    done in 30.978 seconds.
 Auto-regeneration: enabled for '/home/jbowdre/projects/jbowdre.github.io'
LiveReload address: http://0.0.0.0:35729
    Server address: http://0.0.0.0:4000
  Server running... press ctrl-c to stop.
```

And there it is!
![Jekyll running locally on my Chromebook](/assets/images/posts-2021/07/20210720-jekyll.png)

### `git push` time
Alright, that's enough rambling for now. I'm very happy with this new setup, particularly with the automatically-generated Table of Contents to help folks navigate some of my longer posts. (I can't believe I was having to piece those together manually in this blog's previous iteration!)

I'll continue to make some additional tweaks in the coming weeks but for now I'll `git push` this post and get back to documenting my never-ending [vRA project](categories#vra).

@ -0,0 +1,231 @@
---
categories:
- vRA8
date: "2021-07-21T00:00:00Z"
header:
  teaser: assets/images/posts-2021/07/20210721-successful-ad_machine.png
tags:
- vmware
- vra
- abx
- activedirectory
title: Joining VMs to Active Directory in site-specific OUs with vRA8
---
Connecting a deployed Windows VM to an Active Directory domain is pretty easy; just apply an appropriately-configured [customization spec](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vm_admin.doc/GUID-CAEB6A70-D1CF-446E-BC64-EC42CDB47117.html) and vCenter will take care of it for you. Of course, you'll likely then need to move the newly-created computer object to the correct Organizational Unit so that it gets all the right policies and such.

Fortunately, vRA 8 supports adding an Active Directory integration to handle staging computer objects in a designated OU. And vRA 8.3 even [introduced the ability](https://blogs.vmware.com/management/2021/02/whats-new-with-vrealize-automation-8-3-technical-overview.html#:~:text=New%20Active%20Directory%20Cloud%20Template%20Properties) to let blueprints override the relative DN path. That will be helpful in my case since I'll want the servers to be placed in different OUs depending on which site they get deployed to:

| **Site** | **OU** |
| --- | --- |
| `BOW` | `lab.bowdre.net/LAB/BOW/Computers/Servers` |
| `DRE` | `lab.bowdre.net/LAB/DRE/Computers/Servers` |

I didn't find a lot of documentation on how to make this work, though, so here's how I've implemented it in my lab (now [running vRA 8.4.2](https://twitter.com/johndotbowdre/status/1416037317052178436)).

### Adding the AD integration
First things first: connecting vRA to AD. I do this by opening the Cloud Assembly interface, navigating to **Infrastructure > Connections > Integrations**, and clicking the **Add Integration** button. I'm then prompted to choose the integration type so I select the **Active Directory** one, and then I fill in the required information: a name (`Lab AD` seems appropriate), my domain controller as the LDAP host (`ldap://win01.lab.bowdre.net:389`), credentials for an account with sufficient privileges to create and delete computer objects (`lab\vra`), and finally the base DN to be used for the LDAP connection (`DC=lab,DC=bowdre,DC=net`).

![Creating the new AD integration](/assets/images/posts-2021/07/20210721-adding-ad-integration.png)

Clicking the **Validate** button quickly confirms that I've entered the information correctly, and then I can click **Add** to save my work.

I'll then need to associate the integration with a project by opening the new integration, navigating to the **Projects** tab, and clicking **Add Project**. Now I select the project name from the dropdown, enter a valid relative OU (`OU=LAB`), and enable the options to let me override the relative OU and optionally skip AD actions from the cloud template.

![Project options for the AD integration](/assets/images/posts-2021/07/20210721-adding-project-to-integration.png)

### Customization specs
As mentioned above, I'll leverage the customization specs in vCenter to handle the actual joining of a computer to the domain. I maintain two specs for Windows deployments (one to join the domain and one to stay on the workgroup), and I can let the vRA cloud template decide which should be applied to a given deployment.

First, the workgroup spec, appropriately called `vra-win-workgroup`:
![Workgroup spec](/assets/images/posts-2020/AzAna5Dda.png)

It's about as basic as can be, including using DHCP for the network configuration (which doesn't really matter since the VM will eventually get a [static IP assigned from {php}IPAM](integrating-phpipam-with-vrealize-automation-8)).

`vra-win-domain` is basically the same, with one difference:
![Domain spec](/assets/images/posts-2020/0ZYcORuiU.png)

Now to reference these specs from a cloud template...

### Cloud template
I want to make sure that users requesting a deployment are able to pick whether or not a system should be joined to the domain, so I'm going to add that as an input option on the template:

```yaml
inputs:
  [...]
  adJoin:
    title: Join to AD domain
    type: boolean
    default: true
  [...]
```

This new `adJoin` input is a boolean so it will appear on the request form as a checkbox, and it will default to `true`; we'll assume that any Windows deployment should be automatically joined to AD unless this option gets unchecked.

In the `resources` section of the template, I'll set a new property called `ignoreActiveDirectory` to be the inverse of the `adJoin` input; that will tell the AD integration not to do anything if the box to join the VM to the domain is unchecked. I'll also use `activeDirectory: relativeDN` to insert the appropriate site code into the DN where the computer object will be created. And, finally, I'll reference the `customizationSpec` and use [cloud template conditional syntax](https://docs.vmware.com/en/vRealize-Automation/8.4/Using-and-Managing-Cloud-Assembly/GUID-12F0BC64-6391-4E5F-AA48-C5959024F3EB.html#conditions-4) to apply the correct spec based on whether it's a domain or workgroup deployment. (These conditionals take the pattern `'${conditional-expression ? true-value : false-value}'`.)

```yaml
resources:
  Cloud_vSphere_Machine_1:
    type: Cloud.vSphere.Machine
    properties:
      [...]
      ignoreActiveDirectory: '${!input.adJoin}'
      activeDirectory:
        relativeDN: '${"OU=Servers,OU=Computers,OU=" + input.site + ",OU=LAB"}'
      customizationSpec: '${input.adJoin ? "vra-win-domain" : "vra-win-workgroup"}'
      [...]
```

Here's the current cloud template in its entirety:

```yaml
formatVersion: 1
inputs:
  site:
    type: string
    title: Site
    enum:
      - BOW
      - DRE
  image:
    type: string
    title: Operating System
    oneOf:
      - title: Windows Server 2019
        const: ws2019
    default: ws2019
  size:
    title: Resource Size
    type: string
    oneOf:
      - title: 'Micro [1vCPU|1GB]'
        const: micro
      - title: 'Tiny [1vCPU|2GB]'
        const: tiny
      - title: 'Small [2vCPU|2GB]'
        const: small
    default: small
  network:
    title: Network
    type: string
  adJoin:
    title: Join to AD domain
    type: boolean
    default: true
  environment:
    type: string
    title: Environment
    oneOf:
      - title: Development
        const: D
      - title: Testing
        const: T
      - title: Production
        const: P
    default: D
  function:
    type: string
    title: Function Code
    oneOf:
      - title: Application (APP)
        const: APP
      - title: Desktop (DSK)
        const: DSK
      - title: Network (NET)
        const: NET
      - title: Service (SVS)
        const: SVS
      - title: Testing (TST)
        const: TST
    default: TST
  app:
    type: string
    title: Application Code
    minLength: 3
    maxLength: 3
    default: xxx
  description:
    type: string
    title: Description
    description: Server function/purpose
    default: Testing and evaluation
  poc_name:
    type: string
    title: Point of Contact Name
    default: Jack Shephard
  poc_email:
    type: string
    title: Point of Contact Email
    default: jack.shephard@virtuallypotato.com
    pattern: '^[^\s@]+@[^\s@]+\.[^\s@]+$'
  ticket:
    type: string
    title: Ticket/Request Number
    default: 4815162342
resources:
  Cloud_vSphere_Machine_1:
    type: Cloud.vSphere.Machine
    properties:
      image: '${input.image}'
      flavor: '${input.size}'
      site: '${input.site}'
      environment: '${input.environment}'
      function: '${input.function}'
      app: '${input.app}'
      ignoreActiveDirectory: '${!input.adJoin}'
      activeDirectory:
        relativeDN: '${"OU=Servers,OU=Computers,OU=" + input.site + ",OU=LAB"}'
      customizationSpec: '${input.adJoin ? "vra-win-domain" : "vra-win-workgroup"}'
      dnsDomain: lab.bowdre.net
      poc: '${input.poc_name + " (" + input.poc_email + ")"}'
      ticket: '${input.ticket}'
      description: '${input.description}'
      networks:
        - network: '${resource.Cloud_vSphere_Network_1.id}'
          assignment: static
      constraints:
        - tag: 'comp:${to_lower(input.site)}'
  Cloud_vSphere_Network_1:
    type: Cloud.vSphere.Network
    properties:
      networkType: existing
      constraints:
        - tag: 'net:${input.network}'
```

The last thing I need to do before leaving the Cloud Assembly interface is smash that **Version** button at the bottom of the cloud template editor so that the changes will be visible to Service Broker:
![New version](/assets/images/posts-2020/gOTzVawJE.png)

### Service Broker custom form updates
... and the *first* thing to do after entering the Service Broker UI is to navigate to **Content Sources**, click on my Lab content source, and then click **Save & Import** to bring in the new version. I can then go to **Content**, click the little three-dot menu icon next to my `WindowsDemo` cloud template, and pick the **Customize form** option.

This bit will be pretty quick. I just need to look for the new `Join to AD domain` element on the left:
![New element on left](/assets/images/posts-2020/Zz0D9wjYr.png)

And drag-and-drop it onto the canvas in the middle. I'll stick it directly after the `Network` field:
![New element on the canvas](/assets/images/posts-2020/HHiShFlnT.png)

I don't need to do anything else here since I'm not trying to do any fancy logic or anything, so I'll just hit **Save** and move on to...

### Testing
Now to submit the request through Service Broker to see if this actually works:
![Submitting the request](/assets/images/posts-2021/07/20210721-test-deploy-request.png)

After a few minutes, I can go into Cloud Assembly and navigate to **Extensibility > Activity > Actions Runs** and look at the **Integration Runs** to see if the `ad_machine` action has completed yet.
![Successful ad_machine action](/assets/images/posts-2021/07/20210721-successful-ad_machine.png)

Looking good! And once the deployment completes, I can look at the VM in vCenter to see that it has registered a fully-qualified DNS name since it was automatically joined to the domain:
![Domain-joined VM](/assets/images/posts-2021/07/20210721-vm-joined.png)

I can also repeat the test for a VM deployed to the `DRE` site just to confirm that it gets correctly placed in that site's OU:
![Another domain-joined VM](/assets/images/posts-2021/07/20210721-vm-joined-2.png)

And I'll fire off another deployment with the `adJoin` box *unchecked* to test that I can also skip the AD configuration completely:
![VM not joined to the domain](/assets/images/posts-2021/07/20210721-vm-not-joined.png)

### Conclusion
Confession time: I had actually started writing this post weeks ago. At that point, my efforts to bend the built-in AD integration to my will had been fairly unsuccessful, so I was instead working on a custom vRO workflow to accomplish the same basic thing. I circled back to try the AD integration again after upgrading the vRA environment to the latest 8.4.2 release, and found that it actually works quite well now. So I happily scrapped my ~50 lines of messy vRO JavaScript in favor of *just three lines* of YAML in the cloud template.

I love it when things work out!

@ -0,0 +1,287 @@
---
categories:
- Tips
date: "2021-07-24T16:46:00Z"
header:
  teaser: assets/images/posts-2021/07/20210724-series-navigation.png
tags:
- meta
- jekyll
title: Recreating Hashnode Series (Categories) in Jekyll on GitHub Pages
---

I recently [migrated this site](virtually-potato-migrated-to-github-pages) from Hashnode to GitHub Pages, and I'm really getting into the flexibility and control that managing the content through Jekyll provides. So, naturally, after finalizing the move I got to work recreating Hashnode's "Series" feature, which lets you group posts together and highlight them as a collection. One of the things I liked about the Series setup was that I could control the order of the collected posts: my posts about [building out the vRA environment in my homelab](series/vra8) are probably best consumed in chronological order (oldest to newest) since the newer posts build upon the groundwork laid by the older ones, while posts about my [other one-off projects](series/projects) could really be enjoyed in any order.

I quickly realized that if I were hosting this pretty much anywhere *other* than GitHub Pages I could simply leverage the [`jekyll-archives`](https://github.com/jekyll/jekyll-archives) plugin to manage this for me - but, alas, that's not one of the [plugins supported by the platform](https://pages.github.com/versions/). I needed to come up with my own solution, and being still quite new to Jekyll (and this whole website design thing in general) it took me a bit of fumbling to get it right.

### Reviewing the theme-provided option
The Jekyll theme I'm using ([Minimal Mistakes](https://github.com/mmistakes/minimal-mistakes)) comes with [built-in support](https://mmistakes.github.io/mm-github-pages-starter/categories/) for a [category archive page](series), which (like the [tags page](tags)) displays all the categorized posts on a single page. Links at the top will let you jump to an appropriate anchor to start viewing the selected category, but it's not really an elegant way to display a single category.
![Posts by category](/assets/images/posts-2021/07/20210724-posts-by-category.png)

It's a start, though, so I took a few minutes to check out how it's being generated. The category archive page lives at [`_pages/category-archive.md`](https://raw.githubusercontent.com/mmistakes/mm-github-pages-starter/master/_pages/category-archive.md):
```markdown
---
title: "Posts by Category"
layout: categories
permalink: /categories/
author_profile: true
---
```

The `title` indicates what's going to be written in bold text at the top of the page, the `permalink` says that it will be accessible at `http://localhost/categories/`, and the nice little `author_profile` sidebar will appear on the left.

This page then calls the `categories` layout, which is defined in [`_layouts/categories.html`](https://github.com/mmistakes/minimal-mistakes/blob/master/_layouts/categories.html):
|
||||||
|
```liquid
|
||||||
|
{% raw %}---
|
||||||
|
layout: archive
|
||||||
|
---
|
||||||
|
|
||||||
|
{{ content }}
|
||||||
|
|
||||||
|
{% assign categories_max = 0 %}
|
||||||
|
{% for category in site.categories %}
|
||||||
|
{% if category[1].size > categories_max %}
|
||||||
|
{% assign categories_max = category[1].size %}
|
||||||
|
{% endif %}
|
||||||
|
{% endfor %}
|
||||||
|
|
||||||
|
<ul class="taxonomy__index">
|
||||||
|
{% for i in (1..categories_max) reversed %}
|
||||||
|
{% for category in site.categories %}
|
||||||
|
{% if category[1].size == i %}
|
||||||
|
<li>
|
||||||
|
<a href="#{{ category[0] | slugify }}">
|
||||||
|
<strong>{{ category[0] }}</strong> <span class="taxonomy__count">{{ i }}</span>
|
||||||
|
</a>
|
||||||
|
</li>
|
||||||
|
{% endif %}
|
||||||
|
{% endfor %}
|
||||||
|
{% endfor %}
|
||||||
|
</ul>
|
||||||
|
|
||||||
|
{% assign entries_layout = page.entries_layout | default: 'list' %}
|
||||||
|
{% for i in (1..categories_max) reversed %}
|
||||||
|
{% for category in site.categories %}
|
||||||
|
{% if category[1].size == i %}
|
||||||
|
<section id="{{ category[0] | slugify | downcase }}" class="taxonomy__section">
|
||||||
|
<h2 class="archive__subtitle">{{ category[0] }}</h2>
|
||||||
|
<div class="entries-{{ entries_layout }}">
|
||||||
|
{% for post in category.last %}
|
||||||
|
{% include archive-single.html type=entries_layout %}
|
||||||
|
{% endfor %}
|
||||||
|
</div>
|
||||||
|
<a href="#page-title" class="back-to-top">{{ site.data.ui-text[site.locale].back_to_top | default: 'Back to Top' }} ↑</a>
|
||||||
|
</section>
|
||||||
|
{% endif %}
|
||||||
|
{% endfor %}
|
||||||
|
{% endfor %}{% endraw %}
|
||||||
|
```
|
||||||
|
|
||||||
|
I wanted my solution to preserve the formatting that's used by the theme elsewhere on this site so this bit is going to be my base. The big change I'll make is that instead of enumerating all of the categories on one page, I'll have to create a new static page for each of the categories I'll want to feature. And each of those pages will refer to a new layout to determine what will actually appear on the page.

### Defining a new layout
I create a new file called `_layouts/series.html` which will define how these new series pages get rendered. It starts out just like the default `categories.html` one:

```liquid
{% raw %}---
layout: archive
---

{{ content }}{% endraw %}
```

That `{{ content }}` block will let me define text to appear above the list of articles - very handy. Much of the original `categories.html` code has to do with iterating through the list of categories. I won't need that, though, so I'll jump straight to setting what layout the entries on this page will use:
```liquid
{% assign entries_layout = page.entries_layout | default: 'list' %}
```

I'll be including two custom variables in the [Front Matter](https://jekyllrb.com/docs/front-matter/) for my category pages: `tag` to specify what category to filter on, and `sort_order` which will be set to `reverse` if I want the older posts up top. I'll be able to access these in the layout as `page.tag` and `page.sort_order`, respectively. So I'll go ahead and grab all the posts which are categorized with `page.tag`, and then decide whether the posts will get sorted normally or in reverse:
```liquid
{% raw %}{% assign posts = site.categories[page.tag] %}
{% if page.sort_order == 'reverse' %}
  {% assign posts = posts | reverse %}
{% endif %}{% endraw %}
```

And then I'll loop through each post (in either normal or reverse order) and insert them into the rendered page:
```liquid
{% raw %}<div class="entries-{{ entries_layout }}">
  {% for post in posts %}
    {% include archive-single.html type=entries_layout %}
  {% endfor %}
</div>{% endraw %}
```

Putting it all together now, here's my new `_layouts/series.html` file:
```liquid
{% raw %}---
layout: archive
---

{{ content }}

{% assign entries_layout = page.entries_layout | default: 'list' %}
{% assign posts = site.categories[page.tag] %}
{% if page.sort_order == 'reverse' %}
  {% assign posts = posts | reverse %}
{% endif %}
<div class="entries-{{ entries_layout }}">
  {% for post in posts %}
    {% include archive-single.html type=entries_layout %}
  {% endfor %}
</div>{% endraw %}
```

### Series pages
Since I can't use a plugin to automatically generate pages for each series, I'll have to do it manually. Fortunately this is pretty easy, and I've got a limited number of categories/series to worry about. I started by making a new `_pages/series-vra8.md` and setting it up thusly:
```markdown
{% raw %}---
title: "Adventures in vRealize Automation 8"
layout: series
permalink: "/series/vra8"
tag: vRA8
sort_order: reverse
author_profile: true
header:
  teaser: assets/images/posts-2020/RtMljqM9x.png
---

*Follow along as I create a flexible VMware vRealize Automation 8 environment for provisioning virtual machines - all from the comfort of my Intel NUC homelab.*{% endraw %}
```

You can see that this page is referencing the series layout I just created, and it's going to live at `http://localhost/series/vra8` - precisely where this series was on Hashnode. I've tagged it with the category I want to feature on this page, and specified that the posts will be sorted in reverse order so that anyone reading through the series will start at the beginning (I hear it's a very good place to start). I also added a teaser image which will be displayed when I link to the series from elsewhere. And I included a quick little italicized blurb to tell readers what the series is about.

Check it out [here](series/vra8):
![vRA8 series](/assets/images/posts-2021/07/20210724-vra8-series.png)

The other series pages will be basically the same, just without the reverse sort directive. Here's `_pages/series-tips.md`:
```markdown
{% raw %}---
title: "Tips & Tricks"
layout: series
permalink: "/series/tips"
tag: Tips
author_profile: true
header:
  teaser: assets/images/posts-2020/kJ_l7gPD2.png
---

*Useful tips and tricks I've stumbled upon.*{% endraw %}
```

### Changing the category permalink
Just in case someone wants to look at all the post series in one place, I'll be keeping the existing category archive page around, but I'll want it to be found at `/series/` instead of `/categories/`. I'll start with going into the `_config.yml` file and changing the `category_archive` path:

```yaml
category_archive:
  type: liquid
  # path: /categories/
  path: /series/
tag_archive:
  type: liquid
  path: /tags/
```

I'll also rename `_pages/category-archive.md` to `_pages/series-archive.md` and update its title and permalink:
```markdown
{% raw %}---
title: "Posts by Series"
layout: categories
permalink: /series/
author_profile: true
---{% endraw %}
```

### Fixing category links in posts
The bottom of each post has a section which lists the tags and categories to which it belongs. Right now, those are still pointing to the category archive page (`/series/#vra8`) instead of the series feature pages I created (`/series/vra8`).

![Old category link](/assets/images/posts-2021/07/20210724-old-category-link.png)

That *works* but I'd rather it reference the fancy new pages I created. Tracking down where to make that change was a bit of a journey.

I started with the [`_layouts/single.html`](https://github.com/mmistakes/minimal-mistakes/blob/master/_layouts/single.html) file which is the layout I'm using for individual posts. This bit near the end gave me the clue I needed:
```liquid
{% raw %}      <footer class="page__meta">
        {% if site.data.ui-text[site.locale].meta_label %}
          <h4 class="page__meta-title">{{ site.data.ui-text[site.locale].meta_label }}</h4>
        {% endif %}
        {% include page__taxonomy.html %}
        {% include page__date.html %}
      </footer>{% endraw %}
```

It looks like [`page__taxonomy.html`](https://github.com/mmistakes/minimal-mistakes/blob/master/_includes/page__taxonomy.html) is being used to display the tags and categories, so I then went to that file in the `_includes` directory:
```liquid
{% raw %}{% if site.tag_archive.type and page.tags[0] %}
  {% include tag-list.html %}
{% endif %}

{% if site.category_archive.type and page.categories[0] %}
  {% include category-list.html %}
{% endif %}{% endraw %}
```

Okay, it looks like [`_includes/category-list.html`](https://github.com/mmistakes/minimal-mistakes/blob/master/_includes/category-list.html) is what I actually want. Here's that file:
```liquid
{% raw %}{% case site.category_archive.type %}
  {% when "liquid" %}
    {% assign path_type = "#" %}
  {% when "jekyll-archives" %}
    {% assign path_type = nil %}
{% endcase %}

{% if site.category_archive.path %}
  {% assign categories_sorted = page.categories | sort_natural %}

  <p class="page__taxonomy">
    <strong><i class="fas fa-fw fa-folder-open" aria-hidden="true"></i> {{ site.data.ui-text[site.locale].categories_label | default: "Categories:" }} </strong>
    <span itemprop="keywords">
    {% for category_word in categories_sorted %}
      <a href="{{ category_word | slugify | prepend: path_type | prepend: site.category_archive.path | relative_url }}" class="page__taxonomy-item p-category" rel="tag">{{ category_word }}</a>{% unless forloop.last %}<span class="sep">, </span>{% endunless %}
    {% endfor %}
    </span>
  </p>
{% endif %}{% endraw %}
```

I'm using the `liquid` archive approach since I can't use the `jekyll-archives` plugin, so I can see that it's setting the `path_type` to `"#"`. And near the bottom of the file, I can see that it's assembling the category link by slugifying the `category_word`, sticking the `path_type` in front of it, and then putting the `site.category_archive.path` (which I edited earlier in `_config.yml`) in front of that. So that's why my category links look like `/series/#category`.
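
To trace that filter chain with one of my actual categories (the expected output for each step is in the comments; the final `relative_url` filter just accounts for the site's base URL):
```liquid
{% raw %}{{ "vRA8" | slugify }}                                       <!-- vra8 -->
{{ "vRA8" | slugify | prepend: "#" }}                        <!-- #vra8 -->
{{ "vRA8" | slugify | prepend: "#" | prepend: "/series/" }}  <!-- /series/#vra8 -->{% endraw %}
```
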
I can just edit the top of this file to statically set `path_type = nil` and that should clear this up in a jiffy:
```liquid
{% raw %}{% assign path_type = nil %}
{% if site.category_archive.path %}
  {% assign categories_sorted = page.categories | sort_natural %}
[...]{% endraw %}
```

To sell the series illusion even further, I can pop into [`_data/ui-text.yml`](https://github.com/mmistakes/minimal-mistakes/blob/master/_data/ui-text.yml) to update the string used for `categories_label`:
```yaml
meta_label :
tags_label : "Tags:"
categories_label : "Series:"
date_label : "Updated:"
comments_label : "Leave a comment"
```

![Updated series link](/assets/images/posts-2021/07/20210724-new-series-link.png)

Much better!

### Updating the navigation header
And, finally, I'll want to update the navigation links at the top of each page to help visitors find my new featured series pages. For that, I can just edit `_data/navigation.yml` with links to my new pages:
```yaml
main:
  - title: "vRealize Automation 8"
    url: /series/vra8
  - title: "Projects"
    url: /series/projects
  - title: "Scripts"
    url: /series/scripts
  - title: "Tips & Tricks"
    url: /series/tips
  - title: "Tags"
    url: /tags/
  - title: "All Posts"
    url: /posts/
```

### All done!
![Slick series navigation!](/assets/images/posts-2021/07/20210724-series-navigation.png)

I set out to recreate the series setup that I had over at Hashnode, and I think I've accomplished that. More importantly, I've learned quite a bit more about how Jekyll works, and I'm already plotting further tweaks. For now, though, I think this is ready for a `git push`!

@ -0,0 +1,418 @@

---
categories:
- vRA8
date: "2021-08-13T00:00:00Z"
tags:
- vmware
- vra
- vro
- javascript
- powershell
title: Creating static records in Microsoft DNS from vRealize Automation
---
One of the requirements for my vRA deployments is the ability to automatically create a static `A` record for non-domain-joined systems so that users can connect without needing to know the IP address. The organization uses Microsoft DNS servers to provide resolution on the internal domain. At first glance, this shouldn't be too much of a problem: vRealize Orchestrator 8.x can run PowerShell scripts, and PowerShell can use the [`Add-DnsServerResourceRecord` cmdlet](https://docs.microsoft.com/en-us/powershell/module/dnsserver/add-dnsserverresourcerecord?view=windowsserver2019-ps) to create the needed records.

Not so fast, though. That cmdlet is provided through the [Remote Server Administration Tools](https://docs.microsoft.com/en-us/troubleshoot/windows-server/system-management-components/remote-server-administration-tools) package so it won't be available within the limited PowerShell environment inside of vRO. A workaround might be to add a Windows machine to vRO as a remote PowerShell host, but then you run into [issues of credential hopping](https://communities.vmware.com/t5/vRealize-Orchestrator/unable-to-run-get-DnsServerResourceRecord-via-vRO-Powershell-wf/m-p/2286685).

I eventually came across [this blog post](https://www.virtualnebula.com/blog/2017/7/14/microsoft-ad-dns-integration-over-ssh) which described adding a Windows machine as a remote *SSH* host instead. I'll deviate a bit from the described configuration, but that post did at least get me pointed in the right direction. This approach would get around the complicated authentication-tunneling business while still being pretty easy to set up. So let's go!

### Preparing the SSH host
I deployed a Windows Server 2019 Core VM to use as my SSH host, and I joined it to my AD domain as `win02.lab.bowdre.net`. Once that's taken care of, I need to install the RSAT DNS tools so that I can use the `Add-DnsServerResourceRecord` and associated cmdlets. I can do that through PowerShell like so:
```powershell
# Install RSAT DNS tools
Add-WindowsCapability -Online -Name Rsat.Dns.Tools~~~~0.0.1.0
```

Instead of using a third-party SSH server, I'll use the OpenSSH Server that's already available in Windows 10 (1809+) and Server 2019:
```powershell
# Install OpenSSH Server
Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0
```

I'll also want to set it so that the default shell upon SSH login is PowerShell (rather than the standard Command Prompt) so that I can have easy access to those DNS cmdlets:
```powershell
# Set PowerShell as the default shell (for access to DNS cmdlets)
New-ItemProperty -Path "HKLM:\SOFTWARE\OpenSSH" -Name DefaultShell -Value "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" -PropertyType String -Force
```

I'll be using my `lab\vra` service account for managing DNS. I've already given it the appropriate rights on the DNS server, but I'll also add it to the Administrators group on my SSH host:
```powershell
# Add the service account as a local administrator
Add-LocalGroupMember -Group Administrators -Member "lab\vra"
```

And I'll modify the OpenSSH configuration so that only members of that Administrators group are permitted to log into the server via SSH:
```powershell
# Restrict SSH access to members of the local Administrators group
(Get-Content "C:\ProgramData\ssh\sshd_config") -Replace "# Authentication:", "$&`nAllowGroups Administrators" | Set-Content "C:\ProgramData\ssh\sshd_config"
```

Finally, I'll start the `sshd` service and set it to start up automatically:
```powershell
# Start the service and set it to start automatically
Set-Service -Name sshd -StartupType Automatic -Status Running
```

#### A quick test
At this point, I can log in to the server via SSH and confirm that I can create and delete records in my DNS zone:
```powershell
$ ssh vra@win02.lab.bowdre.net
vra@win02.lab.bowdre.net's password:

Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.

PS C:\Users\vra> Add-DnsServerResourceRecordA -ComputerName win01.lab.bowdre.net -Name testy -ZoneName lab.bowdre.net -AllowUpdateAny -IPv4Address 172.16.99.99

PS C:\Users\vra> nslookup testy
Server:  win01.lab.bowdre.net
Address:  192.168.1.5

Name:    testy.lab.bowdre.net
Address:  172.16.99.99

PS C:\Users\vra> Remove-DnsServerResourceRecord -ComputerName win01.lab.bowdre.net -Name testy -ZoneName lab.bowdre.net -RRType A -Force

PS C:\Users\vra> nslookup testy
Server:  win01.lab.bowdre.net
Address:  192.168.1.5

*** win01.lab.bowdre.net can't find testy: Non-existent domain
```

Cool! Now I just need to do that same thing, but from vRealize Orchestrator. First, though, I'll update the template so the requester can choose whether or not a static record will get created.

### Template changes
#### Cloud Template
Similar to the template changes I made for [optionally joining deployed servers to the Active Directory domain](joining-vms-to-active-directory-in-site-specific-ous-with-vra8#cloud-template), I'll just be adding a simple boolean checkbox to the `inputs` section of the template in Cloud Assembly:
```yaml
formatVersion: 1
inputs:
  [...]
  staticDns:
    title: Create static DNS record
    type: boolean
    default: false
  [...]
```

*Unlike* the AD piece, in the `resources` section I'll just bind a custom property called `staticDns` to the input with the same name:
```yaml
resources:
  Cloud_vSphere_Machine_1:
    type: Cloud.vSphere.Machine
    properties:
      [...]
      staticDns: '${input.staticDns}'
      [...]
```

So here's the complete cloud template that I've been working on:
```yaml
formatVersion: 1
inputs:
  site:
    type: string
    title: Site
    enum:
      - BOW
      - DRE
  image:
    type: string
    title: Operating System
    oneOf:
      - title: Windows Server 2019
        const: ws2019
    default: ws2019
  size:
    title: Resource Size
    type: string
    oneOf:
      - title: 'Micro [1vCPU|1GB]'
        const: micro
      - title: 'Tiny [1vCPU|2GB]'
        const: tiny
      - title: 'Small [2vCPU|2GB]'
        const: small
    default: small
  network:
    title: Network
    type: string
  adJoin:
    title: Join to AD domain
    type: boolean
    default: true
  staticDns:
    title: Create static DNS record
    type: boolean
    default: false
  environment:
    type: string
    title: Environment
    oneOf:
      - title: Development
        const: D
      - title: Testing
        const: T
      - title: Production
        const: P
    default: D
  function:
    type: string
    title: Function Code
    oneOf:
      - title: Application (APP)
        const: APP
      - title: Desktop (DSK)
        const: DSK
      - title: Network (NET)
        const: NET
      - title: Service (SVS)
        const: SVS
      - title: Testing (TST)
        const: TST
    default: TST
  app:
    type: string
    title: Application Code
    minLength: 3
    maxLength: 3
    default: xxx
  description:
    type: string
    title: Description
    description: Server function/purpose
    default: Testing and evaluation
  poc_name:
    type: string
    title: Point of Contact Name
    default: Jack Shephard
  poc_email:
    type: string
    title: Point of Contact Email
    default: jack.shephard@virtuallypotato.com
    pattern: '^[^\s@]+@[^\s@]+\.[^\s@]+$'
  ticket:
    type: string
    title: Ticket/Request Number
    default: 4815162342
resources:
  Cloud_vSphere_Machine_1:
    type: Cloud.vSphere.Machine
    properties:
      image: '${input.image}'
      flavor: '${input.size}'
      site: '${input.site}'
      environment: '${input.environment}'
      function: '${input.function}'
      app: '${input.app}'
      ignoreActiveDirectory: '${!input.adJoin}'
      activeDirectory:
        relativeDN: '${"OU=Servers,OU=Computers,OU=" + input.site + ",OU=LAB"}'
      customizationSpec: '${input.adJoin ? "vra-win-domain" : "vra-win-workgroup"}'
      staticDns: '${input.staticDns}'
      dnsDomain: lab.bowdre.net
      poc: '${input.poc_name + " (" + input.poc_email + ")"}'
      ticket: '${input.ticket}'
      description: '${input.description}'
      networks:
        - network: '${resource.Cloud_vSphere_Network_1.id}'
          assignment: static
      constraints:
        - tag: 'comp:${to_lower(input.site)}'
  Cloud_vSphere_Network_1:
    type: Cloud.vSphere.Network
    properties:
      networkType: existing
      constraints:
        - tag: 'net:${input.network}'
```

I save the template, and then also hit the "Version" button to publish a new version to the catalog:
![Releasing new version](/assets/images/posts-2021/08/20210803_new_template_version.png)

#### Service Broker Custom Form
I switch over to the Service Broker UI to update the custom form - but first I stop off at **Content & Policies > Content Sources**, select my Content Source, and hit the **Save & Import** button to force a sync of the cloud templates. I can then move on to the **Content & Policies > Content** section, click the 3-dot menu next to my template name, and select the option to **Customize Form**.

I'll just drag the new Schema Element called `Create static DNS record` from the Request Inputs panel onto the form canvas. I'll drop it right below the `Join to AD domain` field:
![Adding the field to the form](/assets/images/posts-2021/08/20210803_updating_custom_form.png)

And then I'll hit the **Save** button so that my efforts are preserved.

That should take care of the front-end changes. Now for the back-end stuff: I need to teach vRO how to connect to my SSH host and run the PowerShell commands, [just like I tested earlier](#a-quick-test).

### The vRO solution
I will be adding the DNS action on to my existing "VM Post-Provisioning" workflow (described [here](adding-vm-notes-and-custom-attributes-with-vra8)), which gets triggered after the VM has been successfully deployed.

#### Configuration Element
But first, I'm going to go to the **Assets > Configurations** section of the Orchestrator UI and create a new Configuration Element to store variables related to the SSH host and DNS configuration.
![Create a new configuration](/assets/images/posts-2020/Go3D-gemP.png)

I'll call it `dnsConfig` and put it in my `CustomProvisioning` folder.
![Giving it a name](/assets/images/posts-2020/fJswso9KH.png)

And then I create the following variables:

| Variable | Value | Type |
| --- | --- | --- |
| `sshHost` | `win02.lab.bowdre.net` | string |
| `sshUser` | `vra` | string |
| `sshPass` | `*****` | secureString |
| `dnsServers` | `[win01.lab.bowdre.net]` | Array/string |
| `supportedDomains` | `[lab.bowdre.net]` | Array/string |

`sshHost` is my new `win02` server that I'm going to connect to via SSH, and `sshUser` and `sshPass` should explain themselves. The `dnsServers` array will tell the script which DNS servers to try to create the record on; this will just be a single server in my lab, but I'm going to construct the script to support multiple servers in case one isn't reachable. And `supportedDomains` will be used to restrict where I'll be creating records; again, that's just a single domain in my lab, but I'm building this solution to account for the possibility that a VM might be deployed to a domain where I can't create a static record this way, and I want the workflow to fail gracefully in that case.

Here's what the new configuration element looks like:
![Variables defined](/assets/images/posts-2020/a5gtUrQbc.png)

#### Workflow to create records
I'll need to tell my workflow about the variables held in the `dnsConfig` Configuration Element I just created. I do that by opening the "VM Post-Provisioning" workflow in the vRO UI, clicking the **Edit** button, and then switching to the **Variables** tab. I create a variable for each member of `dnsConfig`, and enable the toggle to *Bind to configuration* so that I can select the corresponding item. It's important to make sure that the variable type exactly matches what's in the configuration element so that you'll be able to pick it!
![Linking variable to config element](/assets/images/posts-2021/08/20210809_creating_bound_variable.png)

I repeat that for each of the remaining variables until all the members of `dnsConfig` are represented in the workflow:
![Variables added](/assets/images/posts-2021/08/20210809_variables_added.png)

Now we're ready for the good part: inserting a new scriptable task into the workflow schema. I'll call it `Create DNS Record` and place it directly after the `Set Notes` task. For inputs, the task will take in `inputProperties (Properties)` as well as everything from that `dnsConfig` configuration element:
![Task inputs](/assets/images/posts-2021/08/20210809_task_inputs.png)

And here's the JavaScript for the task:
```js
// JavaScript: Create DNS Record task
// Inputs: inputProperties (Properties), dnsServers (Array/string), sshHost (string), sshUser (string), sshPass (secureString), supportedDomains (Array/string)
// Outputs: None

var staticDns = inputProperties.customProperties.staticDns;
var hostname = inputProperties.resourceNames[0];
var dnsDomain = inputProperties.customProperties.dnsDomain;
var ipAddress = inputProperties.addresses[0];
var created = false;

// check if user requested a record to be created and if the VM's dnsDomain is in the supportedDomains array
if (staticDns == "true" && supportedDomains.indexOf(dnsDomain) >= 0) {
  System.log("Attempting to create DNS record for "+hostname+"."+dnsDomain+" at "+ipAddress+"...")
  // create the ssh session to the intermediary host
  var sshSession = new SSHSession(sshHost, sshUser);
  System.debug("Connecting to "+sshHost+"...")
  sshSession.connectWithPassword(sshPass)
  // loop through DNS servers in case the first one doesn't respond
  for each (var dnsServer in dnsServers) {
    if (created == false) {
      System.debug("Using DNS Server "+dnsServer+"...")
      // insert the PowerShell command to create A record
      var sshCommand = 'Add-DnsServerResourceRecordA -ComputerName '+dnsServer+' -ZoneName '+dnsDomain+' -Name '+hostname+' -AllowUpdateAny -IPv4Address '+ipAddress;
      System.debug("sshCommand: "+sshCommand)
      // run the command and check the result
      sshSession.executeCommand(sshCommand, true)
      var result = sshSession.exitCode;
      if (result == 0) {
        System.log("Successfully created DNS record!")
        // make a note that it was successful so we don't repeat this unnecessarily
        created = true;
      }
    }
  }
  sshSession.disconnect()
  if (created == false) {
    System.warn("Error! Unable to create DNS record.")
  }
} else {
  System.log("Not trying to do DNS")
}
```

Now I can just save the workflow, and I'm done! - with this part. Of course, being able to *create* a static record is just one half of the fight; I also need to make sure that vRA will be able to clean up these static records when a deployment gets deleted.

#### Workflow to delete records
I haven't previously created any workflows that fire on deployment removal, so I'll create a new one and call it `VM Deprovisioning`:
![New workflow](/assets/images/posts-2021/08/20210811_new_workflow.png)

This workflow only needs a single input (`inputProperties (Properties)`) so it can receive information about the deployment from vRA:
![Workflow input](/assets/images/posts-2021/08/20210811_inputproperties.png)

I'll also need to bind in the variables from the `dnsConfig` element as before:
![Workflow variables](/assets/images/posts-2021/08/20210812_deprovision_variables.png)

The schema will include a single scriptable task:
![Delete DNS Record task](/assets/images/posts-2021/08/20210812_delete_dns_record_task.png)

And it's going to be *pretty damn similar* to the other one:

```js
// JavaScript: Delete DNS Record task
// Inputs: inputProperties (Properties), dnsServers (Array/string), sshHost (string), sshUser (string), sshPass (secureString), supportedDomains (Array/string)
// Outputs: None

var staticDns = inputProperties.customProperties.staticDns;
var hostname = inputProperties.resourceNames[0];
var dnsDomain = inputProperties.customProperties.dnsDomain;
var ipAddress = inputProperties.addresses[0];
var deleted = false;

// check if a record was requested for this VM and if the VM's dnsDomain is in the supportedDomains array
if (staticDns == "true" && supportedDomains.indexOf(dnsDomain) >= 0) {
  System.log("Attempting to remove DNS record for "+hostname+"."+dnsDomain+" at "+ipAddress+"...")
  // create the ssh session to the intermediary host
  var sshSession = new SSHSession(sshHost, sshUser);
  System.debug("Connecting to "+sshHost+"...")
  sshSession.connectWithPassword(sshPass)
  // loop through DNS servers in case the first one doesn't respond
  for each (var dnsServer in dnsServers) {
    if (deleted == false) {
      System.debug("Using DNS Server "+dnsServer+"...")
      // insert the PowerShell command to delete A record
      var sshCommand = 'Remove-DnsServerResourceRecord -ComputerName '+dnsServer+' -ZoneName '+dnsDomain+' -RRType A -Name '+hostname+' -Force';
      System.debug("sshCommand: "+sshCommand)
      // run the command and check the result
      sshSession.executeCommand(sshCommand, true)
      var result = sshSession.exitCode;
      if (result == 0) {
        System.log("Successfully deleted DNS record!")
        // make a note that it was successful so we don't repeat this unnecessarily
        deleted = true;
      }
    }
  }
  sshSession.disconnect()
  if (deleted == false) {
    System.warn("Error! Unable to delete DNS record.")
  }
} else {
  System.log("No need to clean up DNS.")
}
```

Since this is a new workflow, I'll also need to head back to **Cloud Assembly > Extensibility > Subscriptions** and add a new subscription to call it when a deployment gets deleted. I'll call it "VM Deprovisioning", assign it to the "Compute Post Removal" Event Topic, and link it to my new "VM Deprovisioning" workflow. I *could* use the Condition option to filter this only for deployments which had a static DNS record created, but I'll later want to use this same workflow for other cleanup tasks so I'll just save it as is for now.
![VM Deprovisioning subscription](/assets/images/posts-2021/08/20210812_deprovisioning_subscription.png)

### Testing
Now I can (finally) fire off a quick deployment to see if all this mess actually works:
![Test deploy request](/assets/images/posts-2021/08/20210812_test_deploy_request.png)

Once the deployment completes, I go back into vRO, find the most recent item in the **Workflow Runs** view, and click over to the **Logs** tab to see how I did:
![Workflow success!](/assets/images/posts-2021/08/20210813_workflow_success.png)

And I can run a quick query to make sure that name actually resolves:
```shell
❯ dig +short bow-ttst-xxx023.lab.bowdre.net A
172.16.30.10
```

It works!

Now to test the cleanup. For that, I'll head back to Service Broker, navigate to the **Deployments** tab, find my deployment, click the little three-dot menu button, and select the **Delete** option:
![Deleting the deployment](/assets/images/posts-2021/08/20210813_delete_deployment.png)

Again, I'll check the **Workflow Runs** in vRO to see that the deprovisioning task completed successfully:
![VM Deprovisioning workflow](/assets/images/posts-2021/08/20210813_workflow_deletion.png)

And I can `dig` a little more to make sure the name doesn't resolve anymore:
```shell
❯ dig +short bow-ttst-xxx023.lab.bowdre.net A

```

It *really* works!

### Conclusion
So there you have it - how I've got vRA/vRO able to create and delete static DNS records as needed, using a Windows SSH host as an intermediary. Cool, right?

@ -0,0 +1,89 @@

---
|
||||||
|
categories:
|
||||||
|
- Projects
|
||||||
|
date: "2021-08-20T00:00:00Z"
|
||||||
|
tags:
|
||||||
|
- gcp
|
||||||
|
- cloud
|
||||||
|
- serverless
|
||||||
|
title: Free serverless URL shortener on Google Cloud Run
|
||||||
|
---
|
||||||
|
#### Intro
|
||||||
|
I've been [using short.io with a custom domain](https://twitter.com/johndotbowdre/status/1370125198196887556) to keep track of and share messy links for a few months now. That approach has worked very well, but it's also seriously overkill for my needs. I don't need (nor want) tracking metrics to know anything about when those links get clicked, and short.io doesn't provide an easy way to turn that off. I was casually looking for a lighter self-hosted alternative today when I stumbled upon a *serverless* alternative: **[sheets-url-shortener](https://github.com/ahmetb/sheets-url-shortener)**. This uses [Google Cloud Run](https://cloud.google.com/run/) to run an ultralight application container which receives an incoming web request, looks for the path in a Google Sheet, and redirects the client to the appropriate URL. It supports connecting with a custom domain, and should run happily within the [Cloud Run Free Tier limits](https://cloud.google.com/run/pricing).
|
||||||
|
|
||||||
|
The Github instructions were pretty straight-forward but I did have to fumble through a few additional steps to get everything up and running. Here we go:
|
||||||
|
|
||||||
|
#### Shortcut mapping
|
||||||
|
Since the setup uses a simple Google Sheets document to map the shortcuts to the original long-form URLs, I started by going to [https://sheets.new](https://sheets.new) to create a new Sheet. I then just copied in the shorcuts and URLs I was already using in short.io. By the way, I learned on a previous attempt that this solution only works with lowercase shortcuts so I made sure to convert my `MixedCase` ones as I went.
|
||||||
|
![Creating a new sheet](/assets/images/posts-2021/08/20210820_sheet.png)
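
As I understand the project, the Sheet doesn't need anything fancy: shortcuts go in the first column and destination URLs in the second, along these lines (illustrative placeholder values):

| A | B |
| --- | --- |
| `ghia` | `https://example.com/my-karmann-ghia-album` |
| `conedoge` | `https://example.com/my-autocross-playlist` |
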
I then made a note of the Sheet ID from the URL; that's the bit that looks like `1SMeoyesCaGHRlYdGj9VyqD-qhXtab1jrcgHZ0irvNDs`. That will be needed later on.

#### Create a new GCP project
I created a new project in my GCP account by going to [https://console.cloud.google.com/projectcreate](https://console.cloud.google.com/projectcreate) and entering a descriptive name.
![Creating a new GCP project](/assets/images/posts-2021/08/20210820_create_project.png)

#### Deploy to GCP
At this point, I was ready to actually kick off the deployment. Ahmet made this part exceptionally easy: just hit the **Run on Google Cloud** button from the [GitHub project page](https://github.com/ahmetb/sheets-url-shortener#setup). That opens up a Google Cloud Shell instance which prompts for authorization before it starts the deployment script.
![Open in Cloud Shell prompt](/assets/images/posts-2021/08/20210820_open_in_cloud_shell.png)

![Authorize Cloud Shell prompt](/assets/images/posts-2021/08/20210820_authorize_cloud_shell.png)

The script prompted me to select a project and a region, and then asked for the Sheet ID that I copied earlier.
![Cloud Shell deployment](/assets/images/posts-2021/08/20210820_cloud_shell.png)

#### Grant access to the Sheet
In order for the Cloud Run service to be able to see the URL mappings in the Sheet, I needed to share the Sheet with the service account. That service account is found by going to [https://console.cloud.google.com/run](https://console.cloud.google.com/run), clicking on the new `sheets-url-shortener` service, and then viewing the **Permissions** tab. I'm interested in the one that's `############-compute@developer.gserviceaccount.com`.
![Finding the service account](/assets/images/posts-2021/08/20210820_service_account.png)

I then went back to the Sheet, hit the big **Share** button at the top, and shared the Sheet to the service account with *Viewer* access.
![Sharing to the service account](/assets/images/posts-2021/08/20210820_share_with_svc_account.png)

#### Quick test
Back in GCP land, the details page for the `sheets-url-shortener` Cloud Run service shows a gross-looking URL near the top: `https://sheets-url-shortener-vrw7x6wdzq-uc.a.run.app`. That doesn't do much for *shortening* my links, but it'll do just fine for a quick test. First, I pointed my browser straight to that listed URL:
![Testing the web server](/assets/images/posts-2021/08/20210820_home_page.png)

This at least tells me that the web server portion is working. Now to see if I can redirect to my [project car posts on Polywork](https://john.bowdre.net/?badges%5B%5D=Car+Nerd):
![Testing a redirect](/assets/images/posts-2021/08/20210820_sheets_api_disabled.png)

Hmm, not quite. Luckily the error tells me exactly what I need to do...

#### Enable Sheets API
I just needed to visit `https://console.developers.google.com/apis/api/sheets.googleapis.com/overview?project=############` to enable the Google Sheets API.
![Enabling Sheets API](/assets/images/posts-2021/08/20210820_enable_sheets_api.png)
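
The same thing should be doable from Cloud Shell if you'd rather skip the console clicking (swap in your own project ID):
```shell
gcloud services enable sheets.googleapis.com --project=my-project-id
```
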
Once that's done, I can try my redirect again - and, after a brief moment, it successfully sends me on to Polywork!
![Successful redirect](/assets/images/posts-2021/08/20210820_successful_redirect.png)

#### Link custom domain
The whole point of this project is to *shorten* URLs, but I haven't done that yet. I'll want to link in my `go.bowdre.net` domain to use that in place of the rather unwieldy `https://sheets-url-shortener-vrw7x6wdzq-uc.a.run.app`. I do that by going back to the [Cloud Run console](https://console.cloud.google.com/run) and selecting the option at the top to **Manage Custom Domains**.
![Manage custom domains](/assets/images/posts-2021/08/20210820_manage_custom_domain.png)

I can then use the **Add Mapping** button, select my `sheets-url-shortener` service, choose one of my verified domains (which I *think* are already verified since they're registered through Google Domains with the same account), and then specify the desired subdomain.
![Adding a domain mapping](/assets/images/posts-2021/08/20210820_add_mapping_1.png)
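
For the CLI-inclined, I believe the equivalent is something like this (the `domain-mappings` command was still in beta last I checked, and the region would need to match your deployment):
```shell
gcloud beta run domain-mappings create --service=sheets-url-shortener --domain=go.bowdre.net --region=us-central1
```
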
The wizard then tells me exactly what record I need to create/update with my domain host:
![CNAME details](/assets/images/posts-2021/08/20210820_add_mapping_2.png)
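
For a Cloud Run mapping on a subdomain like this, that's typically a CNAME pointed at Google's front end - something like the below, though the wizard's output is authoritative:
```
go  CNAME  ghs.googlehosted.com.
```
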
It took a while for the domain mapping to go live once I'd updated the record.
![Processing mapping...](/assets/images/posts-2021/08/20210820_domain_mapping.png)

#### Final tests
Once it did finally update, I was able to hit `https://go.bowdre.net` to get the error/landing page, complete with a valid SSL cert:
![Successful error!](/assets/images/posts-2021/08/20210820_landing_page.png)

And testing [go.bowdre.net/ghia](https://go.bowdre.net/ghia) works as well!

#### Outro
I'm very pleased with how this quick little project turned out. Managing my shortened links with a Google Sheet is quite convenient, and I really like the complete lack of tracking or analytics. Plus I'm a sucker for an excuse to use a cloud technology I haven't played a lot with yet.

And now I can hand out handy-dandy short links!

| Link | Description |
| --- | --- |
| [go.bowdre.net/ghia](https://go.bowdre.net/ghia) | 1974 VW Karmann Ghia project |
| [go.bowdre.net/conedoge](https://go.bowdre.net/conedoge) | 2014 Subaru BRZ autocross videos |
| [go.bowdre.net/matrix](https://go.bowdre.net/matrix) | Chat with me on Matrix |
| [go.bowdre.net/twits](https://go.bowdre.net/twits) | Follow me on Twitter |
| [go.bowdre.net/stadia](https://go.bowdre.net/stadia) | Game with me on Stadia |

98
content/post/2021-08-25-notes-on-vra-ha-with-nsx-alb.md
Normal file

@ -0,0 +1,98 @@

---
categories:
- vRA8
date: "2021-08-25T00:00:00Z"
tags:
- nsx
- cluster
- vra
- availability
title: Notes on vRA HA with NSX-ALB
---
This is going to be a pretty quick recap of the steps I recently took to convert a single-node instance of vRealize Automation 8.4.2 into a 3-node High-Availability vRA cluster behind a standalone NSX Advanced Load Balancer (without NSX being deployed in the environment). No screenshots or specific details since I ran through this in the lab at work and didn't capture anything along the way, and my poor NUC homelab struggles enough to run a single instance of memory-hogging vRA.

### Getting started with NSX-ALB
I found a lot of information on how to use NSX-ALB as a component of a broader NSX-equipped environment, but not a lot of detail on how to use the ALB *without* NSX - until I found [Rudi Martinsen's blog on the subject](https://rudimartinsen.com/2021/06/25/load-balancing-with-nsx-alb/). That turned out to be a great reference for the ALB configuration so be sure to check it out if you need more details than what I provide in this section.

#### Download
NSX-ALB is/was formerly known as the Avi Vantage Controller, and downloads are available [here](https://portal.avipulse.vmware.com/software/vantage). You'll need to log in with your VMware Customer Connect account to access the download, and then grab the latest VMware Controller OVA. Be sure to make a note of the default password listed on the right-hand side since you'll need that to log in post-deployment.

#### Deploy
It's an OVA, so deploy it like an OVA. When you get to the "Customize template" stage, drop in valid data for the **Management Interface IP Address**, **Management Interface Subnet Mask**, and **Default Gateway** fields but leave everything else blank. Click on through to the end and watch the thing deploy. Once the deployment completes, power on the new VM and wait a little bit for it to configure itself and get ready for operation.

### Configure NSX-ALB
Point a browser to the NSX-ALB appliance's IP and log in as `admin` using the password you copied from the download page (I told you it would come in handy!). Once you're in, you'll be prompted to establish a passphrase (for backups and such) and provide DNS Resolver(s) and the DNS search domain. Set the SMTP option to "None" and leave the other options as the defaults.

I'd then recommend clicking the little Avi icon at the top right of the screen and using the **My Account** button to change the admin password to something that's not posted out on the internet. You know, for reasons.

#### Cloud
Go to **Infrastructure > Clouds** and click the pencil icon for *Default-Cloud*, then set the *Cloud Infrastructure Type* to "VMware". Input the credentials needed for connecting to your vCenter, and make sure the account has "Write" access so it can create the Service Engine VMs and whatnot.

Click over to the *Data Center* tab and point it to the virtual data center used in your vCenter. On the *Network* tab, select the network that will be used for management traffic. Also configure a subnet (in CIDR notation) and gateway, and add a small static IP address pool that can be assigned to the Service Engine nodes (I used something like `192.168.1.120-192.168.1.126`).

#### Networks
Once that's sorted, navigate to **Infrastructure > Cloud Resources > Networks**. You should already see the networks which were imported from vCenter; find the one you'll use for servers (like your pending vRA cluster) and click the pencil icon to edit it. Then click the **Add Subnet** button, define the subnet in CIDR format, and add a static IP pool as well. Also go ahead and select the *Default-Group* as the **Template Service Engine Group**.

Back on the Networks list, you should now see both your management and server networks defined with an IP pool for each.

#### IPAM profile
Now go to **Templates > Profiles > IPAM/DNS Profiles**, click the **Create** button at the top right, and select **IPAM Profile**. Give it a name, set **Type** to `Avi Vantage IPAM`, pick the appropriate Cloud, and then also select the Networks for which you created the IP pools earlier.

Then go back to **Infrastructure > Clouds**, edit the Cloud, and select the IPAM Profile you just created.

#### Service Engine Group
Navigate to **Infrastructure > Cloud Resources > Service Engine Group** and edit the *Default-Group*. I left everything on the *Basic Settings* tab at the defaults. On the *Advanced* tab, I specified which vSphere cluster the Service Engines should be deployed to. And I left everything else with the default settings.

#### SSL Certificate
Hop over to **Templates > Security > SSL/TLS Certificates** and click **Create > Application Certificate**. Give the new cert a name and change the **Type** to `CSR` to generate a new signing request. Enter the **Common Name** you're going to want to use for the load balancer VIP (something like `vra`, perhaps?) and all the usual cert fields. Use the **Subject Alternate Name (SAN)** section at the bottom to add all the other components, like the individual vRA cluster members by both hostname and FQDN. I went ahead and included those IPs as well for good measure.

| Name |
|----------------------|
| `vra.domain.local` |
| `vra01.domain.local` |
| `vra01` |
| `192.168.1.41` |
| `vra02.domain.local` |
| `vra02` |
| `192.168.1.42` |
| `vra03.domain.local` |
| `vra03` |
| `192.168.1.43` |

Click **Save**.

Click **Create** again, but this time select **Root/Intermediate CA Certificate** and upload/paste your CA's cert so it can be trusted. Save your work.

Back at the cert list, find your new application cert and click the pencil icon to edit it. Copy the **Certificate Signing Request** field and go get it signed by your CA. Be sure to grab the certificate chain (base64-encoded) as well if you can. Come back and paste in / upload your shiny new CA-signed certificate file.
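
If you want to sanity-check what the CA handed back before uploading it, `openssl` can verify the cert against the chain (the file names here are just placeholders for wherever you saved things):
```shell
openssl verify -CAfile ca-chain.pem vra-signed.crt
```
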
#### Virtual Service
Now it's finally time to create the Virtual Service that will function as the load balancer front-end. Pop over to **Applications > Virtual Services** and click **Create Virtual Service > Basic Setup**. Give it a name and set the **Application Type** to `HTTPS`, which will automatically set the port and bind a default self-signed certificate.

Click on the **Certificate** field and select the new cert you created above. Be sure to remove the default cert.

Tick the box to auto-allocate the IP(s), and select the appropriate network and subnet.

Add your vRA servers (current and future) by their IP addresses (`192.168.1.41`, `192.168.1.42`, `192.168.1.43`), and then click **Save**.

Now that the Virtual Service is created, make a note of the IP address assigned to the service and go add that to your DNS so that the name will resolve.
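
In other words, a simple `A` record along these lines (the address being whatever NSX-ALB auto-allocated from your server network's pool - this one is made up):
```
vra.domain.local.  IN  A  192.168.1.40
```
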
### Now do vRA
Log into Lifecycle Manager in a new browser tab/window. Make sure that you've mapped an *Install* product binary for your current version of vRA; the upgrade binary that you probably used to do your last update won't cut it. It's probably also a good idea to go make a snapshot of your vRA and IDM instances just in case.

#### Adding new certificate
In LCM, go to **Locker > Certificates** and select the option to **Import**. Switch back to the NSX-ALB tab and go to **Templates > Security > SSL/TLS Certificates**. Click the little down-arrow-in-a-circle "Export" icon next to the application certificate you created earlier. Copy the key section and paste that into LCM. Then open the file containing the certificate chain you got from your CA, copy its contents, and paste it into LCM as well. Do *not* try to upload a certificate file directly to LCM; that will fail unless the file includes both the cert and the private key and that's silly.

Once the cert is successfully imported, go to the **Lifecycle Operations** component of LCM and navigate to the environment containing your vRA instance. Select the vRA product, hit the three-dot menu, and use the **Replace Certificate** option to replace the old and busted cert with the new HA-ready one. It will take a little bit for this to get applied. Don't move on until vRA services are back up.

#### Scale out vRA
Still on the vRA product page, click on the **+ Add Components** button.

On the **Infrastructure** page, tell LCM where to put the new vRA VMs.

On the **Network** page, tell it which network configuration to use.

On the **Components** page, scroll down a bit and click on **(+) > vRealize Automation Secondary Node** - twice. That will reveal a new section labeled **Cluster Virtual IP**. Put in the FQDN you configured for the Virtual Service, and tick the box to terminate SSL at the load balancer. Then scroll on down and enter the details for the additional vRA nodes, making sure that the IP addresses match the servers you added to the Virtual Service configuration and that the FQDNs match what's in the SSL cert.

Click on through to do the precheck and ultimately kick off the deployment. It'll take a while, but you'll eventually be able to connect to the NSX-ALB at `vra.domain.local` and get passed along to one of your three cluster nodes.
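
A quick way to confirm the VIP is answering once things settle down (`-k` because your workstation probably doesn't trust the lab CA yet):
```shell
curl -kI https://vra.domain.local/
```
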
Have fun!

@ -0,0 +1,558 @@

---
categories:
- vRA8
date: "2021-09-03T00:00:00Z"
header:
  teaser: assets/images/posts-2021/09/20210903_action_run_success.png
last_modified_at: "2021-09-20"
tags:
- vra
- abx
- powershell
title: Run scripts in guest OS with vRA ABX Actions
---
Thus far in my [vRealize Automation project](series/vra8), I've primarily been handing the payload over to vRealize Orchestrator to do the heavy lifting on the back end. This approach works really well for complex multi-part workflows (like when [generating unique hostnames](vra8-custom-provisioning-part-two#the-vro-workflow)), but it may be overkill for more linear tasks (such as just running some simple commands inside of a deployed guest OS). In this post, I'll explore how I use [vRA Action Based eXtensibility (ABX)](https://blogs.vmware.com/management/2020/09/vra-abx-flow.html) to do just that.

### The Goal
My ABX action is going to use PowerCLI to perform a few steps inside a deployed guest OS (Windows-only for this demonstration):
1. Auto-update VM tools (if needed).
2. Add specified domain users/groups to the local Administrators group.
3. Extend the C: volume to fill the VMDK.
4. Set up Windows Firewall to enable remote access.
5. Create a scheduled task to attempt to automatically apply any available Windows updates.

### Template Changes
#### Cloud Assembly
I'll need to start by updating the cloud template so that the requester can input an (optional) list of admin accounts to be added to the VM, and to enable specifying a disk size to override the default from the source VM template.

I will also add some properties to tell PowerCLI (and the `Invoke-VmScript` cmdlet in particular) how to connect to the VM.
|
||||||
|
|
||||||
|
##### Inputs section
|
||||||
|
I'll kick this off by going into Cloud Assembly and editing the `WindowsDemo` template I've been working on for the past few eons. I'll add a `diskSize` input:
|
||||||
|
```yaml
|
||||||
|
formatVersion: 1
|
||||||
|
inputs:
|
||||||
|
site: [...]
|
||||||
|
image: [...]
|
||||||
|
size: [...]
|
||||||
|
diskSize:
|
||||||
|
title: 'System drive size'
|
||||||
|
default: 60
|
||||||
|
type: integer
|
||||||
|
minimum: 60
|
||||||
|
maximum: 200
|
||||||
|
network: [...]
|
||||||
|
adJoin: [...]
|
||||||
|
[...]
|
||||||
|
```
|
||||||
|
|
||||||
|
The default value is set to 60GB to match the VMDK attached to the source template; that's also the minimum value since shrinking disks gets messy.
|
||||||
|
|
||||||
|
I'll also drop in an `adminsList` input at the bottom of the section:
|
||||||
|
```yaml
|
||||||
|
[...]
|
||||||
|
poc_email: [...]
|
||||||
|
ticket: [...]
|
||||||
|
adminsList:
|
||||||
|
type: string
|
||||||
|
title: Administrators
|
||||||
|
description: Comma-separated list of domain accounts/groups which need admin access to this server.
|
||||||
|
default: ''
|
||||||
|
resources:
|
||||||
|
Cloud_vSphere_Machine_1:
|
||||||
|
[...]
|
||||||
|
```
|
||||||
|
|
||||||
|
##### Resources section
|
||||||
|
In the Resources section of the cloud template, I'm going to add a few properties that will tell the ABX script how to connect to the appropriate vCenter and then the VM.
|
||||||
|
- `vCenter`: The vCenter server where the VM will be deployed, and thus the server which PowerCLI will authenticate against. In this case, I've only got one vCenter, but a larger environment might have multiples. Defining this in the cloud template makes it easy to select automagically if needed. (For instance, if I had a `bow-vcsa` and a `dre-vcsa` for my different sites, I could do something like `vCenter: '${input.site}-vcsa.lab.bowdre.net'` here.)
|
||||||
|
- `vCenterUser`: The username with rights to the VM in vCenter. Again, this doesn't have to be a static assignment.
|
||||||
|
- `templateUser`: This is the account that will be used by `Invoke-VmScript` to log in to the guest OS. My template will use the default `Administrator` account for non-domain systems, but the `lab\vra` service account on domain-joined systems (using the `adJoin` input I [set up earlier](joining-vms-to-active-directory-in-site-specific-ous-with-vra8#cloud-template)).
|
||||||
|
|
||||||
|
I'll also include the `adminsList` input from earlier so that can get passed to ABX as well. And I'm going to add in an `adJoin` property (mapped to the [existing `input.adJoin`](joining-vms-to-active-directory-in-site-specific-ous-with-vra8#cloud-template)) so that I'll have that to work with later.
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
[...]
|
||||||
|
resources:
|
||||||
|
Cloud_vSphere_Machine_1:
|
||||||
|
type: Cloud.vSphere.Machine
|
||||||
|
properties:
|
||||||
|
image: '${input.image}'
|
||||||
|
flavor: '${input.size}'
|
||||||
|
site: '${input.site}'
|
||||||
|
vCenter: vcsa.lab.bowdre.net
|
||||||
|
vCenterUser: vra@lab.bowdre.net
|
||||||
|
templateUser: '${input.adJoin ? "vra@lab" : "Administrator"}'
|
||||||
|
adminsList: '${input.adminsList}'
|
||||||
|
environment: '${input.environment}'
|
||||||
|
function: '${input.function}'
|
||||||
|
app: '${input.app}'
|
||||||
|
adJoin: '${input.adJoin}'
|
||||||
|
ignoreActiveDirectory: '${!input.adJoin}'
|
||||||
|
[...]
|
||||||
|
```
|
||||||
|
|
||||||
|
And I will add in a `storage` property as well which will automatically adjust the deployed VMDK size to match the specified input:
|
||||||
|
```yaml
|
||||||
|
[...]
|
||||||
|
description: '${input.description}'
|
||||||
|
networks: [...]
|
||||||
|
constraints: [...]
|
||||||
|
storage:
|
||||||
|
bootDiskCapacityInGB: '${input.diskSize}'
|
||||||
|
Cloud_vSphere_Network_1:
|
||||||
|
type: Cloud.vSphere.Network
|
||||||
|
properties: [...]
|
||||||
|
[...]
|
||||||
|
```
|
||||||
|
|
||||||
|
##### Complete template
|
||||||
|
Okay, all together now:
|
||||||
|
```yaml
|
||||||
|
formatVersion: 1
|
||||||
|
inputs:
|
||||||
|
site:
|
||||||
|
type: string
|
||||||
|
title: Site
|
||||||
|
enum:
|
||||||
|
- BOW
|
||||||
|
- DRE
|
||||||
|
image:
|
||||||
|
type: string
|
||||||
|
title: Operating System
|
||||||
|
oneOf:
|
||||||
|
- title: Windows Server 2019
|
||||||
|
const: ws2019
|
||||||
|
default: ws2019
|
||||||
|
size:
|
||||||
|
title: Resource Size
|
||||||
|
type: string
|
||||||
|
oneOf:
|
||||||
|
- title: 'Micro [1vCPU|1GB]'
|
||||||
|
const: micro
|
||||||
|
- title: 'Tiny [1vCPU|2GB]'
|
||||||
|
const: tiny
|
||||||
|
- title: 'Small [2vCPU|2GB]'
|
||||||
|
const: small
|
||||||
|
default: small
|
||||||
|
diskSize:
|
||||||
|
title: 'System drive size'
|
||||||
|
default: 60
|
||||||
|
type: integer
|
||||||
|
minimum: 60
|
||||||
|
maximum: 200
|
||||||
|
network:
|
||||||
|
title: Network
|
||||||
|
type: string
|
||||||
|
adJoin:
|
||||||
|
title: Join to AD domain
|
||||||
|
type: boolean
|
||||||
|
default: true
|
||||||
|
staticDns:
|
||||||
|
title: Create static DNS record
|
||||||
|
type: boolean
|
||||||
|
default: false
|
||||||
|
environment:
|
||||||
|
type: string
|
||||||
|
title: Environment
|
||||||
|
oneOf:
|
||||||
|
- title: Development
|
||||||
|
const: D
|
||||||
|
- title: Testing
|
||||||
|
const: T
|
||||||
|
- title: Production
|
||||||
|
const: P
|
||||||
|
default: D
|
||||||
|
function:
|
||||||
|
type: string
|
||||||
|
title: Function Code
|
||||||
|
oneOf:
|
||||||
|
- title: Application (APP)
|
||||||
|
const: APP
|
||||||
|
- title: Desktop (DSK)
|
||||||
|
const: DSK
|
||||||
|
- title: Network (NET)
|
||||||
|
const: NET
|
||||||
|
- title: Service (SVS)
|
||||||
|
const: SVS
|
||||||
|
- title: Testing (TST)
|
||||||
|
const: TST
|
||||||
|
default: TST
|
||||||
|
app:
|
||||||
|
type: string
|
||||||
|
title: Application Code
|
||||||
|
minLength: 3
|
||||||
|
maxLength: 3
|
||||||
|
default: xxx
|
||||||
|
description:
|
||||||
|
type: string
|
||||||
|
title: Description
|
||||||
|
description: Server function/purpose
|
||||||
|
default: Testing and evaluation
|
||||||
|
poc_name:
|
||||||
|
type: string
|
||||||
|
title: Point of Contact Name
|
||||||
|
default: Jack Shephard
|
||||||
|
poc_email:
|
||||||
|
type: string
|
||||||
|
title: Point of Contact Email
|
||||||
|
default: jack.shephard@virtuallypotato.com
|
||||||
|
pattern: '^[^\s@]+@[^\s@]+\.[^\s@]+$'
|
||||||
|
ticket:
|
||||||
|
type: string
|
||||||
|
title: Ticket/Request Number
|
||||||
|
default: 4815162342
|
||||||
|
adminsList:
|
||||||
|
type: string
|
||||||
|
title: Administrators
|
||||||
|
description: Comma-separated list of domain accounts/groups which need admin access to this server.
|
||||||
|
default: ''
|
||||||
|
resources:
|
||||||
|
Cloud_vSphere_Machine_1:
|
||||||
|
type: Cloud.vSphere.Machine
|
||||||
|
properties:
|
||||||
|
image: '${input.image}'
|
||||||
|
flavor: '${input.size}'
|
||||||
|
site: '${input.site}'
|
||||||
|
vCenter: vcsa.lab.bowdre.net
|
||||||
|
vCenterUser: vra@lab.bowdre.net
|
||||||
|
templateUser: '${input.adJoin ? "vra@lab" : "Administrator"}'
|
||||||
|
adminsList: '${input.adminsList}'
|
||||||
|
environment: '${input.environment}'
|
||||||
|
function: '${input.function}'
|
||||||
|
app: '${input.app}'
|
||||||
|
adJoin: '${input.adJoin}'
|
||||||
|
ignoreActiveDirectory: '${!input.adJoin}'
|
||||||
|
activeDirectory:
|
||||||
|
relativeDN: '${"OU=Servers,OU=Computers,OU=" + input.site + ",OU=LAB"}'
|
||||||
|
customizationSpec: '${input.adJoin ? "vra-win-domain" : "vra-win-workgroup"}'
|
||||||
|
staticDns: '${input.staticDns}'
|
||||||
|
dnsDomain: lab.bowdre.net
|
||||||
|
poc: '${input.poc_name + " (" + input.poc_email + ")"}'
|
||||||
|
ticket: '${input.ticket}'
|
||||||
|
description: '${input.description}'
|
||||||
|
networks:
|
||||||
|
- network: '${resource.Cloud_vSphere_Network_1.id}'
|
||||||
|
assignment: static
|
||||||
|
constraints:
|
||||||
|
- tag: 'comp:${to_lower(input.site)}'
|
||||||
|
storage:
|
||||||
|
bootDiskCapacityInGB: '${input.diskSize}'
|
||||||
|
Cloud_vSphere_Network_1:
|
||||||
|
type: Cloud.vSphere.Network
|
||||||
|
properties:
|
||||||
|
networkType: existing
|
||||||
|
constraints:
|
||||||
|
- tag: 'net:${input.network}'
|
||||||
|
```
|
||||||
|
|
||||||
|
With the template sorted, I need to assign it a new version and release it to the catalog so that the changes will be visible to Service Broker:
|
||||||
|
![Releasing a new version of a Cloud Assembly template](/assets/images/posts-2021/08/20210831_cloud_assembly_new_version.png)
|
||||||
|
|
||||||
|
#### Service Broker custom form
|
||||||
|
I now need to also make some updates to the custom form configuration in Service Broker so that the new fields will appear on the request form. First things first, though: after switching to the Service Broker UI, I go to **Content & Policies > Content Sources**, open the linked content source, and click the **Save & Import** button to force Service Broker to pull in the latest versions from Cloud Assembly.
|
||||||
|
|
||||||
|
I can then go to **Content**, click the three-dot menu next to my `WindowsDemo` item, and select the **Customize Form** option. I drag-and-drop the `System drive size` from the *Schema Elements* section onto the canvas, placing it directly below the existing `Resource Size` field.
|
||||||
|
![Placing the system drive size field on the canvas](/assets/images/posts-2021/08/20210831_system_drive_size_placement.png)
|
||||||
|
|
||||||
|
With the field selected, I use the **Properties** section to edit the label with a unit so that users will better understand what they're requesting.
|
||||||
|
![System drive size label](/assets/images/posts-2021/08/20210831_system_drive_size_label.png)
|
||||||
|
|
||||||
|
On the **Values** tab, I change the *Step* option to `5` so that we won't wind up with users requesting a disk size of `62.357 GB` or anything crazy like that.
|
||||||
|
![System drive size step](/assets/images/posts-2021/08/20210831_system_drive_size_step.png)
|
||||||
|
|
||||||
|
I'll drag-and-drop the `Administrators` field to the canvas, and put it right below the VM description:
|
||||||
|
![Administrators field placement](/assets/images/posts-2021/08/20210831_administrators_placement.png)
|
||||||
|
|
||||||
|
I only want this field to be visible if the VM is going to be joined to the AD domain, so I'll set the *Visibility* accordingly:
|
||||||
|
![Administrators field visibility](/assets/images/posts-2021/08/20210831_administrators_visibility.png)
|
||||||
|
|
||||||
|
That should be everything I need to add to the custom form so I'll be sure to hit that big **Save** button before moving on.
|
||||||
|
|
||||||
|
### Extensibility
|
||||||
|
Okay, now it's time to actually make the stuff work on the back end. But before I get to writing the actual script, there's something else I'll need to do first. Remember how I added properties to store the usernames for vCenter and the VM template in the cloud template? My ABX action will also need to know the passwords for those accounts. I didn't add those to the cloud template since anything added as a property there (even if flagged as a secret!) would be visible in plain text to any external handlers (like vRO). Instead, I'll store those passwords as encrypted Action Constants.
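(Peeking ahead a bit: within the action script, these encrypted constants get retrieved with the `$context.getSecret()` helper rather than being read as ordinary inputs, like so:)

```powershell
# retrieve an encrypted Action Constant inside the ABX handler
$vcPassword = $context.getSecret($inputs."vCenterPassword")
```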
|
||||||
|
|
||||||
|
#### Action Constants
|
||||||
|
From the vRA Cloud Assembly interface, I'll navigate to **Extensibility > Library > Actions** and then click the **Action Constants** button up top. I can then click **New Action Constant** and start creating the ones I need:
|
||||||
|
- `vCenterPassword`: for logging into vCenter.
|
||||||
|
- `templatePassWinWorkgroup`: for logging into non-domain VMs.
|
||||||
|
- `templatePassWinDomain`: for logging into VMs with the designated domain credentials.
|
||||||
|
|
||||||
|
I'll make sure to enable the *Encrypt the action constant value* toggle for each so they'll be protected.
|
||||||
|
![Creating an action constant](/assets/images/posts-2021/09/20210901_create_action_constant.png)
|
||||||
|
|
||||||
|
![Created action constants](/assets/images/posts-2021/09/20210901_action_constants.png)
|
||||||
|
|
||||||
|
Once all those constants are created I can move on to the meat of this little project:
|
||||||
|
|
||||||
|
#### ABX Action
|
||||||
|
I'll click back to **Extensibility > Library > Actions** and then **+ New Action**. I give the new action a clever title and description:
|
||||||
|
![Create a new action](/assets/images/posts-2021/09/20210901_create_action.png)
|
||||||
|
|
||||||
|
I then hit the language dropdown near the top left and select to use `powershell` so that I can use those sweet, sweet PowerCLI cmdlets.
|
||||||
|
![Language selection](/assets/images/posts-2021/09/20210901_action_select_language.png)
|
||||||
|
|
||||||
|
And I'll pop over to the right side to map the Action Constants I created earlier so that I can reference them in the script I'm about to write:
|
||||||
|
![Mapping constants in action](/assets/images/posts-2021/09/20210901_map_constants_to_action.png)
|
||||||
|
|
||||||
|
Now for The Script:
|
||||||
|
```powershell
|
||||||
|
<# vRA 8.x ABX action to perform certain in-guest actions post-deploy:
|
||||||
|
Windows:
|
||||||
|
- auto-update VM tools
|
||||||
|
- add specified domain users/groups to local Administrators group
|
||||||
|
- extend C: volume to fill disk
|
||||||
|
- set up remote access
|
||||||
|
- create a scheduled task to (attempt to) apply Windows updates
|
||||||
|
|
||||||
|
## Action Secrets:
|
||||||
|
templatePassWinDomain # password for domain account with admin rights to the template (domain-joined deployments)
|
||||||
|
templatePassWinWorkgroup # password for local account with admin rights to the template (standalone deployments)
|
||||||
|
vCenterPassword # password for vCenter account passed from the cloud template
|
||||||
|
|
||||||
|
## Action Inputs:
|
||||||
|
## Inputs from deployment:
|
||||||
|
resourceNames[0] # VM name [BOW-DVRT-XXX003]
|
||||||
|
customProperties.vCenterUser # user for connecting to vCenter [lab\vra]
|
||||||
|
customProperties.vCenter # vCenter instance to connect to [vcsa.lab.bowdre.net]
|
||||||
|
customProperties.dnsDomain # long-form domain name [lab.bowdre.net]
|
||||||
|
customProperties.adminsList # list of domain users/groups to be added as local admins [john, lab\vra, vRA-Admins]
|
||||||
|
customProperties.adJoin # boolean to determine if the system will be joined to AD (true) or not (false)
|
||||||
|
customProperties.templateUser # username used for connecting to the VM through vmtools [Administrator] / [root]
|
||||||
|
#>
|
||||||
|
|
||||||
|
function handler($context, $inputs) {
|
||||||
|
# Initialize global variables
|
||||||
|
$vcUser = $inputs.customProperties.vCenterUser
|
||||||
|
$vcPassword = $context.getSecret($inputs."vCenterPassword")
|
||||||
|
$vCenter = $inputs.customProperties.vCenter
|
||||||
|
|
||||||
|
# Create vmtools connection to the VM
|
||||||
|
$vmName = $inputs.resourceNames[0]
|
||||||
|
Connect-ViServer -Server $vCenter -User $vcUser -Password $vcPassword -Force
|
||||||
|
$vm = Get-VM -Name $vmName
|
||||||
|
Write-Host "Waiting for VM Tools to start..."
|
||||||
|
if (-not (Wait-Tools -VM $vm -TimeoutSeconds 180)) {
|
||||||
|
Write-Error "Unable to establish connection with VM tools" -ErrorAction Stop
|
||||||
|
}
|
||||||
|
|
||||||
|
# Detect OS type
|
||||||
|
$count = 0
|
||||||
|
While (!$osType) {
|
||||||
|
Try {
|
||||||
|
$osType = ($vm | Get-View).Guest.GuestFamily.ToString()
|
||||||
|
$toolsStatus = ($vm | Get-View).Guest.ToolsStatus.ToString()
|
||||||
|
} Catch {
|
||||||
|
# 60s timeout
|
||||||
|
if ($count -ge 12) {
|
||||||
|
Write-Error "Timeout exceeded while waiting for tools." -ErrorAction Stop
|
||||||
|
break
|
||||||
|
}
|
||||||
|
Write-Host "Waiting for tools..."
|
||||||
|
$count++
|
||||||
|
Sleep 5
|
||||||
|
}
|
||||||
|
}
|
||||||
|
Write-Host "$vmName is a $osType and its tools status is $toolsStatus."
|
||||||
|
|
||||||
|
# Update tools on Windows if out of date
|
||||||
|
if ($osType.Equals("windowsGuest") -And $toolsStatus.Equals("toolsOld")) {
|
||||||
|
Write-Host "Updating VM Tools..."
|
||||||
|
Update-Tools $vm
|
||||||
|
Write-Host "Waiting for VM Tools to start..."
|
||||||
|
if (-not (Wait-Tools -VM $vm -TimeoutSeconds 180)) {
|
||||||
|
Write-Error "Unable to establish connection with VM tools" -ErrorAction Stop
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
# Run OS-specific tasks
|
||||||
|
if ($osType.Equals("windowsGuest")) {
|
||||||
|
# Initialize Windows variables
|
||||||
|
$domainLong = $inputs.customProperties.dnsDomain
|
||||||
|
$adminsList = $inputs.customProperties.adminsList
|
||||||
|
$adJoin = $inputs.customProperties.adJoin
|
||||||
|
$templateUser = $inputs.customProperties.templateUser
|
||||||
|
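# choose the right guest credential for Invoke-VMScript; the ?: ternary requires PowerShell 7+ (which the ABX PowerShell runtime provides)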
$templatePassword = $adJoin.Equals("true") ? $context.getSecret($inputs."templatePassWinDomain") : $context.getSecret($inputs."templatePassWinWorkgroup")
|
||||||
|
|
||||||
|
# Add domain accounts to local administrators group
|
||||||
|
if ($adminsList.Length -gt 0 -And $adJoin.Equals("true")) {
|
||||||
|
# Standardize users entered without domain as DOMAIN\username
|
||||||
|
if ($adminsList.Length -gt 0) {
|
||||||
|
$domainShort = $domainLong.split('.')[0]
|
||||||
|
$adminsArray = @(($adminsList -Split ',').Trim())
|
||||||
|
For ($i=0; $i -lt $adminsArray.Length; $i++) {
|
||||||
|
If ($adminsArray[$i] -notmatch "$domainShort.*\\" -And $adminsArray[$i] -notmatch "@$domainShort") {
|
||||||
|
$adminsArray[$i] = $domainShort + "\" + $adminsArray[$i]
|
||||||
|
}
|
||||||
|
}
|
||||||
|
$admins = '"{0}"' -f ($adminsArray -join '","')
|
||||||
|
Write-Host "Administrators: $admins"
|
||||||
|
}
|
||||||
|
$adminScript = "Add-LocalGroupMember -Group Administrators -Member $admins"
|
||||||
|
Start-Sleep -s 10
|
||||||
|
Write-Host "Attempting to add administrator accounts..."
|
||||||
|
$runAdminScript = Invoke-VMScript -VM $vm -ScriptText $adminScript -GuestUser $templateUser -GuestPassword $templatePassword
|
||||||
|
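# Invoke-VMScript returns the guest's console output; empty output is treated as success here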
if ($runAdminScript.ScriptOutput.Length -eq 0) {
|
||||||
|
Write-Host "Successfully added [$admins] to Administrators group."
|
||||||
|
} else {
|
||||||
|
Write-Host "Attempt to add [$admins] to Administrators group completed with warnings:`n" $runAdminScript.ScriptOutput "`n"
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
Write-Host "No admins to add..."
|
||||||
|
}
|
||||||
|
# Extend C: volume to fill system drive
|
||||||
|
$partitionScript = "`$Partition = Get-Volume -DriveLetter C | Get-Partition; `$Partition | Resize-Partition -Size (`$Partition | Get-PartitionSupportedSize).sizeMax"
|
||||||
|
Start-Sleep -s 10
|
||||||
|
Write-Host "Attempting to extend system volume..."
|
||||||
|
$runPartitionScript = Invoke-VMScript -VM $vm -ScriptText $partitionScript -GuestUser $templateUser -GuestPassword $templatePassword
|
||||||
|
if ($runPartitionScript.ScriptOutput.Length -eq 0) {
|
||||||
|
Write-Host "Successfully extended system partition."
|
||||||
|
} else {
|
||||||
|
Write-Host "Attempt to extend system volume completed with warnings:`n" $runPartitionScript.ScriptOutput "`n"
|
||||||
|
}
|
||||||
|
# Set up remote access
|
||||||
|
$remoteScript = "Enable-NetFirewallRule -DisplayGroup `"Remote Desktop`"
|
||||||
|
Enable-NetFirewallRule -DisplayGroup `"Windows Management Instrumentation (WMI)`"
|
||||||
|
Enable-NetFirewallRule -DisplayGroup `"File and Printer Sharing`"
|
||||||
|
Enable-PsRemoting
|
||||||
|
Set-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Control\Terminal Server' -name `"fDenyTSConnections`" -Value 0"
|
||||||
|
Start-Sleep -s 10
|
||||||
|
Write-Host "Attempting to enable remote access (RDP, WMI, File and Printer Sharing, PSRemoting)..."
|
||||||
|
$runRemoteScript = Invoke-VMScript -VM $vm -ScriptText $remoteScript -GuestUser $templateUser -GuestPassword $templatePassword
|
||||||
|
if ($runRemoteScript.ScriptOutput.Length -eq 0) {
|
||||||
|
Write-Host "Successfully enabled remote access."
|
||||||
|
} else {
|
||||||
|
Write-Host "Attempt to enable remote access completed with warnings:`n" $runRemoteScript.ScriptOutput "`n"
|
||||||
|
}
|
||||||
|
# Create scheduled task to apply updates
|
||||||
|
$updateScript = "`$action = New-ScheduledTaskAction -Execute 'C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe' -Argument '-NoProfile -WindowStyle Hidden -Command `"& {Install-WUUpdates -Updates (Start-WUScan)}`"'
|
||||||
|
`$trigger = New-ScheduledTaskTrigger -Once -At ([DateTime]::Now.AddMinutes(1))
|
||||||
|
`$settings = New-ScheduledTaskSettingsSet -Compatibility Win8 -Hidden
|
||||||
|
Register-ScheduledTask -Action `$action -Trigger `$trigger -Settings `$settings -TaskName `"Initial_Updates`" -User `"NT AUTHORITY\SYSTEM`" -RunLevel Highest
|
||||||
|
`$task = Get-ScheduledTask -TaskName `"Initial_Updates`"
|
||||||
|
`$task.Triggers[0].StartBoundary = [DateTime]::Now.AddMinutes(1).ToString(`"yyyy-MM-dd'T'HH:mm:ss`")
|
||||||
|
`$task.Triggers[0].EndBoundary = [DateTime]::Now.AddHours(3).ToString(`"yyyy-MM-dd'T'HH:mm:ss`")
|
||||||
|
`$task.Settings.AllowHardTerminate = `$True
|
||||||
|
`$task.Settings.DeleteExpiredTaskAfter = 'PT0S'
|
||||||
|
`$task.Settings.ExecutionTimeLimit = 'PT2H'
|
||||||
|
`$task.Settings.Volatile = `$False
|
||||||
|
`$task | Set-ScheduledTask"
|
||||||
|
Start-Sleep -s 10
|
||||||
|
Write-Host "Creating a scheduled task to apply updates..."
|
||||||
|
$runUpdateScript = Invoke-VMScript -VM $vm -ScriptText $updateScript -GuestUser $templateUser -GuestPassword $templatePassword
|
||||||
|
Write-Host "Created task:`n" $runUpdateScript.ScriptOutput "`n"
|
||||||
|
} elseif ($osType.Equals("linuxGuest")) {
|
||||||
|
#TODO
|
||||||
|
Write-Host "Linux systems not supported by this action... yet"
|
||||||
|
}
|
||||||
|
# Cleanup connection
|
||||||
|
Disconnect-ViServer -Server $vCenter -Force -Confirm:$false
|
||||||
|
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
I like to think that it's fairly well documented (but I've also been staring at / tweaking this for a while); here's the gist of what it's doing:
|
||||||
|
1. Capture vCenter login credentials from the Action Constants and the `customProperties` of the deployment (from the cloud template).
|
||||||
|
2. Use those creds to `Connect-ViServer` to the vCenter instance.
|
||||||
|
3. Find the VM object which matches the `resourceName` from the vRA deployment.
|
||||||
|
4. Wait for VM tools to be running and accessible on that VM.
|
||||||
|
5. Determine the OS type of the VM (Windows/Linux).
|
||||||
|
6. If it's Windows and the tools are out of date, update them and wait for the reboot to complete.
|
||||||
|
7. If it's Windows, proceed with the in-guest configuration tasks:
|
||||||
|
8. If it needs to add accounts to the Administrators group, assemble the needed script and run it in the guest via `Invoke-VmScript`.
|
||||||
|
9. Assemble a script to expand the C: volume to fill whatever size VMDK is attached as HDD1, and run it in the guest via `Invoke-VmScript`.
|
||||||
|
10. Assemble a script to set common firewall exceptions for remote access, and run it in the guest via `Invoke-VmScript`.
|
||||||
|
11. Assemble a script to schedule a task to (attempt to) apply Windows updates, and run it in the guest via `Invoke-VmScript`.
|
||||||
|
|
||||||
|
It wouldn't be hard to customize the script to perform different actions, or even to run against Linux systems; the pattern stays the same: assemble the command string (`apt update && apt upgrade`, or whatever) and hand it to `Invoke-VMScript` along with the guest credentials. That said, this is as far as I'm going to take it for this demo.
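Just to illustrate, here's a rough (and untested) sketch of what the guts of that `linuxGuest` branch might look like - assuming, purely for illustration, that the Linux template's local credentials were stored in the same `templatePassWinWorkgroup` constant:

```powershell
# sketch only: fill in the linuxGuest branch with the same Invoke-VMScript pattern
$templateUser = $inputs.customProperties.templateUser
# assumption: reusing the workgroup secret for the Linux template's credentials
$templatePassword = $context.getSecret($inputs."templatePassWinWorkgroup")
$updateScript = "apt update && apt upgrade -y"
Write-Host "Attempting to apply updates..."
$runUpdateScript = Invoke-VMScript -VM $vm -ScriptText $updateScript -GuestUser $templateUser -GuestPassword $templatePassword
Write-Host "Update output:`n" $runUpdateScript.ScriptOutput "`n"
```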
|
||||||
|
|
||||||
|
#### Event subscription
|
||||||
|
Before I can test the new action, I'll need to first add an extensibility subscription so that the ABX action will get called during the deployment. So I head to **Extensibility > Subscriptions** and click the **New Subscription** button.
|
||||||
|
![Extensibility subscriptions](/assets/images/posts-2021/09/20210903_extensibility_subscriptions.png)
|
||||||
|
|
||||||
|
I'll be using this to call my new `configureGuest` action - so I'll name the subscription `Configure Guest`. I tie it to the `Compute Post Provision` event, and bind my action:
|
||||||
|
![Creating the new subscription](/assets/images/posts-2021/09/20210903_new_subscription_1.png)
|
||||||
|
|
||||||
|
I do have another subscription on that event already, [`VM Post-Provisioning`](adding-vm-notes-and-custom-attributes-with-vra8#extensibility-subscription), which is used to modify the VM object with notes and custom attributes. I'd like to make sure that my work inside the guest happens after that other subscription is completed, so I'll enable blocking and give it a priority of `2`:
|
||||||
|
![Adding blocking to Configure Guest](/assets/images/posts-2021/09/20210903_new_subscription_2.png)
|
||||||
|
|
||||||
|
After hitting the **Save** button, I go back to that other `VM Post-Provisioning` subscription, set it to enable blocking, and give it a priority of `1`:
|
||||||
|
![Blocking VM Post-Provisioning](/assets/images/posts-2021/09/20210903_old_subscription_blocking.png)
|
||||||
|
|
||||||
|
This will ensure that the new subscription fires after the older one completes, and that should avoid any conflicts between the two.
|
||||||
|
|
||||||
|
### Testing
|
||||||
|
Alright, now let's see if it worked. I head into Service Broker to submit the deployment request:
|
||||||
|
![Submitting the test deployment](/assets/images/posts-2021/09/20210903_request.png)
|
||||||
|
|
||||||
|
Note that I've set the disk size to 65GB (up from the default of 60), and I'm adding `lab\testy` as a local admin on the deployed system.
|
||||||
|
|
||||||
|
Once the deployment finishes, I can switch back to Cloud Assembly and check **Extensibility > Activity > Action Runs** and then click on the `configureGuest` run to see how it did.
|
||||||
|
![Successful action run](/assets/images/posts-2021/09/20210903_action_run_success.png)
|
||||||
|
|
||||||
|
It worked!
|
||||||
|
|
||||||
|
The Log tab lets me follow along as the execution progresses:
|
||||||
|
|
||||||
|
```
|
||||||
|
Logging in to server.
|
||||||
|
logged in to server vcsa.lab.bowdre.net:443
|
||||||
|
Read-only file system
|
||||||
|
09/03/2021 19:08:27 Get-VM Finished execution
|
||||||
|
09/03/2021 19:08:27 Get-VM
|
||||||
|
Waiting for VM Tools to start...
|
||||||
|
09/03/2021 19:08:29 Wait-Tools 5222b516-ae2c-5740-2926-77cd21441f27
|
||||||
|
09/03/2021 19:08:29 Wait-Tools Finished execution
|
||||||
|
09/03/2021 19:08:29 Wait-Tools
|
||||||
|
09/03/2021 19:08:29 Get-View Finished execution
|
||||||
|
09/03/2021 19:08:29 Get-View
|
||||||
|
09/03/2021 19:08:29 Get-View Finished execution
|
||||||
|
09/03/2021 19:08:29 Get-View
|
||||||
|
BOW-PSVS-XXX001 is a windowsGuest and its tools status is toolsOld.
|
||||||
|
Updating VM Tools...
|
||||||
|
09/03/2021 19:08:30 Update-Tools 5222b516-ae2c-5740-2926-77cd21441f27
|
||||||
|
09/03/2021 19:08:30 Update-Tools Finished execution
|
||||||
|
09/03/2021 19:08:30 Update-Tools
|
||||||
|
Waiting for VM Tools to start...
|
||||||
|
09/03/2021 19:09:00 Wait-Tools 5222b516-ae2c-5740-2926-77cd21441f27
|
||||||
|
09/03/2021 19:09:00 Wait-Tools Finished execution
|
||||||
|
09/03/2021 19:09:00 Wait-Tools
|
||||||
|
Administrators: "lab\testy"
|
||||||
|
Attempting to add administrator accounts...
|
||||||
|
09/03/2021 19:09:10 Invoke-VMScript 5222b516-ae2c-5740-2926-77cd21441f27
|
||||||
|
09/03/2021 19:09:10 Invoke-VMScript Finished execution
|
||||||
|
09/03/2021 19:09:10 Invoke-VMScript
|
||||||
|
Successfully added ["lab\testy"] to Administrators group.
|
||||||
|
Attempting to extend system volume...
|
||||||
|
09/03/2021 19:09:27 Invoke-VMScript 5222b516-ae2c-5740-2926-77cd21441f27
|
||||||
|
09/03/2021 19:09:27 Invoke-VMScript Finished execution
|
||||||
|
09/03/2021 19:09:27 Invoke-VMScript
|
||||||
|
Successfully extended system partition.
|
||||||
|
Attempting to enable remote access (RDP, WMI, File and Printer Sharing, PSRemoting)...
|
||||||
|
09/03/2021 19:09:49 Invoke-VMScript 5222b516-ae2c-5740-2926-77cd21441f27
|
||||||
|
09/03/2021 19:09:49 Invoke-VMScript Finished execution
|
||||||
|
09/03/2021 19:09:49 Invoke-VMScript
|
||||||
|
Successfully enabled remote access.
|
||||||
|
Creating a scheduled task to apply updates...
|
||||||
|
09/03/2021 19:10:12 Invoke-VMScript 5222b516-ae2c-5740-2926-77cd21441f27
|
||||||
|
09/03/2021 19:10:12 Invoke-VMScript Finished execution
|
||||||
|
09/03/2021 19:10:12 Invoke-VMScript
|
||||||
|
Created task:
|
||||||
|
|
||||||
|
TaskPath TaskName State
|
||||||
|
-------- -------- -----
|
||||||
|
\ Initial_Updates Ready
|
||||||
|
\ Initial_Updates Ready
|
||||||
|
```
|
||||||
|
|
||||||
|
So it *claims* to have successfully updated the VM tools, added `lab\testy` to the local `Administrators` group, extended the `C:` volume to fill the 65GB virtual disk, added firewall rules to permit remote access, and created a scheduled task to apply updates. I can open a console session to the VM to spot-check the results.
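(A couple of quick commands from an elevated PowerShell session in the guest make for an easy sanity check:)

```powershell
# list members of the local Administrators group
Get-LocalGroupMember -Group "Administrators"
# confirm the size of the C: volume
Get-Volume -DriveLetter C
```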
|
||||||
|
![Verifying local admins](/assets/images/posts-2021/09/20210903_verify_local_admins.png)
|
||||||
|
Yep, `testy` is an admin now!
|
||||||
|
|
||||||
|
![Verify disk size](/assets/images/posts-2021/09/20210903_verify_disk_size.png)
|
||||||
|
And `C:` fills the disk!
|
||||||
|
|
||||||
|
### Wrap-up
|
||||||
|
This is really just the start of what I've been able to do in-guest leveraging `Invoke-VmScript` from an ABX action. I've got a [slightly-larger version of this script](https://github.com/jbowdre/misc-scripts/blob/main/vRealize/configure_guest.ps1) which also performs similar actions in Linux guests as well. And I've also cobbled together ABX solutions for generating randomized passwords for local accounts and storing them in an organization's password management solution. I would like to get around to documenting those here in the future... we'll see.
|
||||||
|
|
||||||
|
In any case, hopefully this information might help someone else to get started down this path. I'd love to see whatever enhancements you are able to come up with!
|
|
@ -0,0 +1,501 @@
|
||||||
|
---
|
||||||
|
categories:
|
||||||
|
- Projects
|
||||||
|
date: "2021-10-28T00:00:00Z"
|
||||||
|
header:
|
||||||
|
teaser: assets/images/posts-2021/10/20211028_wireguard_in_the_cloud.jpg
|
||||||
|
tags:
|
||||||
|
- linux
|
||||||
|
- gcp
|
||||||
|
- cloud
|
||||||
|
- wireguard
|
||||||
|
- vpn
|
||||||
|
- homelab
|
||||||
|
- tasker
|
||||||
|
- automation
|
||||||
|
title: Cloud-hosted WireGuard VPN for remote homelab access
|
||||||
|
---
|
||||||
|
For a while now, I've been using an [OpenVPN Access Server](https://openvpn.net/access-server/) virtual appliance for remotely accessing my [homelab](vmware-home-lab-on-intel-nuc-9). That's worked _fine_ but it comes with a lot of overhead. It also requires maintaining an SSL certificate and forwarding three ports through my home router, in addition to managing a fairly complex software package and configurations. The free version of the OpenVPN server also only supports a maximum of two simultaneous connections. I recently ran into issues with the `certbot` automated SSL renewal process on my OpenVPN AS VM and decided that it might be time to look for a simpler solution.
|
||||||
|
|
||||||
|
I found that solution in [WireGuard](https://www.wireguard.com/), which provides an extremely efficient secure tunnel implemented directly in the Linux kernel. It has a much smaller (and easier-to-audit) codebase, requires minimal configuration, and uses the latest crypto wizardry to securely connect multiple systems. It took me an hour or so of fumbling to get WireGuard deployed and configured on a fresh (and minimal) Ubuntu 20.04 VM running on my ESXi 7 homelab host, and I was pretty happy with the performance, stability, and resource usage of the new setup. That new VM idled at a full _tenth_ of the memory usage of my OpenVPN AS, and it only required a single port to be forwarded into my home network.
|
||||||
|
|
||||||
|
Of course, I soon realized that the setup could be _even better:_ I'm now running a WireGuard server on the Google Cloud free tier, and I've configured the [VyOS virtual router I use for my homelab stuff](vmware-home-lab-on-intel-nuc-9#networking) to connect to that cloud-hosted server to create a secure tunnel between the two without needing to punch any holes in my local network (or consume any additional resources). I can then connect my client devices to the WireGuard server in the cloud. From there, traffic intended for my home network gets relayed to the VyOS router, and internet-bound traffic leaves Google Cloud directly. So my self-managed VPN isn't just good for accessing my home lab remotely, but also more generally for encrypting traffic when on WiFi networks I don't control - allowing me to replace the paid ProtonVPN subscription I had been using for that purpose.
|
||||||
|
|
||||||
|
It's a pretty slick setup, if I do say so myself. Anyway, this post will discuss how I implemented this, and what I learned along the way.
|
||||||
|
|
||||||
|
### WireGuard Concepts, in Brief
|
||||||
|
WireGuard does things a bit differently from other VPN solutions I've used in the past. For starters, there aren't any user accounts to manage, and in fact users don't really come into the picture at all. WireGuard also doesn't really distinguish between _client_ and _server_; the devices on both ends of a tunnel connection are _peers_, and they use the same software package and very similar configurations. Each WireGuard peer is configured with a virtual network interface with a private IP address used for the tunnel network, and a configuration file tells it which tunnel IP(s) will be used by the other peer(s). Each peer has its own cryptographic _private_ key, and the other peers get a copy of the corresponding _public_ key added to their configuration so that all the peers can recognize each other and encrypt/decrypt traffic appropriately. This mapping of peer addresses to public keys facilitates what WireGuard calls [Cryptokey Routing](https://www.wireguard.com/#cryptokey-routing).
|
||||||
|
|
||||||
|
Once the peers are configured, all it takes is bringing up the WireGuard virtual interface on each peer to establish the tunnel and start passing secure traffic.
|
||||||
|
|
||||||
|
You can read a lot more fascinating details about how this all works back on the [WireGuard homepage](https://www.wireguard.com/#conceptual-overview) (and even more in this [protocol description](https://www.wireguard.com/protocol/)) but this at least covers the key points I needed to grok prior to a successful initial deployment.
|
||||||
|
|
||||||
|
For my hybrid cloud solution, I also leaned heavily upon [this write-up of a WireGuard Site-to-Site configuration](https://gist.github.com/insdavm/b1034635ab23b8839bf957aa406b5e39) for how to get traffic flowing between my on-site environment, cloud-hosted WireGuard server, and "Road Warrior" client devices, and drew from [this documentation on implementing WireGuard in GCP](https://github.com/agavrel/wireguard_google_cloud) as well. The [VyOS documentation for configuring the built-in WireGuard interface](https://docs.vyos.io/en/latest/configuration/interfaces/wireguard.html) was also quite helpful to me.
|
||||||
|
|
||||||
|
Okay, enough background; let's get this thing going.
|
||||||
|
|
||||||
|
### Google Cloud Setup
|
||||||
|
#### Instance Deployment
|
||||||
|
I started by logging into my Google Cloud account at https://console.cloud.google.com, and proceeded to create a new project (named `wireguard`) to keep my WireGuard-related resources together. I then navigated to **Compute Engine** and [created a new instance](https://console.cloud.google.com/compute/instancesAdd) inside that project. The basic setup is:
|
||||||
|
|
||||||
|
| Attribute | Value |
|
||||||
|
| --- | --- |
|
||||||
|
| Name | `wireguard` |
|
||||||
|
| Region | `us-east1` (or whichever [free-tier-eligible region](https://cloud.google.com/free/docs/gcp-free-tier/#compute) is closest) |
|
||||||
|
| Machine Type | `e2-micro` |
|
||||||
|
| Boot Disk Size | 10 GB |
|
||||||
|
| Boot Disk Image | Ubuntu 20.04 LTS |
|
||||||
|
|
||||||
|
![Instance creation](/assets/images/posts-2021/10/20211027_instance_creation.png)
|
||||||
|
|
||||||
|
The other defaults are fine, but I'll hold off on clicking the friendly blue "Create" button at the bottom and instead click to expand the **Networking, Disks, Security, Management, Sole-Tenancy** sections to tweak a few more things.
|
||||||
|
![Instance creation advanced settings](/assets/images/posts-2021/10/20211028_instance_advanced_settings.png)
|
||||||
|
|
||||||
|
##### Network Configuration
|
||||||
|
Expanding the **Networking** section of the request form lets me add a new `wireguard` network tag, which will make it easier to target the instance with a firewall rule later. I also want to enable the _IP Forwarding_ option so that the instance will be able to do router-like things.
|
||||||
|
|
||||||
|
By default, the new instance will get assigned a public IP address that I can use to access it externally - but this address is _ephemeral_ so it will change periodically. Normally I'd overcome this by [using ddclient to manage its dynamic DNS record](bitwarden-password-manager-self-hosted-on-free-google-cloud-instance#configure-dynamic-dns), but (looking ahead) [VyOS's WireGuard interface configuration](https://docs.vyos.io/en/latest/configuration/interfaces/wireguard.html#interface-configuration) unfortunately only supports connecting to an IP rather than a hostname. That means I'll need to reserve a _static_ IP address for my instance.
|
||||||
|
|
||||||
|
I can do that by clicking on the _Default_ network interface to expand the configuration. While I'm here, I'll first change the **Network Service Tier** from _Premium_ to _Standard_ to save a bit of money on network egress fees. _(This might be a good time to mention that while the compute instance itself is free, I will have to spend [about $3/mo for the public IP](https://cloud.google.com/vpc/network-pricing#:~:text=internal%20IP%20addresses.-,External%20IP%20address%20pricing,-You%20are%20charged), as well as [$0.085/GiB for internet egress via the Standard tier](https://cloud.google.com/vpc/network-pricing#:~:text=or%20Cloud%20Interconnect.-,Standard%20Tier%20pricing,-Egress%20pricing%20is) (versus [$0.12/GiB on the Premium tier](https://cloud.google.com/vpc/network-pricing#:~:text=Premium%20Tier%20pricing)). So not entirely free, but still pretty damn cheap for a cloud-hosted VPN that I control completely.)_
|
||||||
|
|
||||||
|
Anyway, after switching to the cheaper Standard tier I can click on the **External IP** dropdown and select the option to _Create IP Address_. I give it the same name as my instance to make it easy to keep up with.
|
||||||
|
|
||||||
|
![Network configuration](/assets/images/posts-2021/10/20211027_network_settings.png)
|
||||||
|
|
||||||
|
##### Security Configuration
|
||||||
|
The **Security** section lets me go ahead and upload an SSH public key that I can then use for logging into the instance once it's running. Of course, that means I'll first need to generate a key pair for this purpose:
|
||||||
|
```sh
|
||||||
|
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_wireguard
|
||||||
|
```
|
||||||
|
|
||||||
|
Okay, now that I've got my keys, I can click the **Add Item** button and paste in the contents of `~/.ssh/id_ed25519_wireguard.pub`.
|
||||||
|
|
||||||
|
![Security configuration](/assets/images/posts-2021/10/20211027_security_settings.png)
|
||||||
|
|
||||||
|
And that's it for the pre-deploy configuration! Time to hit **Create** to kick it off.
|
||||||
|
|
||||||
|
![Do it!](/assets/images/posts-2021/10/20211027_creation_time.png)
|
||||||
|
|
||||||
|
The instance creation will take a couple of minutes but I can go ahead and get the firewall sorted while I wait.
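(For reference, roughly the same instance could be requested with the `gcloud` CLI - this is just a sketch, which assumes a zone within the selected region and that the reserved static address is also named `wireguard`:)

```sh
gcloud compute instances create wireguard \
    --project=wireguard \
    --zone=us-east1-b \
    --machine-type=e2-micro \
    --image-family=ubuntu-2004-lts \
    --image-project=ubuntu-os-cloud \
    --boot-disk-size=10GB \
    --tags=wireguard \
    --can-ip-forward \
    --network-tier=STANDARD \
    --address=wireguard
```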
|
||||||
|
|
||||||
|
#### Firewall
|
||||||
|
Google Cloud's default firewall configuration will let me reach my new server via SSH without needing to configure anything, but I'll need to add a new rule to allow the WireGuard traffic. I do this by going to **VPC > Firewall** and clicking the button at the top to **[Create Firewall Rule](https://console.cloud.google.com/networking/firewalls/add)**. I give it a name (`allow-wireguard-ingress`), select the rule target by specifying the `wireguard` network tag I had added to the instance, and set the source range to `0.0.0.0/0`. I'm going to use the default WireGuard port so select the _udp:_ checkbox and enter `51820`.
|
||||||
|
|
||||||
|
![Firewall rule creation](/assets/images/posts-2021/10/20211027_firewall.png)
|
||||||
|
|
||||||
|
I'll click **Create** and move on.
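(Or, roughly the same thing from the `gcloud` CLI:)

```sh
gcloud compute firewall-rules create allow-wireguard-ingress \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=udp:51820 \
    --source-ranges=0.0.0.0/0 \
    --target-tags=wireguard
```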
|
||||||
|
|
||||||
|
#### WireGuard Server Setup
|
||||||
|
Once the **Compute Engine > Instances** [page](https://console.cloud.google.com/compute/instances) indicates that the instance is ready, I can make a note of the listed public IP and then log in via SSH:
|
||||||
|
```sh
|
||||||
|
ssh -i ~/.ssh/id_ed25519_wireguard {PUBLIC_IP}
|
||||||
|
```
|
||||||
|
|
||||||
|
##### Preparation
|
||||||
|
And, as always, I'll first make sure the OS is fully updated before doing anything else:
|
||||||
|
```sh
|
||||||
|
sudo apt update
|
||||||
|
sudo apt upgrade
|
||||||
|
```
|
||||||
|
|
||||||
|
Then I'll install `ufw` to easily manage the host firewall, `qrencode` to make it easier to generate configs for mobile clients, `openresolv` to avoid [this issue](https://superuser.com/questions/1500691/usr-bin-wg-quick-line-31-resolvconf-command-not-found-wireguard-debian/1500896), and `wireguard` to, um, guard the wires:
|
||||||
|
```sh
|
||||||
|
sudo apt install ufw qrencode openresolv wireguard
|
||||||
|
```
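(A quick peek ahead at why `qrencode` made the list: once a mobile peer's config file exists, it can render the whole thing as a scannable QR code right in the terminal. With a hypothetical `client.conf`, that's just:)

```sh
qrencode -t ansiutf8 < client.conf
```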
|
||||||
|
|
||||||
|
Configuring the host firewall with `ufw` is very straightforward:
|
||||||
|
```sh
|
||||||
|
# First, SSH:
|
||||||
|
sudo ufw allow 22/tcp
|
||||||
|
# and WireGuard:
|
||||||
|
sudo ufw allow 51820/udp
|
||||||
|
# Then turn it on:
|
||||||
|
sudo ufw enable
|
||||||
|
```
|
||||||
|
|
||||||
|
The last preparatory step is to enable packet forwarding in the kernel so that the instance will be able to route traffic between the remote clients and my home network (once I get to that point). I can configure that on-the-fly with:
|
||||||
|
```sh
|
||||||
|
sudo sysctl -w net.ipv4.ip_forward=1
|
||||||
|
```
|
||||||
|
|
||||||
|
To make it permanent, I'll edit `/etc/sysctl.conf` and uncomment the same line:
|
||||||
|
```sh
|
||||||
|
$ sudo vi /etc/sysctl.conf
|
||||||
|
# Uncomment the next line to enable packet forwarding for IPv4
|
||||||
|
net.ipv4.ip_forward=1
|
||||||
|
```
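A quick `sudo sysctl -p` then re-reads the file and echoes back what it applied, confirming the change will stick:

```sh
# reload settings from /etc/sysctl.conf and echo what was applied
sudo sysctl -p
```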
|
||||||
|
|
||||||
|
##### WireGuard Interface Config
|
||||||
|
I'll switch to the root user, move into the `/etc/wireguard` directory, and issue `umask 077` so that the files I'm about to create will have a very limited permission set (to be accessible by root, and _only_ root):
|
||||||
|
```sh
|
||||||
|
sudo -i
|
||||||
|
cd /etc/wireguard
|
||||||
|
umask 077
|
||||||
|
```
|
||||||
|
|
||||||
|
Then I can use the `wg genkey` command to generate the server's private key, save it to a file called `server.key`, pass it through `wg pubkey` to generate the corresponding public key, and save that to `server.pub`:
|
||||||
|
```sh
|
||||||
|
wg genkey | tee server.key | wg pubkey > server.pub
|
||||||
|
```
|
||||||
|
|
||||||
|
As I mentioned earlier, WireGuard will create a virtual network interface using an internal network to pass traffic between the WireGuard peers. By convention, that interface is `wg0` and it draws its configuration from a file in `/etc/wireguard` named `wg0.conf`. I could create a configuration file with a different name and thus wind up with a different interface name as well, but I'll stick with tradition to keep things easy to follow.
|
||||||
|
|
||||||
|
The format of the interface configuration file will need to look something like this:
|
||||||
|
```
|
||||||
|
[Interface] # this section defines the local WireGuard interface
|
||||||
|
Address = # CIDR-format IP address of the virtual WireGuard interface
|
||||||
|
ListenPort = # WireGuard listens on this port for incoming traffic (randomized if not specified)
|
||||||
|
PrivateKey = # private key used to encrypt traffic sent to other peers
|
||||||
|
MTU = # packet size
|
||||||
|
DNS = # optional DNS server(s) and search domain(s) used for the VPN
|
||||||
|
PostUp = # command executed by wg-quick wrapper when the interface comes up
|
||||||
|
PostDown = # command executed by wg-quick wrapper when the interface goes down
|
||||||
|
|
||||||
|
[Peer] # now we're talking about the other peers connecting to this instance
|
||||||
|
PublicKey = # public key used to decrypt traffic sent by this peer
|
||||||
|
AllowedIPs = # which IPs will be routed to this peer
|
||||||
|
```
|
||||||
|
|
||||||
|
There will be a single `[Interface]` section in each peer's configuration file, but they may include multiple `[Peer]` sections. For my config, I'll use the `10.200.200.0/24` network for WireGuard, and let this server be `10.200.200.1`, the VyOS router in my home lab `10.200.200.2`, and I'll assign IPs to the other peers from there. I found a note that Google Cloud uses an MTU size of `1460` bytes so that's what I'll set on this end. I'm going to configure WireGuard to use the VyOS router as the DNS server, and I'll specify my internal `lab.bowdre.net` search domain. Finally, I'll leverage the `PostUp` and `PostDown` directives to enable and disable NAT so that the server will be able to forward traffic between networks for me.
|
||||||
|
|
||||||
|
So here's the start of my GCP WireGuard server's `/etc/wireguard/wg0.conf`:
|
||||||
|
```sh
|
||||||
|
# /etc/wireguard/wg0.conf
|
||||||
|
[Interface]
|
||||||
|
Address = 10.200.200.1/24
|
||||||
|
ListenPort = 51820
|
||||||
|
PrivateKey = {GCP_PRIVATE_KEY}
|
||||||
|
MTU = 1460
|
||||||
|
DNS = 10.200.200.2, lab.bowdre.net
|
||||||
|
PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o ens4 -j MASQUERADE; ip6tables -A FORWARD -i wg0 -j ACCEPT; ip6tables -t nat -A POSTROUTING -o ens4 -j MASQUERADE
|
||||||
|
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o ens4 -j MASQUERADE; ip6tables -D FORWARD -i wg0 -j ACCEPT; ip6tables -t nat -D POSTROUTING -o ens4 -j MASQUERADE
|
||||||
|
```
|
||||||
|
|
||||||
|
I don't have any other peers ready to add to this config yet, but I can go ahead and bring up the interface all the same. I'm going to use the `wg-quick` wrapper instead of calling `wg` directly since it simplifies a bit of the configuration, but first I'll need to enable the `wg-quick@{INTERFACE}` service so that it will run automatically at startup:
|
||||||
|
```sh
|
||||||
|
systemctl enable wg-quick@wg0
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
I can now bring up the interface with `wg-quick up wg0` and check the status with `wg show`:
|
||||||
|
```
|
||||||
|
root@wireguard:~# wg-quick up wg0
|
||||||
|
[#] ip link add wg0 type wireguard
|
||||||
|
[#] wg setconf wg0 /dev/fd/63
|
||||||
|
[#] ip -4 address add 10.200.200.1/24 dev wg0
|
||||||
|
[#] ip link set mtu 1460 up dev wg0
|
||||||
|
[#] resolvconf -a wg0 -m 0 -x
|
||||||
|
[#] iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o ens4 -j MASQUERADE; ip6tables -A FORWARD -i wg0 -j ACCEPT; ip6tables -t nat -A POSTROUTING -o ens4 -j MASQUERADE
|
||||||
|
root@wireguard:~# wg show
|
||||||
|
interface: wg0
|
||||||
|
public key: {GCP_PUBLIC_KEY}
|
||||||
|
private key: (hidden)
|
||||||
|
listening port: 51820
|
||||||
|
```
|
||||||
|
|
||||||
|
I'll come back here once I've got a peer config to add.
|
||||||
|
|
||||||
|
### Configure VyOS Router as WireGuard Peer
|
||||||
|
Comparatively, configuring WireGuard on VyOS is a bit more direct. I'll start by entering configuration mode and generating and binding a key pair for this interface:
|
||||||
|
```sh
|
||||||
|
configure
|
||||||
|
run generate pki wireguard key-pair install interface wg0
|
||||||
|
```
|
||||||
|
|
||||||
|
And then I'll configure the rest of the options needed for the interface:
|
||||||
|
```sh
|
||||||
|
set interfaces wireguard wg0 address '10.200.200.2/24'
|
||||||
|
set interfaces wireguard wg0 description 'VPN to GCP'
|
||||||
|
set interfaces wireguard wg0 peer wireguard-gcp address '{GCP_PUBLIC_IP}'
|
||||||
|
set interfaces wireguard wg0 peer wireguard-gcp allowed-ips '0.0.0.0/0'
|
||||||
|
set interfaces wireguard wg0 peer wireguard-gcp persistent-keepalive '25'
|
||||||
|
set interfaces wireguard wg0 peer wireguard-gcp port '51820'
|
||||||
|
set interfaces wireguard wg0 peer wireguard-gcp public-key '{GCP_PUBLIC_KEY}'
|
||||||
|
```
|
||||||
|
|
||||||
|
Note that this time I'm allowing all IPs (`0.0.0.0/0`) so that this WireGuard interface will pass traffic intended for any destination (whether it's local, remote, or on the Internet). And I'm specifying a [25-second `persistent-keepalive` interval](https://www.wireguard.com/quickstart/#nat-and-firewall-traversal-persistence) to help ensure that this NAT-ed tunnel stays up even when it's not actively passing traffic - after all, I'll need the GCP-hosted peer to be able to initiate the connection so I can access the home network remotely.
|
||||||
|
|
||||||
|
While I'm at it, I'll also add a static route to ensure traffic for the WireGuard tunnel finds the right interface:
|
||||||
|
```sh
|
||||||
|
set protocols static route 10.200.200.0/24 interface wg0
|
||||||
|
```
|
||||||
|
|
||||||
|
And I'll add the new `wg0` interface as a listening address for the VyOS DNS forwarder:
|
||||||
|
```sh
|
||||||
|
set service dns forwarding listen-address '10.200.200.2'
|
||||||
|
```
|
||||||
|
|
||||||
|
I can use the `compare` command to verify the changes I've made, and then apply and save the updated config:
|
||||||
|
```sh
|
||||||
|
compare
|
||||||
|
commit
|
||||||
|
save
|
||||||
|
```
|
||||||
|
|
||||||
|
I can check the status of WireGuard on VyOS (and view the public key!) like so:
|
||||||
|
```sh
|
||||||
|
$ show interfaces wireguard wg0 summary
|
||||||
|
interface: wg0
|
||||||
|
public key: {VYOS_PUBLIC_KEY}
|
||||||
|
private key: (hidden)
|
||||||
|
listening port: 43543
|
||||||
|
|
||||||
|
peer: {GCP_PUBLIC_KEY}
|
||||||
|
endpoint: {GCP_PUBLIC_IP}:51820
|
||||||
|
allowed ips: 0.0.0.0/0
|
||||||
|
transfer: 0 B received, 592 B sent
|
||||||
|
persistent keepalive: every 25 seconds
|
||||||
|
```
|
||||||
|
|
||||||
|
See? That part was much easier to set up! But it doesn't look like it's actually passing traffic yet... because while the VyOS peer has been configured with the GCP peer's public key, the GCP peer doesn't know anything about the VyOS peer yet.
|
||||||
|
|
||||||
|
So I'll copy `{VYOS_PUBLIC_KEY}` and SSH back to the GCP instance to finish that configuration. Once I'm there, I can edit `/etc/wireguard/wg0.conf` as root and add in a new `[Peer]` section at the bottom, like this:
|
||||||
|
```
|
||||||
|
[Peer]
|
||||||
|
# VyOS
|
||||||
|
PublicKey = {VYOS_PUBLIC_KEY}
|
||||||
|
AllowedIPs = 10.200.200.2/32, 192.168.1.0/24, 172.16.0.0/16
|
||||||
|
```
|
||||||
|
|
||||||
|
This time, I'm telling WireGuard that the new peer has IP `10.200.200.2` but that it should also get traffic destined for the `192.168.1.0/24` and `172.16.0.0/16` networks, my home and lab networks. Again, the `AllowedIPs` parameter is used for WireGuard's Cryptokey Routing so that it can keep track of which traffic goes to which peers (and which key to use for encryption).
|
||||||
|
|
||||||
|
After saving the file, I can either restart WireGuard by bringing the interface down and back up (`wg-quick down wg0 && wg-quick up wg0`), or I can reload it on the fly with:
|
||||||
|
```sh
|
||||||
|
sudo -i
|
||||||
|
wg syncconf wg0 <(wg-quick strip wg0)
|
||||||
|
```
|
||||||
|
|
||||||
|
(I can't just use `wg syncconf wg0` directly since `/etc/wireguard/wg0.conf` includes the `PostUp`/`PostDown` commands which can only be parsed by the `wg-quick` wrapper, so I'm using `wg-quick strip {INTERFACE}` to grab the contents of the config file, remove the problematic bits, and then pass what's left to the `wg syncconf {INTERFACE}` command to update the current running config.)
|
||||||
|
|
||||||
|
Now I can check the status of WireGuard on the GCP end:
|
||||||
|
```sh
|
||||||
|
root@wireguard:~# wg show
|
||||||
|
interface: wg0
|
||||||
|
public key: {GCP_PUBLIC_KEY}
|
||||||
|
private key: (hidden)
|
||||||
|
listening port: 51820
|
||||||
|
|
||||||
|
peer: {VYOS_PUBLIC_KEY}
|
||||||
|
endpoint: {VYOS_PUBLIC_IP}:43990
|
||||||
|
allowed ips: 10.200.200.2/32, 192.168.1.0/24, 172.16.0.0/16
|
||||||
|
latest handshake: 55 seconds ago
|
||||||
|
transfer: 1.23 KiB received, 368 B sent
|
||||||
|
```
|
||||||
|
|
||||||
|
Hey, we're passing traffic now! And I can verify that I can ping stuff on my home and lab networks from the GCP instance:
|
||||||
|
```sh
|
||||||
|
john@wireguard:~$ ping -c 1 192.168.1.5
|
||||||
|
PING 192.168.1.5 (192.168.1.5) 56(84) bytes of data.
|
||||||
|
64 bytes from 192.168.1.5: icmp_seq=1 ttl=127 time=35.6 ms
|
||||||
|
|
||||||
|
--- 192.168.1.5 ping statistics ---
|
||||||
|
1 packets transmitted, 1 received, 0% packet loss, time 0ms
|
||||||
|
rtt min/avg/max/mdev = 35.598/35.598/35.598/0.000 ms
|
||||||
|
|
||||||
|
john@wireguard:~$ ping -c 1 172.16.10.1
|
||||||
|
PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
|
||||||
|
64 bytes from 172.16.10.1: icmp_seq=1 ttl=64 time=35.3 ms
|
||||||
|
|
||||||
|
--- 172.16.10.1 ping statistics ---
|
||||||
|
1 packets transmitted, 1 received, 0% packet loss, time 0ms
|
||||||
|
rtt min/avg/max/mdev = 35.275/35.275/35.275/0.000 ms
|
||||||
|
```
|
||||||
|
|
||||||
|
Cool!
|
||||||
|
|
||||||
|
### Adding Additional Peers
|
||||||
|
So my GCP and VyOS peers are talking, but the ultimate goals here are for my Chromebook to have access to my homelab resources while away from home, and for my phones to have secure internet access when connected to WiFi networks I don't control. That means adding at least two more peers to the GCP server. WireGuard [offers downloads](https://www.wireguard.com/install/) for just about every operating system you can imagine, but I'll be using the [Android app](https://play.google.com/store/apps/details?id=com.wireguard.android) for both the Chromebook and phones.
|
||||||
|
|
||||||
|
#### Chromebook
|
||||||
|
The first step is to install the WireGuard Android app.
|
||||||
|
|
||||||
|
_Note: the version of the WireGuard app currently available on the Play Store (v1.0.20210926) [has an issue](https://www.reddit.com/r/WireGuard/comments/q11rt9/wireguard_1020210926_and_chromeos/) on Chrome OS that causes it to not pass traffic after the Chromebook has resumed from sleep. The workaround for this is to install an older version of the app (1.0.20210506) which can be obtained from [F-Droid](https://f-droid.org/en/packages/com.wireguard.android/). Doing so requires having the Linux environment enabled on Chrome OS and the **Develop Android Apps > Enable ADB Debugging** option enabled in the Chrome OS settings. The process for sideloading apps is [detailed here](https://developer.android.com/topic/arc/development-environment)._

Once it's installed, I open the app and click the "Plus" button to create a new tunnel, then select the _Create from scratch_ option. Clicking the circle-arrows icon at the right edge of the _Private key_ field automatically generates this peer's private and public key pair, and simply clicking on the _Public key_ field copies the generated key to my clipboard, which will be useful for sharing it with the server. Otherwise, I fill out the **Interface** section similarly to what I've done already:

| Parameter | Value |
| --- | --- |
| Name | `wireguard-gcp` |
| Private key | `{CB_PRIVATE_KEY}` |
| Public key | `{CB_PUBLIC_KEY}` |
| Addresses | `10.200.200.3/24` |
| Listen port | |
| DNS servers | `10.200.200.2` |
| MTU | |

I then click the **Add Peer** button to tell this client about the peer it will be connecting to - the GCP-hosted instance:

| Parameter | Value |
| --- | --- |
| Public key | `{GCP_PUBLIC_KEY}` |
| Pre-shared key | |
| Persistent keepalive | |
| Endpoint | `{GCP_PUBLIC_IP}:51820` |
| Allowed IPs | `0.0.0.0/0` |

I _shouldn't_ need the keepalive for the "Road Warrior" peers connecting to the GCP peer, but I can always set it later if I run into stability issues.
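
(If stability issues did crop up - say, behind a NAT that drops idle mappings - the fix would just be populating that empty field. In config-file form the peer would look something like this; 25 seconds is the interval commonly suggested in the WireGuard docs:)
```sh
[Peer]
PublicKey = {GCP_PUBLIC_KEY}
Endpoint = {GCP_PUBLIC_IP}:51820
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 25
```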

Now I can go ahead and save this configuration, but before I try (and fail) to connect, I first need to tell the cloud-hosted peer about the Chromebook. So I fire up an SSH session to my GCP instance, become root, and edit the WireGuard configuration to add a new `[Peer]` section.

```sh
sudo -i
vi /etc/wireguard/wg0.conf
```

Here's the new section that I'll add to the bottom of the config:

```sh
[Peer]
# Chromebook
PublicKey = {CB_PUBLIC_KEY}
AllowedIPs = 10.200.200.3/32
```

This peer is acting as a single-node endpoint (rather than an entryway into other networks like the VyOS peer), so setting `AllowedIPs` to only the peer's own IP ensures that WireGuard will only send it traffic specifically intended for this peer.
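
(As an aside, the same peer could be added on the fly with `wg set`, without touching the config file at all. The catch is that a runtime change like this wouldn't survive an interface restart, which is why I'm editing `wg0.conf` instead:)
```sh
# Runtime-only equivalent of the [Peer] block above:
wg set wg0 peer {CB_PUBLIC_KEY} allowed-ips 10.200.200.3/32
```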

So my complete `/etc/wireguard/wg0.conf` looks like this so far:
```sh
# /etc/wireguard/wg0.conf
[Interface]
Address = 10.200.200.1/24
ListenPort = 51820
PrivateKey = {GCP_PRIVATE_KEY}
MTU = 1460
DNS = 10.200.200.2, lab.bowdre.net
PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o ens4 -j MASQUERADE; ip6tables -A FORWARD -i wg0 -j ACCEPT; ip6tables -t nat -A POSTROUTING -o ens4 -j MASQUERADE
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o ens4 -j MASQUERADE; ip6tables -D FORWARD -i wg0 -j ACCEPT; ip6tables -t nat -D POSTROUTING -o ens4 -j MASQUERADE

[Peer]
# VyOS
PublicKey = {VYOS_PUBLIC_KEY}
AllowedIPs = 10.200.200.2/32, 192.168.1.0/24, 172.16.0.0/16

[Peer]
# Chromebook
PublicKey = {CB_PUBLIC_KEY}
AllowedIPs = 10.200.200.3/32
```

Now to save the file and reload the WireGuard configuration again:
```sh
wg syncconf wg0 <(wg-quick strip wg0)
```

At this point I can activate the connection in the WireGuard Android app, wait a few seconds, and check with `wg show` to confirm that the tunnel has been established successfully:

```sh
root@wireguard:~# wg show
interface: wg0
  public key: {GCP_PUBLIC_KEY}
  private key: (hidden)
  listening port: 51820

peer: {VYOS_PUBLIC_KEY}
  endpoint: {VYOS_PUBLIC_IP}:43990
  allowed ips: 10.200.200.2/32, 192.168.1.0/24, 172.16.0.0/16
  latest handshake: 1 minute, 55 seconds ago
  transfer: 200.37 MiB received, 16.32 MiB sent

peer: {CB_PUBLIC_KEY}
  endpoint: {CB_PUBLIC_IP}:33752
  allowed ips: 10.200.200.3/32
  latest handshake: 48 seconds ago
  transfer: 169.17 KiB received, 808.33 KiB sent
```

And I can even access my homelab when not at home!
![Remote access to my homelab!](/assets/images/posts-2021/10/20211028_remote_homelab.png)

#### Android Phone
Being able to copy-and-paste the required public keys between the WireGuard app and the SSH session to the GCP instance made it relatively easy to set up the Chromebook, but things could be a bit trickier on a phone without that kind of access. So instead I will create the phone's configuration on the WireGuard server in the cloud, render that config file as a QR code, and simply scan that through the phone's WireGuard app to import the settings.

I'll start by SSHing to the GCP instance, elevating to root, setting the restrictive `umask` again, and creating a new folder to store client configurations.
```sh
sudo -i
umask 077
mkdir /etc/wireguard/clients
cd /etc/wireguard/clients
```

As before, I'll use the built-in `wg` commands to generate the private and public key pair:
```sh
wg genkey | tee phone1.key | wg pubkey > phone1.pub
```

I can then use those keys to assemble the config for the phone:
```sh
# /etc/wireguard/clients/phone1.conf
[Interface]
PrivateKey = {PHONE1_PRIVATE_KEY}
Address = 10.200.200.4/24
DNS = 10.200.200.2, lab.bowdre.net

[Peer]
PublicKey = {GCP_PUBLIC_KEY}
AllowedIPs = 0.0.0.0/0
Endpoint = {GCP_PUBLIC_IP}:51820
```

I'll also add the interface address and corresponding public key to a new `[Peer]` section of `/etc/wireguard/wg0.conf`:
```sh
[Peer]
PublicKey = {PHONE1_PUBLIC_KEY}
AllowedIPs = 10.200.200.4/32
```

And reload the WireGuard config:
```sh
wg syncconf wg0 <(wg-quick strip wg0)
```

Back in the `clients/` directory, I can use `qrencode` to render the phone's configuration file (keys and all!) as a QR code:
```sh
qrencode -t ansiutf8 < phone1.conf
```
![QR code config](/assets/images/posts-2021/10/20211028_qrcode_config.png)
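
(If the terminal font ever mangles those UTF-8 blocks, `qrencode` can just as easily write an image file instead. Just remember that the PNG contains the private key too, so treat it like the `.conf` file and delete it when done:)
```sh
# Render the config as a PNG instead of in the terminal:
qrencode -t png -o phone1.png < phone1.conf
```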

And then I just open the WireGuard app on my phone and use the **Scan from QR Code** option. After a successful scan, it'll prompt me to name the new tunnel, and then I should be able to connect right away.
![Successful mobile connection](/assets/images/posts-2021/10/20211028_wireguard_mobile.png)

I can even access my vSphere lab environment - not that it offers a great mobile experience...
![vSphere mobile sucks](/assets/images/posts-2021/10/20211028_mobile_vsphere_sucks.jpg)

Before moving on much further, though, I'm going to clean up the keys and client config file that I generated on the GCP instance. It's not great hygiene to keep a private key stored on the same system it's used to access.

```sh
rm -f /etc/wireguard/clients/*
```
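
(If I were feeling extra paranoid, `shred` would overwrite the files before unlinking them rather than just removing the directory entries. Its effectiveness is debatable on journaling filesystems and SSDs, but it doesn't hurt:)
```sh
# Overwrite, then delete, every client key and config:
shred -u /etc/wireguard/clients/*
```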

##### Bonus: Automation!
I've [written before](auto-connect-to-protonvpn-on-untrusted-wifi-with-tasker) about a set of [Tasker](https://play.google.com/store/apps/details?id=net.dinglisch.android.taskerm) profiles I put together so that my phone would automatically connect to a VPN whenever it connects to a WiFi network I don't control. It didn't take much effort at all to adapt the profile to work with my new WireGuard setup.

Two quick prerequisites first:
1. Open the WireGuard Android app, tap the three-dot menu button at the top right, expand the Advanced section, and enable the _Allow remote control apps_ option so that Tasker will be permitted to control WireGuard.
2. Exclude the WireGuard app from Android's battery optimization so that it doesn't have any problems running in the background. On (Pixel-flavored) Android 12, this can be done by going to **Settings > Apps > See all apps > WireGuard > Battery** and selecting the _Unrestricted_ option.

On to the Tasker config. The only changes will be in the [VPN on Strange Wifi](auto-connect-to-protonvpn-on-untrusted-wifi-with-tasker#vpn-on-strange-wifi) profile. I'll remove the OpenVPN-related actions from the Enter and Exit tasks and replace them with the built-in **Tasker > Tasker Function WireGuard Set Tunnel** action.

For the Enter task, I'll set the tunnel status to `true` and specify the name of the tunnel as configured in the WireGuard app; the Exit task gets the status set to `false` to disable the tunnel. Both actions will be conditional upon the `%TRUSTED_WIFI` variable being unset.
![Tasker setup](/assets/images/posts-2021/10/20211028_tasker_setup.png)

```
Profile: VPN on Strange WiFi
Settings: Notification: no
State: Wifi Connected [ SSID:* MAC:* IP:* Active:Any ]

Enter Task: ConnectVPN
A1: Tasker Function [
     Function: WireGuardSetTunnel(true,wireguard-gcp) ]
    If [ %TRUSTED_WIFI !Set ]

Exit Task: DisconnectVPN
A1: Tasker Function [
     Function: WireGuardSetTunnel(false,wireguard-gcp) ]
    If [ %TRUSTED_WIFI !Set ]
```

_Automagic!_

#### Other Peers
Any additional peers that need to be added in the future will follow one of the processes above. The steps are always the same: generate the new peer's key pair, use the private key to populate the `[Interface]` portion of the peer's config, configure its `[Peer]` section with the _public_ key, allowed IPs, and endpoint address of the peer it will be connecting to, and then add the new peer's _public_ key and internal WireGuard IP to a new `[Peer]` section of the existing peer's config.
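
As a rough consolidation of the commands used throughout this post, onboarding a hypothetical `phone2` peer at `10.200.200.5` on the GCP end would look something like this:
```sh
sudo -i
umask 077
cd /etc/wireguard/clients
# Generate the new peer's key pair:
wg genkey | tee phone2.key | wg pubkey > phone2.pub
# Build phone2.conf from the new keys as shown above, render the QR code,
# then register the peer with the server and reload the running config:
printf '\n[Peer]\n# phone2\nPublicKey = %s\nAllowedIPs = 10.200.200.5/32\n' "$(cat phone2.pub)" >> /etc/wireguard/wg0.conf
wg syncconf wg0 <(wg-quick strip wg0)
# And clean up the keys once the client has imported them:
rm -f /etc/wireguard/clients/*
```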

120
content/post/2021-11-05-fixing-403-error-ssc-8-6-vra-idm.md
Normal file
@@ -0,0 +1,120 @@
---
categories:
- vRA8
date: "2021-11-05T00:00:00Z"
header:
  teaser: assets/images/posts-2021/11/20211105_ssc_403.png
tags:
- vra
- lcm
- salt
- openssl
- certs
title: Fixing 403 error on SaltStack Config 8.6 integrated with vRA and vIDM
---

I've been wanting to learn a bit more about [SaltStack Config](https://www.vmware.com/products/vrealize-automation/saltstack-config.html), so I recently deployed SSC 8.6 to my environment (using vRealize Suite Lifecycle Manager to do so, as [described here](https://cosmin.gq/2021/02/02/deploying-saltstack-config-via-lifecycle-manager-in-a-vra-environment/)). I selected the option to integrate with my pre-existing vRA and vIDM instances so that I wouldn't have to manage authentication directly, since I recall that the LDAP authentication piece was a little clumsy the last time I tried it.

### The Problem
Unfortunately, I ran into a problem immediately after the deployment completed:
![403 error from SSC](/assets/images/posts-2021/11/20211105_ssc_403.png)

Instead of being redirected to the vIDM authentication screen, I get a 403 Forbidden error.

I used SSH to log in to the SSC appliance as `root`, and I found this in the `/var/log/raas/raas` log file:
```
2021-11-05 18:37:47,705 [var.lib.raas.unpack._MEIV8zDs3.raas.mods.vra.params ][ERROR :252 ][Webserver:6170] SSL Exception - https://vra.lab.bowdre.net/csp/gateway/am/api/auth/discovery may be using a self-signed certificate HTTPSConnectionPool(host='vra.lab.bowdre.net', port=443): Max retries exceeded with url: /csp/gateway/am/api/auth/discovery?username=service_type&state=aHR0cHM6Ly9zc2MubGFiLmJvd2RyZS5uZXQvaWRlbnRpdHkvYXBpL2NvcmUvYXV0aG4vY3Nw&redirect_uri=https%3A%2F%2Fssc.lab.bowdre.net%2Fidentity%2Fapi%2Fcore%2Fauthn%2Fcsp&client_id=ssc-299XZv71So (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1076)')))
2021-11-05 18:37:47,928 [tornado.application ][ERROR :1792][Webserver:6170] Uncaught exception GET /csp/gateway/am/api/loggedin/user/profile (192.168.1.100)
HTTPServerRequest(protocol='https', host='ssc.lab.bowdre.net', method='GET', uri='/csp/gateway/am/api/loggedin/user/profile', version='HTTP/1.1', remote_ip='192.168.1.100')
Traceback (most recent call last):
  File "urllib3/connectionpool.py", line 706, in urlopen
  File "urllib3/connectionpool.py", line 382, in _make_request
  File "urllib3/connectionpool.py", line 1010, in _validate_conn
  File "urllib3/connection.py", line 421, in connect
  File "urllib3/util/ssl_.py", line 429, in ssl_wrap_socket
  File "urllib3/util/ssl_.py", line 472, in _ssl_wrap_socket_impl
  File "ssl.py", line 423, in wrap_socket
  File "ssl.py", line 870, in _create
  File "ssl.py", line 1139, in do_handshake
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1076)
```

Further, attempting to pull down that URL with `curl` also failed:
```sh
root@ssc [ ~ ]# curl https://vra.lab.bowdre.net/csp/gateway/am/api/auth/discovery
curl: (60) SSL certificate problem: self signed certificate in certificate chain
More details here: https://curl.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
```

In my homelab, I am indeed using self-signed certificates. I encountered the same issue in my lab at work, though, where I'm using certs issued by our enterprise CA. I had run into a similar problem with previous versions of SSC, but the [quick-and-dirty workaround to disable certificate verification](https://communities.vmware.com/t5/VMware-vRealize-Discussions/SaltStack-Config-Integration-show-Blank-Page/td-p/2863973) doesn't seem to work anymore.

### The Solution
Clearly I needed to import either the vRA system's certificate (for my homelab) or the certificate chain for my enterprise CA (for my work environment) into SSC's certificate store so that it would trust vRA. But how?

I fumbled around for a bit and managed to get the required certs added to the system certificate store so that my `curl` test would succeed, but trying to access the SSC web UI still gave me a big middle finger. I eventually found [this documentation](https://docs.vmware.com/en/VMware-vRealize-Automation-SaltStack-Config/8.6/install-configure-saltstack-config/GUID-21A87CE2-8184-4F41-B71B-0FCBB93F21FC.html#troubleshooting-saltstack-config-environments-with-vrealize-automation-that-use-selfsigned-certificates-3) which describes how to configure SSC to work with self-signed certs, and it held the missing detail: how to tell the SaltStack Returner-as-a-Service (RaaS) component to use the system certificate store.

So here's what I did to get things working in my homelab:
1. Point a browser to my vRA instance, click on the certificate error to view the certificate details, and then export the _CA_ certificate to a local file. (For a self-signed cert issued by LCM, this will likely be called something like `Automatically generated one-off CA authority for vRA`. There's also a command-line way to grab a server's cert chain; see the `openssl s_client` sketch at the end of this section.)
![Exporting the self-signed CA cert](/assets/images/posts-2021/11/20211105_export_selfsigned_ca.png)
2. Open the file in a text editor, and copy the contents into a new file on the SSC appliance. I used `~/vra.crt`.
3. Append the certificate to the end of the system `ca-bundle.crt`:
```sh
cat vra.crt >> /etc/pki/tls/certs/ca-bundle.crt
```
4. Test that I can now `curl` vRA without a certificate error:
```sh
root@ssc [ ~ ]# curl https://vra.lab.bowdre.net/csp/gateway/am/api/auth/discovery
{"timestamp":1636139143260,"type":"CLIENT_ERROR","status":"400 BAD_REQUEST","error":"Bad Request","serverMessage":"400 BAD_REQUEST \"Required String parameter 'state' is not present\""}
```
5. Edit `/usr/lib/systemd/system/raas.service` to update the service definition so it will look to the `ca-bundle.crt` file by adding
```
Environment=REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt
```
above the `ExecStart` line (a drop-in override would also work; see the sketch after this list):
```sh
root@ssc [ ~ ]# cat /usr/lib/systemd/system/raas.service
[Unit]
Description=The SaltStack Enterprise API Server
After=network.target
[Service]
Type=simple
User=raas
Group=raas
# to be able to bind port < 1024
AmbientCapabilities=CAP_NET_BIND_SERVICE
NoNewPrivileges=yes
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX AF_NETLINK
PermissionsStartOnly=true
ExecStartPre=/bin/sh -c 'systemctl set-environment FIPS_MODE=$(/opt/vmware/bin/ovfenv -q --key fips-mode)'
ExecStartPre=/bin/sh -c 'systemctl set-environment NODE_TYPE=$(/opt/vmware/bin/ovfenv -q --key node-type)'
Environment=REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt
ExecStart=/usr/bin/raas
TimeoutStopSec=90
[Install]
WantedBy=multi-user.target
```
6. Stop and restart the `raas` service:
```sh
systemctl daemon-reload
systemctl stop raas
systemctl start raas
```
7. And then try to visit the SSC URL again. This time, it redirects successfully to vIDM:
![Successful vIDM redirect](/assets/images/posts-2021/11/20211105_vidm_login.png)
8. Log in and get salty:
![Get salty!](/assets/images/posts-2021/11/20211105_get_salty.png)
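
One note on step 5: editing the packaged unit file directly works, but an appliance update could overwrite it. A drop-in override accomplishes the same thing and should survive upgrades; here's a sketch, assuming standard systemd behavior on the appliance:
```sh
# Create a drop-in directory and override file instead of touching raas.service:
mkdir -p /etc/systemd/system/raas.service.d
cat > /etc/systemd/system/raas.service.d/ca-bundle.conf <<'EOF'
[Service]
Environment=REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt
EOF
# Pick up the new unit configuration and restart the service:
systemctl daemon-reload
systemctl restart raas
```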

The steps for doing this at work with an enterprise CA were pretty similar, with just slightly different steps 1 and 2:
1. Access the enterprise CA and download the CA chain, which came in `.p7b` format.
2. Use `openssl` to extract the individual certificates:
```sh
openssl pkcs7 -inform PEM -outform PEM -in enterprise-ca-chain.p7b -print_certs > enterprise-ca-chain.pem
```
Copy it to the SSC appliance, and then pick up with Step 3 above.
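
And as promised back in step 1, here's the command-line alternative to the browser export for inspecting the certificate chain a server presents; the final certificate block in the output is generally the CA cert you're after:
```sh
# Print every certificate in the chain that vRA presents; redirecting
# stdin from /dev/null makes s_client exit instead of waiting for input.
openssl s_client -connect vra.lab.bowdre.net:443 -showcerts </dev/null
```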

I'm eager to dive deeper with SSC and figure out how best to leverage it with vRA. I'll let you know if/when I figure out any cool tricks!

In the meantime, maybe my struggles today can help you get past similar hurdles in your SSC deployments.
@@ -0,0 +1 @@
*{box-sizing:border-box}html{line-height:1.6}body{margin:0;font-family:sans-serif;background:#353b43;color:#afbac4}h1,h2,h3,h4,h5,h6{color:#fff}a{color:#57cc8a;transition:color .35s;text-decoration:none}a:hover{color:#fff}code{font-family:monospace,monospace;font-size:1em;color:rgba(175,186,196,.8)}pre{font-size:1rem;line-height:1.2em;margin:0;overflow:auto}pre code{font-size:.8em}::selection{background:rgba(175,186,196,.25)}::-moz-selection{background:rgba(175,186,196,.25)}.app-header{padding:2.5em;background:#242930;text-align:center}.app-header-avatar{width:15rem;height:15rem;border-radius:100%;border:.5rem solid #57cc8a}.app-container{padding:2.5rem}.app-header-social{font-size:2em;color:#fff}.app-header-social a{margin:0 .1em}@media(min-width:940px){.app-header{position:fixed;top:0;left:0;width:20rem;min-height:100vh}.app-container{max-width:65rem;margin-left:20rem}}.error-404{text-align:center}.error-404-title{text-transform:uppercase}.icon{display:inline-block;width:1em;height:1em;vertical-align:-.125em}.pagination{display:block;list-style:none;padding:0;font-size:.8em;text-align:center;margin:3em 0}.page-item{display:inline-block}.page-item .page-link{display:block;padding:.285em .8em}.page-item.active .page-link{color:#fff;border-radius:2em;background:#57cc8a}.post-title{color:#fff}.post-content>pre,.post-content .highlight{margin:1em 0}.post-content>pre,.post-content .highlight>pre,.post-content .highlight>div{border-left:.4em solid rgba(87,204,138,.8);padding:.5em 1em}.post-content img{max-width:100%}.post-meta{font-size:.8em}.posts-list{padding:0}.posts-list-item{list-style:none;padding:.4em 0}.posts-list-item:not(:last-child){border-bottom:1px dashed rgba(255,255,255,.3)}.posts-list-item-description{display:block;font-size:.8em}.tag{display:inline-block;margin-right:.2em;padding:0 .6em;font-size:.9em;border-radius:.2em;white-space:nowrap;background:rgba(255,255,255,.1);transition:color .35s,background .35s}.tag:hover{transition:color .25s,background .05s;background:rgba(255,255,255,.3)}.tags-list{padding:0}.tags-list-item{list-style:none;padding:.4em 0}.tags-list-item:not(:last-child){border-bottom:1px dashed rgba(255,255,255,.3)}@media(min-width:450px){.tags-list{display:flex;flex-wrap:wrap}.tags-list-item{width:calc(50% - 1em)}.tags-list-item:nth-child(even){margin-left:1em}.tags-list-item:nth-last-child(2){border:none}}
@@ -0,0 +1 @@
{"Target":"css/main.min.4a7ec8660f9a44b08c4da97c5f2e31b1192df1d4d0322e65c0dbbc6ecb1b863f.css","MediaType":"text/css","Data":{"Integrity":"sha256-Sn7IZg+aRLCMTal8Xy4xsRkt8dTQMi5lwNu8bssbhj8="}}
1
static/CNAME
Normal file
@@ -0,0 +1 @@
virtuallypotato.com
BIN
static/assets/images/bio-photo.jpg
Normal file
After Width: | Height: | Size: 66 KiB |
BIN
static/assets/images/favs/android-chrome-192x192.png
Normal file
After Width: | Height: | Size: 18 KiB |
BIN
static/assets/images/favs/android-chrome-512x512.png
Normal file
After Width: | Height: | Size: 63 KiB |
BIN
static/assets/images/favs/apple-touch-icon.png
Normal file
After Width: | Height: | Size: 12 KiB |
9
static/assets/images/favs/browserconfig.xml
Normal file
@@ -0,0 +1,9 @@
<?xml version="1.0" encoding="utf-8"?>
<browserconfig>
    <msapplication>
        <tile>
            <square150x150logo src="/assets/images/mstile-150x150.png"/>
            <TileColor>#da532c</TileColor>
        </tile>
    </msapplication>
</browserconfig>
BIN
static/assets/images/favs/favicon-16x16.png
Normal file
After Width: | Height: | Size: 1.3 KiB |
BIN
static/assets/images/favs/favicon-32x32.png
Normal file
After Width: | Height: | Size: 2 KiB |
BIN
static/assets/images/favs/favicon.ico
Normal file
After Width: | Height: | Size: 15 KiB |
BIN
static/assets/images/favs/mstile-150x150.png
Normal file
After Width: | Height: | Size: 10 KiB |
35
static/assets/images/favs/safari-pinned-tab.svg
Normal file
@@ -0,0 +1,35 @@
<?xml version="1.0" standalone="no"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 20010904//EN"
 "http://www.w3.org/TR/2001/REC-SVG-20010904/DTD/svg10.dtd">
<svg version="1.0" xmlns="http://www.w3.org/2000/svg"
 width="1024.000000pt" height="1024.000000pt" viewBox="0 0 1024.000000 1024.000000"
 preserveAspectRatio="xMidYMid meet">
<metadata>
Created by potrace 1.14, written by Peter Selinger 2001-2017
</metadata>
<g transform="translate(0.000000,1024.000000) scale(0.100000,-0.100000)"
fill="#000000" stroke="none">
<path d="M6509 9416 c-2 -2 -49 -7 -104 -10 -55 -4 -113 -9 -130 -11 -16 -3
-46 -7 -65 -9 -72 -9 -138 -21 -330 -61 -57 -11 -252 -73 -339 -107 -397 -155
-624 -335 -861 -683 -30 -44 -57 -82 -60 -85 -3 -3 -23 -32 -45 -65 -232 -352
-428 -574 -730 -826 -249 -209 -510 -382 -905 -599 -285 -157 -497 -277 -548
-311 -24 -16 -45 -29 -47 -29 -10 0 -351 -236 -415 -287 -8 -6 -53 -42 -100
-80 -84 -66 -97 -78 -209 -177 -60 -54 -322 -316 -357 -358 -12 -14 -50 -61
-85 -104 -35 -44 -66 -81 -69 -84 -3 -3 -29 -36 -57 -75 -246 -338 -427 -731
-508 -1105 -52 -240 -63 -356 -59 -620 1 -124 5 -234 7 -246 3 -11 8 -45 11
-75 36 -335 185 -790 397 -1212 132 -264 338 -612 466 -787 18 -25 37 -50 41
-55 25 -36 148 -170 207 -226 159 -153 300 -250 510 -352 207 -101 449 -176
655 -203 25 -3 56 -7 70 -9 166 -28 598 -27 813 0 20 3 56 7 80 11 90 11 277
45 382 69 61 14 121 28 135 31 14 2 34 8 45 11 11 4 25 8 30 9 36 7 139 37
190 54 33 11 61 21 62 21 8 -1 304 100 322 108 9 5 43 19 76 31 77 30 254 104
325 137 798 370 1433 821 1899 1348 180 203 216 255 401 570 62 105 137 226
168 270 53 74 228 257 447 466 243 233 570 627 770 929 17 25 33 47 36 50 6 5
74 114 158 253 81 135 193 349 255 490 9 20 23 54 32 74 33 73 116 299 135
368 47 164 79 298 94 390 27 160 39 316 38 465 -1 102 -10 246 -18 288 -2 11
-7 42 -11 69 -6 46 -8 57 -19 106 -3 12 -8 38 -11 57 -13 72 -89 334 -119 405
-7 17 -31 76 -55 132 -110 267 -269 520 -439 698 -114 120 -232 213 -344 270
-21 11 -118 79 -215 150 -97 72 -214 155 -260 183 -180 115 -419 209 -632 251
-119 23 -180 34 -270 46 -25 3 -56 8 -70 10 -14 2 -59 6 -100 10 -41 4 -88 9
-105 12 -38 5 -560 14 -566 9z"/>
</g>
</svg>
After Width: | Height: | Size: 2.1 KiB |
19
static/assets/images/favs/site.webmanifest
Normal file
@@ -0,0 +1,19 @@
{
    "name": "",
    "short_name": "",
    "icons": [
        {
            "src": "/assets/images/android-chrome-192x192.png",
            "sizes": "192x192",
            "type": "image/png"
        },
        {
            "src": "/assets/images/android-chrome-512x512.png",
            "sizes": "512x512",
            "type": "image/png"
        }
    ],
    "theme_color": "#ffffff",
    "background_color": "#ffffff",
    "display": "standalone"
}
BIN
static/assets/images/posts-2020/-9apQIUci.png
Normal file
After Width: | Height: | Size: 98 KiB |
BIN
static/assets/images/posts-2020/-Fuvz-GmF.png
Normal file
After Width: | Height: | Size: 119 KiB |
BIN
static/assets/images/posts-2020/-PHf9oUyM.png
Normal file
After Width: | Height: | Size: 533 KiB |
BIN
static/assets/images/posts-2020/-aPGJhSvz.png
Normal file
After Width: | Height: | Size: 542 KiB |
BIN
static/assets/images/posts-2020/-lp1-DGiM.png
Normal file
After Width: | Height: | Size: 86 KiB |
BIN
static/assets/images/posts-2020/0-9BaWJqq.png
Normal file
After Width: | Height: | Size: 44 KiB |
BIN
static/assets/images/posts-2020/0-h1flLZs.png
Normal file
After Width: | Height: | Size: 17 KiB |
BIN
static/assets/images/posts-2020/09RIXJc12.png
Normal file
After Width: | Height: | Size: 917 KiB |
BIN
static/assets/images/posts-2020/09faF5-Fm.png
Normal file
After Width: | Height: | Size: 56 KiB |
BIN
static/assets/images/posts-2020/0ZYcORuiU.png
Normal file
After Width: | Height: | Size: 90 KiB |
BIN
static/assets/images/posts-2020/0fSl55whe.png
Normal file
After Width: | Height: | Size: 237 KiB |
BIN
static/assets/images/posts-2020/1LDP5zxCU.gif
Normal file
After Width: | Height: | Size: 211 KiB |
BIN
static/assets/images/posts-2020/1NJvDeA7r.png
Normal file
After Width: | Height: | Size: 53 KiB |
BIN
static/assets/images/posts-2020/2LTaCEdWH.png
Normal file
After Width: | Height: | Size: 37 KiB |
BIN
static/assets/images/posts-2020/2MXGpB9Zd.png
Normal file
After Width: | Height: | Size: 369 KiB |
BIN
static/assets/images/posts-2020/2fbKJc5Y6.png
Normal file
After Width: | Height: | Size: 93 KiB |
BIN
static/assets/images/posts-2020/2g57odtq2.jpeg
Normal file
After Width: | Height: | Size: 552 KiB |
BIN
static/assets/images/posts-2020/2otDJvqRP.png
Normal file
After Width: | Height: | Size: 275 KiB |
BIN
static/assets/images/posts-2020/2xe34VJym.png
Normal file
After Width: | Height: | Size: 476 KiB |
BIN
static/assets/images/posts-2020/3-UIo1Ykn.png
Normal file
After Width: | Height: | Size: 226 KiB |
BIN
static/assets/images/posts-2020/34xD8tbli.png
Normal file
After Width: | Height: | Size: 65 KiB |
BIN
static/assets/images/posts-2020/3BQnEd0bY.png
Normal file
After Width: | Height: | Size: 498 KiB |
BIN
static/assets/images/posts-2020/3FHDx82pi.png
Normal file
After Width: | Height: | Size: 87 KiB |
BIN
static/assets/images/posts-2020/3SW0gmJL2.jpeg
Normal file
After Width: | Height: | Size: 338 KiB |
BIN
static/assets/images/posts-2020/3vQER.png
Normal file
After Width: | Height: | Size: 187 KiB |
BIN
static/assets/images/posts-2020/42n3aMim5.png
Normal file
After Width: | Height: | Size: 493 KiB |
BIN
static/assets/images/posts-2020/4B6wN8QeG.png
Normal file
After Width: | Height: | Size: 27 KiB |
BIN
static/assets/images/posts-2020/4WQ8HWJ2N.png
Normal file
After Width: | Height: | Size: 185 KiB |
BIN
static/assets/images/posts-2020/4X1dPG_Rq.png
Normal file
After Width: | Height: | Size: 54 KiB |
BIN
static/assets/images/posts-2020/4dNwfNNDY.png
Normal file
After Width: | Height: | Size: 306 KiB |
BIN
static/assets/images/posts-2020/4flvfGC54.png
Normal file
After Width: | Height: | Size: 72 KiB |
BIN
static/assets/images/posts-2020/4o5bqRiTJ.png
Normal file
After Width: | Height: | Size: 364 KiB |
BIN
static/assets/images/posts-2020/4rbWYw2GD.png
Normal file
After Width: | Height: | Size: 136 KiB |
BIN
static/assets/images/posts-2020/5ATk99aPW.png
Normal file
After Width: | Height: | Size: 134 KiB |
BIN
static/assets/images/posts-2020/5PD1H7b1O.png
Normal file
After Width: | Height: | Size: 212 KiB |
BIN
static/assets/images/posts-2020/5QFTPHp5H.png
Normal file
After Width: | Height: | Size: 98 KiB |
BIN
static/assets/images/posts-2020/5bWfqh4ZSE.png
Normal file
After Width: | Height: | Size: 52 KiB |
BIN
static/assets/images/posts-2020/6-auEYd-W.png
Normal file
After Width: | Height: | Size: 210 KiB |
BIN
static/assets/images/posts-2020/630ix7uVw.png
Normal file
After Width: | Height: | Size: 173 KiB |
BIN
static/assets/images/posts-2020/65ECa7nej.png
Normal file
After Width: | Height: | Size: 79 KiB |
BIN
static/assets/images/posts-2020/6Gpxapzd3.png
Normal file
After Width: | Height: | Size: 112 KiB |
BIN
static/assets/images/posts-2020/6HBIUf6KE.png
Normal file
After Width: | Height: | Size: 416 KiB |
BIN
static/assets/images/posts-2020/6IRPHhr6u.png
Normal file
After Width: | Height: | Size: 17 KiB |
BIN
static/assets/images/posts-2020/6PA6lIOcP.png
Normal file
After Width: | Height: | Size: 62 KiB |
BIN
static/assets/images/posts-2020/6k06ySON7.png
Normal file
After Width: | Height: | Size: 135 KiB |
BIN
static/assets/images/posts-2020/6yo39lXI7.png
Normal file
After Width: | Height: | Size: 752 KiB |
BIN
static/assets/images/posts-2020/7MfV-1uiO.png
Normal file
After Width: | Height: | Size: 35 KiB |
BIN
static/assets/images/posts-2020/7Sb3j2PS3.png
Normal file
After Width: | Height: | Size: 68 KiB |